- java.lang.Object
-
- com.pervasive.datarush.operators.AbstractLogicalOperator
-
- com.pervasive.datarush.operators.StreamingOperator
-
- com.pervasive.datarush.operators.ExecutableOperator
-
- com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
-
- com.pervasive.datarush.operators.graph.SubJobExecutor
-
- All Implemented Interfaces:
LogicalOperator, PipelineOperator<RecordPort>, RecordPipelineOperator
public class SubJobExecutor extends AbstractExecutableRecordPipeline
The SubJobExecutor operator executes JSON serialized subgraphs within the current workflow. This allows you to dynamically run a subgraph within the currently executing graph with alternative parameters or configuration. Within the subgraph itself, two special operators can be used to dynamically configure the graph or to extract data produced by the subgraph: a MockableExternalRecordSource explicitly named 'Start Node' and a MockableExternalRecordSink explicitly named 'Stop Node' enable this feature. The input port should receive a record containing one or more string fields, with the first field always holding the path to the JSON serialized graph to execute. Any remaining fields in the input are treated as override properties for the 'Start Node', with the field name as the key and the field contents as the value. The overrides port can be used to override any applicable value within the graph and additionally supports several special overrides for altering JDBC connections. Its first field should contain keys and its second field the associated values. These work the same as override values defined for a graph on the command-line interface.
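The expected layout of the input record can be sketched in plain Java. This is only an illustration of the field convention described above, not DataRush API usage; record ports are not plain maps, and the field names `graphPath` and `inputFile` are hypothetical placeholders.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SubJobInputSketch {
    // Models the input record convention: first field is the path to the
    // JSON serialized graph; remaining fields are 'Start Node' overrides
    // (field name = key, field contents = value).
    public static Map<String, String> buildInputRecord(String graphPath,
                                                       Map<String, String> startNodeOverrides) {
        Map<String, String> record = new LinkedHashMap<>();
        record.put("graphPath", graphPath);  // "graphPath" is a hypothetical field name
        record.putAll(startNodeOverrides);
        return record;
    }
}
```

Under this sketch, a record of `{"graphPath": "/jobs/subgraph.json", "inputFile": "/data/sales.csv"}` would run the serialized graph at `/jobs/subgraph.json` with the Start Node property `inputFile` overridden.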
-
-
Field Summary
Fields
static String DB_DRIVERNAME
static String DB_PASSWORD
static String DB_TABLENAME
static String DB_URL
static String DB_USER
static String DBPREFIX
-
Fields inherited from class com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
input, output
-
-
Constructor Summary
Constructors Constructor Description SubJobExecutor()
-
Method Summary
All Methods | Instance Methods | Concrete Methods
protected void computeMetadata(StreamingMetadataContext ctx)
Implementations must adhere to the following contracts.
protected void execute(ExecutionContext ctx)
Executes the operator.
RecordTokenType getOutputType()
Get the configured output type of the subgraph.
RecordPort getOverrides()
Get the optional overrides data port.
boolean getUseCustomConfig()
Get whether a custom engine configuration included in the serialized graphs should be used.
void setOutputType(RecordTokenType outputType)
Set the configured output type of the subgraph.
void setUseCustomConfig(boolean useCustomConfig)
Set whether a custom engine configuration included in the serialized graphs should be used.
-
Methods inherited from class com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
getInput, getOutput
-
Methods inherited from class com.pervasive.datarush.operators.ExecutableOperator
cloneForExecution, getNumInputCopies, getPortSettings, handleInactiveOutput
-
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
disableParallelism, getInputPorts, getOutputPorts
-
-
-
-
Field Detail
-
DBPREFIX
public static final String DBPREFIX
- See Also:
- Constant Field Values
-
DB_URL
public static final String DB_URL
- See Also:
- Constant Field Values
-
DB_USER
public static final String DB_USER
- See Also:
- Constant Field Values
-
DB_PASSWORD
public static final String DB_PASSWORD
- See Also:
- Constant Field Values
-
DB_DRIVERNAME
public static final String DB_DRIVERNAME
- See Also:
- Constant Field Values
-
DB_TABLENAME
public static final String DB_TABLENAME
- See Also:
- Constant Field Values
-
-
Method Detail
-
getOverrides
public RecordPort getOverrides()
Get the optional overrides data port. If used, the port should contain a record with two String fields representing the key/value pairs of the overrides, including any additional DB-specific overrides.
- Returns:
- input data port for override properties
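The two-field key/value shape of the overrides port can be modeled in plain Java. This is a sketch of the documented row layout only; the override keys shown (`SomeOperator.someProperty`, `db.url`, `db.user`) are hypothetical placeholders and are not the actual values of the DB_* constants above.

```java
import java.util.ArrayList;
import java.util.List;

public class OverridePairsSketch {
    // Builds rows as they would appear on the overrides port: the first
    // field holds an override key, the second its value.
    public static List<String[]> buildOverrideRows() {
        List<String[]> rows = new ArrayList<>();
        // Ordinary graph override, same form as on the command-line interface.
        rows.add(new String[] {"SomeOperator.someProperty", "newValue"});
        // Hypothetical DB-specific overrides for altering a JDBC connection.
        rows.add(new String[] {"db.url", "jdbc:postgresql://host/db"});
        rows.add(new String[] {"db.user", "etl_user"});
        return rows;
    }
}
```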
-
getUseCustomConfig
public boolean getUseCustomConfig()
Get whether a custom engine configuration included in the serialized graphs should be used. Otherwise defaults to using the engine configuration of the parent graph.
- Returns:
- true if a custom engine configuration will be used
-
setUseCustomConfig
public void setUseCustomConfig(boolean useCustomConfig)
Set whether a custom engine configuration included in the serialized graphs should be used. Otherwise defaults to using the engine configuration of the parent graph.
- Parameters:
useCustomConfig
- true if a custom engine configuration should be used
-
getOutputType
public RecordTokenType getOutputType()
Get the configured output type of the subgraph.
- Returns:
- outputType of the subgraphs executed by this operator
-
setOutputType
public void setOutputType(RecordTokenType outputType)
Set the configured output type of the subgraph. Should match the output type of the "Stop Node" if included in the subgraphs.
- Parameters:
outputType
- of the subgraphs executed by this operator
-
execute
protected void execute(ExecutionContext ctx)
Description copied from class: ExecutableOperator
Executes the operator. Implementations should adhere to the following contracts:
- Following execution, all input ports must be at end-of-data.
- Following execution, all output ports must be at end-of-data.
- Specified by:
execute in class ExecutableOperator
- Parameters:
ctx
- context in which to lookup physical ports bound to logical ports
-
computeMetadata
protected void computeMetadata(StreamingMetadataContext ctx)
Description copied from class: StreamingOperator
Implementations must adhere to the following contracts:
General
Regardless of input/output port types, all implementations must do the following:
- Validation. Validation of configuration should always be performed first.
- Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).
Input record ports
Implementations with input record ports must declare the following:
- Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering, otherwise data may arrive in any order.
- Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution, otherwise data will arrive in an unspecified partial distribution.
Some operators may choose to query the upstream output distribution/ordering by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.
Output record ports
Implementations with output record ports must declare the following:
- Type: Implementations must declare their output type by calling RecordPort#setType.
- Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering.
- Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution.
Input model ports
In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
- Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler.
MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.
Output model ports
SimpleModelPort's have no associated metadata and therefore there is never any output metadata to declare. PMMLPort's, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
- pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.
- Specified by:
computeMetadata in class StreamingOperator
- Parameters:
ctx
- the context
-
-