java.lang.Object
com.pervasive.datarush.operators.AbstractLogicalOperator
com.pervasive.datarush.operators.StreamingOperator
com.pervasive.datarush.operators.ExecutableOperator
com.pervasive.datarush.ports.record.ExternalRecordSource
- All Implemented Interfaces:
LogicalOperator, RecordSourceOperator, SourceOperator<RecordPort>
Defines an external source of record data. External record sources are used in cases where an
external, non-DataRush application needs to produce data for a DataRush graph. Note that because
external ports are not managed by the standard DataRush deadlock detection mechanisms, care must
be taken to avoid deadlock. Specifically, consumers need to ensure that there is one producing
thread per source supplying the input data. Moreover, that thread cannot be the same thread that
is running the LogicalGraph. The most common usage pattern is therefore to start the LogicalGraph
in the background using LogicalGraphInstance#start(). For example:
//create the graph
LogicalGraph graph = LogicalGraphFactory.newLogicalGraph();
//the type of the source
RecordTokenType type =
    record( TokenTypeConstant.DOUBLE("field1"),
            TokenTypeConstant.DOUBLE("field2"));
//create the external source, adding it to the graph
ExternalRecordSource source = graph.add(new ExternalRecordSource(type));
//add some other operator
op = graph.add(...);
//connect the source's output to another operator in the graph
graph.connect(source.getOutput(), op.getInput());
//compile the graph
LogicalGraphInstance instance = graph.compile();
//*always* call start() to run the graph in the background; calling run() here would deadlock!
instance.start();
//produce some data in this thread
RecordOutput rout = source.getInput();
while (...) {
    ...
    rout.push();
}
//push end of data
rout.pushEndOfData();
//after calling pushEndOfData, join on the graph
//(don't join before pushing end-of-data, since that would deadlock)
instance.join();
NOTE: this operator is non-parallel
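The deadlock caveat above is easiest to violate when a graph contains more than one external source. The sketch below is not part of the original example: the variable names, the row count, and the second source are illustrative only. It shows the thread-per-source pattern, with the graph started in the background and each source fed from its own thread:

//assume graph and type are set up as in the example above, and that the graph
//contains two external sources feeding downstream operators
ExternalRecordSource orders  = graph.add(new ExternalRecordSource(type));
ExternalRecordSource refunds = graph.add(new ExternalRecordSource(type));
//...connect both sources to downstream operators...
LogicalGraphInstance instance = graph.compile();
instance.start();  //start in the background; never run() from this thread

//one producer thread per source
Thread ordersFeeder = new Thread(() -> {
    RecordOutput out = orders.getInput();
    for (int i = 0; i < 1000; i++) {
        //stage field values here (settable API elided, as in the example above)
        out.push();
    }
    out.pushEndOfData();
});
Thread refundsFeeder = new Thread(() -> {
    RecordOutput out = refunds.getInput();
    //...same pattern as above...
    out.pushEndOfData();
});
ordersFeeder.start();
refundsFeeder.start();

//wait for the producers to push end-of-data, then join on the graph
//(InterruptedException handling omitted)
ordersFeeder.join();
refundsFeeder.join();
instance.join();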
-
Constructor Summary
ExternalRecordSource()
    Default constructor; prior to graph compilation, setOutputType(RecordTokenType) must be specified.
ExternalRecordSource(RecordTokenType outputType)
    Creates a new record source of the given type.
-
Method Summary
protected ExecutableOperator cloneForExecution()
    Performs a deep copy of the operator for execution.
protected void computeMetadata(StreamingMetadataContext ctx)
    Computes the operator's metadata; implementations must adhere to the contracts described below.
protected void execute(ExecutionContext ctx)
    Executes the operator.
RecordOutput getInput()
    Returns the external port that an external application can use to send data to this source.
RecordPort getOutput()
    Gets the record port providing the output data from the source.
RecordTokenType getOutputType()
    Returns the output record type of this source.
protected void notifyError(Throwable e)
    Called to notify the operator that the graph terminated abnormally, either before the operator had a chance to run or while the operator is running.
void setOutputType(RecordTokenType outputType)
    Sets the output record type of this source.
Methods inherited from class com.pervasive.datarush.operators.ExecutableOperator
    getNumInputCopies, getPortSettings, handleInactiveOutput
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
    disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput
Methods inherited from class java.lang.Object
    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
    disableParallelism, getInputPorts, getOutputPorts
-
Constructor Details
-
ExternalRecordSource
public ExternalRecordSource()
Default constructor; prior to graph compilation, setOutputType(RecordTokenType) must be specified.
-
ExternalRecordSource
public ExternalRecordSource(RecordTokenType outputType)
Creates a new record source of the given type.
- Parameters:
outputType - the record type of the source
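As a quick illustration (variable names are hypothetical; graph and type are assumed to be set up as in the class example above), the two construction styles are equivalent as long as the output type is set before the graph is compiled:

//construct with the output type up front
ExternalRecordSource typed = graph.add(new ExternalRecordSource(type));

//or use the default constructor and set the type afterwards,
//provided setOutputType is called before graph.compile()
ExternalRecordSource untyped = graph.add(new ExternalRecordSource());
untyped.setOutputType(type);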
-
-
Method Details
-
getInput
Returns the external port that an external application can use to send data to this source.
- Returns:
- the external port.
-
getOutput
Description copied from interface: RecordSourceOperator
Gets the record port providing the output data from the source.
- Specified by:
getOutput in interface RecordSourceOperator
- Specified by:
getOutput in interface SourceOperator<RecordPort>
- Returns:
- the output port for the source
-
getOutputType
Returns the output record type of this source.
- Returns:
- the output record type of this source
-
setOutputType
Sets the output record type of this source.
- Parameters:
outputType - the output record type of this source
-
computeMetadata
Description copied from class: StreamingOperator
Implementations must adhere to the following contracts:

General
Regardless of input and output port types, all implementations must do the following:
- Validation. Validation of configuration should always be performed first.
- Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

Input record ports
Implementations with input record ports must declare the following:
- Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
- Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
Implementations may also inspect the upstream metadata via RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare their data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

Output record ports
Implementations with output record ports must declare the following:
- Type: Implementations must declare their output type by calling RecordPort#setType.
- Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering.
- Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution.

Input model ports
In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from a non-parallel to a parallel operator. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
- Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler. MergeModel is a convenient, re-usable model reducer, parameterized with a merge handler.

Output model ports
SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
- pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.

- Specified by:
computeMetadata in class StreamingOperator
- Parameters:
ctx - the context
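To make the contract concrete, here is a minimal sketch of computeMetadata for a hypothetical pass-through operator with one record input port and one record output port (its getInput()/getOutput() are the operator's own RecordPorts, not the external port of this class). The context-first setter arguments, the getType(ctx) accessor, and the ParallelismStrategy.ACCEPT_ANY constant are assumptions; the text above only names the methods involved.

@Override
protected void computeMetadata(StreamingMetadataContext ctx) {
    //declare parallelizability first (after validating any configuration)
    ctx.parallelize(ParallelismStrategy.ACCEPT_ANY);
    //declare the output type; here the input type is simply passed through
    getOutput().setType(ctx, getInput().getType(ctx));
    //an operator needing sorted or specially distributed input would also call
    //getInput().setRequiredDataOrdering(...) and/or setRequiredDataDistribution(...) here
}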
-
notifyError
Description copied from class: AbstractLogicalOperator
Called to notify the operator that the graph terminated abnormally, either before the operator had a chance to run or while the operator is running. If this is a CompositeOperator, this method will be invoked if any of its components fail.
- Overrides:
notifyError in class AbstractLogicalOperator
- Parameters:
e - the error that occurred
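A sketch of what an override might look like for a hypothetical subclass that holds an external resource. The Throwable parameter type and the externalWriter field are assumptions, not part of this class:

@Override
protected void notifyError(Throwable e) {
    //release externally held resources so a failed graph does not leak them
    if (externalWriter != null) {   //externalWriter is a hypothetical field of the subclass
        externalWriter.close();
    }
    super.notifyError(e);
}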
-
cloneForExecution
Description copied from class: ExecutableOperator
Performs a deep copy of the operator for execution. The default implementation is implemented in terms of JSON serialization: we perform a JSON serialization followed by a JSON deserialization. As a best practice, operator implementations should not override this method. If they must override it, though, they must guarantee that cloneForExecution copies any instance variables that are modified by execute.
- Overrides:
cloneForExecution in class ExecutableOperator
- Returns:
- a deep copy of this operator
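For a hypothetical subclass that does override this method, the contract above amounts to resetting (or deep-copying) whatever execute() mutates. A sketch, where MyOperator and its rowsSeen counter are illustrative only:

@Override
protected ExecutableOperator cloneForExecution() {
    //start from the default JSON-based deep copy...
    MyOperator copy = (MyOperator) super.cloneForExecution();
    //...then make sure state mutated by execute() starts clean in the copy
    copy.rowsSeen = 0;
    return copy;
}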
-
execute
Description copied from class: ExecutableOperator
Executes the operator. Implementations should adhere to the following contracts:
- Following execution, all input ports must be at end-of-data.
- Following execution, all output ports must be at end-of-data.
- Specified by:
execute in class ExecutableOperator
- Parameters:
ctx - context in which to look up physical ports bound to logical ports
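A hedged sketch of how a hypothetical record-to-record operator might satisfy this contract. The physical-port lookups (getInput(ctx)/getOutput(ctx) on the logical record ports) and the stepNext() cursor call are assumptions about the runtime API, which the text above describes only in general terms:

@Override
protected void execute(ExecutionContext ctx) {
    //bind this operator's logical record ports to their physical counterparts
    RecordInput in = input.getInput(ctx);     //input: the operator's RecordPort (hypothetical field)
    RecordOutput out = output.getOutput(ctx); //output: the operator's RecordPort (hypothetical field)
    while (in.stepNext()) {
        //stage or transform field values here (elided), then push the row downstream
        out.push();
    }
    //contract: following execution, all ports must be at end-of-data
    out.pushEndOfData();
}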
-