Class ExternalRecordSource

All Implemented Interfaces:
LogicalOperator, RecordSourceOperator, SourceOperator<RecordPort>

public final class ExternalRecordSource extends ExecutableOperator implements RecordSourceOperator
Defines an external source of record data. External record sources are used in cases where an external, non-DataRush application needs to produce data for a DataRush graph. Note that because external ports are not managed by the standard DataRush deadlock detection mechanisms, care must be taken to avoid deadlock. Specifically, callers must ensure that there is one producing thread per source, and that thread cannot be the same thread that is running the LogicalGraph. The most common usage pattern is to start the LogicalGraph in the background using the method LogicalGraphInstance#start(). For example:
       //create the graph
       LogicalGraph graph= LogicalGraphFactory.newLogicalGraph();
       
       //the type of the source
        RecordTokenType type =
            TokenTypeConstant.record( TokenTypeConstant.DOUBLE("field1"),
                                      TokenTypeConstant.DOUBLE("field2"));
       
       //create the external source, adding to the graph
       ExternalRecordSource source= graph.add(new ExternalRecordSource(type));
       
       //add some other operator
       op= graph.add(...);
                       
       //connect source to another operator in the graph
        graph.connect(source.getOutput(), op.getInput());
       
       //compile the graph
       LogicalGraphInstance instance= graph.compile();
       
        //*always* call start() to run the graph in the background; calling run() here would deadlock!
       instance.start();
       
       //produce some data in this thread
       RecordOutput rout= source.getInput();
       while (...) {
           ...
           rout.push();
       }
       
       //push end of data
       rout.pushEndOfData();

       //after calling pushEndOfData, join on the graph
       //don't join before since it will deadlock!
       instance.join();
       
 

NOTE: this operator is non-parallel
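
A hedged sketch of what the producer loop elided above might look like. It assumes that RecordOutput exposes a per-field settable via getField(String) and that double fields are written through DoubleSettable.set(double); verify these accessors against the RecordOutput and DoubleSettable documentation.
       //sketch only: getField(String)/DoubleSettable are assumptions about the port API
       RecordOutput rout = source.getInput();
       DoubleSettable field1 = (DoubleSettable) rout.getField("field1");
       DoubleSettable field2 = (DoubleSettable) rout.getField("field2");
       for (int i = 0; i < 100; i++) {
           field1.set(i);          //populate the fields of the current row
           field2.set(i * 2.0);
           rout.push();            //push the row into the graph
       }
       rout.pushEndOfData();       //signal end-of-data when the producer is done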

  • Constructor Details

    • ExternalRecordSource

      public ExternalRecordSource()
      Default constructor; the output type must be set via setOutputType(RecordTokenType) prior to graph compilation, as shown in the example below.
    • ExternalRecordSource

      public ExternalRecordSource(RecordTokenType outputType)
      Creates a new record source of the given type.
      Parameters:
      outputType - the record type of the source
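
      For example, when the record type is not known at construction time, the default constructor can be combined with setOutputType(RecordTokenType) before the graph is compiled (a sketch reusing the type-construction calls from the class example above):

            //create the source without a type, then set the type before compiling
            ExternalRecordSource source = graph.add(new ExternalRecordSource());
            RecordTokenType type =
                TokenTypeConstant.record( TokenTypeConstant.DOUBLE("field1"),
                                          TokenTypeConstant.DOUBLE("field2"));
            source.setOutputType(type);
            //...connect, compile and run as in the class example above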
  • Method Details

    • getInput

      public RecordOutput getInput()
      Returns the external port that an external application can use to send data into this source.
      Returns:
      the external port.
    • getOutput

      public RecordPort getOutput()
      Description copied from interface: RecordSourceOperator
      Gets the record port providing the output data from the source.
      Specified by:
      getOutput in interface RecordSourceOperator
      Specified by:
      getOutput in interface SourceOperator<RecordPort>
      Returns:
      the output port for the source
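
      Note the distinction between the two ports: getOutput() returns the logical RecordPort used to wire this operator into the graph, while getInput() returns the external port that the producing thread writes to. Using the graph from the class example:

            //wire the source into the graph via its logical output port
            graph.connect(source.getOutput(), op.getInput());
            //the external producing thread writes to the external port instead
            RecordOutput rout = source.getInput();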
    • getOutputType

      public RecordTokenType getOutputType()
      Returns the output record type of this source.
      Returns:
      the output record type of this source
    • setOutputType

      public void setOutputType(RecordTokenType outputType)
      Sets the output record type of this source.
      Parameters:
      outputType - the output record type of this source
    • computeMetadata

      protected void computeMetadata(StreamingMetadataContext ctx)
      Description copied from class: StreamingOperator
      Implementations must adhere to the following contracts:

      General

      Regardless of input ports/output port types, all implementations must do the following:

      1. Validation. Validation of configuration should always be performed first.
      2. Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

      Input record ports

      Implementations with input record ports must declare the following:
      1. Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering, otherwise data may arrive in any order.
      2. Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution, otherwise data will arrive in an unspecified partial distribution.
      Note that if the upstream operator's output distribution/ordering is compatible with those required, we avoid a re-sort/re-distribution, which is generally a very large performance savings. In addition, some operators may choose to query the upstream output distribution/ordering by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

      Output record ports

      Implementations with output record ports must declare the following:
      1. Type: Implementations must declare their output type by calling RecordPort#setType.
      Implementations with output record ports may declare the following:
      1. Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering
      2. Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution
      Note that both of these properties are optional; if unspecified, performance may suffer since the framework may unnecessarily re-sort/re-distribute the data.

      Input model ports

      In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
      1. Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler.
      Note that MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.

      Output model ports

      SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
      1. pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.
      Specified by:
      computeMetadata in class StreamingOperator
      Parameters:
      ctx - the context
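
      For illustration only, a schematic computeMetadata implementation for a hypothetical operator with one input and one output record port might follow the contract above as shown below. The port factory methods, the setter parameter lists and the elided ParallelismStrategy value are indicative assumptions, not the authoritative API:

            //hypothetical operator: schematic only, signatures are indicative
            private final RecordPort input = newRecordInput("input");     //assumed factory method
            private final RecordPort output = newRecordOutput("output");  //assumed factory method

            @Override
            protected void computeMetadata(StreamingMetadataContext ctx) {
                //1. validate configuration first (omitted)
                //2. declare parallelizability (concrete ParallelismStrategy elided)
                ctx.parallelize(...);
                //3. declare the output type of the output record port
                output.setType(ctx, outputType);
                //4. declare any required input ordering/distribution, e.g.
                //   input.setRequiredDataOrdering(ctx, requiredOrdering);
            }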
    • notifyError

      protected void notifyError(Throwable e)
      Description copied from class: AbstractLogicalOperator
      Called to notify the operator that the graph terminated abnormally, either before the operator had a chance to run or while the operator is running. If this is a CompositeOperator, this method will be invoked if any of its components fails.
      Overrides:
      notifyError in class AbstractLogicalOperator
      Parameters:
      e - the error that occurred
    • cloneForExecution

      protected ExecutableOperator cloneForExecution()
      Description copied from class: ExecutableOperator
      Performs a deep copy of the operator for execution. The default implementation is implemented in terms of JSON serialization: we perform a JSON serialization followed by a JSON deserialization. As a best-practice, operator implementations should not override this method. If they must override, though, then they must guarantee that cloneForExecution copies any instance variables that are modified by execute.
      Overrides:
      cloneForExecution in class ExecutableOperator
      Returns:
      a deep copy of this operator
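
      For illustration, an operator that does override this method (against the best practice above) might delegate to the default deep copy and then ensure that state mutated by execute is copied rather than shared; MyOperator and rowBuffer are hypothetical names:

            //hypothetical override: keep the default JSON-based deep copy, then copy
            //any state that execute() modifies so it is not shared with the original
            @Override
            protected ExecutableOperator cloneForExecution() {
                MyOperator copy = (MyOperator) super.cloneForExecution();
                copy.rowBuffer = new ArrayList<>(rowBuffer);   //hypothetical field modified by execute()
                return copy;
            }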
    • execute

      protected void execute(ExecutionContext ctx)
      Description copied from class: ExecutableOperator
      Executes the operator. Implementations should adhere to the following contracts:
      1. Following execution, all input ports must be at end-of-data.
      2. Following execution, all output ports must be at end-of-data.
      Specified by:
      execute in class ExecutableOperator
      Parameters:
      ctx - context in which to lookup physical ports bound to logical ports
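
      As a sketch of this contract for a simple record source (not this class's actual implementation), the operator binds its logical output port to a physical port through the context, pushes its rows, and finishes with pushEndOfData() so the output is at end-of-data when execute returns. The getOutput(ctx) lookup and the field accessor shown are assumptions about the port API:

            //schematic source operator: port lookup and field accessor are assumptions
            @Override
            protected void execute(ExecutionContext ctx) {
                RecordOutput out = getOutput().getOutput(ctx);  //bind logical port to physical port
                DoubleSettable field1 = (DoubleSettable) out.getField("field1");
                for (int i = 0; i < 10; i++) {
                    field1.set(i);
                    out.push();
                }
                //all output ports must be at end-of-data when execute returns
                out.pushEndOfData();
            }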