Class DeriveFields

All Implemented Interfaces:
LogicalOperator, PipelineOperator<RecordPort>, RecordPipelineOperator

public final class DeriveFields extends AbstractExecutableRecordPipeline
Applies one or more functions to the input record data. One output field is generated per function. The result is an output record flow that contains the input data plus the function results. It is possible to overwrite existing fields with derived values. It is also possible to omit input fields in the result, effectively applying a complete transform to the record.

Applying multiple functions to an input record flow within a single dataflow process can be more efficient than applying each function in its own process. This is mainly due to reduced processor cache thrashing, fewer data copies, and less thread context switching.
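A minimal usage sketch follows. The graph classes (LogicalGraph, LogicalGraphFactory) and the reader/writer operators are assumptions used only for illustration and are not documented on this page; the derivation expression syntax shown ("output = expression") is likewise illustrative.

    // Sketch only: "reader" and "writer" stand for any record source and sink
    // operators already added to the graph.
    LogicalGraph graph = LogicalGraphFactory.newLogicalGraph("deriveExample");

    // Derive a "total" field from each input record; all input fields are
    // carried through unchanged alongside the new field.
    DeriveFields derive = graph.add(new DeriveFields("total = price * quantity"));

    graph.connect(reader.getOutput(), derive.getInput());   // input records
    graph.connect(derive.getOutput(), writer.getInput());   // input fields + "total"

    graph.run();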

  • Constructor Details

    • DeriveFields

      public DeriveFields()
      Applies no functions to the input records. This effectively copies records from the input to the output. Use setDerivedFields(FieldDerivation...) to set the functions to apply.
    • DeriveFields

      public DeriveFields(String derivationExpression)
      Applies the specified derivations to all input records. All input fields are present in the output, containing the same value unless explicitly replaced by a derivation. If multiple derivations apply to an output field, the last one defined is used.
      Parameters:
      derivationExpression - the expression containing field derivations to apply
    • DeriveFields

      public DeriveFields(List<FieldDerivation> derivations)
      Applies the specified derivations to all input records. All input fields are present in the output, containing the same value unless explicitly replaced by a derivation. If multiple derivations apply to an output field, the last one defined is used.
      Parameters:
      derivations - the field derivations to apply
    • DeriveFields

      public DeriveFields(FieldDerivation... derivations)
      Applies the specified derivations to all input records. All input fields are present in the output, containing the same value unless explicitly replaced by a derivation. If multiple derivations apply to an output field, the last one defined is used.
      Parameters:
      derivations - the field derivations to apply
    • DeriveFields

      public DeriveFields(String derivationExpression, boolean dropUnderived)
      Applies the specified derivations to all input records. If requested, input fields will not be automatically copied to the output. If multiple derivations apply to an output field, the last one defined is used.
      Parameters:
      derivationExpression - the expression containing field derivations to apply
      dropUnderived - true if input fields should be dropped; false otherwise
    • DeriveFields

      public DeriveFields(List<FieldDerivation> derivations, boolean dropUnderived)
      Applies the specified derivations to all input records. If requested, input fields will not be automatically copied to the output. If multiple derivations apply to an output field, the last one defined is used. (See the sketch following this list for an illustration of the dropUnderived flag.)
      Parameters:
      derivations - the field derivations to apply
      dropUnderived - true if input fields should be dropped; false otherwise
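The following sketch illustrates the dropUnderived flag and the "last derivation wins" rule described above. The expression syntax ("name = expression", comma-separated) is an illustrative assumption, not defined on this page.

    // Output records contain only the derived field "total"; no input fields
    // are copied through because dropUnderived is true.
    DeriveFields totalsOnly = new DeriveFields("total = price * quantity", true);

    // Both derivations target "discounted"; the last one defined wins, so the
    // output "discounted" reflects the 20% reduction. All input fields pass
    // through unchanged because dropUnderived is false.
    DeriveFields lastWins =
        new DeriveFields("discounted = price * 0.95, discounted = price * 0.80", false);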
  • Method Details

    • getInput

      public RecordPort getInput()
      Description copied from class: AbstractExecutableRecordPipeline
      Gets the record port providing the input data to the operation.
      Specified by:
      getInput in interface PipelineOperator<RecordPort>
      Overrides:
      getInput in class AbstractExecutableRecordPipeline
      Returns:
      the input port for the operation
    • getOutput

      public RecordPort getOutput()
      Description copied from class: AbstractExecutableRecordPipeline
      Gets the record port providing the output from the operation.
      Specified by:
      getOutput in interface PipelineOperator<RecordPort>
      Overrides:
      getOutput in class AbstractExecutableRecordPipeline
      Returns:
      the output port for the operation
    • setDerivedFields

      public void setDerivedFields(String derivationExpression)
      Set the list of field derivations to apply, using a field derivation expression. If multiple derivations apply to an output field, the last one defined is used.
      Parameters:
      derivationExpression - the expression containing field derivations to apply
    • setDerivedFields

      public void setDerivedFields(List<FieldDerivation> derivations)
      Set the list of field derivations to apply. Derivations can be easily constructed using the convenience method FieldDerivation.derive(String, ScalarValuedFunction); see the sketch at the end of this section. If multiple derivations apply to an output field, the last one defined is used.
      Parameters:
      derivations - the field derivations to apply
    • setDerivedFields

      public void setDerivedFields(FieldDerivation... derivations)
      Set the list of field derivations to apply. Derivations can be easily constructed using the convenience method FieldDerivation.derive(String, ScalarValuedFunction). If multiple derivations apply to an output field, the last one defined is used.
      Parameters:
      derivations - the field derivations to apply
    • getDerivedFields

      public List<FieldDerivation> getDerivedFields()
      Get the list of derivations that will be applied.
      Returns:
      the field derivations to apply
    • getDropUnderivedFields

      public boolean getDropUnderivedFields()
      Indicates whether input fields are dropped from the output. That is, whether the output consists solely of derived fields.
      Returns:
      true if non-derived fields are dropped; false otherwise
    • setDropUnderivedFields

      public void setDropUnderivedFields(boolean dropUnderived)
      Set whether input fields are dropped from the output. If set to true, only derived fields are included in the output.

      This value is false by default.

      Parameters:
      dropUnderived - indicates whether to drop input fields from the output
    • computeMetadata

      protected void computeMetadata(StreamingMetadataContext ctx)
      Description copied from class: StreamingOperator
      Implementations must adhere to the following contracts:

      General

      Regardless of input ports/output port types, all implementations must do the following:

      1. Validation. Validation of configuration should always be performed first.
      2. Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

      Input record ports

      Implementations with input record ports must declare the following:
      1. Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
      2. Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
      Note that if the upstream operator's output distribution/ordering is compatible with those required, we avoid a re-sort/re-distribution, which is generally a very large savings from a performance standpoint. In addition, some operators may choose to query the upstream output distribution/ordering by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

      Output record ports

      Implementations with output record ports must declare the following:
      1. Type: Implementations must declare their output type by calling RecordPort#setType.
      Implementations with output record ports may declare the following:
      1. Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering
      2. Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution
      Note that both of these properties are optional; if unspecified, performance may suffer since the framework may unnecessarily re-sort/re-distribute the data.

      Input model ports

      In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
      1. Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler.
      Note that MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.

      Output model ports

      SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
      1. pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.
      Specified by:
      computeMetadata in class StreamingOperator
      Parameters:
      ctx - the context
    • execute

      protected void execute(ExecutionContext ctx)
      Description copied from class: ExecutableOperator
      Executes the operator. Implementations should adhere to the following contracts:
      1. Following execution, all input ports must be at end-of-data.
      2. Following execution, all output ports must be at end-of-data.
      Specified by:
      execute in class ExecutableOperator
      Parameters:
      ctx - context in which to lookup physical ports bound to logical ports
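Where derivations are built programmatically rather than parsed from an expression string, FieldDerivation.derive(String, ScalarValuedFunction) pairs an output field name with the function that computes it, and setDropUnderivedFields(boolean) controls whether input fields are carried through. A sketch follows; the ScalarValuedFunction arguments are assumed to have been constructed elsewhere, since function construction is outside the scope of this page.

    void configureDerivations(DeriveFields derive,
                              ScalarValuedFunction priceTimesQuantity,
                              ScalarValuedFunction shippingCost) {
        // Each FieldDerivation names one output field and the function that fills it.
        derive.setDerivedFields(
            FieldDerivation.derive("total", priceTimesQuantity),
            FieldDerivation.derive("shipping", shippingCost));

        // Emit only the derived fields; input fields are not copied to the output.
        derive.setDropUnderivedFields(true);
    }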