Class RemapFields

  • All Implemented Interfaces:
    LogicalOperator, PipelineOperator<RecordPort>, RecordPipelineOperator

    public class RemapFields
    extends ExecutableOperator
    implements RecordPipelineOperator
    Rearranges and renames fields in a record. Two options are provided for handling source fields not specifically referenced in the field renaming:
    • Unmapped fields are removed from the result. This achieves the same effect as combining the renaming operation with SelectFields.
    • Unmapped fields are kept in the result, with the names of the fields remaining the same as in the source. For example, if the source schema is [A,B,C] and the mapping is {B -> Z}, the resulting schema will be [A,Z,C].
    See Also:
    SelectFields, RetainFields, RemoveFields
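    Usage sketch (not part of the original Javadoc): only the RemapFields constructors and setFieldRemapping(FieldRemapping) are taken from this page; the import paths and the FieldRemapping construction are assumptions, so consult the FieldRemapping documentation for the real builder/factory API.

        // Import paths assumed from the operator library's package layout.
        import com.pervasive.datarush.operators.record.RemapFields;
        import com.pervasive.datarush.operators.record.FieldRemapping;

        public class RemapFieldsSketch {
            public static void main(String[] args) {
                // Hypothetical construction of a mapping that renames B -> Z;
                // the real FieldRemapping API may differ.
                FieldRemapping renameBtoZ = buildRemapping();

                // Configure via the constructor...
                RemapFields remap = new RemapFields(renameBtoZ);

                // ...or via the no-argument constructor plus the setter.
                RemapFields remap2 = new RemapFields();
                remap2.setFieldRemapping(renameBtoZ);

                // With source schema [A, B, C] and mapping {B -> Z}, the output
                // schema is [A, Z, C] if unmapped fields are kept, or [Z] if
                // unmapped fields are removed (the two options described above).
            }

            // Placeholder standing in for whatever construction FieldRemapping
            // provides; included only so this sketch is self-contained.
            private static FieldRemapping buildRemapping() {
                throw new UnsupportedOperationException("illustrative placeholder");
            }
        }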
    • Constructor Detail

      • RemapFields

        public RemapFields()
        Maps all fields in the input to the same fields in the output. This effectively copies records from the input to the output. Use setFieldRemapping(FieldRemapping) to define a mapping to apply.
      • RemapFields

        public RemapFields​(FieldRemapping remapping)
        Maps fields in the input to fields in the output using the specified mapping.
        Parameters:
        remapping - a mapping defining how input fields are mapped to output fields
    • Method Detail

      • getFieldRemapping

        public FieldRemapping getFieldRemapping()
        Gets how input fields are mapped to output fields.
        Returns:
        the mapping of source fields to target fields
      • setFieldRemapping

        public void setFieldRemapping​(FieldRemapping remapping)
        Defines how input fields are to be mapped to output fields.
        Parameters:
        remapping - a mapping defining how input fields are mapped to output fields
      • computeMetadata

        protected void computeMetadata​(StreamingMetadataContext ctx)
        Description copied from class: StreamingOperator
        Implementations must adhere to the following contracts:

        General

        Regardless of input/output port types, all implementations must do the following:

        1. Validation. Validation of configuration should always be performed first.
        2. Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

        Input record ports

        Implementations with input record ports must declare the following:
        1. Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
        2. Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
        Note that if the upstream operator's output distribution/ordering is compatible with those required, we avoid a re-sort/re-distribution, which is generally a very large savings from a performance standpoint. In addition, some operators may choose to query the upstream output distribution/ordering by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

        Output record ports

        Implementations with output record ports must declare the following:
        1. Type: Implementations must declare their output type by calling RecordPort#setType.
        Implementations with output record ports may declare the following:
        1. Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering
        2. Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution
        Note that both of these properties are optional; if unspecified, performance may suffer since the framework may unnecessarily re-sort/re-distribute the data.

        Input model ports

        In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
        1. Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler.
        Note that MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.

        Output model ports

        SimpleModelPorts have no associated metadata, and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
        1. pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.
        Specified by:
        computeMetadata in class StreamingOperator
        Parameters:
        ctx - the context
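        To make the contract above concrete, here is a hedged sketch of a computeMetadata implementation for a simple record-to-record operator such as this one. The ParallelismStrategy constant, the getInput()/getOutput() port accessors, and the RecordPort type accessor signatures are assumptions used for illustration; only the contract steps themselves (validate, declare parallelizability, declare ordering/distribution requirements, declare the output type) come from the description above.

            @Override
            protected void computeMetadata(StreamingMetadataContext ctx) {
                // 1. Validation: always check configuration first.
                if (getFieldRemapping() == null) {
                    throw new IllegalStateException("fieldRemapping must be configured");
                }

                // 2. Declare parallelizability (the strategy constant is assumed).
                ctx.parallelize(ParallelismStrategy.ACCEPT_ANY);

                // Input record port: a pure field remapping imposes no ordering or
                // distribution requirements, so nothing is declared and data may
                // arrive in any order and any partial distribution.

                // Output record port: the output type must always be declared.
                // getType(ctx) and setType(ctx, type) are assumed signatures.
                RecordTokenType inputType = getInput().getType(ctx);
                getOutput().setType(ctx, renameFields(inputType));
            }

            // Illustrative placeholder: a real implementation would apply the
            // configured FieldRemapping to the input schema; returning the input
            // type unchanged just keeps this sketch self-contained.
            private RecordTokenType renameFields(RecordTokenType inputType) {
                return inputType;
            }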
      • execute

        protected void execute​(ExecutionContext ctx)
        Description copied from class: ExecutableOperator
        Executes the operator. Implementations should adhere to the following contracts:
        1. Following execution, all input ports must be at end-of-data.
        2. Following execution, all output ports must be at end-of-data.
        Specified by:
        execute in class ExecutableOperator
        Parameters:
        ctx - context in which to lookup physical ports bound to logical ports
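        The following is a hedged sketch of an execute implementation that honors both contracts: it drains the input completely and then signals end-of-data on the output. The RecordInput/RecordOutput types and the stepNext/push/pushEndOfData calls are assumed names for illustration.

            @Override
            protected void execute(ExecutionContext ctx) {
                // Look up the physical ports bound to this operator's logical ports;
                // getInput()/getOutput() and the binding accessors are assumed names.
                RecordInput input = getInput().getInput(ctx);
                RecordOutput output = getOutput().getOutput(ctx);

                // Drain the input completely; when the loop exits, the input port is
                // at end-of-data, satisfying contract #1.
                while (input.stepNext()) {
                    // ...copy field values from the current input row to the output...
                    output.push();
                }

                // Signal end-of-data on the output port, satisfying contract #2.
                output.pushEndOfData();
            }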
      • cloneForExecution

        protected ExecutableOperator cloneForExecution()
        Description copied from class: ExecutableOperator
        Performs a deep copy of the operator for execution. The default implementation is implemented in terms of JSON serialization: we perform a JSON serialization followed by a JSON deserialization. As a best-practice, operator implementations should not override this method. If they must override, though, then they must guarantee that cloneForExecution copies any instance variables that are modified by execute.
        Overrides:
        cloneForExecution in class ExecutableOperator
        Returns:
        a deep copy of this operator
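        As noted above, operators normally should not override cloneForExecution. The sketch below shows the shape an override would take for a hypothetical operator that keeps mutable state touched during execute; the class and field names are invented for illustration.

            // Hypothetical operator with mutable execution state.
            public class CountingCopy extends ExecutableOperator {
                private long rowsCopied; // mutated by execute()

                @Override
                protected ExecutableOperator cloneForExecution() {
                    // Start from the default JSON-serialization based deep copy...
                    CountingCopy copy = (CountingCopy) super.cloneForExecution();
                    // ...then ensure state modified by execute() is copied or reset.
                    copy.rowsCopied = 0;
                    return copy;
                }

                @Override
                protected void computeMetadata(StreamingMetadataContext ctx) {
                    // Omitted: metadata contract as described above.
                }

                @Override
                protected void execute(ExecutionContext ctx) {
                    // Omitted: would increment rowsCopied while copying records.
                }
            }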