Class ExpandTextTokens

All Implemented Interfaces:
LogicalOperator, PipelineOperator<RecordPort>, RecordPipelineOperator

public class ExpandTextTokens extends ExecutableOperator implements RecordPipelineOperator
Expands a TokenizedText field. The operator creates a new string field and populates it with the elements of the original tokenized text, one element per copied row; the output therefore contains more rows than the input. The ExpandTextTokens operator has three properties: the input field, the output field, and the type of text element to expand. The input field must be a tokenized text object. The output is generated from the input by appending each element onto the original record in a string field.
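A minimal configuration sketch using the setters documented below; the field names and the TextElementType.WORD constant are illustrative assumptions rather than values taken from this page:

    ExpandTextTokens expand = new ExpandTextTokens();
    expand.setInputField("tokenizedText");      // must be a TokenizedText field in the input
    expand.setOutputField("token");             // new string field holding one element per output row
    expand.setTokenType(TextElementType.WORD);  // assumed enum constant: the type of element to expand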
  • Constructor Details

    • ExpandTextTokens

      public ExpandTextTokens()
      Default constructor. Use setInputField(String) and setOutputField(String) to set the name of the text field to expand and the name of its output field.
    • ExpandTextTokens

      public ExpandTextTokens(String textField)
      Constructor specifying the tokenized text field to expand.
      Parameters:
      textField - name of the field to expand
    • ExpandTextTokens

      public ExpandTextTokens(String textField, TextElementType tokenType)
      Constructor specifying the tokenized text field to expand and the type of token to expand.
      Parameters:
      textField - name of the field to expand
      tokenType - type of token to expand
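      As a sketch, the constructors above are convenience forms of the property setters; assuming the same example field name and TextElementType.WORD constant as above, the following configurations are intended to be equivalent:

          // One-step construction: input field and token type together
          ExpandTextTokens a = new ExpandTextTokens("tokenizedText", TextElementType.WORD);

          // Setter-based equivalent
          ExpandTextTokens b = new ExpandTextTokens("tokenizedText");
          b.setTokenType(TextElementType.WORD);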
  • Method Details

    • setInputField

      public void setInputField(String textField)
      Set the tokenized text field to expand.

      If this field does not exist in the input, or is not of type TokenizedText, an exception will be thrown at composition time.

      Parameters:
      textField - name of the field to expand
    • getInputField

      public String getInputField()
      Get the tokenized text field to expand.
      Returns:
      The name of the field to expand
    • setOutputField

      public void setOutputField(String tokenField)
      Set the string output field.
      Parameters:
      tokenField - The name of the string output field
    • getOutputField

      public String getOutputField()
      Get the string output field.
      Returns:
      The name of the string output field
    • setTokenType

      public void setTokenType(TextElementType tokenType)
      Set the type of text token to expand.
      Parameters:
      tokenType - type of token to expand
    • getTokenType

      public TextElementType getTokenType()
      Get the type of text token to expand.
      Returns:
      type of token to expand
    • getInput

      public RecordPort getInput()
      Get the input port of this operator.
      Specified by:
      getInput in interface PipelineOperator<RecordPort>
      Returns:
      input port
    • getOutput

      public RecordPort getOutput()
      Get the output port of this operator.
      Specified by:
      getOutput in interface PipelineOperator<RecordPort>
      Returns:
      output port
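      A sketch of how these ports are typically wired into a dataflow graph; the LogicalGraph, LogicalGraphFactory, and the upstream/downstream operators are assumptions not documented on this page:

          LogicalGraph graph = LogicalGraphFactory.newLogicalGraph("expandTokens");   // assumed graph API
          ExpandTextTokens expand = graph.add(new ExpandTextTokens("tokenizedText"));
          // upstreamTokenizer / downstreamWriter are hypothetical operators already added to the graph
          graph.connect(upstreamTokenizer.getOutput(), expand.getInput());
          graph.connect(expand.getOutput(), downstreamWriter.getInput());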
    • computeMetadata

      protected void computeMetadata(StreamingMetadataContext ctx)
      Description copied from class: StreamingOperator
      Implementations must adhere to the following contracts

      General

      Regardless of input ports/output port types, all implementations must do the following:

      1. Validation. Validation of configuration should always be performed first.
      2. Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

      Input record ports

      Implementations with input record ports must declare the following:
      1. Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
      2. Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
      Note that if the upstream operator's output distribution/ordering is compatible with those required, we avoid a re-sort/re-distribution, which is generally a very large savings from a performance standpoint. In addition, some operators may choose to query the upstream output distribution/ordering by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

      Output record ports

      Implementations with output record ports must declare the following:
      1. Type: Implementations must declare their output type by calling RecordPort#setType.
      Implementations with output record ports may declare the following:
      1. Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering
      2. Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution
      Note that both of these properties are optional; if unspecified, performance may suffer since the framework may unnecessarily re-sort/re-distribute the data.

      Input model ports

      In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
      1. Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler.
      Note that MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.

      Output model ports

      SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
      1. pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.
      Specified by:
      computeMetadata in class StreamingOperator
      Parameters:
      ctx - the context
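      A rough sketch of this contract for a record-to-record operator such as this one; the ParallelismStrategy.ACCEPT_ANY constant name is an assumption, and the concrete output-type construction is elided because it is not documented on this page:

          @Override
          protected void computeMetadata(StreamingMetadataContext ctx) {
              // 1. Validation: check configuration before anything else.
              if (getInputField() == null || getOutputField() == null) {
                  throw new IllegalStateException("input and output field names must be set");
              }
              // 2. Declare parallelizability (constant name assumed).
              ctx.parallelize(ParallelismStrategy.ACCEPT_ANY);
              // 3. Declare the output record type: the input schema plus the new string
              //    field named by getOutputField(). The contract only requires that
              //    RecordPort#setType be called with the resulting type, e.g.:
              // getOutput().setType(ctx, outputType);
          }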
    • execute

      protected void execute(ExecutionContext ctx)
      Description copied from class: ExecutableOperator
      Executes the operator. Implementations should adhere to the following contracts:
      1. Following execution, all input ports must be at end-of-data.
      2. Following execution, all output ports must be at end-of-data.
      Specified by:
      execute in class ExecutableOperator
      Parameters:
      ctx - context in which to lookup physical ports bound to logical ports
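      A skeleton of an execute implementation honoring this contract; the RecordInput/RecordOutput accessors and their stepNext/push/pushEndOfData methods are assumptions based on typical record operators and are not documented on this page:

          @Override
          protected void execute(ExecutionContext ctx) {
              // Look up the physical ports bound to the logical ports (accessor names assumed).
              RecordInput input = getInput().getInput(ctx);
              RecordOutput output = getOutput().getOutput(ctx);
              // Drain the input so it reaches end-of-data, emitting expanded rows as we go.
              while (input.stepNext()) {
                  // ... copy the source fields and push one output row per text element ...
                  output.push();
              }
              // Leave the output port at end-of-data, as required by the contract.
              output.pushEndOfData();
          }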