-
- All Implemented Interfaces:
LogicalOperator, PipelineOperator<RecordPort>, RecordPipelineOperator

public class SplitField extends AbstractExecutableRecordPipeline

Splits a string field into multiple fields, based on a specified pattern. The SplitField operator has three properties:
- Split Field - The name of the field to split. The field must be of type String.
- Split Pattern - A regular expression describing the delimiter used for splitting.
- Result Mapping - A map of integers to strings, detailed below.
The contents of the split field will be split using the defined split pattern, resulting in an array of substrings. The key of the result mapping corresponds to an index within this array, and the associated value defines the output field in which to place the substring.
For example, if you had a record with a field named time containing times in the format 18:30:00, you could use the following SplitField operator to split the time into hour, minute, and second fields.

HashMap<Integer,String> map = new HashMap<Integer,String>();
map.put(0, "hour");
map.put(1, "minute");
map.put(2, "second");
SplitField splitter = new SplitField("time", ":", map);
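The splitting semantics above can be previewed without the DataRush runtime: a plain-JDK sketch (the `SplitDemo` class and `splitValue` helper are ours, not part of the DataRush API) that splits a value with the pattern and fills each mapped output field.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class SplitDemo {
    // Mimics SplitField's documented behavior for a single field value:
    // split with the pattern, then place each mapped index's substring
    // into the corresponding output field.
    public static Map<String, String> splitValue(String value, String pattern,
                                                 Map<Integer, String> resultMapping) {
        String[] parts = value.split(pattern);
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<Integer, String> e : resultMapping.entrySet()) {
            int idx = e.getKey();
            // A mapped index with no corresponding substring yields "".
            out.put(e.getValue(), (idx >= 0 && idx < parts.length) ? parts[idx] : "");
        }
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(0, "hour");
        map.put(1, "minute");
        map.put(2, "second");
        System.out.println(splitValue("18:30:00", ":", map));
    }
}
```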
-
-
Field Summary
-
Fields inherited from class com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
input, output
-
-
Constructor Summary
SplitField()
Construct the operator with no properties set.
SplitField(String splitField, String splitPattern, Map<Integer,String> resultMapping)
Construct the operator while setting each property.
-
Method Summary
void computeMetadata(StreamingMetadataContext ctx)
Implementations must adhere to the contracts described below.
protected void execute(ExecutionContext ctx)
Executes the operator.
RecordPort getInput()
Gets the record port providing the input data to the operation.
RecordPort getOutput()
Gets the record port providing the output from the operation.
Map<Integer,String> getResultMapping()
Get the mapping of split indices to output field names.
String getSplitField()
Get the string field to be split.
String getSplitPattern()
Get the splitting pattern.
void setResultMapping(Map<Integer,String> resultMapping)
Set the mapping of split indices to output field names.
void setSplitField(String splitField)
Set the string field to be split.
void setSplitPattern(String splitPattern)
Set the splitting pattern.
-
Methods inherited from class com.pervasive.datarush.operators.ExecutableOperator
cloneForExecution, getNumInputCopies, getPortSettings, handleInactiveOutput
-
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
disableParallelism, getInputPorts, getOutputPorts
-
-
-
-
Constructor Detail
-
SplitField
public SplitField()
Construct the operator with no properties set. The split pattern defaults to whitespace. The other properties (split field and result mapping) must be set manually.
-
SplitField
public SplitField(String splitField, String splitPattern, Map<Integer,String> resultMapping)
Construct the operator while setting each property.
- Parameters:
splitField - The name of the field to be split.
splitPattern - The splitting pattern.
resultMapping - The mapping of split indices to output field names.
- See Also:
setSplitField(String), setSplitPattern(String), setResultMapping(Map)
-
-
Method Detail
-
getInput
public RecordPort getInput()
Description copied from class: AbstractExecutableRecordPipeline
Gets the record port providing the input data to the operation.
- Specified by:
getInput in interface PipelineOperator<RecordPort>
- Overrides:
getInput in class AbstractExecutableRecordPipeline
- Returns:
- the input port for the operation
-
getOutput
public RecordPort getOutput()
Description copied from class: AbstractExecutableRecordPipeline
Gets the record port providing the output from the operation.
- Specified by:
getOutput in interface PipelineOperator<RecordPort>
- Overrides:
getOutput in class AbstractExecutableRecordPipeline
- Returns:
- the output port for the operation
-
setSplitField
public void setSplitField(String splitField)
Set the string field to be split. If this field does not exist in the input, or is not of type String, an exception will be thrown at composition time.
- Parameters:
splitField
- The name of the field to be split.
-
getSplitField
public String getSplitField()
Get the string field to be split.
- Returns:
- The name of the field to be split.
-
setSplitPattern
public void setSplitPattern(String splitPattern)
Set the splitting pattern. The pattern should be expressed as a regular expression. The default value matches any whitespace.
- Parameters:
splitPattern - The splitting pattern.
- Throws:
com.pervasive.datarush.graphs.physical.InvalidPropertyValueException - If the given pattern is not a valid regular expression.
- See Also:
String.split(String)
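Because the pattern is a regular expression (with the same semantics as the String.split method cited above), delimiters need not be a single literal character. A small illustrative sketch using plain String.split:

```java
public class SplitPatternDemo {
    public static void main(String[] args) {
        // A fixed single-character delimiter:
        String[] a = "18:30:00".split(":");
        // Any run of whitespace, similar to the operator's default pattern:
        String[] b = "18  30\t00".split("\\s+");
        // A character class: split on either comma or semicolon:
        String[] c = "a,b;c".split("[,;]");
        System.out.println(a.length + " " + b.length + " " + c.length);
    }
}
```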
-
getSplitPattern
public String getSplitPattern()
Get the splitting pattern.- Returns:
- The splitting pattern.
-
setResultMapping
public void setResultMapping(Map<Integer,String> resultMapping)
Set the mapping of split indices to output field names.The key of each entry represents an index in the array resulting from splitting the input string, and the value represents the name of the output field in which to store that substring.
It is not necessary for every array index to be mapped, or for every mapped index to exist in each split. If a value does not exist at a mapped index for a particular split, an empty string will be placed in the specified output field.
If an output field already exists in the input, or if a single output field is mapped to multiple indices, an exception will be thrown at composition time.
- Parameters:
resultMapping
- The mapping of indices to field names.
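The two lenient behaviors described above (unmapped indices are simply ignored, and a mapped index beyond the end of the split array yields an empty string) can be sketched in plain Java; the `partOrEmpty` helper name is ours, not part of the DataRush API:

```java
public class ResultMappingDemo {
    // Returns the substring a result mapping entry at the given index
    // would receive: the split part if present, otherwise "".
    public static String partOrEmpty(String value, String pattern, int index) {
        String[] parts = value.split(pattern);
        return (index >= 0 && index < parts.length) ? parts[index] : "";
    }

    public static void main(String[] args) {
        // Index 2 exists in "18:30:00".
        System.out.println(partOrEmpty("18:30:00", ":", 2));
        // "18:30" splits into only two parts, so index 2 yields "".
        System.out.println(partOrEmpty("18:30", ":", 2).isEmpty());
    }
}
```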
-
getResultMapping
public Map<Integer,String> getResultMapping()
Get the mapping of split indices to output field names.
- Returns:
- The mapping of indices to field names.
- See Also:
setResultMapping(Map)
-
computeMetadata
public void computeMetadata(StreamingMetadataContext ctx)
Description copied from class: StreamingOperator
Implementations must adhere to the following contracts:

General
Regardless of input port/output port types, all implementations must do the following:
- Validation. Validation of configuration should always be performed first.
- Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

Input record ports
Implementations with input record ports must declare the following:
- Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
- Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
Implementations may access the source data's distribution and ordering by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

Output record ports
Implementations with output record ports must declare the following:
- Type: Implementations must declare their output type by calling RecordPort#setType.
- Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering.
- Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution.

Input model ports
In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
- Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler. MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.

Output model ports
SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
- pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.
- Specified by:
computeMetadata
in classStreamingOperator
- Parameters:
ctx
- the context
-
execute
protected void execute(ExecutionContext ctx)
Description copied from class: ExecutableOperator
Executes the operator. Implementations should adhere to the following contracts:
- Following execution, all input ports must be at end-of-data.
- Following execution, all output ports must be at end-of-data.
- Specified by:
execute
in classExecutableOperator
- Parameters:
ctx
- context in which to lookup physical ports bound to logical ports
-
-