- java.lang.Object
  - com.pervasive.datarush.operators.AbstractLogicalOperator
    - com.pervasive.datarush.operators.StreamingOperator
      - com.pervasive.datarush.operators.ExecutableOperator
        - com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
          - com.pervasive.datarush.operators.record.ColumnsToRows
All Implemented Interfaces:
LogicalOperator, PipelineOperator<RecordPort>, RecordPipelineOperator
public class ColumnsToRows extends AbstractExecutableRecordPipeline
Normalize records by transposing values from row columns into multiple rows. Users define one or more pivot value families, each describing an output field and the source field(s) whose values are mapped into that field. The data type of each target field is determined by finding the widest common scalar type of its pivot elements; an exception occurs if no common type exists. Every value family, and the key family if one is defined, must contain the same number of elements.

Users may optionally define up to one pivot key family, which provides a context label for the rows produced from the value families. The key family is a list of Strings that correspond positionally to the fields listed in each value family; the lists in the key family and in all value families must be the same size.

Users may optionally define groupKeyFields, which form the fixed or repeating portion of the output rows. If this property is unset, the group key fields default to the source fields that are not specified as pivot elements.
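As a usage sketch (the input schema and field names below are illustrative, not part of this API), the following configures the operator to transpose four quarterly sales columns into a (quarter, sales) pair, producing one output row per quarter; graph construction and the source/sink operators are omitted:

import com.pervasive.datarush.operators.record.ColumnsToRows;

public class ColumnsToRowsExample {
    public static void main(String[] args) {
        // Hypothetical input schema: accountId, q1Sales, q2Sales, q3Sales, q4Sales
        // Desired output schema:     accountId, quarter, sales  (four output rows per input row)
        ColumnsToRows unpivot = new ColumnsToRows();

        // One pivot value family: the four quarterly fields are transposed into "sales".
        unpivot.definePivotValueFamily("sales", "q1Sales", "q2Sales", "q3Sales", "q4Sales");

        // One optional pivot key family: labels correspond positionally to the fields above.
        unpivot.definePivotKeyFamily("quarter", "Q1", "Q2", "Q3", "Q4");

        // Optional: fix the group key explicitly; if unset, the remaining source
        // fields (here, accountId) are used automatically.
        unpivot.setGroupKeyFields("accountId");

        // The configured operator would then be added to a graph and wired between
        // a source and a sink before the graph is run (omitted here).
    }
}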
-
-
Field Summary
-
Fields inherited from class com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
input, output
-
-
Constructor Summary
Constructors:
ColumnsToRows()
-
Method Summary
protected void computeMetadata(StreamingMetadataContext ctx)
    Implementations must adhere to the following contracts.
void definePivotKeyFamily(String keyFieldName, String... labels)
    Define up to one pivotKeyFamily consisting of the target field and the constant String values.
void definePivotValueFamily(String valueFieldName, String... sourceUnpivotFields)
    Define a pivotValueFamily consisting of the target field and the source fields to be transposed.
protected void execute(ExecutionContext ctx)
    Executes the operator.
List<String> getGroupKeyFields()
Map<String,List<String>> getKeyMap()
Map<String,List<String>> getValueMap()
List<String> getValueOrder()
void setGroupKeyFields(String... groupKeyFields)
    Optional parameter to specify the fixed or repeating key fields.
void setKeyMap(Map<String,List<String>> keyMap)
    Sets the name of the output field and the constant values (labels) that will appear with their corresponding valueMap entry based on position.
void setValueMap(Map<String,List<String>> valueMap)
    Sets the name of the output field and the list of input fields which will be transposed in the output.
void setValueOrder(List<String> valueOrder)
    Indicates the order of the fields defined in the valueMap to appear in the output.
-
Methods inherited from class com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
getInput, getOutput
-
Methods inherited from class com.pervasive.datarush.operators.ExecutableOperator
cloneForExecution, getNumInputCopies, getPortSettings, handleInactiveOutput
-
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
disableParallelism, getInputPorts, getOutputPorts
-
Method Detail
-
definePivotValueFamily
public void definePivotValueFamily(String valueFieldName, String... sourceUnpivotFields)
Define a pivotValueFamily consisting of the target field and the source fields to be transposed. The target field's output type is determined by the widest common type of the source fields. A new pivotValueFamily is created per call.
Parameters:
valueFieldName - the name of the field to appear in the output
sourceUnpivotFields - the source fields to be pivoted
-
definePivotKeyFamily
public void definePivotKeyFamily(String keyFieldName, String... labels)
Define up to one pivotKeyFamily consisting of the target field and the constant String values. Calling this method repeatedly overwrites the previous call's settings.
Parameters:
keyFieldName - the name of the field to appear in the output
labels - the String values that will be added to each defined pivot value family
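For illustration (field names are hypothetical), a single key family can label the rows produced by several value families; every list must have the same number of elements so the labels line up positionally:

ColumnsToRows unpivot = new ColumnsToRows();
unpivot.definePivotValueFamily("sales", "q1Sales", "q2Sales");  // first value family
unpivot.definePivotValueFamily("units", "q1Units", "q2Units");  // second value family
unpivot.definePivotKeyFamily("quarter", "Q1", "Q2");            // "Q1" labels q1Sales/q1Units, "Q2" labels q2Sales/q2Units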
-
execute
protected void execute(ExecutionContext ctx)
Description copied from class: ExecutableOperator
Executes the operator. Implementations should adhere to the following contracts:
- Following execution, all input ports must be at end-of-data.
- Following execution, all output ports must be at end-of-data.
Specified by:
execute in class ExecutableOperator
Parameters:
ctx - context in which to lookup physical ports bound to logical ports
-
computeMetadata
protected void computeMetadata(StreamingMetadataContext ctx)
Description copied from class: StreamingOperator
Implementations must adhere to the following contracts:

General
Regardless of input/output port types, all implementations must do the following:
- Validation. Validation of configuration should always be performed first.
- Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

Input record ports
Implementations with input record ports must declare the following:
- Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
- Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
Implementations may also query the source data distribution and ordering by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

Output record ports
Implementations with output record ports must declare the following:
- Type: Implementations must declare their output type by calling RecordPort#setType.
- Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering.
- Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution.

Input model ports
In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
- Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler. MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.

Output model ports
SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
- pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.

Specified by:
computeMetadata in class StreamingOperator
Parameters:
ctx - the context
-
setGroupKeyFields
public void setGroupKeyFields(String... groupKeyFields)
Optional parameter to specify the fixed or repeating key fields. If left unset, the group key fields are automatically made up of the source fields not specified in any pivot family.
Parameters:
groupKeyFields - the fixed or repeating key fields
-
setValueOrder
public void setValueOrder(List<String> valueOrder)
Indicates the order in which the fields defined in the valueMap appear in the output. If this parameter is unset, no order is guaranteed.
Parameters:
valueOrder - list of the fields in the valueMap in the order in which they should appear in the output
-
setKeyMap
public void setKeyMap(Map<String,List<String>> keyMap)
Sets the name of the output field and the constant values (labels) that will appear with their corresponding valueMap entry based on position. Only one keyMap entry should be defined; if more than one is set, an exception will occur.
Parameters:
keyMap - output field name and list of label values to appear in this column in the output
-
setValueMap
public void setValueMap(Map<String,List<String>> valueMap)
Sets the name of the output field and the list of input fields which will be transposed in the output. The number of elements in the list portion must be consistent across all valueMap entries, and must also match the number of elements in the list portion of the keyMap if a keyMap entry is defined.
Parameters:
valueMap - output field name and list of input fields to be transposed to this column in the output
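Equivalently (again with illustrative field names, and with the java.util imports omitted), the same families can be configured through the map-based setters; a LinkedHashMap is used here only to keep insertion order readable, while setValueOrder fixes the output order explicitly:

// Hypothetical field names; two value families plus one key family.
Map<String, List<String>> valueMap = new LinkedHashMap<>();
valueMap.put("sales", Arrays.asList("q1Sales", "q2Sales"));
valueMap.put("units", Arrays.asList("q1Units", "q2Units"));

Map<String, List<String>> keyMap = new LinkedHashMap<>();
keyMap.put("quarter", Arrays.asList("Q1", "Q2"));   // only one keyMap entry is allowed

ColumnsToRows unpivot = new ColumnsToRows();
unpivot.setValueMap(valueMap);
unpivot.setKeyMap(keyMap);
unpivot.setValueOrder(Arrays.asList("sales", "units"));   // output order of the value fields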
-
-