- java.lang.Object
  - com.pervasive.datarush.operators.AbstractLogicalOperator
    - com.pervasive.datarush.operators.StreamingOperator
      - com.pervasive.datarush.operators.ExecutableOperator
        - com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
          - com.pervasive.datarush.operators.io.textfile.ParseTextFields
- All Implemented Interfaces:
LogicalOperator, PipelineOperator<RecordPort>, RecordPipelineOperator
public class ParseTextFields extends AbstractExecutableRecordPipeline
Parses input text records according to a specified text schema. Records which fail parsing on one or more fields are emitted on a rejects output for further remediation. This differs from operators such as ReadDelimitedText, which do not provide a flow of rejected records. It also differs in that it processes an existing flow of text records instead of reading from a source directly, so an upstream operator that breaks the data into individual fields is required. Note that ReadDelimitedText can be used to do this syntactic parsing in the delimited text case by using a text schema containing only "raw" string fields.
The parsed output has the type specified by the schema. Output fields contain the result of parsing the input field of the same name according to the type information in the provided schema. Input fields referenced in the schema must either be string typed, in which case they are parsed according to the schema, or be of a type assignable to the output field type, in which case they are simply passed through. If a field is present in the schema but not in the input, the output field is NULL. If an input value is NULL, the resulting output field is NULL.
The reject output has the same type as the input.
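As an illustration of the description above, the following sketch wires ParseTextFields between a raw read and a rejects writer. It is a sketch, not taken from this page: the file paths are hypothetical, the schema is assumed to be a RecordTextSchema built elsewhere, and the exact constructors of ReadDelimitedText/WriteDelimitedText should be checked against their own documentation.

```java
// Sketch: ParseTextFields in a dataflow graph (paths and wiring illustrative).
// Assumes `schema` is a RecordTextSchema<?> constructed elsewhere.
LogicalGraph graph = LogicalGraphFactory.newLogicalGraph();

// Read delimited text as unparsed "raw" string fields, as described above.
ReadDelimitedText read = graph.add(new ReadDelimitedText("input.csv"));

// Parse the raw fields according to the typed schema.
ParseTextFields parse = graph.add(new ParseTextFields(schema));
graph.connect(read.getOutput(), parse.getInput());

// Records that fail on one or more fields keep the input (string) type and
// can be captured from getRejects() for remediation.
WriteDelimitedText rejects = graph.add(new WriteDelimitedText("rejects.csv"));
graph.connect(parse.getRejects(), rejects.getInput());

graph.run();
```

Well-formed records continue downstream from getOutput() with the schema's types, while the rejects flow preserves the original text for inspection.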
-
-
Field Summary
-
Fields inherited from class com.pervasive.datarush.operators.AbstractExecutableRecordPipeline
input, output
-
-
Constructor Summary
ParseTextFields()
Defines a parser which does no parsing.
ParseTextFields(RecordTextSchema<?> schema)
Defines a parser using the specified schema.
-
Method Summary
protected void computeMetadata(StreamingMetadataContext ctx)
Implementations must adhere to the following contracts.
protected void execute(ExecutionContext ctx)
Executes the operator.
RecordPort getInput()
Gets the record port providing the input data to the operation.
RecordPort getOutput()
Gets the record port providing the output from the operation.
RecordPort getRejects()
Gets the port providing records which failed parsing.
RecordTextSchema<?> getSchema()
Gets the record schema to use for parsing.
void setSchema(RecordTextSchema<?> schema)
Sets the record schema to use for parsing.
-
Methods inherited from class com.pervasive.datarush.operators.ExecutableOperator
cloneForExecution, getNumInputCopies, getPortSettings, handleInactiveOutput
-
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
disableParallelism, getInputPorts, getOutputPorts
-
-
-
-
Constructor Detail
-
ParseTextFields
public ParseTextFields()
Defines a parser which does no parsing. Input is copied directly to the output.
-
ParseTextFields
public ParseTextFields(RecordTextSchema<?> schema)
Defines a parser using the specified schema. Output records will have this schema. Any schema fields not present in the input will be NULL in the output.
- Parameters:
schema - the record schema for parsing
-
-
Method Detail
-
getInput
public RecordPort getInput()
Description copied from class: AbstractExecutableRecordPipeline
Gets the record port providing the input data to the operation.
- Specified by:
getInput in interface PipelineOperator<RecordPort>
- Overrides:
getInput in class AbstractExecutableRecordPipeline
- Returns:
- the input port for the operation
-
getOutput
public RecordPort getOutput()
Description copied from class: AbstractExecutableRecordPipeline
Gets the record port providing the output from the operation.
- Specified by:
getOutput in interface PipelineOperator<RecordPort>
- Overrides:
getOutput in class AbstractExecutableRecordPipeline
- Returns:
- the output port for the operation
-
getRejects
public RecordPort getRejects()
Gets the port providing records which failed parsing.
- Returns:
- all records for which one or more fields failed to parse.
-
getSchema
public RecordTextSchema<?> getSchema()
Gets the record schema to use for parsing.
- Returns:
- the record schema for parsing
-
setSchema
public void setSchema(RecordTextSchema<?> schema)
Sets the record schema to use for parsing. Output records will have this schema. Input fields referenced in the schema must either be string typed, in which case they are parsed according to the schema, or be of a type assignable to the output field type, in which case they are simply passed through. Any schema fields not present in the input will be NULL in the output.
- Parameters:
schema
- the record schema for parsing
-
computeMetadata
protected void computeMetadata(StreamingMetadataContext ctx)
Description copied from class: StreamingOperator
Implementations must adhere to the following contracts:
General
Regardless of input port/output port types, all implementations must do the following:
- Validation. Validation of configuration should always be performed first.
- Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).
Input record ports
Implementations with input record ports must declare the following:
- Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
- Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
Implementations may inspect the upstream metadata via RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.
Output record ports
Implementations with output record ports must declare the following:
- Type: Implementations must declare their output type by calling RecordPort#setType.
- Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering.
- Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution.
Input model ports
In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
- Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler. MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.
Output model ports
SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
- pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.
- Specified by:
computeMetadata in class StreamingOperator
- Parameters:
ctx - the context
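The contract above can be sketched for a hypothetical record-to-record operator as follows. This is not the actual ParseTextFields implementation; the ParallelismStrategy constant, the getTargetType() helper, and the setType/getType signatures are assumptions used only to illustrate the ordering of the contract steps.

```java
// Sketch of the computeMetadata contract (hypothetical operator; names of
// the strategy constant and type-derivation calls are assumptions).
@Override
protected void computeMetadata(StreamingMetadataContext ctx) {
    // 1. Validation of configuration comes first.
    if (getSchema() == null) {
        throw new IllegalStateException("schema must be set before graph compilation");
    }

    // 2. Declare parallelizability.
    ctx.parallelize(ParallelismStrategy.NEGOTIATE_BASED_ON_SOURCE);

    // 3. Declare output types: the parsed output carries the schema's record
    //    type, while the rejects port keeps the input type (as documented).
    getOutput().setType(ctx, getSchema().getTargetType());
    getRejects().setType(ctx, getInput().getType(ctx));
}
```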
-
execute
protected void execute(ExecutionContext ctx)
Description copied from class: ExecutableOperator
Executes the operator. Implementations should adhere to the following contracts:
- Following execution, all input ports must be at end-of-data.
- Following execution, all output ports must be at end-of-data.
- Specified by:
execute in class ExecutableOperator
- Parameters:
ctx
- context in which to lookup physical ports bound to logical ports
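A minimal sketch of an execute() body honoring the end-of-data contract is shown below. The port-binding and stepping calls (getInput(ctx), stepNext(), push(), pushEndOfData()) are assumptions about the framework API, used only to make the two contract points concrete.

```java
// Sketch: execute() leaving all ports at end-of-data (API calls assumed).
@Override
protected void execute(ExecutionContext ctx) {
    RecordInput in = getInput().getInput(ctx);     // physical input bound to logical port
    RecordOutput out = getOutput().getOutput(ctx); // physical output bound to logical port

    while (in.stepNext()) {        // drains the input, satisfying contract #1
        // ... copy/parse fields into the output record ...
        out.push();                // emit one record downstream
    }
    out.pushEndOfData();           // satisfies contract #2 for this output
}
```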
-
-