public class FilterRows extends AbstractExecutableRecordPipeline

Row selection is controlled by evaluating the predicate against each input record. Records for which the predicate evaluates to true are emitted on the output flow. A secondary flow, consisting of any records for which the predicate is false or null, is also produced.
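The routing rule described above (true goes to the output flow; false or null goes to the secondary, reject flow) can be sketched in plain Java. This is a self-contained stand-in for illustration only; the real operator evaluates a ScalarValuedFunction against DataRush records, and the names below are invented:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FilterSemantics {
    // Routes each record: predicate true -> output; false or null -> rejects.
    // A Function returning Boolean (not boolean) lets us model the null case.
    public static <T> List<List<T>> route(List<T> records, Function<T, Boolean> predicate) {
        List<T> output = new ArrayList<>();
        List<T> rejects = new ArrayList<>();
        for (T record : records) {
            Boolean result = predicate.apply(record); // may be null (SQL-style unknown)
            if (Boolean.TRUE.equals(result)) {
                output.add(record);
            } else {
                rejects.add(record); // false and null both land on the reject flow
            }
        }
        return List.of(output, rejects);
    }

    public static void main(String[] args) {
        // null input yields a null predicate result, so it is rejected
        Function<Integer, Boolean> p = n -> (n == null) ? null : n > 10;
        List<List<Integer>> flows = route(Arrays.asList(5, 20, null, 15), p);
        System.out.println(flows.get(0)); // [20, 15]
        System.out.println(flows.get(1)); // [5, null]
    }
}
```

Note that a record whose predicate result is null is not dropped: it still appears on the reject flow, mirroring SQL's treatment of unknown.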
Fields inherited from class AbstractExecutableRecordPipeline: input, output
Constructor | Description
---|---
FilterRows() | Defines a filter which accepts all records by default.
FilterRows(ScalarValuedFunction p) | Defines a filter using the specified predicate.
Modifier and Type | Method and Description
---|---
protected ExecutableOperator | cloneForExecution() Performs a deep copy of the operator for execution.
protected void | computeMetadata(StreamingMetadataContext ctx) Implementations must adhere to the contracts described in the method detail.
protected void | execute(ExecutionContext ctx) Executes the operator.
RecordPort | getInput() Gets the record port providing the input data to the operation.
RecordPort | getOutput() Gets the record port providing the output from the operation.
ScalarValuedFunction | getPredicate() Gets the predicate used by the filter operation.
RecordPort | getRejects() Gets the port providing records which failed the predicate test.
protected boolean | handleInactiveOutput(LogicalPort output) Called when one of our outputs is no longer being read, to perform any cleanup necessary.
void | setPredicate(ScalarValuedFunction p) Sets the predicate for the filter operation.
void | setPredicate(String predicateExpression) Sets the predicate(s) to use for filtering based on an expression similar to a where clause of a SQL query.
Methods inherited from class AbstractExecutableRecordPipeline: getNumInputCopies, getPortSettings

Methods inherited from class AbstractLogicalOperator: disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface LogicalOperator: disableParallelism, getInputPorts, getOutputPorts
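The summary above lists two setPredicate overloads: one takes a predicate object, the other compiles a where-clause-like string. The sketch below illustrates that two-overload pattern in self-contained Java. The "field op number" grammar here is invented for illustration and is NOT DataRush's actual expression language:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class MiniFilter {
    private Predicate<Map<String, Integer>> predicate = r -> true; // accepts all records by default

    // Function-valued overload: install the predicate object directly.
    public void setPredicate(Predicate<Map<String, Integer>> p) {
        this.predicate = p;
    }

    // String overload: compile a tiny "field op number" expression
    // (hypothetical grammar, for illustration only).
    public void setPredicate(String expression) {
        String[] parts = expression.trim().split("\\s+"); // e.g. "age > 18"
        final String field = parts[0];
        final String op = parts[1];
        final int value = Integer.parseInt(parts[2]);
        if (op.equals(">")) {
            this.predicate = r -> r.get(field) > value;
        } else if (op.equals("<")) {
            this.predicate = r -> r.get(field) < value;
        } else if (op.equals("=")) {
            this.predicate = r -> r.get(field) == value;
        } else {
            throw new IllegalArgumentException("unsupported operator: " + op);
        }
    }

    public boolean accepts(Map<String, Integer> record) {
        return predicate.test(record);
    }

    public static void main(String[] args) {
        MiniFilter f = new MiniFilter();
        f.setPredicate("age > 18");
        Map<String, Integer> row = new HashMap<>();
        row.put("age", 21);
        System.out.println(f.accepts(row)); // true
    }
}
```

Either overload replaces the current predicate wholesale, which matches the setter-style API shown in the method summary.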
public FilterRows()

Defines a filter which accepts all records by default.

public FilterRows(ScalarValuedFunction p)

Defines a filter using the specified predicate.

Parameters:
p - the predicate to use for filtering. The function provided must evaluate to a boolean value.

public RecordPort getInput()

Gets the record port providing the input data to the operation.

Specified by:
getInput in interface PipelineOperator<RecordPort>

Overrides:
getInput in class AbstractExecutableRecordPipeline
public RecordPort getOutput()

Gets the record port providing the output from the operation.

Specified by:
getOutput in interface PipelineOperator<RecordPort>

Overrides:
getOutput in class AbstractExecutableRecordPipeline
public RecordPort getRejects()

Gets the port providing records which failed the predicate test, i.e. those for which the predicate evaluated to false or null.
public ScalarValuedFunction getPredicate()

Gets the predicate used by the filter operation.
public void setPredicate(ScalarValuedFunction p)

Sets the predicate for the filter operation.

Parameters:
p - the predicate to use for filtering. The function provided must evaluate to a boolean value.

public void setPredicate(String predicateExpression)
Sets the predicate(s) to use for filtering based on an expression similar to a where clause of a SQL query.

Parameters:
predicateExpression - predicate expression to apply

protected void computeMetadata(StreamingMetadataContext ctx)
Description copied from class: StreamingOperator

Implementations must adhere to the following contracts.

General: Implementations must declare their parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

Input record ports: Implementations must declare a required data ordering via RecordPort.setRequiredDataOrdering(com.pervasive.datarush.operators.MetadataCalculationContext, com.pervasive.datarush.ports.record.DataOrdering), otherwise data may arrive in any order. They must declare a required data distribution via RecordPort.setRequiredDataDistribution(com.pervasive.datarush.operators.MetadataCalculationContext, com.pervasive.datarush.ports.record.DataDistribution), otherwise data will arrive in an unspecified partial distribution. Implementations may query the upstream distribution and ordering via RecordPort.getSourceDataDistribution(com.pervasive.datarush.operators.MetadataContext) and RecordPort.getSourceDataOrdering(com.pervasive.datarush.operators.MetadataContext). These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

Output record ports: Implementations must declare the output type via RecordPort.setType(com.pervasive.datarush.operators.MetadataCalculationContext, com.pervasive.datarush.types.RecordTokenType). They may declare an output data ordering via RecordPort.setOutputDataOrdering(com.pervasive.datarush.operators.MetadataCalculationContext, com.pervasive.datarush.ports.record.DataOrdering) and an output data distribution via RecordPort.setOutputDataDistribution(com.pervasive.datarush.operators.MetadataCalculationContext, com.pervasive.datarush.ports.record.DataDistribution).

Output model ports: Implementations must declare a merge handler via AbstractModelPort.setMergeHandler(com.pervasive.datarush.operators.MetadataCalculationContext, com.pervasive.datarush.ports.model.ModelMergeHandler<T>). MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler. SimpleModelPorts have no associated metadata, and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata; for all PMMLPorts, implementations must declare the PMML model spec via PMMLPort.setPMMLModelSpec.

Overrides:
computeMetadata in class StreamingOperator

Parameters:
ctx - the context

protected boolean handleInactiveOutput(LogicalPort output)
Description copied from class: ExecutableOperator

Called when one of our outputs is no longer being read, to perform any cleanup necessary.

Overrides:
handleInactiveOutput in class ExecutableOperator

Parameters:
output - the output that has just gone inactive

protected void execute(ExecutionContext ctx)
Description copied from class: ExecutableOperator

Executes the operator.

Overrides:
execute in class ExecutableOperator

Parameters:
ctx - context in which to lookup physical ports bound to logical ports

protected ExecutableOperator cloneForExecution()
Description copied from class: ExecutableOperator

Performs a deep copy of the operator for execution.

Overrides:
cloneForExecution in class ExecutableOperator
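The deep-copy-for-execution contract exists so each parallel worker gets a private copy of the operator's mutable state. The self-contained sketch below illustrates that idea; the class and method names are illustrative stand-ins, not the actual DataRush base classes:

```java
public class CloneForExecutionDemo {
    public static class CountingOperator implements Cloneable {
        private long count; // per-instance mutable execution state

        // Mirrors the cloneForExecution idea: hand each worker a private
        // copy so mutable state is never shared between instances.
        public CountingOperator cloneForExecution() {
            try {
                // only primitive fields here, so Object.clone() copies everything needed
                return (CountingOperator) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e); // cannot happen: we implement Cloneable
            }
        }

        public void process() { count++; }
        public long getCount() { return count; }
    }

    public static void main(String[] args) {
        CountingOperator prototype = new CountingOperator();
        // Two "workers" each execute against a private copy, so their
        // counters do not interfere with each other or with the prototype.
        CountingOperator worker1 = prototype.cloneForExecution();
        CountingOperator worker2 = prototype.cloneForExecution();
        worker1.process();
        worker1.process();
        worker2.process();
        System.out.println(worker1.getCount()); // 2
        System.out.println(worker2.getCount()); // 1
        System.out.println(prototype.getCount()); // 0
    }
}
```

An operator holding references to mutable objects would need to copy those as well, which is why the contract calls for a deep copy rather than a plain shallow clone.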
Copyright © 2015 Actian Corporation. All Rights Reserved.