java.lang.Object
    com.pervasive.datarush.operators.AbstractLogicalOperator
        com.pervasive.datarush.operators.StreamingOperator
            com.pervasive.datarush.operators.ExecutableOperator
                com.pervasive.datarush.matching.block.LargeGroupDetector

All Implemented Interfaces:
    LogicalOperator

public class LargeGroupDetector
extends ExecutableOperator
An operator that issues warnings if a dataflow contains unusually large groups of identical key values. This is useful to latch onto the output of a blocking operation to ensure keys are well balanced.
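For orientation, here is a minimal usage sketch. Only the LargeGroupDetector calls (setKeys, setWarningThreshold, getInput) come from this page; the graph-construction calls (LogicalGraphFactory.newLogicalGraph, add, connect, run) and the ReadDelimitedText source standing in for a blocking operation's output are assumptions about the wider DataRush API, and the field name and threshold are illustrative.

    import com.pervasive.datarush.graphs.LogicalGraph;
    import com.pervasive.datarush.graphs.LogicalGraphFactory;
    import com.pervasive.datarush.matching.block.LargeGroupDetector;
    import com.pervasive.datarush.operators.io.textfile.ReadDelimitedText;

    public class DetectLargeGroups {
        public static void main(String[] args) {
            LogicalGraph graph = LogicalGraphFactory.newLogicalGraph("detectLargeGroups");

            // Stand-in source; in practice this would be the output of a blocking operation.
            ReadDelimitedText reader = graph.add(new ReadDelimitedText("blocked-records.txt"));

            LargeGroupDetector detector = graph.add(new LargeGroupDetector());
            detector.setKeys(new String[] { "blockKey" });  // key fields must be set before execution
            detector.setWarningThreshold(10000L);           // warn when a key group reaches 10,000 rows

            // Latch the detector onto the upstream output so oversized key groups are reported.
            graph.connect(reader.getOutput(), detector.getInput());

            graph.run();
        }
    }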
Field Summary

protected RecordPort input
    The input control port.
protected String[] keys
Constructor Summary

LargeGroupDetector()
    Detect large groupings of key values.
Method Summary

protected void computeMetadata(StreamingMetadataContext ctx)
    Implementations must adhere to the following contracts.
protected void endOfData(boolean emptyInput)
    Called once at the end of run.
protected void execute(ExecutionContext ctx)
    Executes the operator.
RecordPort getInput()
String[] getKeys()
protected int getNumInputCopies(LogicalPort inputPort)
    May be overridden to specify that multiple input copies are needed for a given input port.
protected void handleRow(boolean endOfGroup)
    Called once per input row.
protected RecordInput nextKey(ExecutionContext ctx)
protected RecordInput recordsIn(ExecutionContext ctx)
void setKeys(String[] keys)
void setWarningThreshold(long threshold)
    Set the threshold for issuing a warning about group size.
Methods inherited from class com.pervasive.datarush.operators.ExecutableOperator
    cloneForExecution, getPortSettings, handleInactiveOutput

Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
    disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
Field Detail

input

protected final RecordPort input
The input control port. Contains an identifier that is unique per key group. A change in the input value signifies the end of a group.

keys

protected String[] keys
Constructor Detail

LargeGroupDetector

public LargeGroupDetector()
Detect large groupings of key values. Key fields must be identified before execution.
See Also:
    setKeys(String[])
Method Detail

setWarningThreshold

public void setWarningThreshold(long threshold)
Set the threshold for issuing a warning about group size.
Parameters:
    threshold - the minimum group size required to issue a warning
execute

protected void execute(ExecutionContext ctx)
Description copied from class: ExecutableOperator
Executes the operator. Implementations should adhere to the following contracts:
- Following execution, all input ports must be at end-of-data.
- Following execution, all output ports must be at end-of-data.
Parameters:
    ctx - context in which to lookup physical ports bound to logical ports
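As a rough illustration of this contract, the sketch below shows the shape of a typical execute() body for an operator with one record input and one record output. The physical-port accessors used (RecordPort.getInput(ctx), RecordPort.getOutput(ctx), RecordInput.stepNext(), RecordOutput.pushEndOfData()) and the output port itself are assumptions about the DataRush physical-port API, not something documented on this page.

    // Hedged sketch of the execute() contract; port accessor names are assumed.
    @Override
    protected void execute(ExecutionContext ctx) {
        RecordInput in = input.getInput(ctx);     // physical input bound to the logical input port
        RecordOutput out = output.getOutput(ctx); // physical output bound to a (hypothetical) output port

        while (in.stepNext()) {
            // ... per-row work: read fields from 'in', push derived rows to 'out' ...
        }

        // Contract: the loop has driven the input to end-of-data; the output must be
        // explicitly pushed to end-of-data before returning.
        out.pushEndOfData();
    }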
handleRow

protected void handleRow(boolean endOfGroup)
Called once per input row. Aggregators must step to the next row of their inputs and handle group boundaries indicated by endOfGroup.
Parameters:
    endOfGroup - true iff the input row is the last in the key group
endOfData

protected void endOfData(boolean emptyInput)
Called once at the end of the run. Aggregators must step to the end of their inputs and push end-of-data (EOD) on their outputs.
Parameters:
    emptyInput - true iff handleRow was called zero times (no input rows to aggregate)
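To show how these two callbacks cooperate, here is a hedged sketch that counts rows per key group and emits a placeholder warning when a group grows past a threshold. It illustrates the callback contract only; it is not the shipped LargeGroupDetector implementation, and the warningThreshold field and System.err reporting are stand-ins.

    // Illustration of the handleRow/endOfData contract (not the actual implementation).
    private long warningThreshold = 10000L;  // stand-in for the value passed to setWarningThreshold
    private long groupSize;

    @Override
    protected void handleRow(boolean endOfGroup) {
        // A real aggregator would also step its RecordInput(s) to the next row here.
        groupSize++;
        if (endOfGroup) {
            if (groupSize >= warningThreshold) {
                // Placeholder reporting; the operator's real warning mechanism is not shown on this page.
                System.err.println("Unusually large key group: " + groupSize + " rows");
            }
            groupSize = 0;  // reset for the next key group
        }
    }

    @Override
    protected void endOfData(boolean emptyInput) {
        // All rows delivered (or none, if emptyInput is true); any output ports would be
        // pushed to end-of-data here. This operator has no record outputs to close.
    }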
getInput

public RecordPort getInput()

getKeys

public String[] getKeys()

setKeys

public void setKeys(String[] keys)
computeMetadata

protected void computeMetadata(StreamingMetadataContext ctx)
Description copied from class: StreamingOperator
Implementations must adhere to the following contracts:

General
Regardless of input port/output port types, all implementations must do the following:
- Validation. Validation of configuration should always be performed first.
- Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

Input record ports
Implementations with input record ports must declare the following:
- Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
- Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
Implementations may also inspect the upstream ordering and distribution by calling RecordPort#getSourceDataOrdering and RecordPort#getSourceDataDistribution. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

Output record ports
Implementations with output record ports must declare the following:
- Type: Implementations must declare their output type by calling RecordPort#setType.
- Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering.
- Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution.

Input model ports
In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
- Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler. MergeModel is a convenient, re-usable model reducer, parameterized with a merge handler.

Output model ports
SimpleModelPort's have no associated metadata and therefore there is never any output metadata to declare. PMMLPort's, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
- pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.

Specified by:
    computeMetadata in class StreamingOperator
Parameters:
    ctx - the context
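A minimal sketch of how an implementation with one record input and no record outputs might satisfy these contracts is shown below. The method names (parallelize, setRequiredDataOrdering, setRequiredDataDistribution) come from the contract text above; the ParallelismStrategy constant is an assumption, and the ordering/distribution arguments are left as comments because their exact types are not documented on this page.

    // Hedged sketch of a computeMetadata implementation for an operator with one
    // record input and no record outputs; ParallelismStrategy.ACCEPT_ANY is assumed.
    @Override
    protected void computeMetadata(StreamingMetadataContext ctx) {
        // 1. Validation first: this operator cannot run without key fields.
        if (keys == null || keys.length == 0) {
            throw new IllegalStateException("keys must be set before execution");
        }

        // 2. Declare parallelizability.
        ctx.parallelize(ParallelismStrategy.ACCEPT_ANY);

        // 3. If rows of a key group must arrive together and in order, declare that via
        //    input.setRequiredDataOrdering(...) and input.setRequiredDataDistribution(...);
        //    the ordering/distribution argument objects are omitted here.

        // 4. No output record ports on this operator, so there is no output type to declare.
    }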
recordsIn

protected final RecordInput recordsIn(ExecutionContext ctx)

nextKey

protected final RecordInput nextKey(ExecutionContext ctx)
getNumInputCopies

protected int getNumInputCopies(LogicalPort inputPort)
Description copied from class: ExecutableOperator
May be overridden to specify that multiple input copies are needed for a given input port. By default this is one. This can be used in rare cases when we must examine multiple positions in the same input stream.
Overrides:
    getNumInputCopies in class ExecutableOperator
Parameters:
    inputPort - the port
Returns:
    the number of input copies for the port
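As a sketch of when this hook is useful: an operator that needs to examine two positions in the same stream (for instance, a read-ahead copy used to spot key-group boundaries alongside the row being processed) could override it roughly as follows. The identity comparison against the documented input field is the only assumption.

    // Request two physical copies of the record input so one copy can read ahead
    // (e.g. to detect the end of a key group) while the other is processed.
    @Override
    protected int getNumInputCopies(LogicalPort inputPort) {
        return (inputPort == input) ? 2 : 1;  // 'input' is the protected RecordPort field above
    }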