java.lang.Object
    com.pervasive.datarush.operators.AbstractLogicalOperator
        com.pervasive.datarush.operators.StreamingOperator
            com.pervasive.datarush.operators.ExecutableOperator
                com.pervasive.datarush.operators.sink.LogRows

All Implemented Interfaces:
    LogicalOperator, RecordSinkOperator, SinkOperator<RecordPort>
public final class LogRows extends ExecutableOperator implements RecordSinkOperator
Log information about the input data from a flow. The record type of the flow is logged. A log frequency determines which data rows are logged. A data row is logged by outputting the value of each input field within the row. A format can be provided that specifies the output format for each row log.
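The following sketch shows how LogRows is typically attached as a sink in a logical graph. The delimited-text reader, file path, and graph-construction calls follow the usual DataRush usage pattern and are assumptions for illustration; only LogRows itself is documented on this page.

    import com.pervasive.datarush.graphs.LogicalGraph;
    import com.pervasive.datarush.graphs.LogicalGraphFactory;
    import com.pervasive.datarush.operators.io.textfile.ReadDelimitedText;
    import com.pervasive.datarush.operators.sink.LogRows;

    public class LogRowsExample {
        public static void main(String[] args) {
            LogicalGraph graph = LogicalGraphFactory.newLogicalGraph("LogRowsExample");

            // Hypothetical upstream source; any operator with a record output port works.
            ReadDelimitedText reader = graph.add(new ReadDelimitedText("data/accounts.txt"));
            reader.setHeader(true);

            // Log every row using the default "row {0} is {1}" format.
            LogRows logger = graph.add(new LogRows(1));

            graph.connect(reader.getOutput(), logger.getInput());
            graph.run();
        }
    }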
Method Summary
protected void computeMetadata(StreamingMetadataContext ctx)
    Implementations must adhere to the following contracts.
protected void execute(ExecutionContext ctx)
    Executes the operator.
String getFormat()
    Get the configured row format.
RecordPort getInput()
    Gets the record port providing the input data to the sink.
int getLogFrequency()
    Get the configured log frequency.
void setFormat(String format)
    Set the format to use when logging rows.
void setLogFrequency(int logFrequency)
    Set the frequency of rows to log.
-
Methods inherited from class com.pervasive.datarush.operators.ExecutableOperator
cloneForExecution, getNumInputCopies, getPortSettings, handleInactiveOutput
-
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
disableParallelism, getInputPorts, getOutputPorts
-
Constructor Detail
-
LogRows
public LogRows()
Default constructor. Uses default log frequency and row format.
-
LogRows
public LogRows(int frequency)
Use the given log frequency and the default row format.
Parameters:
    frequency - log frequency
-
LogRows
public LogRows(int frequency, String format)
Use the given log frequency and row format.
Parameters:
    frequency - log frequency
    format - row format
-
Method Detail
-
getInput
public RecordPort getInput()
Description copied from interface: RecordSinkOperator
Gets the record port providing the input data to the sink.
Specified by:
    getInput in interface RecordSinkOperator
Specified by:
    getInput in interface SinkOperator<RecordPort>
Returns:
    the input port for the sink
-
setLogFrequency
public void setLogFrequency(int logFrequency)
Set the frequency of rows to log. Setting the frequency to 1 logs every row, a value of 2 logs every other row, and so on. The default frequency of 0 turns off row logging.
Parameters:
    logFrequency - log frequency (default 0)
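
For example, assuming a LogRows instance named logger, sampling can be configured so that only every hundredth row is written to the log:

    LogRows logger = new LogRows();
    logger.setLogFrequency(100);   // log every 100th row; 0 (the default) turns row logging off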
-
getLogFrequency
public int getLogFrequency()
Get the configured log frequency.
Returns:
    log frequency
-
setFormat
public void setFormat(String format)
Set the format to use when logging rows. See MessageFormat for more information on the syntax of formats. Two variables are passed to the format: the row count and the row contents. The default format is "row {0} is {1}".
Parameters:
    format - row format
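
For example, continuing the logger instance from the previous example, a custom pattern can be supplied; {0} is bound to the row count and {1} to the row contents, as described above:

    // {0} = row count, {1} = row contents (java.text.MessageFormat syntax)
    logger.setFormat("input row {0}: {1}");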
-
getFormat
public String getFormat()
Get the configured row format.
Returns:
    row format
-
computeMetadata
protected void computeMetadata(StreamingMetadataContext ctx)
Description copied from class: StreamingOperator
Implementations must adhere to the following contracts:

General
Regardless of input port/output port types, all implementations must do the following:
    - Validation: Validation of configuration should always be performed first.
    - Declare parallelizability: Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

Input record ports
Implementations with input record ports must declare the following:
    - Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
    - Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
Implementations may also inspect the source metadata by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

Output record ports
Implementations with output record ports must declare the following:
    - Type: Implementations must declare their output type by calling RecordPort#setType.
    - Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering.
    - Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution.

Input model ports
In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
    - Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler.
MergeModel is a convenient, re-usable model reducer, parameterized with a merge handler.

Output model ports
SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
    - pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.

Specified by:
    computeMetadata in class StreamingOperator
Parameters:
    ctx - the context
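
As a rough illustration of the contract above, a minimal computeMetadata for a sink with no output ports might look like the sketch below. The ParallelismStrategy constant and the logFrequency field are assumptions; this is not the actual LogRows implementation.

    @Override
    protected void computeMetadata(StreamingMetadataContext ctx) {
        // 1. Validation of configuration always comes first
        if (logFrequency < 0) {
            throw new IllegalArgumentException("log frequency must be >= 0");
        }
        // 2. Declare parallelizability (constant name assumed here)
        ctx.parallelize(ParallelismStrategy.ACCEPT_ANY);
        // 3. A pure sink has no output record ports to declare; an operator with
        //    a record output would also declare its type, for example:
        //    getOutput().setType(ctx, getInput().getType(ctx));
    }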
-
execute
protected void execute(ExecutionContext ctx)
Description copied from class: ExecutableOperator
Executes the operator. Implementations should adhere to the following contracts:
    - Following execution, all input ports must be at end-of-data.
    - Following execution, all output ports must be at end-of-data.
Specified by:
    execute in class ExecutableOperator
Parameters:
    ctx - context in which to look up physical ports bound to logical ports
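
A minimal execute for a record sink usually binds the logical input to its physical port and steps through rows until end-of-data. The RecordInput accessor and stepNext() call below follow the common DataRush pattern and are assumptions, not this class's actual source.

    @Override
    protected void execute(ExecutionContext ctx) {
        RecordInput input = getInput().getInput(ctx);   // physical input bound to the logical port
        long rowCount = 0;
        while (input.stepNext()) {                      // advance row by row until end-of-data
            rowCount++;
            // a real sink would read the current row's field values here
        }
        // exiting the loop leaves the input at end-of-data, satisfying the contract above
    }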