java.lang.Object
    com.pervasive.datarush.operators.AbstractLogicalOperator
        com.pervasive.datarush.operators.StreamingOperator
            com.pervasive.datarush.operators.ExecutableOperator
                com.pervasive.datarush.script.operators.group.ProcessByGroup
All Implemented Interfaces:
LogicalOperator, RecordSinkOperator, SinkOperator<RecordPort>, ScriptOptionsAware
public class ProcessByGroup extends ExecutableOperator implements RecordSinkOperator, ScriptOptionsAware
Executes an application graph for each distinct key group of data within the input data set. The data for each key group is fed to a spawned application graph. The application graph is composed using the provided RushScript. The source data and the key values are provided to each executed RushScript as variables; the names of these variables are specified by properties of this operator.
This operator supports parallel execution. Each operator instance spawns DataRush applications that are limited to running on the machine of the spawning operator. When run distributed, this implies that applications executed by the operator will not be distributed; the spawned jobs run locally.
By default, the spawned graphs run with a parallelism of 1. Default parallelism is limited to prevent oversubscription of system resources. This default can be changed using the setSubParallelism(int) method to set the desired parallelism level. Use caution when raising this value, as higher settings require more system resources to process jobs; memory consumption is especially affected when multiple jobs run with high parallelism levels.
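As an illustration, a minimal sketch of configuring this operator within a LogicalGraph is shown below. The input file, field name, script path, and variable values are hypothetical, and the ReadDelimitedText source is only one possible upstream operator:

import com.pervasive.datarush.graphs.LogicalGraph;
import com.pervasive.datarush.graphs.LogicalGraphFactory;
import com.pervasive.datarush.operators.io.textfile.ReadDelimitedText;
import com.pervasive.datarush.script.operators.group.ProcessByGroup;

public class ProcessByGroupExample {
    public static void main(String[] args) {
        LogicalGraph graph = LogicalGraphFactory.newLogicalGraph("processByGroupExample");

        // Hypothetical delimited source with a "region" field
        ReadDelimitedText reader = graph.add(new ReadDelimitedText("orders.txt"));
        reader.setHeader(true);

        // Compose and run a RushScript sub-graph once per distinct "region" value
        ProcessByGroup byGroup = graph.add(new ProcessByGroup("region"));
        byGroup.setScriptFileName("/scripts/processRegion.js");   // hypothetical script location
        byGroup.addVariable("outputDir", "/tmp/regions");         // exposed to the script as a variable
        byGroup.setSubParallelism(2);                              // limit resources used by each spawned job

        graph.connect(reader.getOutput(), byGroup.getInput());
        graph.run();
    }
}

The script named by setScriptFileName(String) is evaluated once per distinct value of the region key, with that group's data and key values made available to it as variables.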
-
Nested Class Summary
static class ProcessByGroup.Context
    Simple container for context variables passed into the JavaScript environment for each group.
-
Constructor Summary
ProcessByGroup()
    Construct an instance of the operator.
ProcessByGroup(String... keys)
    Construct an instance of the operator with the given key fields.
ProcessByGroup(List<String> keys)
    Construct an instance of the operator with the given key fields.
-
Method Summary
void addVariable(String name, Object value)
    Adds a single JavaScript variable name and value to the map of variables.
protected void computeMetadata(StreamingMetadataContext ctx)
    Implementations must adhere to the metadata contracts described below.
protected void execute(ExecutionContext ctx)
    Executes the operator.
String getContextVariableName()
    Get the name of the RushScript variable containing the current context.
String getFunctionName()
    Gets the name of the JavaScript function to invoke after source evaluation.
RecordPort getInput()
    Gets the record port providing the input data to the sink.
List<String> getKeys()
    Get the names of the key fields.
String getScript()
    Get the text of the script used for graph composition.
ScriptOptions getScriptOptions()
    Get the collection of scripting options.
int getSubParallelism()
    Get the level of parallelism for spawned sub-graphs.
Map<String,Object> getVariables()
    Gets the JavaScript variables that will be set before composing applications for each data group.
void setContextVariableName(String contextVariableName)
    Set the name of the RushScript variable containing the current context.
void setFunctionName(String functionName)
    Sets the name of a JavaScript function to invoke after evaluating all source files.
void setKeys(String... keys)
    Set the names of the key fields used to partition and sort the input data set.
void setKeys(List<String> keys)
    Set the names of the key fields used to partition and sort the input data set.
void setScript(String script)
    Set the text of the RushScript that will be used to compose the subgraphs run per data grouping.
void setScriptFile(File scriptFile)
    Set the text of the RushScript that will be used to compose the subgraphs run per data grouping.
void setScriptFileName(String scriptFileName)
    Set the name of the file containing the script used to compose the application executed per key group.
void setScriptOptions(ScriptOptions environment)
    Set a collection of scripting options that are interesting to operators utilizing scripting in some way.
void setSubParallelism(int value)
    Set the desired parallelism level for the sub-graphs spawned by this operator.
void setVariables(Map<String,Object> variables)
    Provides the JavaScript variables to set within the RushScript environment before composing applications for each group of the input data.
-
Methods inherited from class com.pervasive.datarush.operators.ExecutableOperator
cloneForExecution, getNumInputCopies, getPortSettings, handleInactiveOutput
-
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
disableParallelism, getInputPorts, getOutputPorts
-
-
Constructor Detail
-
ProcessByGroup
public ProcessByGroup()
Construct an instance of the operator. The keys property is required and should be set using the setKeys(List) method.
-
ProcessByGroup
public ProcessByGroup(List<String> keys)
Construct an instance of the operator with the given key fields.
Parameters:
keys - names of the fields to use as keys for partitioning and sorting the input data.
-
ProcessByGroup
public ProcessByGroup(String... keys)
Construct an instance of the operator with the given key fields.
Parameters:
keys - names of the fields to use as keys for partitioning and sorting the input data.
-
Method Detail
-
setScriptOptions
public void setScriptOptions(ScriptOptions environment)
Description copied from interface: ScriptOptionsAware
Set a collection of scripting options that are interesting to operators utilizing scripting in some way.
Specified by:
setScriptOptions in interface ScriptOptionsAware
Parameters:
environment - set of scripting options
-
getScriptOptions
public ScriptOptions getScriptOptions()
Description copied from interface: ScriptOptionsAware
Get the collection of scripting options.
Specified by:
getScriptOptions in interface ScriptOptionsAware
Returns:
scripting options
-
getInput
public RecordPort getInput()
Description copied from interface: RecordSinkOperator
Gets the record port providing the input data to the sink.
Specified by:
getInput in interface RecordSinkOperator
Specified by:
getInput in interface SinkOperator<RecordPort>
Returns:
the input port for the sink
-
setKeys
public void setKeys(List<String> keys)
Set the names of the key fields used to partition and sort the input data set. Order is important as it specifies the order of partitioning and sorting.
Parameters:
keys - list of key field names
-
setKeys
public void setKeys(String... keys)
Set the names of the key fields used to partition and sort the input data set. Order is important as it specifies the order of partitioning and sorting.
Parameters:
keys - list of key field names
-
getScript
public String getScript()
Get the text of the script used for graph composition.
Returns:
script text
-
setScript
public void setScript(String script)
Set the text of the RushScript that will be used to compose the subgraphs run per data grouping.
Parameters:
script - script text
-
setScriptFile
public void setScriptFile(File scriptFile)
Set the text of the RushScript that will be used to compose the subgraphs run per data grouping. This is a convenience method that sources the text from the provided file.
Parameters:
scriptFile - file containing the text of the RushScript used for graph composition
-
setScriptFileName
public void setScriptFileName(String scriptFileName)
Set the name of the file containing the script used to compose the application executed per key group. If this operator is invoked within an application built using RushScript, the file may be found using the include directories specified at run time. Otherwise, the file name must be an absolute path name.
Parameters:
scriptFileName - name of the file containing the RushScript source
-
getContextVariableName
public String getContextVariableName()
Get the name of the RushScript variable containing the current context.
Returns:
RushScript variable name
-
setContextVariableName
public void setContextVariableName(String contextVariableName)
Set the name of the RushScript variable containing the current context. The default value is "context".
Parameters:
contextVariableName - RushScript variable name for the context
-
setVariables
public void setVariables(Map<String,Object> variables)
Provides the JavaScript variables to set within the RushScript environment before composing applications for each group of the input data. These variables can be used to pass settings, such as an output directory, to the script that composes the applications.
Parameters:
variables - a map of valid JavaScript variable names to values
-
getVariables
public Map<String,Object> getVariables()
Gets the JavaScript variables that will be set before composing applications for each data group.
Returns:
map of variable names to values
-
addVariable
public void addVariable(String name, Object value)
Adds a single JavaScript variable name and value to the map of variables. Any variables set by this method will be lost if setVariables(Map) is invoked afterwards.
Parameters:
name - valid JavaScript variable name
value - value to set for the variable
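As an illustration, the following sketch sets a base map of variables and then adds one more; the variable names and values are hypothetical. Per the class description, these variables become available to the composing script, alongside the group context exposed through the context variable (see setContextVariableName(String)).

import java.util.LinkedHashMap;
import java.util.Map;

import com.pervasive.datarush.script.operators.group.ProcessByGroup;

public class ProcessByGroupVariablesExample {
    public static ProcessByGroup configure() {
        ProcessByGroup byGroup = new ProcessByGroup("region");

        // Replace the entire variable map in one call
        Map<String, Object> vars = new LinkedHashMap<>();
        vars.put("outputDir", "/tmp/regions");
        vars.put("runLabel", "nightly");
        byGroup.setVariables(vars);

        // Add one more variable on top of the map set above; calling
        // setVariables(Map) after this point would discard it.
        byGroup.addVariable("maxRows", 10000);

        return byGroup;
    }
}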
-
setSubParallelism
public void setSubParallelism(int value)
Set the desired parallelism level for the sub-graphs spawned by this operator. By default, the parallelism will be set to 0, indicating all available resources should be consumed.
Parameters:
value - parallelism level
-
getSubParallelism
public int getSubParallelism()
Get the level of parallelism for spawned sub-graphs.
Returns:
parallelism level
-
getFunctionName
public String getFunctionName()
Gets the name of the JavaScript function to invoke after source evaluation.
Returns:
function name
-
setFunctionName
public void setFunctionName(String functionName)
Sets the name of a JavaScript function to invoke after evaluating all source files. The function can be used to further compose the DataRush application.
Parameters:
functionName - JavaScript function name
-
computeMetadata
protected void computeMetadata(StreamingMetadataContext ctx)
Description copied from class: StreamingOperator
Implementations must adhere to the following contracts:

General
Regardless of input port/output port types, all implementations must do the following:
- Validation. Validation of configuration should always be performed first.
- Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

Input record ports
Implementations with input record ports must declare the following:
- Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
- Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
Implementations may also inspect the upstream metadata by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

Output record ports
Implementations with output record ports must declare the following:
- Type: Implementations must declare their output type by calling RecordPort#setType.
- Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering.
- Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution.

Input model ports
In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
- Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler. MergeModel is a convenient, re-usable model reducer, parameterized with a merge handler.

Output model ports
SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
- pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.

Specified by:
computeMetadata in class StreamingOperator
Parameters:
ctx - the context
-
execute
protected void execute(ExecutionContext ctx)
Description copied from class: ExecutableOperator
Executes the operator. Implementations should adhere to the following contracts:
- Following execution, all input ports must be at end-of-data.
- Following execution, all output ports must be at end-of-data.
Specified by:
execute in class ExecutableOperator
Parameters:
ctx - context in which to lookup physical ports bound to logical ports
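These two contracts only matter when implementing an operator; ProcessByGroup already satisfies them. For orientation, a minimal sketch of a hypothetical custom sink built on ExecutableOperator is shown below. The package locations, the ParallelismStrategy constant, and the RecordInput accessors are recalled from the wider operator API and should be treated as assumptions to verify against the StreamingOperator and ExecutableOperator documentation.

import com.pervasive.datarush.operators.ExecutableOperator;
import com.pervasive.datarush.operators.ExecutionContext;
import com.pervasive.datarush.operators.ParallelismStrategy;
import com.pervasive.datarush.operators.StreamingMetadataContext;
import com.pervasive.datarush.ports.physical.RecordInput;
import com.pervasive.datarush.ports.record.RecordPort;

// Hypothetical sink that counts the rows it receives; illustrative only.
public class RowCountSink extends ExecutableOperator {
    private final RecordPort input = newRecordInput("input");

    public RecordPort getInput() {
        return input;
    }

    @Override
    protected void computeMetadata(StreamingMetadataContext ctx) {
        // Declare parallelizability first (constant name assumed); this sink has no
        // data ordering or distribution requirements, so nothing else is declared.
        ctx.parallelize(ParallelismStrategy.NEGOTIATE_BASED_ON_SOURCE);
        // No output ports, so there is no output metadata to declare.
    }

    @Override
    protected void execute(ExecutionContext ctx) {
        RecordInput in = input.getInput(ctx);
        long count = 0;
        // Drain the input so the port is at end-of-data when execute() returns.
        while (in.stepNext()) {
            count++;
        }
        System.out.println("rows seen in this partition: " + count);
    }
}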