Class RunJavaScript

  • All Implemented Interfaces:
    LogicalOperator, PipelineOperator<RecordPort>, RecordPipelineOperator

    public class RunJavaScript
    extends AbstractExecutableRecordPipeline
    Processes rows using user-defined scripts written in JavaScript. The Rhino JavaScript engine is used to compile and execute the scripts.

    The execution of the operator is as follows:

    1. Compile all scripts
    2. Bind input fields into the script's context. Input field names may have to be transformed to valid script variable names.
    3. Run the beforeFirstRecordScript
    4. For every row of input:
      1. Move data from input ports to script context
      2. Run the onEveryRecordScript
      3. Move data from script context to output ports
    5. Run the afterLastRecordScript
    Binding and retrieval are done using the names given to the input and output fields via their associated types. Native JavaScript types are supported where possible.
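
    The following is a minimal configuration sketch. Only the RunJavaScript setters shown are documented on this page; the field names (amount, doubled, rowCount) and the TokenTypeConstant helpers (and their import path) used to build the output RecordTokenType are illustrative assumptions.

      import static com.pervasive.datarush.types.TokenTypeConstant.DOUBLE;  // assumed helper/import
      import static com.pervasive.datarush.types.TokenTypeConstant.record;  // assumed helper/import

      RunJavaScript script = new RunJavaScript();
      // Runs once per operator instance before the first record is processed.
      script.setBeforeFirstRecordScript("var rowCount = 0;");
      // Runs for every record; input fields (here 'amount') are bound as script
      // variables, and variables matching output fields are copied back out.
      script.setOnEveryRecordScript("rowCount = rowCount + 1; doubled = amount * 2;");
      // Runs once per operator instance after the last record.
      script.setAfterLastRecordScript("java.lang.System.out.println('rows: ' + rowCount);");
      // Declare the output schema so 'amount' and 'doubled' are retrieved from
      // the script context after each record.
      script.setOutputType(record(DOUBLE("amount"), DOUBLE("doubled")));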

    Note that this operator supports running in a distributed context, whether that be locally partitioned or within a cluster. Each instance of the operator executes the above operation sequence. This implies that the before and after scripts only affect the scripting environment within which they are run. The environments are not shared across operator instances.

    By default, the highest level of optimization is used for compiling the scripts. This provides the most efficient and highest-performing script execution. Setting the optimization level to -1 turns on interpreted mode, disabling the performance gains of compiled scripts. There should rarely be a need to lower the optimization level; however, compiled scripts can occasionally type variables differently than interpreted scripts. If this causes issues, use interpreted mode by setting the optimization level to -1.
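
    For example, interpreted mode can be enabled on an instance with the setter documented below (a sketch):

      RunJavaScript script = new RunJavaScript();
      // -1 selects Rhino's interpreted mode; any higher value requests compiled execution.
      script.setOptimizationLevel(-1);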

    By default, the JavaScript language version is set to version 1.8. As of this writing, this was the highest language version supported by the Rhino engine. Set the language version lower to disable newer language features as desired.
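
    For example, scripts can be pinned to an older language level as sketched below; the integer encoding of the version (e.g. 170 for JavaScript 1.7, following Rhino's Context.VERSION_* convention) is an assumption not confirmed by this page.

      RunJavaScript script = new RunJavaScript();
      script.setLanguageVersion(170); // assumed encoding for JavaScript 1.7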

    • Constructor Detail

      • RunJavaScript

        public RunJavaScript()
    • Method Detail

      • getOnEveryRecordScript

        public String getOnEveryRecordScript()
      • setOnEveryRecordScript

        public void setOnEveryRecordScript​(String scriptText)
      • getOptimizationLevel

        public int getOptimizationLevel()
      • setOptimizationLevel

        public void setOptimizationLevel​(int optimizationLevel)
      • getLanguageVersion

        public int getLanguageVersion()
      • setLanguageVersion

        public void setLanguageVersion​(int languageVersion)
      • setOutputType

        public void setOutputType​(RecordTokenType outputType)
      • getBeforeFirstRecordScript

        public String getBeforeFirstRecordScript()
      • setBeforeFirstRecordScript

        public void setBeforeFirstRecordScript​(String beforeFirstRecordScript)
      • getAfterLastRecordScript

        public String getAfterLastRecordScript()
      • setAfterLastRecordScript

        public void setAfterLastRecordScript​(String afterLastRecordScript)
      • setVariables

        public void setVariables​(Map<String,​Object> variables)
      • getEnforceType

        public boolean getEnforceType()
      • setEnforceType

        public void setEnforceType​(boolean enforceType)
      • computeMetadata

        protected void computeMetadata​(StreamingMetadataContext ctx)
        Description copied from class: StreamingOperator
        Implementations must adhere to the following contracts

        General

        Regardless of input ports/output port types, all implementations must do the following:

        1. Validation. Validation of configuration should always be performed first.
        2. Declare parallelizability. Implementations must declare parallelizability by calling StreamingMetadataContext.parallelize(ParallelismStrategy).

        Input record ports

        Implementations with input record ports must declare the following:
        1. Required data ordering: Implementations that have data ordering requirements must declare them by calling RecordPort#setRequiredDataOrdering; otherwise data may arrive in any order.
        2. Required data distribution (only applies to parallelizable operators): Implementations that have data distribution requirements must declare them by calling RecordPort#setRequiredDataDistribution; otherwise data will arrive in an unspecified partial distribution.
        Note that if the upstream operator's output distribution/ordering is compatible with those required, we avoid a re-sort/re-distribution, which is generally a very large performance savings. In addition, some operators may choose to query the upstream output distribution/ordering by calling RecordPort#getSourceDataDistribution and RecordPort#getSourceDataOrdering. These should be viewed as hints to help choose a more efficient algorithm. In such cases, though, operators must still declare data ordering and data distribution requirements; otherwise there is no guarantee that data will arrive sorted/distributed as required.

        Output record ports

        Implementations with output record ports must declare the following:
        1. Type: Implementations must declare their output type by calling RecordPort#setType.
        Implementations with output record ports may declare the following:
        1. Output data ordering: Implementations that can make guarantees as to their output ordering may do so by calling RecordPort#setOutputDataOrdering
        2. Output data distribution (only applies to parallelizable operators): Implementations that can make guarantees as to their output distribution may do so by calling RecordPort#setOutputDataDistribution
        Note that both of these properties are optional; if unspecified, performance may suffer since the framework may unnecessarily re-sort/re-distribute the data.

        Input model ports

        In general, there is nothing special to declare for input model ports. Models are implicitly duplicated to all partitions when going from non-parallel to parallel operators. The case of a model going from a parallel to a non-parallel node is a special case of a "model reducer" operator. In the case of a model reducer, the downstream operator must declare the following:
        1. Merge handler: Model reducers must declare a merge handler by calling AbstractModelPort#setMergeHandler.
        Note that MergeModel is a convenient, re-usable model reducer, parameterized with a merge-handler.

        Output model ports

        SimpleModelPorts have no associated metadata and therefore there is never any output metadata to declare. PMMLPorts, on the other hand, do have associated metadata. For all PMMLPorts, implementations must declare the following:
        1. pmmlModelSpec: Implementations must declare the PMML model spec by calling PMMLPort.setPMMLModelSpec.
        Specified by:
        computeMetadata in class StreamingOperator
        Parameters:
        ctx - the context
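
        A minimal, non-authoritative sketch of this contract for a hypothetical record-to-record operator follows. The required calls (parallelize, setType) are named in the contract above, but the port accessors, the setType(ctx, ...) signature, the ParallelismStrategy constant, and the configuration fields are assumptions.

          @Override
          protected void computeMetadata(StreamingMetadataContext ctx) {
              // 1. Validate configuration before anything else.
              if (onEveryRecordScript == null) {               // hypothetical field
                  throw new IllegalStateException("onEveryRecordScript must be set");
              }
              // 2. Declare parallelizability.
              ctx.parallelize(ParallelismStrategy.ACCEPT_ANY); // constant name assumed
              // 3. Declare the output type on the output record port.
              getOutput().setType(ctx, outputType);            // hypothetical accessor/field
              // 4. Declare required ordering/distribution if needed; omitted here because
              //    this hypothetical operator accepts any order and distribution.
          }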
      • execute

        protected void execute​(ExecutionContext ctx)
        Description copied from class: ExecutableOperator
        Executes the operator. Implementations should adhere to the following contracts:
        1. Following execution, all input ports must be at end-of-data.
        2. Following execution, all output ports must be at end-of-data.
        Specified by:
        execute in class ExecutableOperator
        Parameters:
        ctx - context in which to lookup physical ports bound to logical ports
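
        A non-authoritative sketch of this contract for a hypothetical pass-through operator follows; the physical-port accessors (getInput(ctx), getOutput(ctx), stepNext(), push(), pushEndOfData()) are assumptions not documented on this page.

          @Override
          protected void execute(ExecutionContext ctx) {
              // Resolve the physical ports bound to this operator's logical ports.
              RecordInput input = getInput().getInput(ctx);     // assumed accessor
              RecordOutput output = getOutput().getOutput(ctx); // assumed accessor
              while (input.stepNext()) {                        // drains the input to end-of-data
                  // ... per-record work would go here ...
                  output.push();                                // emits a row
              }
              output.pushEndOfData();                           // leaves the output at end-of-data
          }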
      • validateFieldName

        public static String validateFieldName​(int position,
                                               String fldName)
        Function to ensure an input/output field name conforms to JS variable identifier requirements. If the field name is not valid, a standardization is applied to the name to produce a valid identifier. This is provided as a convenience for proper identifier formatting within scripts. Input and output schemas set on this operator undergo the same name validation, so this mechanism can be used to ensure identifier references are consistent.
        Parameters:
        position - the ordinal position of the field in the source/target schema - may be needed to disambiguate field names after character substitution takes place.
        fldName - the name of the field being validated
        Returns:
        a JS compliant identifier
      • getIdentifierRegex

        public static String getIdentifierRegex()
        Function to return a Regular Expression string that can be used to identify a valid JS identifier. The string returned here is used internally by the validateFieldName function.
        Returns:
        the Regular Expression string used to identify a JS compliant identifier
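
        A small usage sketch combining both helpers; the field name is hypothetical, and whether the returned pattern matches a complete identifier (rather than a fragment) is an assumption.

          import java.util.regex.Pattern;

          Pattern jsIdentifier = Pattern.compile(RunJavaScript.getIdentifierRegex());
          String raw = "order total";                  // hypothetical input field name
          if (!jsIdentifier.matcher(raw).matches()) {
              // Not a valid JS identifier; derive the standardized name the operator
              // will use so scripts can reference the field consistently.
              String usable = RunJavaScript.validateFieldName(0, raw);
              System.out.println("Reference this field as: " + usable);
          }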
      • compile

        public static String compile​(String scriptSource)
        Compile the given snippet of JavaScript and capture any warnings or errors. The detailed information about the warnings or errors will be contained in the returned string.
        Parameters:
        scriptSource - String containing the script to compile
        Returns:
        a string containing any compiler warnings or errors, may be null or empty
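
        A usage sketch relying only on the documented null-or-empty return convention:

          String onEveryRecord = "doubled = amount * 2;";
          String problems = RunJavaScript.compile(onEveryRecord);
          if (problems != null && !problems.isEmpty()) {
              throw new IllegalArgumentException("Script has warnings/errors:\n" + problems);
          }
          RunJavaScript op = new RunJavaScript();
          op.setOnEveryRecordScript(onEveryRecord);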
      • compile

        public static String[] compile​(Map<String,​Object> variables,
                                       String... scriptSources)
        Compile the given snippets of JavaScript code and capture any errors or warnings that occur. Detailed information about warnings or errors is contained in the array of output messages. Each output message corresponds, in order, to an input script.

        The input variables are set into a JavaScript scope that is used when evaluating each script. Provide any variables that need to be defined for the given scripts; the values can be defaults, allowing a simple evaluation of the scripts to catch warning and error messages.

        Parameters:
        variables - initial settings of input variables used by the given scripts
        scriptSources - sources of scripts to compile and evaluate
        Returns:
        output messages resultant from evaluating each input script
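
        A usage sketch that pre-validates all three scripts together; the variable names and default values are illustrative.

          import java.util.LinkedHashMap;
          import java.util.Map;

          Map<String, Object> defaults = new LinkedHashMap<>();
          defaults.put("amount", 0.0d);   // stands in for an input field of the same name
          defaults.put("rowCount", 0L);
          String[] messages = RunJavaScript.compile(
                  defaults,
                  "var rowCount = 0;",                              // beforeFirstRecordScript
                  "rowCount = rowCount + 1; doubled = amount * 2;", // onEveryRecordScript
                  "java.lang.System.out.println(rowCount);");       // afterLastRecordScript
          for (String msg : messages) {
              if (msg != null && !msg.isEmpty()) {
                  // Each message corresponds, in order, to one of the scripts above.
                  System.err.println(msg);
              }
          }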