- All Implemented Interfaces:
LogicalOperator, RecordSinkOperator, SinkOperator<RecordPort>
setFormatMode(ARFFMode)
method. The default mode is sparse. When the input data is highly sparse, sparse
mode saves space, since only non-zero values are written. However, if the data is
not very sparse, sparse mode can actually produce a larger file; use dense mode
for dense data.
Note that when writing sparse data that contains enumerated types, the zero ordinal category is always treated as sparse and so does not appear in the output file. This is normal and expected behavior: the enumerated type is captured in the meta-data, so when the file is read, the zero ordinal values are restored as expected.
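To make the size trade-off concrete, the snippet below renders the same record in both ARFF data styles. This is a plain-Java illustration of the ARFF format itself, not of the `WriteARFF` API; the `dense` and `sparse` helpers are hypothetical names used only for this demonstration. Dense rows list every value, while sparse rows list only `{index value}` pairs for non-zero entries, which is why sparse mode shrinks highly sparse data but can inflate dense data.

```java
// Illustrative only: renders one record in ARFF's dense vs. sparse data
// formats. These helpers are NOT part of the WriteARFF API.
public class ArffModeDemo {

    // Dense format: every attribute value, comma-separated.
    static String dense(double[] row) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < row.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(row[i]);
        }
        return sb.toString();
    }

    // Sparse format: only non-zero attributes, as {index value} pairs.
    static String sparse(double[] row) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (int i = 0; i < row.length; i++) {
            if (row[i] != 0.0) {
                if (!first) sb.append(',');
                sb.append(i).append(' ').append(row[i]);
                first = false;
            }
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        double[] row = {0.0, 3.5, 0.0, 0.0, 1.0};
        System.out.println(dense(row));   // 0.0,3.5,0.0,0.0,1.0
        System.out.println(sparse(row));  // {1 3.5,4 1.0}
    }
}
```

For this mostly-zero record the sparse rendering is shorter; for a record with few zeros, the added `{index value}` bookkeeping makes it longer than the dense form.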
-
Field Summary
Fields inherited from class com.pervasive.datarush.operators.io.textfile.AbstractTextWriter
encodingProps
Fields inherited from class com.pervasive.datarush.operators.io.AbstractWriter
input, options
-
Constructor Summary
Constructors
WriteARFF(): Writes ARFF data to an empty target with default settings.
WriteARFF(path, mode): Writes ARFF data to the specified path in the given mode, using default settings. (Two overloads accept a path; see Constructor Details.)
WriteARFF(target, mode): Writes ARFF data to the specified target sink in the given mode.
-
Method Summary
Modifier and Type / Method / Description
void addComment(String comment): Add a comment line that will be written to the ARFF meta-data section of the file.
protected DataFormat computeFormat: Determines the data format for the target.
getComments: Gets the comment lines currently set to be written.
char getFieldDelimiter: Get the configured field delimiter property value.
getFormatMode: Get the value of the mode property.
getRecordSeparator: Get the record separator.
getRelationName: Get the value of the relation name property.
getSchema: Get the configured schema for this writer instance.
void setComments(List<String> comments): Set the comment lines to write to the ARFF meta-data section of the file.
void setFieldDelimiter(char delimiter): Set the field delimiter to use when writing the file contents.
void setFormatMode(ARFFMode mode): Set the mode with which to write the file.
void setRecordSeparator(String separator): Set the record separator.
void setRelationName(String relationName): Set the relation name attribute.
void setSchema(TextRecord schema): Set the schema to use for the provided data.
Methods inherited from class com.pervasive.datarush.operators.io.textfile.AbstractTextWriter
getCharset, getCharsetName, getEncodeBuffer, getEncoding, getErrorAction, getReplacement, setCharset, setCharsetName, setEncodeBuffer, setEncoding, setErrorAction, setReplacement
Methods inherited from class com.pervasive.datarush.operators.io.AbstractWriter
compose, getFormatOptions, getInput, getMode, getSaveMetadata, getTarget, getWriteBuffer, getWriteOnClient, getWriteSingleSink, isIgnoreSortOrder, setFormatOptions, setIgnoreSortOrder, setMode, setSaveMetadata, setTarget, setTarget, setTarget, setWriteBuffer, setWriteOnClient, setWriteSingleSink
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
disableParallelism, getInputPorts, getOutputPorts
-
Constructor Details
-
WriteARFF
public WriteARFF()
Writes ARFF data to an empty target with default settings. The target must be set before execution or an error will be raised.
-
WriteARFF
Writes ARFF data to the specified path in the given mode, using default settings.
If the writer is parallelized, this is interpreted as a directory in which each partition will write a fragment of the entire input stream. Otherwise, it is interpreted as the file to write.
- Parameters:
path - the path to which to write
mode - how to handle existing files
-
WriteARFF
Writes ARFF data to the specified path in the given mode, using default settings.
If the writer is parallelized, this is interpreted as a directory in which each partition will write a fragment of the entire input stream. Otherwise, it is interpreted as the file to write.
- Parameters:
path - the path to which to write
mode - how to handle existing files
-
WriteARFF
Writes ARFF data to the specified target sink in the given mode.
The writer can only be parallelized if the sink is fragmentable. In this case, each partition will be written as an independent sink. Otherwise, the writer will run non-parallel.
- Parameters:
target - the sink to which to write
mode - how to handle an existing sink
-
-
Method Details
-
getRelationName
Get the value of the relation name property.
- Returns:
- relation name
-
setRelationName
Set the relation name attribute. In ARFF, the relation name is captured in the meta-data with the @relation tag.
- Parameters:
relationName- name of the relation
-
addComment
Add a comment line that will be written to the ARFF meta-data section of the file. Multiple comments may be added. They will be written in the order provided.
- Parameters:
comment- a line of commentary to add to the output file
-
setComments
Set the comment lines to write to the ARFF meta-data section of the file.
- Parameters:
comments- lines of commentary to add to the output file
-
getComments
Gets the comment lines currently set to be written.
- Returns:
- the lines of commentary to add to the output file
-
setRecordSeparator
Set the record separator. This string will be output at the end of each record written. The default value is the system-dependent record separator.
- Parameters:
separator- text separating records
-
getRecordSeparator
Get the record separator.
- Returns:
- record separator
-
getFormatMode
Get the value of the mode property.
- Returns:
- write mode
-
setFormatMode
Set the mode with which to write the file. The supported modes are SPARSE and DENSE.
- Parameters:
mode- write mode; either sparse or dense
-
getSchema
Get the configured schema for this writer instance.
- Returns:
- configured schema (may be null)
-
setFieldDelimiter
public void setFieldDelimiter(char delimiter)
Set the field delimiter to use when writing the file contents. A single quote is used by default. The only supported values are a single quote and a double quote.
- Parameters:
delimiter - the delimiter to use for field values containing spaces
-
getFieldDelimiter
public char getFieldDelimiter()
Get the configured field delimiter property value.
- Returns:
- configured field delimiter
-
setSchema
Set the schema to use for the provided data. A schema is not required; if one is not provided, a default schema will be constructed using default formatting options for all fields. A schema is useful to explicitly specify the formatting options for fields. This is especially useful for date and timestamp types, but is applicable to all types.
- Parameters:
schema - definition of field ordering, names, types, and formats
-
computeFormat
Description copied from class: AbstractWriter
Determines the data format for the target. The returned format is used during composition to construct a WriteSink operator. If an implementation supports schema discovery, it must be performed in this method.
- Specified by:
computeFormat in class AbstractWriter
- Parameters:
ctx - the composition context for the current invocation of AbstractWriter.compose(CompositionContext)
- Returns:
- the target format to use
-
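Putting the methods above together, a typical configuration of this operator might look like the following sketch. It is illustrative Java-syntax pseudocode built only from the method names documented on this page; the constructor argument types, the `WriteMode` value, and the surrounding DataRush graph setup are assumptions about the wider library and are not verified here.

```
// Illustrative sketch only: configures a WriteARFF operator using the
// documented setters. WriteMode.OVERWRITE and the graph wiring are
// assumed from the wider DataRush library, not confirmed by this page.
WriteARFF writer = new WriteARFF("results/iris.arff", WriteMode.OVERWRITE);

writer.setRelationName("iris");           // written as the @relation name
writer.setFormatMode(ARFFMode.DENSE);     // default is sparse
writer.addComment("Generated by the nightly scoring job");
writer.setFieldDelimiter('"');            // single or double quote only
// writer.setSchema(schema);              // optional TextRecord with per-field formats

// The writer's input port is then connected to an upstream operator's
// record output before the graph is compiled and executed.
```

If the path names a directory and the writer runs parallelized, each partition writes its own fragment there, as described under Constructor Details.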