- java.lang.Object
  - com.pervasive.datarush.operators.AbstractLogicalOperator
    - com.pervasive.datarush.operators.CompositeOperator
      - com.pervasive.datarush.operators.io.AbstractWriter
        - com.pervasive.datarush.operators.io.textfile.AbstractTextWriter
          - com.pervasive.datarush.operators.io.textfile.WriteARFF
- All Implemented Interfaces:
LogicalOperator, RecordSinkOperator, SinkOperator<RecordPort>
public class WriteARFF extends AbstractTextWriter
Write files using the Attribute-Relation File Format (ARFF). ARFF supports both sparse and dense formats; the format can be selected using the setFormatMode(ARFFMode) method. The default mode is sparse. When the input data is highly sparse, sparse mode can save space since it writes only non-zero values. However, if the data is not very sparse, sparse mode can actually make the resulting file larger; use dense mode for dense data.
Note that when writing sparse data containing enumerated types, the zero ordinal category is always treated as sparse and so will not appear in the output file. This is normal and expected behavior. The enumerated type is captured in the meta-data, so when the file is read, the zero ordinal values are restored as expected.
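The following is a minimal configuration sketch, not taken from the product documentation. The target path, relation name, and comment text are illustrative, and WriteMode.OVERWRITE is assumed to be one of the available WriteMode constants; adding the operator to a graph and executing it are omitted.

    // Configure a WriteARFF operator that writes a single dense ARFF file.
    WriteARFF writer = new WriteARFF("results.arff", WriteMode.OVERWRITE); // path and mode are illustrative
    writer.setFormatMode(ARFFMode.DENSE);       // default is sparse
    writer.setRelationName("scoring-results");  // written as the ARFF relation name
    writer.addComment("Generated by the scoring application");
    writer.disableParallelism();                // write one file rather than a directory of fragments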
-
Field Summary
-
Fields inherited from class com.pervasive.datarush.operators.io.textfile.AbstractTextWriter
encodingProps
-
Fields inherited from class com.pervasive.datarush.operators.io.AbstractWriter
input, options
-
-
Constructor Summary
Constructors:
WriteARFF() - Writes ARFF data to an empty target with default settings.
WriteARFF(Path path, WriteMode mode) - Writes ARFF data to the specified path in the given mode, using default settings.
WriteARFF(ByteSink target, WriteMode mode) - Writes ARFF data to the specified target sink in the given mode.
WriteARFF(String path, WriteMode mode) - Writes ARFF data to the specified path in the given mode, using default settings.
-
Method Summary
void addComment(String comment) - Add a comment line that will be written to the ARFF meta-data section of the file.
protected DataFormat computeFormat(CompositionContext ctx) - Determines the data format for the target.
List<String> getComments() - Gets the comment lines currently set to be written.
char getFieldDelimiter() - Get the configured field delimiter property value.
ARFFMode getFormatMode() - Get the value of the format mode property.
String getRecordSeparator() - Get the record separator.
String getRelationName() - Get the value of the relation name property.
TextRecord getSchema() - Get the configured schema for this writer instance.
void setComments(List<String> comments) - Set the comment lines to write to the ARFF meta-data section of the file.
void setFieldDelimiter(char delimiter) - Set the field delimiter to use when writing the file contents.
void setFormatMode(ARFFMode mode) - Set the format mode with which to write the file.
void setRecordSeparator(String separator) - Set the record separator.
void setRelationName(String relationName) - Set the relation name attribute.
void setSchema(TextRecord schema) - Set the schema to use for the provided data.
-
Methods inherited from class com.pervasive.datarush.operators.io.textfile.AbstractTextWriter
getCharset, getCharsetName, getEncodeBuffer, getEncoding, getErrorAction, getReplacement, setCharset, setCharsetName, setEncodeBuffer, setEncoding, setErrorAction, setReplacement
-
Methods inherited from class com.pervasive.datarush.operators.io.AbstractWriter
compose, getFormatOptions, getInput, getMode, getSaveMetadata, getTarget, getWriteBuffer, getWriteOnClient, getWriteSingleSink, isIgnoreSortOrder, setFormatOptions, setIgnoreSortOrder, setMode, setSaveMetadata, setTarget, setTarget, setTarget, setWriteBuffer, setWriteOnClient, setWriteSingleSink
-
Methods inherited from class com.pervasive.datarush.operators.AbstractLogicalOperator
disableParallelism, getInputPorts, getOutputPorts, newInput, newInput, newOutput, newRecordInput, newRecordInput, newRecordOutput, notifyError
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Methods inherited from interface com.pervasive.datarush.operators.LogicalOperator
disableParallelism, getInputPorts, getOutputPorts
-
-
Constructor Detail
-
WriteARFF
public WriteARFF()
Writes ARFF data to an empty target with default settings. The target must be set before execution or an error will be raised.
See Also:
AbstractWriter.setTarget(ByteSink)
-
WriteARFF
public WriteARFF(String path, WriteMode mode)
Writes ARFF data to the specified path in the given mode, using default settings. If the writer is parallelized, this is interpreted as a directory in which each partition will write a fragment of the entire input stream. Otherwise, it is interpreted as the file to write.
Parameters:
path - the path to which to write
mode - how to handle existing files
-
WriteARFF
public WriteARFF(Path path, WriteMode mode)
Writes ARFF data to the specified path in the given mode, using default settings. If the writer is parallelized, this is interpreted as a directory in which each partition will write a fragment of the entire input stream. Otherwise, it is interpreted as the file to write.
Parameters:
path - the path to which to write
mode - how to handle existing files
-
WriteARFF
public WriteARFF(ByteSink target, WriteMode mode)
Writes ARFF data to the specified target sink in the given mode. The writer can only be parallelized if the sink is fragmentable. In this case, each partition will be written as an independent sink. Otherwise, the writer will run non-parallel.
Parameters:
target - the sink to which to write
mode - how to handle an existing sink
-
Method Detail
-
getRelationName
public String getRelationName()
Get the value of the relation name property.
Returns:
the relation name
-
setRelationName
public void setRelationName(String relationName)
Set the relation name attribute. In ARFF, the relation name is captured in the meta-data with the @relation tag.
Parameters:
relationName - name of the relation
-
addComment
public void addComment(String comment)
Add a comment line that will be written to the ARFF meta-data section of the file. Multiple comments may be added; they will be written in the order provided.
Parameters:
comment - a line of commentary to add to the output file
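For example (a sketch continuing the configured writer shown above; the comment text is illustrative), comments are emitted in the order they are added:

    writer.addComment("Input: training extract, fold 3"); // illustrative comment text
    writer.addComment("Generated nightly");
    // getComments() now returns the two lines in insertion order.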
-
setComments
public void setComments(List<String> comments)
Set the comment lines to write to the ARFF meta-data section of the file.
Parameters:
comments - lines of commentary to add to the output file
-
getComments
public List<String> getComments()
Gets the comment lines currently set to be written.
Returns:
the lines of commentary to add to the output file
-
setRecordSeparator
public void setRecordSeparator(String separator)
Set the record separator. This string will be output at the end of each record written. The default value is the system-dependent record separator.
Parameters:
separator - text separating records
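A one-line sketch: fixing the separator to a line feed so the output is identical regardless of the platform on which the graph runs (otherwise the system default is used):

    writer.setRecordSeparator("\n"); // always terminate records with LF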
-
getRecordSeparator
public String getRecordSeparator()
Get the record separator.
Returns:
the record separator
-
getFormatMode
public ARFFMode getFormatMode()
Get the value of the format mode property.
Returns:
the format mode
-
setFormatMode
public void setFormatMode(ARFFMode mode)
Set the format mode with which to write the file. The supported modes are SPARSE and DENSE.
Parameters:
mode - the format mode; either sparse or dense
-
getSchema
public TextRecord getSchema()
Get the configured schema for this writer instance.
Returns:
the configured schema (may be null)
-
setFieldDelimiter
public void setFieldDelimiter(char delimiter)
Set the field delimiter to use when writing the file contents. A single quote is used by default. The only supported values are a single quote and a double quote.
Parameters:
delimiter - the delimiter to use for field values containing spaces
-
getFieldDelimiter
public char getFieldDelimiter()
Get the configured field delimiter property value.
Returns:
the configured field delimiter
-
setSchema
public void setSchema(TextRecord schema)
Set the schema to use for the provided data. A schema is not required; if one is not provided, a default schema will be constructed using default formatting options for all fields. A schema is useful for explicitly specifying the formatting options of fields. This is especially useful for date and timestamp types, but is applicable to all types.
Parameters:
schema - definition of field ordering, names, types, and formats
-
computeFormat
protected DataFormat computeFormat(CompositionContext ctx)
Description copied from class: AbstractWriter
Determines the data format for the target. The returned format is used during composition to construct a WriteSink operator. If an implementation supports schema discovery, it must be performed in this method.
Specified by:
computeFormat in class AbstractWriter
Parameters:
ctx - the composition context for the current invocation of AbstractWriter.compose(CompositionContext)
Returns:
the target format to use