Methods defined here:
- __init__(self, data=None, dict_data=None, text_column=None, language='en', model=None, output_delimiter='/', output_byword=False, user_dictionary=None, accumulate=None, data_sequence_column=None, dict_data_sequence_column=None, data_order_column=None, dict_data_order_column=None)
- DESCRIPTION:
The TextTokenizer function extracts English, Chinese, or Japanese
tokens from text. Examples of tokens are words, punctuation marks,
and numbers. Tokenization is the first step of many types of
text analysis.
PARAMETERS:
data:
Required Argument.
teradataml DataFrame that contains the text to be scanned.
data_order_column:
Optional Argument.
Specifies Order By columns for data.
Values to this argument can be provided as a list, if multiple
columns are used for ordering.
Types: str OR list of Strings (str)
dict_data:
Optional Argument.
teradataml DataFrame that contains the dictionary for
segmenting words.
dict_data_order_column:
Optional Argument.
Specifies Order By columns for dict_data.
Values to this argument can be provided as a list, if multiple
columns are used for ordering.
Types: str OR list of Strings (str)
text_column:
Required Argument.
Specifies the name of the column in the argument "data" that
contains the text to tokenize.
Types: str
language:
Optional Argument.
Specifies the language of the text in text_column.
Default Value: "en"
Permitted Values: en, zh_CN, zh_TW, jp
Types: str
model:
Optional Argument.
Specifies the name of the model file that the function uses for
tokenizing. The model must be a conditional random-fields (CRF)
model, and the model file must already be installed on the database.
If you omit this argument, or if the model file is not installed on
the database, then the function uses white space to separate English
words and an embedded dictionary to tokenize Chinese text.
Note: If you specify the argument "language" with value "jp", the
function ignores this argument.
Types: str
output_delimiter:
Optional Argument.
Specifies the delimiter for separating tokens in the output.
Default Value: "/"
Types: str
output_byword:
Optional Argument.
Specifies whether to output one token in each row. When set to
False, the function outputs one line of text in each row.
Default Value: False
Types: bool
user_dictionary:
Optional Argument.
Specifies the name of the user dictionary to use to correct
results produced by the model. If you specify both this argument
and a dictionary teradataml DataFrame ("dict_data"), then the
function uses the union of user_dictionary and dict_data as its
dictionary.
Note: If the function finds more than one matching term,
it selects the longest term for the first match.
Types: str
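The longest-match rule in the note above can be illustrated with a small, self-contained Python sketch. This mimics only the documented "longest term for the first match" behavior; it is not the function's actual in-database implementation, and the helper name longest_match is hypothetical:

```python
# Illustrative sketch only: NOT the TextTokenizer implementation.
# Demonstrates the documented rule that when multiple dictionary terms
# match at the same position, the longest term is selected.
def longest_match(text, dictionary):
    """Return the longest dictionary term matching at the start of text."""
    matches = [term for term in dictionary if text.startswith(term)]
    return max(matches, key=len) if matches else None

# Both "data" and "database" match at position 0; the longer term wins.
print(longest_match("database systems", {"data", "database", "base"}))
# -> database
```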
accumulate:
Optional Argument.
Specifies the name(s) of the column(s) in the argument "data" to
copy to the output table.
Types: str OR list of Strings (str)
data_sequence_column:
Optional Argument.
Specifies the list of column(s) that uniquely identifies each
row of the input argument "data". The argument is used to ensure
deterministic results for functions which produce results that
vary from run to run.
Types: str OR list of Strings (str)
dict_data_sequence_column:
Optional Argument.
Specifies the list of column(s) that uniquely identifies each
row of the input argument "dict_data". The argument is used to
ensure deterministic results for functions which produce results
that vary from run to run.
Types: str OR list of Strings (str)
RETURNS:
Instance of TextTokenizer.
Output teradataml DataFrames can be accessed using attribute
references, such as TextTokenizerObj.<attribute_name>.
Output teradataml DataFrame attribute name is:
result
RAISES:
TeradataMlException
EXAMPLES:
# Load the data to run the example.
load_example_data("TextTokenizer","complaints")
# Create teradataml DataFrame
complaints = DataFrame.from_table("complaints")
# Example 1 -
text_tokenizer_out = TextTokenizer(data=complaints,
text_column='text_data',
language='en',
output_delimiter=' ',
output_byword=True,
accumulate='doc_id')
# Print the result DataFrame
print(text_tokenizer_out.result)
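As a rough illustration of how the "output_delimiter" and "output_byword" arguments shape the result, the plain-Python sketch below approximates whitespace tokenization of English text. It is an assumption-laden teaching aid, not the in-database implementation:

```python
# Illustrative sketch only: approximates the documented output shapes.
# The real TextTokenizer runs in-database and may tokenize differently.
def tokenize(lines, output_delimiter='/', output_byword=False):
    if output_byword:
        # One token per output row.
        return [token for line in lines for token in line.split()]
    # One line of text per output row, tokens joined by the delimiter.
    return [output_delimiter.join(line.split()) for line in lines]

lines = ["the quick fox", "jumps over"]
print(tokenize(lines))                      # ['the/quick/fox', 'jumps/over']
print(tokenize(lines, output_byword=True))  # ['the', 'quick', 'fox', 'jumps', 'over']
```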
- __repr__(self)
- Returns the string representation for a TextTokenizer class instance.
- get_build_time(self)
- Function to return the build time of the algorithm in seconds.
When the model object is created using retrieve_model(), the value
returned is the one saved in the Model Catalog.
- get_prediction_type(self)
- Function to return the Prediction type of the algorithm.
When the model object is created using retrieve_model(), the value
returned is the one saved in the Model Catalog.
- get_target_column(self)
- Function to return the Target Column of the algorithm.
When the model object is created using retrieve_model(), the value
returned is the one saved in the Model Catalog.
- show_query(self)
- Function to return the underlying SQL query.
When the model object is created using retrieve_model(), None is returned.
|