Methods defined here:
- __init__(self, object=None, newdata=None, input_token_column=None, doc_id_columns=None, model_type='MULTINOMIAL', top_k=None, model_token_column=None, model_category_column=None, model_prob_column=None, terms=None, output_responses=None, output_prob=False, newdata_sequence_column=None, object_sequence_column=None, newdata_partition_column=None, newdata_order_column=None, object_order_column=None, stopwords=None, is_tokenized=True, convert_to_lower_case=False, stem_tokens=True, stopwords_sequence_column=None, stopwords_order_column=None)
- DESCRIPTION:
The NaiveBayesTextClassifierPredict function uses the model
teradataml DataFrame generated by the NaiveBayesTextClassifier or
NaiveBayesTextClassifier2 function to predict outcomes for test data.
Test data can be in the form of either documents or tokens.
Note:
1. This function is available only when teradataml is connected to
Vantage 1.1 or later versions.
2. Teradata recommends using the NaiveBayesTextClassifier function when
teradataml is connected to Vantage 1.1.1 or earlier versions.
3. Teradata recommends using the NaiveBayesTextClassifier2 function when
teradataml is connected to Vantage 1.3 or later versions.
PARAMETERS:
object:
Required Argument.
Specifies the teradataml DataFrame containing the model data, or an
instance of NaiveBayesTextClassifier or NaiveBayesTextClassifier2
that contains the model.
object_order_column:
Optional Argument.
Specifies Order By columns for "object".
Values to this argument can be provided as a list, if multiple
columns are used for ordering.
Types: str OR list of Strings (str)
newdata:
Required Argument.
Specifies the teradataml DataFrame containing the input test
data.
newdata_partition_column:
Required Argument.
Specifies Partition By columns for "newdata".
Values to this argument can be provided as a list, if multiple
columns are used for partitioning.
Types: str OR list of Strings (str)
newdata_order_column:
Optional Argument.
Specifies Order By columns for "newdata".
Values to this argument can be provided as a list, if multiple
columns are used for ordering.
Types: str OR list of Strings (str)
input_token_column:
Required Argument.
Specifies the name of the column in the input argument "newdata"
that contains the texts or tokens.
Types: str
doc_id_columns:
Optional Argument. Required if teradataml is connected to
Vantage 1.1.1 or earlier version.
Specifies the names of the columns in the input argument
"newdata" that contain the document identifier.
Types: str OR list of Strings (str)
model_type:
Optional Argument.
Specifies the model type of the text classifier.
Default Value: "MULTINOMIAL"
Permitted Values: MULTINOMIAL, BERNOULLI
Types: str
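The difference between the two permitted values can be illustrated with a small, self-contained sketch in plain Python. The token probabilities below are made up for illustration; this is not how the Vantage function is implemented:

```python
import math

# Toy model: P(token | category) for two hypothetical categories.
# These numbers are illustrative only, not real model output.
probs = {
    "crash":    {"engine": 0.6, "fire": 0.3, "seat": 0.1},
    "no_crash": {"engine": 0.2, "fire": 0.1, "seat": 0.7},
}

def multinomial_score(tokens, category):
    # MULTINOMIAL: every token occurrence contributes its log probability,
    # so repeated tokens count multiple times.
    return sum(math.log(probs[category][t]) for t in tokens)

def bernoulli_score(tokens, category):
    # BERNOULLI: each vocabulary term contributes exactly once -- presence
    # adds log P(term | category), absence adds log(1 - P(term | category)).
    present = set(tokens)
    score = 0.0
    for term, p in probs[category].items():
        score += math.log(p) if term in present else math.log(1.0 - p)
    return score

doc = ["engine", "engine", "fire"]
best_multinomial = max(probs, key=lambda c: multinomial_score(doc, c))
best_bernoulli = max(probs, key=lambda c: bernoulli_score(doc, c))
```

Both scores rank categories by log-likelihood; they differ in whether token frequency (MULTINOMIAL) or only token presence (BERNOULLI) matters.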
top_k:
Optional Argument.
Specifies the number of most likely prediction categories to output
with their log-likelihood values (for example, the top 10 most
likely prediction categories). The default is all prediction
categories.
Note:
"top_k" cannot be specified along with "output_responses".
Types: int
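Conceptually, "top_k" amounts to ranking the prediction categories by their log-likelihood and keeping the first k. A minimal sketch with hypothetical log-likelihood values (not the function's actual implementation):

```python
# Illustrative per-category log-likelihoods for one document
# (hypothetical values, not real function output).
loglik = {"crash": -2.1, "no_crash": -4.8, "recall": -3.0}

def top_k_categories(loglik, k=None):
    # Sort categories by descending log-likelihood; k=None returns all
    # categories, mirroring the default behaviour described for "top_k".
    ranked = sorted(loglik.items(), key=lambda kv: kv[1], reverse=True)
    return ranked if k is None else ranked[:k]
```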
model_token_column:
Optional Argument.
Specifies the name of the column in the argument "object" that
contains the tokens. The default value is the first column of
the model.
Note:
This argument must be specified along with "model_category_column"
and "model_prob_column".
Types: str
model_category_column:
Optional Argument.
Specifies the name of the column in the argument "object"
that contains the prediction categories. The default value is
the second column of the model.
Note:
This argument must be specified along with "model_token_column"
and "model_prob_column".
Types: str
model_prob_column:
Optional Argument.
Specifies the name of the column in the argument "object" that
contains the token counts. The default value is the third
column of the model.
Note:
This argument must be specified along with "model_token_column"
and "model_category_column".
Types: str
output_prob:
Optional Argument.
Specifies whether to output probabilities.
Default Value: False
Types: bool
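Turning per-category log-likelihoods into probabilities, as reported when "output_prob" is True, can be sketched as normalizing the exponentiated log-likelihoods. This is an assumption about the underlying math, not the function's code:

```python
import math

def to_probabilities(loglik):
    # Normalize per-category log-likelihoods into probabilities.
    # Subtracting the maximum first avoids underflow when exponentiating.
    m = max(loglik.values())
    exp = {c: math.exp(v - m) for c, v in loglik.items()}
    total = sum(exp.values())
    return {c: e / total for c, e in exp.items()}
```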
terms:
Optional Argument.
Specifies the names of the input teradataml DataFrame columns to copy
to the output teradataml DataFrame.
Types: str OR list of Strings (str)
output_responses:
Optional Argument.
Specifies a list of responses (prediction categories) to output.
Note:
1. "output_responses" argument support is only available when teradataml
is connected to Vantage 1.1.1 or later versions.
2. "output_responses" cannot be specified along with "top_k".
Types: str OR list of Strings (str)
newdata_sequence_column:
Optional Argument.
Specifies the list of column(s) that uniquely identifies each row of
the input argument "newdata". The argument is used to ensure
deterministic results for functions which produce results that vary
from run to run.
Types: str OR list of Strings (str)
object_sequence_column:
Optional Argument.
Specifies the list of column(s) that uniquely identifies each row of
the input argument "object". The argument is used to ensure
deterministic results for functions which produce results that vary
from run to run.
Types: str OR list of Strings (str)
stopwords:
Optional Argument when "is_tokenized" is 'False', disallowed otherwise.
Specifies the teradataml DataFrame defining the stop words.
Note:
"stopwords" argument support is only available when teradataml
is connected to Vantage 1.3 or later versions.
stopwords_order_column:
Optional Argument.
Specifies Order By columns for "stopwords".
Values to this argument can be provided as a list, if multiple
columns are used for ordering.
Note:
"stopwords_order_column" argument support is only available when
teradataml is connected to Vantage 1.3 or later versions.
Types: str OR list of Strings (str)
is_tokenized:
Optional Argument.
Specifies whether the input data is already tokenized.
When set to 'True', the input data is treated as tokens; otherwise
the input text is tokenized internally.
Note:
"is_tokenized" argument support is only available when teradataml
is connected to Vantage 1.3 or later versions.
Default Value: True
Types: bool
convert_to_lower_case:
Optional Argument when "is_tokenized" is 'False', disallowed otherwise.
Specifies whether to convert all letters in the input text to lowercase.
Note:
"convert_to_lower_case" argument support is only available when
teradataml is connected to Vantage 1.3 or later versions.
Default Value: False
Types: bool
stem_tokens:
Optional Argument when "is_tokenized" is 'False', disallowed otherwise.
Specifies whether to stem the tokens as part of text tokenization.
Note:
"stem_tokens" argument support is only available when teradataml
is connected to Vantage 1.3 or later versions.
Default Value: True
Types: bool
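When "is_tokenized" is 'False', the function tokenizes the text internally, optionally lowercasing the input, removing stop words, and stemming tokens. A rough plain-Python approximation of the lowercasing and stop-word steps follows; the stop-word list and the token regex are assumptions for illustration, and the actual Vantage tokenization may differ:

```python
import re

# Hypothetical stop-word list; the real list comes from the "stopwords"
# teradataml DataFrame argument.
STOPWORDS = {"the", "a", "an"}

def tokenize(text, convert_to_lower_case=False, stopwords=STOPWORDS):
    # Optionally lowercase, split into alphanumeric tokens,
    # then drop stop words (matched case-insensitively).
    if convert_to_lower_case:
        text = text.lower()
    tokens = re.findall(r"[A-Za-z0-9]+", text)
    return [t for t in tokens if t.lower() not in stopwords]
```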
stopwords_sequence_column:
Optional Argument.
Specifies the list of column(s) that uniquely identifies each row of
the input argument "stopwords". The argument is used to ensure
deterministic results for functions which produce results that vary
from run to run.
Note:
"stopwords_sequence_column" argument support is only available when
teradataml is connected to Vantage 1.3 or later versions.
Types: str OR list of Strings (str)
RETURNS:
Instance of NaiveBayesTextClassifierPredict.
Output teradataml DataFrames can be accessed using attribute
references, such as
NaiveBayesTextClassifierPredictObj.<attribute_name>.
Output teradataml DataFrame attribute name is:
result
RAISES:
TeradataMlException, TypeError, ValueError
EXAMPLES:
# Load the data to run the example.
load_example_data("NaiveBayesTextClassifierPredict",["complaints_tokens_test","token_table",
"complaints","complaints_mini"])
# Create teradataml DataFrame.
token_table = DataFrame("token_table")
complaints_tokens_test = DataFrame("complaints_tokens_test")
complaints = DataFrame("complaints")
complaints_mini = DataFrame("complaints_mini")
# Example 1 -
# Predict the category of each tokenized document in
# complaints_tokens_test, using a model trained on token_table.
# Run NaiveBayesTextClassifier on the train data.
nbt_out = NaiveBayesTextClassifier(data = token_table,
token_column = 'token',
doc_id_columns = 'doc_id',
doc_category_column = 'category',
model_type = "Bernoulli",
data_partition_column = 'category')
# Use the model nbt_out, generated by NaiveBayesTextClassifier,
# to predict categories for the test data complaints_tokens_test.
nbt_predict_out1 = NaiveBayesTextClassifierPredict(object = nbt_out,
newdata = complaints_tokens_test,
input_token_column = 'token',
doc_id_columns = 'doc_id',
model_type = "Bernoulli",
model_token_column = 'token',
model_category_column = 'category',
model_prob_column = 'prob',
newdata_partition_column = 'doc_id')
# Print the result DataFrame.
print(nbt_predict_out1.result)
# Example 2 - "top_k" specified and "is_tokenized" set to 'False'
# Predict the category of each document in complaints_mini,
# using a model trained on the train data (complaints).
# Run NaiveBayesTextClassifier2 on the train data.
# Note:
# This Example will work only when teradataml is connected
# to Vantage 1.3 or later.
nbtct2_out = NaiveBayesTextClassifier2(data=complaints,
doc_category_column='category',
text_column='text_data',
doc_id_column='doc_id',
model_type='BERNOULLI',
is_tokenized=False
)
# Use the Bernoulli model nbtct2_out, generated by
# NaiveBayesTextClassifier2, to predict categories for the test
# data complaints_mini.
nbt_predict_out2 = NaiveBayesTextClassifierPredict(object = nbtct2_out,
newdata = complaints_mini,
input_token_column = 'text_data',
doc_id_columns = 'doc_id',
model_type = "Bernoulli",
newdata_partition_column = 'doc_id',
top_k=2,
output_prob=True,
is_tokenized=False)
# Print the result DataFrame.
print(nbt_predict_out2.result)
# Example 3 - "top_k" omitted and "is_tokenized" set to 'True'
# The input teradataml DataFrame 'complaints_mini' is tokenized using
# the TextTokenizer function.
# Note:
# This Example will work only when teradataml is connected
# to Vantage 1.3 or later.
complaints_test_tokenized = TextTokenizer(data=complaints_mini,
text_column='text_data',
language='en',
output_delimiter=' ',
output_byword=True,
accumulate=['doc_id', 'category'])
# Use the tokenized output of TextTokenizer (complaints_test_tokenized)
# as input, together with the Bernoulli model nbtct2_out generated by
# NaiveBayesTextClassifier2.
nbt_predict_out3 = NaiveBayesTextClassifierPredict(object = nbtct2_out,
newdata = complaints_test_tokenized.result,
input_token_column = 'token',
doc_id_columns = 'doc_id',
output_responses=['crash','no_crash'],
model_type = "Bernoulli",
newdata_partition_column = 'doc_id',
output_prob=True,
is_tokenized=True)
# Print the result DataFrame.
print(nbt_predict_out3.result)
- __repr__(self)
- Returns the string representation for a NaiveBayesTextClassifierPredict class instance.
- get_build_time(self)
- Function to return the build time of the algorithm in seconds.
When the model object is created using retrieve_model(), the value
returned is as saved in the Model Catalog.
- get_prediction_type(self)
- Function to return the Prediction type of the algorithm.
When the model object is created using retrieve_model(), the value
returned is as saved in the Model Catalog.
- get_target_column(self)
- Function to return the Target Column of the algorithm.
When the model object is created using retrieve_model(), the value
returned is as saved in the Model Catalog.
- show_query(self)
- Function to return the underlying SQL query.
When the model object is created using retrieve_model(), None is returned.