PMMLPredict
Description
The function transforms the input data during model training as part
of a pipeline. The generated model, stored in XML format, includes
the preprocessing steps. During model prediction, the same transformations are
applied to the input data, and the transformed data is scored by the
td_pmml_predict_sqle() function.
PMML supports the following input data transformations:
Normalization: Scales continuous or discrete input values to a specified range. Python Function: MinMaxScaler
Discretization: Maps continuous input values to discrete values. Python Function: CutTransformer
Value Mapping: Maps discrete input values to other discrete values. Python Functions: StandardScaler, LabelEncoder
Function Mapping: Maps input values to values derived from applying a function. Python Function: FunctionTransformer
The td_pmml_predict_sqle() function supports the following external models (a sketch of generating one such model outside Vantage follows this list):
Anomaly Detection
Association Rules
Cluster
General Regression
k-Nearest Neighbors
Naive Bayes
Neural Network
Regression
Ruleset
Scorecard
Random Forest
Decision Tree
Support Vector Machine
Multiple Models
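The model itself is created and exported outside Vantage. As a minimal sketch (the open source pmml and XML packages, the lm model, and the file name are illustrative assumptions, not part of td_pmml_predict_sqle), a PMML file suitable for loading into a model table might be produced in R as follows:
# A minimal sketch: export an R model to PMML outside Vantage.
# Assumes the open source 'pmml' and 'XML' packages are installed;
# the model, predictors, and file name are illustrative.
library(pmml)
library(XML)
# Fit a simple linear model on the built-in iris data.
fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris)
# Convert the fitted model to a PMML (XML) document.
pmml_model <- pmml(fit)
# Write the PMML document to a file that can later be loaded into a
# Vantage table (for example, the byom_models table used in the Examples).
saveXML(pmml_model, file = "iris_lm.pmml")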
Usage
td_pmml_predict_sqle (
newdata = NULL,
modeldata = NULL,
accumulate = NULL,
model.output.fields = NULL,
overwrite.cached.models = NULL,
is.debug = FALSE,
...
)
Arguments
newdata |
Required Argument.
Specifies the input tbl_teradata containing the data to be scored. |
modeldata |
Required Argument.
Specifies the tbl_teradata containing the PMML model to be used for scoring. |
accumulate |
Required Argument.
Specifies the name(s) of input tbl_teradata column(s) to copy to the output. |
model.output.fields |
Optional Argument. |
overwrite.cached.models |
Optional Argument.
Specifies the model name(s) to remove from the model cache; '*' removes all cached models. |
is.debug |
Optional Argument.
Specifies whether debug statements are written to a trace table (see Example 3).
Default Value: FALSE |
... |
Specifies the generic keyword arguments that SQLE functions accept, such as "volatile". The function also allows the user to partition, hash, order, or local order the input data; these generic arguments are available for each argument that accepts tbl_teradata as input (a sketch of passing a generic keyword argument follows this table). |
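As a minimal sketch (the argument values are illustrative and reuse the tbl_teradata objects created in the Examples section), a generic keyword argument such as "volatile" is passed alongside the documented arguments:
# A minimal sketch, assuming 'modeldata' and 'iris_test' exist as in the
# Examples below; "volatile" is one of the generic keyword arguments.
result <- td_pmml_predict_sqle(
  modeldata = modeldata,
  newdata = iris_test,
  accumulate = c('id'),
  volatile = TRUE
)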
Value
Function returns an object of class "td_pmml_predict_sqle"
which is a named list containing object of class "tbl_teradata".
Named list member(s) can be referenced directly with the "$" operator
using the name(s): result
Examples
# Get the current context/connection.
con <- td_get_context()$connection
# Load example data.
loadExampleData("pmmlpredict_example", "iris_test")
# Create tbl_teradata object.
iris_test <- tbl(con, "iris_test")
# Set install location of BYOM functions.
options(byom.install.location = "mldb")
# Check the list of available analytic functions.
display_analytic_functions(type="BYOM")
# Example 1: This example scores data on Vantage using a GLM model generated
# outside of Vantage. The example performs prediction with the td_pmml_predict_sqle()
# function using this GLM model in PMML format, generated with an open source
# package. The value '*' is specified for "overwrite.cached.models", which
# erases the entire model cache.
# Create the following table on Vantage if it does not exist.
crt_tbl <- "CREATE SET TABLE byom_models(model_id VARCHAR(40), model BLOB)
PRIMARY INDEX (model_id);"
DBI::dbExecute(con, sql(crt_tbl))
# Run the following query through BTEQ or Teradata Studio to load the
# models. 'load_byom_model.txt' and the byom model files can be found under
# 'inst/scripts' in the tdplyr installation directory. This file and the byom
# models to be loaded should be in the same directory.
# .import vartext file load_byom_model.txt
# .repeat *
# USING (c1 VARCHAR(40), c2 BLOB AS DEFERRED BY NAME) INSERT INTO byom_models(:c1, :c2);
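# Alternative to BTEQ (a minimal sketch, not part of the shipped example):
# load a PMML file into byom_models from R using DBI parameter binding.
# The file name 'iris_lm.pmml' and the model_id are illustrative, and this
# assumes the DBI driver in use supports binding raw (BLOB) values.
model_bytes <- readBin("iris_lm.pmml", what = "raw",
                       n = file.info("iris_lm.pmml")$size)
DBI::dbExecute(con,
               "INSERT INTO byom_models (model_id, model) VALUES (?, ?)",
               params = list("iris_lm", list(model_bytes)))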
# Retrieve model.
modeldata <- tbl(con, "byom_models")
result <- td_pmml_predict_sqle(
modeldata = modeldata,
newdata = iris_test,
accumulate = c('id', 'sepal_length', 'petal_length'),
overwrite.cached.models = '*'
)
# Print the results.
print(result$result)
# Example 2: This example scores data on Vantage using an XGBoost model generated
# outside of Vantage. The example performs prediction with the td_pmml_predict_sqle()
# function using this XGBoost model in PMML format, generated with an open source
# package. The value '*' is specified for "overwrite.cached.models", which
# erases the entire model cache.
# Retrieve model.
modeldata <- tbl(con, "byom_models")
result <- td_pmml_predict_sqle(
modeldata = modeldata,
newdata = iris_test,
accumulate = c('id', 'sepal_length', 'petal_length'),
overwrite.cached.models = '*'
)
# Print the results.
print(result$result)
# Example 3: This example showcases trace table usage with
# is.debug = TRUE.
# Create the trace table.
crt_tbl_query <- 'CREATE GLOBAL TEMPORARY TRACE TABLE BYOM_Trace
  (vproc_ID BYTE(2)
  ,Sequence INTEGER
  ,Trace_Output VARCHAR(31000) CHARACTER SET LATIN NOT CASESPECIFIC)
  ON COMMIT PRESERVE ROWS;'
DBI::dbExecute(con, crt_tbl_query)
# Turn on tracing for the session.
DBI::dbExecute(con, "SET SESSION FUNCTION TRACE USING '' FOR TABLE BYOM_Trace;")
modeldata <- tbl(con, "byom_models")
# Execute the td_pmml_predict_sqle() function using is.debug=TRUE.
result <- td_pmml_predict_sqle(
modeldata = modeldata,
newdata = iris_test,
accumulate = c('id', 'sepal_length', 'petal_length'),
overwrite.cached.models = '*',
is.debug = TRUE
)
# Print the results.
print(result$result)
# View the trace table information.
trace_df <- DBI::dbGetQuery(con, "select * from BYOM_Trace")
print(trace_df)
# Turn off tracing for the session.
dbExecute(con, "SET SESSION FUNCTION TRACE OFF;")