
Teradata® Package for Python Function Reference - 20.00

Deployment: VantageCloud, VantageCore
Edition: Enterprise, IntelliFlex, VMware
Product: Teradata Package for Python
Release Number: 20.00
Published: March 2024
Language: English (United States)
Last Update: 2024-04-10
dita:id: TeradataPython_FxRef_Enterprise_2000
Product Category: Teradata Vantage

H2OPredict() using an XGBoost model.

Setup

In [1]:
import tempfile
import getpass
import teradataml as td
from teradataml import create_context, remove_context, load_example_data, DataFrame,\
db_drop_table, save_byom, retrieve_byom, delete_byom, list_byom
from teradataml.options.configure import configure
from teradataml.analytics.byom.H2OPredict import H2OPredict
import h2o
In [2]:
# Create the connection.
host = getpass.getpass("Host: ")
username = getpass.getpass("Username: ")
password = getpass.getpass("Password: ")

con = create_context(host=host, username=username, password=password)

Load the example data and use sample() to split the input data into training and testing datasets.

In [3]:
load_example_data("byom", "iris_input")
iris_input = DataFrame("iris_input")

# Create 2 samples of input data - sample 1 will have 80% of total rows and sample 2 will have 20% of total rows. 
iris_sample = iris_input.sample(frac=[0.8, 0.2])
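Note that sample(frac=[0.8, 0.2]) does not materialize two tables; it tags every row with a sampleid of "1" or "2", which the next two cells filter on. The bookkeeping behind an 80/20 tagged split can be sketched locally with the standard library (the 150 row ids below are illustrative stand-ins for the iris rows, not the actual sample assignment):

```python
import random

# Illustrative stand-in for the 150 iris row ids.
ids = list(range(1, 151))

random.seed(42)  # fixed seed so the sketch is reproducible
random.shuffle(ids)

cut = int(len(ids) * 0.8)  # 80% of rows go to sample "1"
sample_map = {i: ("1" if i in set(ids[:cut]) else "2") for i in ids}

# Filtering on the tag recovers the two samples, as the next cells do in Vantage.
train_ids = [i for i, s in sample_map.items() if s == "1"]
test_ids = [i for i, s in sample_map.items() if s == "2"]
print(len(train_ids), len(test_ids))  # 120 30
```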
In [4]:
# Create the train dataset from sample 1 by filtering on "sampleid", and drop the
# "sampleid" column as it is not required for training the model.
iris_train = iris_sample[iris_sample.sampleid == "1"].drop("sampleid", axis=1)
iris_train
Out[4]:
id sepal_length sepal_width petal_length petal_width species
118 7.7 3.8 6.7 2.2 3
99 5.1 2.5 3.0 1.1 2
97 5.7 2.9 4.2 1.3 2
38 4.9 3.6 1.4 0.1 1
76 6.6 3.0 4.4 1.4 2
101 6.3 3.3 6.0 2.5 3
141 6.7 3.1 5.6 2.4 3
17 5.4 3.9 1.3 0.4 1
78 6.7 3.0 5.0 1.7 2
59 6.6 2.9 4.6 1.3 2
In [5]:
# Create the test dataset from sample 2 by filtering on "sampleid", and drop the
# "sampleid" column as it is not required for scoring.
iris_test = iris_sample[iris_sample.sampleid == "2"].drop("sampleid", axis=1)
iris_test
Out[5]:
id sepal_length sepal_width petal_length petal_width species
127 6.2 2.8 4.8 1.8 3
36 5.0 3.2 1.2 0.2 1
114 5.7 2.5 5.0 2.0 3
38 4.9 3.6 1.4 0.1 1
108 7.3 2.9 6.3 1.8 3
32 5.4 3.4 1.5 0.4 1
11 5.4 3.7 1.5 0.2 1
66 6.7 3.1 4.4 1.4 2
133 6.4 2.8 5.6 2.2 3
59 6.6 2.9 4.6 1.3 2

Prepare the dataset for creating an XGBoost model.

In [6]:
h2o.init()

# H2OFrame accepts a pandas DataFrame, so convert the teradataml DataFrame to pandas first.
iris_train_pd = iris_train.to_pandas()
h2o_df = h2o.H2OFrame(iris_train_pd)
h2o_df
Checking whether there is an H2O instance running at http://localhost:54321 . connected.
H2O_cluster_uptime: 10 mins 52 secs
H2O_cluster_timezone: America/Los_Angeles
H2O_data_parsing_timezone: UTC
H2O_cluster_version: 3.32.1.6
H2O_cluster_version_age: 1 month and 21 days
H2O_cluster_name: H2O_from_python_gp186005_ip5q0u
H2O_cluster_total_nodes: 1
H2O_cluster_free_memory: 3.998 Gb
H2O_cluster_total_cores: 12
H2O_cluster_allowed_cores: 12
H2O_cluster_status: locked, healthy
H2O_connection_url: http://localhost:54321
H2O_connection_proxy: {"http": null, "https": null}
H2O_internal_security: False
H2O_API_Extensions: Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4
Python_version: 3.7.3 final
Parse progress: |█████████████████████████████████████████████████████████| 100%
sepal_length sepal_width petal_length petal_width species
5 2 3.5 1 2
6.3 3.3 6 2.5 3
5.1 3.4 1.5 0.2 1
5.7 3.8 1.7 0.3 1
4.9 3.6 1.4 0.1 1
6.7 3.1 5.6 2.4 3
5.7 2.6 3.5 1 2
5.1 2.5 3 1.1 2
6.7 3 5 1.7 2
5.4 3.9 1.3 0.4 1
Out[6]:

Train XGBoost Model.

In [7]:
# Import required libraries.
from h2o.estimators import H2OXGBoostEstimator
In [8]:
# Mark the response column as categorical, then define the predictors and response.
h2o_df["species"] = h2o_df["species"].asfactor()
predictors = h2o_df.columns  # train() drops the response from this list automatically
response = "species"
In [9]:
iris_xgb = H2OXGBoostEstimator(booster='dart',
                               normalize_type="tree",
                               seed=1234)
In [10]:
iris_xgb.train(x=predictors, y=response, training_frame=h2o_df)
xgboost Model Build progress: |███████████████████████████████████████████| 100%

Save the model in MOJO format.

In [11]:
# Save the H2O model to a temporary directory in MOJO format.
temp_dir = tempfile.TemporaryDirectory()
model_file_path = iris_xgb.save_mojo(path=temp_dir.name, force=True)
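A MOJO file is itself a ZIP archive of model metadata, so the saved artifact can be sanity-checked with the standard library. The sketch below uses a throwaway archive standing in for model_file_path (the "model.ini" entry name mirrors what real MOJOs contain, but the archive here is illustrative):

```python
import os
import tempfile
import zipfile

# Stand-in for model_file_path: any ZIP opens the way a MOJO does,
# because a MOJO file is itself a ZIP archive.
tmp = tempfile.TemporaryDirectory()
mojo_path = os.path.join(tmp.name, "model.zip")

with zipfile.ZipFile(mojo_path, "w") as zf:
    zf.writestr("model.ini", "[info]\n")  # real MOJOs carry a model.ini entry

is_zip = zipfile.is_zipfile(mojo_path)  # magic-number check on the file header
with zipfile.ZipFile(mojo_path) as zf:
    names = zf.namelist()

print(is_zip, names)  # True ['model.ini']
```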

Save the model in Vantage.

In [12]:
# Save the H2O Model in Vantage.
save_byom("h2o_xgb_iris", model_file_path, "byom_models")
Created the model table 'byom_models' as it does not exist.
Model is saved.

List the models from Vantage.

In [13]:
# List the models from "byom_models".
list_byom("byom_models")
                                 model
model_id                              
h2o_xgb_iris  b'504B03041400080808...'
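The model column shown by list_byom() is a BLOB, and its leading bytes b'504B0304...' are no accident: 50 4B 03 04 is the hex form of the ZIP signature "PK\x03\x04", consistent with the MOJO being stored as a ZIP archive. A quick check:

```python
# First four bytes of the stored model, copied from the list_byom() output above.
prefix = bytes.fromhex("504B0304")

# ZIP archives (and therefore H2O MOJO files) begin with the "PK\x03\x04" signature.
print(prefix)                    # b'PK\x03\x04'
print(prefix == b"PK\x03\x04")   # True
```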

Retrieve the model from Vantage.

In [14]:
# Retrieve the model from Vantage using the model id 'h2o_xgb_iris'.
modeldata = retrieve_byom("h2o_xgb_iris", table_name="byom_models")

Set "configure.byom_install_location" to the database where BYOM functions are installed.

In [15]:
configure.byom_install_location = getpass.getpass("byom_install_location: ")

Score the model.

In [16]:
result = H2OPredict(newdata=iris_test,
                    newdata_partition_column='id',
                    newdata_order_column='id',
                    modeldata=modeldata,
                    modeldata_order_column='model_id',
                    model_output_fields=['label', 'classProbabilities'],
                    accumulate=['id', 'sepal_length', 'petal_length'],
                    overwrite_cached_models='*',
                    enable_options='stageProbabilities',
                    model_type='OpenSource'
                   )
In [17]:
# Print the query.
print(result.show_query())
SELECT * FROM "mldb".H2OPredict(
	ON "MLDB"."ml__select__16345201251589" AS InputTable
	PARTITION BY "id"
	ORDER BY "id" 
	ON (select model_id,model from "MLDB"."ml__filter__16345179358491") AS ModelTable
	DIMENSION
	ORDER BY "model_id"
	USING
	Accumulate('id','sepal_length','petal_length')
	ModelOutputFields('label','classProbabilities')
	OverwriteCachedModel('*')
	EnableOptions('stageProbabilities')
) as sqlmr
In [18]:
# Print the result.
result.result
Out[18]:
id sepal_length petal_length prediction label classprobabilities
95 5.6 4.2 2 2 {"1": 0.002973866416141391,"2": 0.9956537485122681,"3": 0.0013723127776756883}
53 6.9 4.9 2 2 {"1": 0.0022306146565824747,"2": 0.9898779392242432,"3": 0.007891373708844185}
69 6.2 4.5 2 2 {"1": 0.0034840735606849194,"2": 0.9866116046905518,"3": 0.009904342703521252}
25 4.8 1.9 1 1 {"1": 0.9940190315246582,"2": 0.004053735174238682,"3": 0.0019271911587566137}
76 6.6 4.4 2 2 {"1": 0.0015327803557738662,"2": 0.9975995421409607,"3": 8.676660363562405E-4}
11 5.4 1.5 1 1 {"1": 0.9942521452903748,"2": 0.004054686054587364,"3": 0.001693196245469153}
28 5.2 1.5 1 1 {"1": 0.9942521452903748,"2": 0.004054686054587364,"3": 0.001693196245469153}
48 4.6 1.4 1 1 {"1": 0.9940190315246582,"2": 0.004053735174238682,"3": 0.0019271911587566137}
61 5.0 3.5 2 2 {"1": 0.017349962145090103,"2": 0.9489255547523499,"3": 0.03372446820139885}
21 5.4 1.7 1 1 {"1": 0.9942521452903748,"2": 0.004054686054587364,"3": 0.001693196245469153}
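The classprobabilities field arrives as a JSON string, including Java-style exponents such as 8.676660363562405E-4, which json.loads parses without trouble. If the probabilities are needed as numbers client-side (for example after result.result.to_pandas()), one value from the output above can be unpacked like this; the class keys "1", "2", "3" are taken from that output:

```python
import json

# One classprobabilities value copied from the scored output above.
raw = '{"1": 0.0015327803557738662,"2": 0.9975995421409607,"3": 8.676660363562405E-4}'

probs = json.loads(raw)            # JSON permits E-notation, so this parses cleanly
label = max(probs, key=probs.get)  # class with the highest probability

print(label)               # 2
print(sum(probs.values())) # ~1.0
```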

Cleanup.

In [19]:
# Delete the model from table "byom_models", using the model id 'h2o_xgb_iris'.
delete_byom("h2o_xgb_iris", "byom_models")
Model is deleted.
In [20]:
# Drop models table.
db_drop_table("byom_models")
Out[20]:
True
In [21]:
# Drop input data table.
db_drop_table("iris_input")
Out[21]:
True
In [22]:
# Run remove_context() to close the connection and garbage collect internally generated objects.
remove_context()
Out[22]:
True