
Teradata® Package for Python Function Reference - 20.00

Deployment
VantageCloud
VantageCore
Edition
Enterprise
IntelliFlex
VMware
Product
Teradata Package for Python
Release Number
20.00.00.03
Published
December 2024
Product Category
Teradata Vantage

H2OPredict() using a GLM model.

Setup

In [2]:
import tempfile
import getpass
from teradataml import create_context, DataFrame, save_byom, retrieve_byom, \
delete_byom, list_byom, remove_context, load_example_data, db_drop_table
from teradataml.options.configure import configure
from teradataml.analytics.byom.H2OPredict import H2OPredict
import h2o
In [3]:
# Create the connection.
host = getpass.getpass("Host: ")
username = getpass.getpass("Username: ")
password = getpass.getpass("Password: ")

con = create_context(host=host, username=username, password=password)

Load the example data and use sample() to split the input data into training and testing datasets.

In [4]:
load_example_data("byom", "iris_input")
iris_input = DataFrame("iris_input")

# Create two samples of the input data: sample 1 has 80% of the total rows and sample 2 has 20%.
iris_sample = iris_input.sample(frac=[0.8, 0.2])
In [5]:
# Create the training dataset from sample 1 by filtering on "sampleid", then drop the "sampleid" column as it is not required for training the model.
iris_train = iris_sample[iris_sample.sampleid == "1"].drop("sampleid", axis=1)
iris_train
Out[5]:
id sepal_length sepal_width petal_length petal_width species
120 6.0 2.2 5.0 1.5 3
78 6.7 3.0 5.0 1.7 2
76 6.6 3.0 4.4 1.4 2
101 6.3 3.3 6.0 2.5 3
139 6.0 3.0 4.8 1.8 3
19 5.7 3.8 1.7 0.3 1
59 6.6 2.9 4.6 1.3 2
99 5.1 2.5 3.0 1.1 2
17 5.4 3.9 1.3 0.4 1
38 4.9 3.6 1.4 0.1 1
In [6]:
# Create the test dataset from sample 2 by filtering on "sampleid", then drop the "sampleid" column as it is not required for scoring.
iris_test = iris_sample[iris_sample.sampleid == "2"].drop("sampleid", axis=1)
iris_test
Out[6]:
id sepal_length sepal_width petal_length petal_width species
55 6.5 2.8 4.6 1.5 2
36 5.0 3.2 1.2 0.2 1
30 4.7 3.2 1.6 0.2 1
61 5.0 2.0 3.5 1.0 2
93 5.8 2.6 4.0 1.2 2
141 6.7 3.1 5.6 2.4 3
9 4.4 2.9 1.4 0.2 1
49 5.3 3.7 1.5 0.2 1
38 4.9 3.6 1.4 0.1 1
99 5.1 2.5 3.0 1.1 2
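
Optionally, verify the 80/20 split by checking the row counts of the two samples. This is a minimal sketch; it assumes the shape property of the teradataml DataFrame is available in your version.

# Optional sanity check on the split sizes (hedged sketch).
print(iris_train.shape)
print(iris_test.shape)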

Prepare the dataset for creating a Generalized Linear Model (GLM).

In [7]:
h2o.init()

# H2OFrame accepts a pandas DataFrame, so convert the teradataml DataFrame to a pandas DataFrame first.
iris_train_pd = iris_train.to_pandas()
h2o_df = h2o.H2OFrame(iris_train_pd)
h2o_df
Checking whether there is an H2O instance running at http://localhost:54321 . connected.
H2O_cluster_uptime: 8 mins 57 secs
H2O_cluster_timezone: America/Los_Angeles
H2O_data_parsing_timezone: UTC
H2O_cluster_version: 3.32.1.6
H2O_cluster_version_age: 1 month and 21 days
H2O_cluster_name: H2O_from_python_gp186005_ip5q0u
H2O_cluster_total_nodes: 1
H2O_cluster_free_memory: 3.998 Gb
H2O_cluster_total_cores: 12
H2O_cluster_allowed_cores: 12
H2O_cluster_status: locked, healthy
H2O_connection_url: http://localhost:54321
H2O_connection_proxy: {"http": null, "https": null}
H2O_internal_security: False
H2O_API_Extensions: Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4
Python_version: 3.7.3 final
Parse progress: |█████████████████████████████████████████████████████████| 100%
sepal_length sepal_width petal_length petal_width species
5 2 3.5 1 2
6.3 3.3 6 2.5 3
5.1 3.4 1.5 0.2 1
5.6 2.8 4.9 2 3
4.9 3.6 1.4 0.1 1
6.7 3.1 5.6 2.4 3
5.7 2.6 3.5 1 2
6.6 2.9 4.6 1.3 2
6.6 3 4.4 1.4 2
5.4 3.9 1.3 0.4 1
Out[7]:

Train the Generalized Linear Model.

In [8]:
# Import required libraries.
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
In [9]:
# Mark the response column as categorical and define the predictor and response columns for training.
h2o_df["species"] = h2o_df["species"].asfactor()
predictors = h2o_df.columns
response = "species"
In [10]:
glm_model = H2OGeneralizedLinearEstimator()
In [11]:
glm_model.train(x=predictors, y=response, training_frame=h2o_df)
glm Model Build progress: |███████████████████████████████████████████████| 100%
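
Optionally, evaluate the trained model on the training frame before exporting it. This is a minimal sketch; model_performance() is a standard H2O estimator method, and the metrics reported depend on your H2O version.

# Optional: inspect training metrics of the fitted GLM (hedged sketch).
perf = glm_model.model_performance(h2o_df)
print(perf)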

Save the model in MOJO format.

In [12]:
# Save the H2O model to a temporary directory in MOJO format.
temp_dir = tempfile.TemporaryDirectory()
model_file_path = glm_model.save_mojo(path=f"{temp_dir.name}", force=True)

Save the model in Vantage.

In [13]:
# Save the H2O Model in Vantage.
save_byom(model_id="h2o_glm_iris", model_file=model_file_path, table_name="byom_models")
Created the model table 'byom_models' as it does not exist.
Model is saved.

List the models from Vantage.

In [14]:
# List the models from "byom_models".
list_byom("byom_models")
                                 model
model_id                              
h2o_glm_iris  b'504B03041400080808...'

Retrieve the model from Vantage.

In [15]:
# Retrieve the model from Vantage using the model id 'h2o_glm_iris'.
model = retrieve_byom(model_id="h2o_glm_iris", table_name="byom_models")

Set "configure.byom_install_location" to the database where BYOM functions are installed.

In [16]:
configure.byom_install_location = getpass.getpass("byom_install_location: ")

Score the model.

In [17]:
# Score the model on 'iris_test' data.
result = H2OPredict(newdata=iris_test,
                    newdata_partition_column='id',
                    newdata_order_column='id',
                    modeldata=model,
                    modeldata_order_column='model_id',
                    model_output_fields=['label', 'classProbabilities'],
                    accumulate=['id', 'sepal_length', 'petal_length'],
                    overwrite_cached_models='*',
                    enable_options='stageProbabilities',
                    model_type='OpenSource'
                   )
In [18]:
# Print the query.
print(result.show_query())
SELECT * FROM "mldb".H2OPredict(
	ON "MLDB"."ml__select__16344331845030" AS InputTable
	PARTITION BY "id"
	ORDER BY "id" 
	ON (select model_id,model from "MLDB"."ml__filter__16345163360268") AS ModelTable
	DIMENSION
	ORDER BY "model_id"
	USING
	Accumulate('id','sepal_length','petal_length')
	ModelOutputFields('label','classProbabilities')
	OverwriteCachedModel('*')
	EnableOptions('stageProbabilities')
) as sqlmr
In [19]:
# Print the result.
result.result
Out[19]:
id sepal_length petal_length prediction label classprobabilities
88 6.3 4.4 2 2 {"1": 1.1783597564925874E-4,"2": 0.967322787872112,"3": 0.03255937615223876}
26 5.0 1.6 1 1 {"1": 0.9845786767142926,"2": 0.01542132328543173,"3": 2.757285440583015E-13}
60 5.2 3.9 2 2 {"1": 0.011952398134030913,"2": 0.9802572428184518,"3": 0.00779035904751729}
31 4.8 1.6 1 1 {"1": 0.9932486533040435,"2": 0.006751346695857105,"3": 9.95222274874653E-14}
45 5.1 1.9 1 1 {"1": 0.9975636889484245,"2": 0.0024363110515080834,"3": 6.740997390287097E-14}
50 5.0 1.4 1 1 {"1": 0.9965247902402493,"2": 0.003475209759737982,"3": 1.2509455236308859E-14}
54 5.5 4.0 2 2 {"1": 0.0014189199773822644,"2": 0.9860378216942937,"3": 0.012543258328324173}
57 6.3 4.7 2 2 {"1": 0.0015271539667560878,"2": 0.9282112143323902,"3": 0.07026163170085384}
36 5.0 1.2 1 1 {"1": 0.9963562311141493,"2": 0.003643768885841865,"3": 8.783774210261064E-15}
3 4.7 1.3 1 1 {"1": 0.9977891670380162,"2": 0.002210832961974951,"3": 8.89184767577873E-15}
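
If the scored output is needed on the client, the result attribute is a teradataml DataFrame and can be materialized locally. This is a minimal sketch using to_pandas(), the same conversion used earlier in this example.

# Optional: bring the predictions to the client as a pandas DataFrame.
predictions_pd = result.result.to_pandas()
print(predictions_pd.head())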

Cleanup.

In [20]:
# Delete the saved model from the table 'byom_models' using the model id 'h2o_glm_iris'.
delete_byom("h2o_glm_iris", table_name="byom_models")
Model is deleted.
In [21]:
# Drop model table.
db_drop_table("byom_models")
Out[21]:
True
In [22]:
# Drop input data table.
db_drop_table("iris_input")
Out[22]:
True
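
The temporary directory created earlier for the MOJO file can also be released explicitly. This is optional; Python removes it automatically when the TemporaryDirectory object is garbage collected.

# Optional: remove the temporary directory holding the exported MOJO file.
temp_dir.cleanup()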
In [23]:
# Run remove_context() to close the connection and garbage collect internally generated objects.
remove_context()
Out[23]:
True