PMMLPredict() using Logistic Regression Model¶
Setup¶
In [1]:
# Import required libraries
import getpass
import tempfile
from teradataml import PMMLPredict, DataFrame, load_example_data, create_context, \
db_drop_table, remove_context, save_byom, retrieve_byom, delete_byom, list_byom
from teradataml.options.configure import configure
In [2]:
# Create the connection.
host = getpass.getpass("Host: ")
username = getpass.getpass("Username: ")
password = getpass.getpass("Password: ")
con = create_context(host=host, username=username, password=password)
Host: ········
Username: ········
Password: ········
Load the example data and use sample() to split it into training and testing datasets.¶
In [3]:
# Load the example data.
load_example_data("byom", "iris_input")
iris_input = DataFrame("iris_input")
In [4]:
# Create 2 samples of input data - sample 1 will have 80% of total rows and sample 2 will have 20% of total rows.
iris_sample = iris_input.sample(frac=[0.8, 0.2])
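For intuition, the 80/20 split that `sample(frac=[0.8, 0.2])` performs in-database can be sketched locally with plain Python. The row ids and seed below are illustrative, not taken from the example data:

```python
import random

# Hypothetical row ids standing in for the 150 iris rows.
row_ids = list(range(1, 151))

random.seed(0)  # fixed seed so the sketch is reproducible
random.shuffle(row_ids)

cut = int(len(row_ids) * 0.8)
train_ids = row_ids[:cut]  # ~80% of rows, analogous to sampleid == "1"
test_ids = row_ids[cut:]   # ~20% of rows, analogous to sampleid == "2"
```

Unlike this sketch, `sample()` assigns each row a `sampleid` column in Vantage without moving data to the client.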
In [5]:
# Create train dataset from sample 1 by filtering on "sampleid" and drop "sampleid" column as it is not required for training model.
iris_train = iris_sample[iris_sample.sampleid == "1"].drop("sampleid", axis=1)
iris_train
Out[5]:
id | sepal_length | sepal_width | petal_length | petal_width | species |
---|---|---|---|---|---|
17 | 5.4 | 3.9 | 1.3 | 0.4 | 1 |
59 | 6.6 | 2.9 | 4.6 | 1.3 | 2 |
99 | 5.1 | 2.5 | 3.0 | 1.1 | 2 |
40 | 5.1 | 3.4 | 1.5 | 0.2 | 1 |
57 | 6.3 | 3.3 | 4.7 | 1.6 | 2 |
61 | 5.0 | 2.0 | 3.5 | 1.0 | 2 |
78 | 6.7 | 3.0 | 5.0 | 1.7 | 2 |
76 | 6.6 | 3.0 | 4.4 | 1.4 | 2 |
120 | 6.0 | 2.2 | 5.0 | 1.5 | 3 |
122 | 5.6 | 2.8 | 4.9 | 2.0 | 3 |
In [6]:
# Create test dataset from sample 2 by filtering on "sampleid" and drop "sampleid" column as it is not required for scoring.
iris_test = iris_sample[iris_sample.sampleid == "2"].drop("sampleid", axis=1)
iris_test
Out[6]:
id | sepal_length | sepal_width | petal_length | petal_width | species |
---|---|---|---|---|---|
106 | 7.6 | 3.0 | 6.6 | 2.1 | 3 |
38 | 4.9 | 3.6 | 1.4 | 0.1 | 1 |
148 | 6.5 | 3.0 | 5.2 | 2.0 | 3 |
19 | 5.7 | 3.8 | 1.7 | 0.3 | 1 |
137 | 6.3 | 3.4 | 5.6 | 2.4 | 3 |
55 | 6.5 | 2.8 | 4.6 | 1.5 | 2 |
95 | 5.6 | 2.7 | 4.2 | 1.3 | 2 |
110 | 7.2 | 3.6 | 6.1 | 2.5 | 3 |
36 | 5.0 | 3.2 | 1.2 | 0.2 | 1 |
61 | 5.0 | 2.0 | 3.5 | 1.0 | 2 |
Train Logistic Regression model¶
In [7]:
# Import required libraries.
import numpy as np
from nyoka import skl_to_pmml
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
In [8]:
# Convert teradataml dataframe to pandas dataframe.
# features : Training data.
# target : Training targets.
train_pd = iris_train.to_pandas()
features = train_pd.columns.drop('species')
target = 'species'
In [9]:
# Generate the logistic regression model
LogReg_pipe_obj = Pipeline([
("LogReg", LogisticRegression(random_state=0))
])
In [10]:
LogReg_pipe_obj.fit(train_pd[features], train_pd[target])
C:\Users\pg255042\Anaconda3\envs\teraml\lib\site-packages\sklearn\linear_model\_logistic.py:765: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
  extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
Out[10]:
Pipeline(steps=[('LogReg', LogisticRegression(random_state=0))])
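The pipeline above delegates the mathematics to scikit-learn, but the multiclass probabilities that logistic regression ultimately reports come from a softmax over per-class scores. A minimal sketch in plain Python, with made-up scores for the three iris species (not values from this model):

```python
import math

def softmax(scores):
    # Subtract the max score for numerical stability, then
    # exponentiate and normalize so the outputs sum to 1.
    shifted = [s - max(scores) for s in scores]
    exps = [math.exp(s) for s in shifted]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-class scores; the largest score wins.
probs = softmax([2.0, 1.0, 0.1])
```

This is the same shape of output seen later in the `json_report` column: one probability per class, summing to 1.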
Save the model in PMML format.¶
In [11]:
temp_dir = tempfile.TemporaryDirectory()
model_file_path = f"{temp_dir.name}/LogReg_pmml.pmml"
In [12]:
skl_to_pmml(LogReg_pipe_obj, features, target, model_file_path)
Save the model in Vantage.¶
In [13]:
# Save the PMML Model in Vantage.
save_byom("LogReg_pmml", model_file_path, "byom_models")
Created the model table 'byom_models' as it does not exist. Model is saved.
List the model from Vantage.¶
In [14]:
# List the PMML Models in Vantage.
list_byom("byom_models")
model_id | model |
---|---|
LogReg_pmml | b'3C3F786D6C20766572...' |
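The `model` column holds the raw PMML bytes, displayed as hex. As a sanity check, decoding the leading hex digits shown above recovers the start of the PMML file's XML declaration:

```python
# Leading hex digits of the stored model, as printed by list_byom().
hex_prefix = "3C3F786D6C20766572"

# Decode the hex string back into ASCII text.
text = bytes.fromhex(hex_prefix).decode("ascii")
# text is "<?xml ver" - the opening of an XML declaration.
```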
Retrieve the model from Vantage.¶
In [15]:
# Retrieve the model from table "byom_models", using the model id 'LogReg_pmml'.
modeldata = retrieve_byom("LogReg_pmml", "byom_models")
Set "configure.byom_install_location" to the database where BYOM functions are installed.¶
In [16]:
configure.byom_install_location = getpass.getpass("byom_install_location: ")
byom_install_location: ········
Score the model.¶
In [17]:
# Perform prediction using PMMLPredict() and the PMML model stored in Vantage.
result = PMMLPredict(
modeldata = modeldata,
newdata = iris_test,
accumulate = ['id', 'sepal_length', 'petal_length', 'petal_width', 'sepal_width'],
overwrite_cached_models = '*',
)
In [18]:
# Print the query.
print(result.show_query())
SELECT * FROM "mldb".PMMLPredict(
    ON "MLDB"."ml__select__1646280875903936" AS InputTable PARTITION BY ANY
    ON (select model_id,model from "MLDB"."ml__filter__1646286065165265") AS ModelTable DIMENSION
    USING
    Accumulate('id','sepal_length','petal_length','petal_width','sepal_width')
    OverwriteCachedModel('*')
) as sqlmr
In [19]:
# Print the result.
result.result
Out[19]:
id | sepal_length | petal_length | petal_width | sepal_width | prediction | json_report |
---|---|---|---|---|---|---|
28 | 5.2 | 1.5 | 0.2 | 3.5 | 1 | {"probability_1":0.5092043722052524,"predicted_species":1,"probability_2":0.4907717686590658,"probability_3":2.385913568182996E-5} |
38 | 4.9 | 1.4 | 0.1 | 3.6 | 1 | {"probability_1":0.5100130647653455,"predicted_species":1,"probability_2":0.4899721767209039,"probability_3":1.4758513750685702E-5} |
148 | 6.5 | 5.2 | 2.0 | 3.0 | 3 | {"probability_1":0.004010147129431223,"predicted_species":3,"probability_2":0.46571406846300106,"probability_3":0.5302757844075677} |
99 | 5.1 | 3.0 | 1.1 | 2.5 | 2 | {"probability_1":0.453516154312164,"predicted_species":2,"probability_2":0.5349786491137434,"probability_3":0.01150519657409262} |
74 | 6.1 | 4.7 | 1.2 | 2.8 | 2 | {"probability_1":0.031362355689500854,"predicted_species":2,"probability_2":0.5734551881733468,"probability_3":0.3951824561371523} |
40 | 5.1 | 1.5 | 0.2 | 3.4 | 1 | {"probability_1":0.5091500534660736,"predicted_species":1,"probability_2":0.49082480091847847,"probability_3":2.5145615448023423E-5} |
80 | 5.7 | 3.5 | 1.0 | 2.6 | 2 | {"probability_1":0.34394424099857585,"predicted_species":2,"probability_2":0.6200129236908061,"probability_3":0.03604283531061808} |
118 | 7.7 | 6.7 | 2.2 | 3.8 | 3 | {"probability_1":1.258246923313423E-4,"predicted_species":3,"probability_2":0.4415798744675827,"probability_3":0.558294300840086} |
15 | 5.8 | 1.2 | 0.2 | 4.0 | 1 | {"probability_1":0.5085773809738329,"predicted_species":1,"probability_2":0.491414288701027,"probability_3":8.330325140279367E-6} |
61 | 5.0 | 3.5 | 1.0 | 2.0 | 2 | {"probability_1":0.30610641895198143,"predicted_species":2,"probability_2":0.6433684908861783,"probability_3":0.05052509016184026} |
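The `json_report` column is a plain JSON string, so per-class probabilities can be pulled out client-side with the standard `json` module. A sketch using the report for row id 148 copied from the output above:

```python
import json

# json_report value for row id 148, copied from the scoring output.
report = ('{"probability_1":0.004010147129431223,'
          '"predicted_species":3,'
          '"probability_2":0.46571406846300106,'
          '"probability_3":0.5302757844075677}')

parsed = json.loads(report)

# Collect the probability_* entries and find the most likely class.
probs = {k: v for k, v in parsed.items() if k.startswith("probability_")}
best = max(probs, key=probs.get)  # "probability_3"
```

As expected, the class with the highest probability matches the `predicted_species` field and the `prediction` column.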
Cleanup.¶
In [20]:
# Delete the saved Model.
delete_byom("LogReg_pmml", table_name="byom_models")
Model is deleted.
In [21]:
# Drop model table.
db_drop_table("byom_models")
Out[21]:
True
In [22]:
# Drop input data table.
db_drop_table("iris_input")
Out[22]:
True
In [23]:
# Run remove_context() to close the connection and garbage collect internally generated objects.
remove_context()
Out[23]:
True