Use case: You want to train a separate micro model for each partition of the data (for example, each product or each time period) and then score all of these models simultaneously.
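Before the Vantage-specific steps, the partition-wise idea can be sketched locally in plain Python. This is only a toy illustration (the data, the `score` helper, and the mean-ratio "model" are invented for this sketch, not part of teradataml): it fits one tiny model per product partition and scores each new row with its own partition's model, which is the pattern the Apply table operator parallelizes inside the database.

```python
from collections import defaultdict

# Toy training data: (product, x, y) rows, one partition per product.
train = [
    ("A", 1.0, 2.0), ("A", 2.0, 4.0), ("A", 3.0, 6.0),  # roughly y = 2x
    ("B", 1.0, 3.0), ("B", 2.0, 6.0), ("B", 3.0, 9.0),  # roughly y = 3x
]

# "Fit" one micro model per partition: slope = mean(y / x).
grouped = defaultdict(list)
for product, x, y in train:
    grouped[product].append(y / x)
models = {product: sum(r) / len(r) for product, r in grouped.items()}

# Score a new row with the model belonging to its partition.
def score(product, x):
    return models[product] * x

print(score("A", 10.0))  # 20.0
print(score("B", 10.0))  # 30.0
```

In the sections that follow, the same per-partition fan-out is pushed into VantageCloud Lake, so each partition's model is trained and scored where its data lives instead of on the client.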
Prerequisite steps:
- Import the Python library (teradataml) and the required environment setup modules.
import getpass
import os
from collections import OrderedDict
from teradataml import (
    create_context, remove_context, copy_to_sql, DataFrame,
    create_env, get_env, set_user_env, list_user_envs, remove_env
)
from teradataml.scriptmgmt import UserEnv, lls_utils
from teradataml.table_operators import Apply
from teradatasqlalchemy.types import FLOAT, BLOB
- Connect from a client to a target VantageCloud Lake system where the training and scoring tasks will be performed.
print("Creating the context...")
host = getpass.getpass("Host: ")
username = getpass.getpass("Username: ")
password = getpass.getpass("Password: ")
engine = create_context(host=host, username=username, password=password)
- Generate the authentication token using the set_auth_token API.
from teradataml import set_auth_token

ues_url = getpass.getpass("ues_url: ")
set_auth_token(ues_url=ues_url)