Training and Scoring Multiple Micro Models for Each Partition — Open Analytics Framework on VantageCloud Lake

Teradata® VantageCloud Lake
Deployment: VantageCloud
Edition: Lake
Product: Teradata Vantage
Published: January 2023
Last edited: 2024-12-11

Use case: You want to create multiple micro models, one for each partition of the data (for example, each product or each time period), and then score those models simultaneously.
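Before walking through the VantageCloud Lake setup, the core idea can be sketched locally in plain Python: group rows by a partition key, fit an independent model on each group, and route new rows to the model belonging to their partition. This is a conceptual illustration only, not the teradataml API; the data, `fit`, and `score` names here are hypothetical.

```python
from collections import defaultdict

# Toy dataset: (partition_key, feature, target) rows.
rows = [
    ("product_A", 1.0, 10.0),
    ("product_A", 2.0, 12.0),
    ("product_B", 1.0, 100.0),
    ("product_B", 2.0, 104.0),
]

def fit(points):
    """Fit a simple least-squares line (slope, intercept) to one partition."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    denom = sum((x - mx) ** 2 for x, _ in points) or 1.0
    slope = sum((x - mx) * (y - my) for x, y in points) / denom
    return slope, my - slope * mx

# Group rows by partition key, then train one micro model per partition.
by_partition = defaultdict(list)
for key, x, y in rows:
    by_partition[key].append((x, y))
models = {key: fit(pts) for key, pts in by_partition.items()}

def score(key, x):
    """Score a new observation with the model owned by its partition."""
    slope, intercept = models[key]
    return slope * x + intercept

print(score("product_A", 3.0))  # uses product_A's model only
print(score("product_B", 3.0))  # uses product_B's model only
```

In the steps that follow, the same pattern is pushed into the database: the partitioning, per-partition training, and parallel scoring run server-side on VantageCloud Lake rather than in the client process.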

Prerequisite steps:
  • Import the Python library (teradataml) and the required environment setup modules.
    import getpass
    import os
    from collections import OrderedDict
    from teradataml import create_context, remove_context, copy_to_sql, DataFrame, create_env, get_env, set_user_env, list_user_envs, remove_env
    from teradataml.scriptmgmt import UserEnv, lls_utils
    from teradataml.table_operators import Apply
    from teradatasqlalchemy.types import FLOAT, BLOB
  • Connect from a client to a target VantageCloud Lake system where the training and scoring tasks will be performed.
    print("Creating the context...")
    host = getpass.getpass("Host: ")
    username = getpass.getpass("Username: ")
    password = getpass.getpass("Password: ")
    engine = create_context(host=host, username=username, password=password)
  • Generate the authentication token using the set_auth_token API.
    from teradataml import set_auth_token
    ues_url = getpass.getpass("ues_url: ")
    set_auth_token(ues_url=ues_url)