TDPredictor.from_predictor Method - teradataml Extension | API Integration - Teradata Vantage

Teradata Vantage™ - API Integration Guide for Cloud Machine Learning

Deployment: VantageCloud, VantageCore
Edition: Enterprise, IntelliFlex, VMware
Product: Teradata Vantage
Release Number: 1.4
Published: September 2023
Language: English (United States)
Last Update: 2023-09-28

Use the TDPredictor.from_predictor method to create a TDPredictor from a SageMaker predictor object. The resulting TDPredictor enables prediction with a teradataml DataFrame against the SageMaker endpoint that the predictor object represents.

Required Arguments:
  • sagemaker_predictor_obj: Specifies an instance of the SageMaker predictor class.
  • tdapi_context: Specifies the TDAPI context object that holds AWS credential information.
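In outline, the method is a simple two-argument call. The sketch below uses stand-in classes purely to show the call shape and is not the real API: in an actual session, sagemaker_predictor_obj comes from SageMaker (for example, from estimator.deploy()), tdapi_context comes from create_tdapi_context, and from_predictor is the tdapiclient classmethod rather than the placeholder defined here:

```python
# Stand-ins for illustration only -- not the real SageMaker or tdapiclient classes.
class StubSageMakerPredictor:
    """Plays the role of a deployed SageMaker predictor."""
    endpoint_name = "demo-endpoint"

class StubTDApiContext:
    """Plays the role of the TDAPI context holding AWS credentials."""
    bucket_name = "demo-bucket"

def from_predictor(sagemaker_predictor_obj, tdapi_context):
    """Placeholder mirroring the documented signature: pairs a
    SageMaker predictor with a TDAPI context."""
    return {"predictor": sagemaker_predictor_obj, "context": tdapi_context}

wrapper = from_predictor(StubSageMakerPredictor(), StubTDApiContext())
print(wrapper["predictor"].endpoint_name)  # -> demo-endpoint
```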

Example

from tdapiclient import TDPredictor, create_tdapi_context
import sagemaker
from sagemaker.xgboost.estimator import XGBoost
from sagemaker.session import s3_input, Session
# Initialize hyperparameters
hyperparameters = {
    "max_depth": "5",
    "eta": "0.2",
    "gamma": "4",
    "min_child_weight": "6",
    "subsample": "0.7",
    "verbosity": "1",
    "objective": "reg:linear",
    "num_round": "50"
}
# Set an output path where the trained model will be saved
bucket = sagemaker.Session().default_bucket()
prefix = 'DEMO-xgboost-as-a-framework'
output_path = 's3://{}/{}/{}/output'.format(bucket, prefix, 'abalone-xgb-framework')
# Construct a SageMaker XGBoost estimator
# Specify the entry_point to your xgboost training script
estimator = XGBoost(entry_point = "your_xgboost_abalone_script.py",
                    framework_version='1.0-1',
                    hyperparameters=hyperparameters,
                    role=sagemaker.get_execution_role(),
                    train_instance_count=1,
                    train_instance_type='ml.m5.2xlarge',
                    output_path=output_path)
# Define the data type and paths to the training and validation datasets
content_type = "csv"
train_input = s3_input("s3://{}/{}/{}/".format(bucket, prefix, 'train'), content_type=content_type)
validation_input = s3_input("s3://{}/{}/{}/".format(bucket, prefix, 'validation'), content_type=content_type)
# Execute the XGBoost training job
estimator.fit({'train': train_input, 'validation': validation_input})
# Deploy the trained model to a real-time endpoint; the instance settings here are illustrative
sagemaker_predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m5.2xlarge')
# Create a TDAPI context for AWS, backed by an S3 bucket
context = create_tdapi_context("aws", "s3_bucket")
tdsg_predictor = TDPredictor.from_predictor(sagemaker_predictor, context)
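One detail worth noting in the example above: when you use an entry-point training script, SageMaker passes hyperparameters to the script as command-line arguments, which is why every value in the dictionary is given as a string. If your tuning code produces numeric values, a small helper (hypothetical, not part of the SageMaker API) keeps the conversion in one place:

```python
def stringify_hyperparameters(params):
    """Convert hyperparameter values to the string form SageMaker expects."""
    return {name: str(value) for name, value in params.items()}

# Numeric values in, string values out.
hyperparameters = stringify_hyperparameters({
    "max_depth": 5,
    "eta": 0.2,
    "gamma": 4,
    "min_child_weight": 6,
    "subsample": 0.7,
    "verbosity": 1,
    "objective": "reg:linear",
    "num_round": 50,
})
print(hyperparameters["eta"])  # -> 0.2 (as the string "0.2")
```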