Teradata® Package for Python Function Reference on VantageCloud Lake
- Deployment: VantageCloud
- Edition: Lake
- Product: Teradata Package for Python
- Release Number: 20.00.00.08
- Published: November 2025
- Product Category: Teradata Vantage
- teradataml.automl.AutoChurn.evaluate = evaluate(self, data, rank=1, use_loaded_models=False)
- DESCRIPTION:
Function evaluates the data using the model of the specified rank in the leaderboard
and generates performance metrics.
Note:
* AutoCluster does not support the evaluate method; calling it raises an exception.
* If both the fit and load methods are called before evaluate, then the models from the
fit method are used by default unless 'use_loaded_models' is set to True.
PARAMETERS:
data:
Required Argument.
Specifies the dataset on which performance metrics need to be generated.
Types: teradataml DataFrame
Note:
* The target column used while generating the model must be present in "data" for evaluation.
rank:
Optional Argument.
Specifies the rank of the model available in the leaderboard to be used for evaluation.
Default Value: 1
Types: int
use_loaded_models:
Optional Argument.
Specifies whether to use models loaded from the database for evaluation.
Default Value: False
Types: bool
RETURNS:
Pandas DataFrame with performance metrics.
RAISES:
TeradataMlException.
EXAMPLES:
# Create an instance of the AutoML called "automl_obj" using "AutoML()",
# "AutoRegressor()", "AutoClassifier()", "AutoFraud()", or "AutoChurn()".
# Perform fit() operation on the "automl_obj".
# Perform evaluate() operation on the "automl_obj".
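# A minimal setup sketch for the steps above, assuming an active Vantage connection,
# that the "admissions_train" and "admissions_test" tables already exist, and that
# AutoChurn can be imported from the top-level teradataml package; constructor
# arguments for AutoChurn() are not covered on this page and are left at defaults.
>>> from teradataml import DataFrame, AutoChurn
>>> admissions_train = DataFrame("admissions_train")
>>> admissions_test = DataFrame("admissions_test")
>>> automl_obj = AutoChurn()
>>> automl_obj.fit(admissions_train, "admitted")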
# Example 1: Run evaluate on test data using best performing model.
>>> performance_metrics = automl_obj.evaluate(admissions_test)
>>> performance_metrics
# Example 2: Run evaluate on test data using second best performing model.
>>> performance_metrics = automl_obj.evaluate(admissions_test, rank=2)
>>> performance_metrics
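# The "rank" argument refers to a row in the model leaderboard. As a hedged sketch,
# the leaderboard can be inspected before choosing a rank; the leaderboard() method
# is assumed here from the broader AutoML API and is not documented on this page.
>>> automl_obj.leaderboard()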
# Example 3: Run evaluate on test data using a loaded model.
>>> automl_obj.load("model_table")
>>> evaluation = automl_obj.evaluate(admissions_test, rank=3)
>>> evaluation
# Example 4: Run evaluate on test data using loaded models when fit is also called.
>>> automl_obj.fit(admissions_train, "admitted")
>>> automl_obj.load("model_table")
>>> evaluation = automl_obj.evaluate(admissions_test, rank=3, use_loaded_models=True)
>>> evaluation
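# A hedged follow-up: evaluate() returns a pandas DataFrame, so the metrics can be
# inspected with standard pandas calls; the exact metric columns depend on the task
# type and are not listed on this page.
>>> evaluation.columns
>>> evaluation.head()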