Model Evaluation Report - Teradata Vantage

ClearScape Analytics ModelOps User Guide

Deployment: VantageCloud, VantageCore
Edition: Enterprise, IntelliFlex, VMware
Product: Teradata Vantage
Release Number: 7.0
Published: April 2023
Language: English (United States)
Last Update: 2023-04-19
ModelOps allows you to evaluate a model and mark a champion model based on its performance. You can view an evaluation report that highlights model performance through a set of metrics, and compare models based on the metric values.
The model evaluation report displays the following areas:
  • Model Version Details
  • Key Metrics
  • Metrics
  • Performance Charts
  • Actions

Model Version Details

Lists all the details of the model version and its training and evaluation jobs.

The following details display for the model version and the evaluation job:

  • Model Version ID: Specifies the model version ID. You can select the Model Version ID link to go to the Model Version lifecycle page.
  • Evaluation Job ID: Specifies the evaluation job ID. You can select the Job ID link to go to the job's details.
  • Evaluation Date: Specifies the evaluation date.
  • Dataset ID: Displays the ID of the training dataset used for the job. You can select the Dataset ID link to see the dataset details.
  • Dataset Name: Displays the name of the training dataset used for the job.
  • Hyper Parameters: Specifies the hyperparameters defined to run the job, including eta and max_depth (see the sketch after this list).
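The eta and max_depth values named in this property are XGBoost hyperparameters. As a minimal sketch only, assuming an XGBoost model (the dataset, split, and parameter values below are illustrative assumptions, not ModelOps defaults), such hyperparameters might be defined like this:

```python
# Minimal sketch: defining eta and max_depth for an XGBoost training run.
# The dataset and values are illustrative assumptions, not ModelOps defaults.
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

params = {
    "eta": 0.2,          # learning rate; shrinks each tree's contribution
    "max_depth": 6,      # maximum depth of each boosted tree
    "objective": "binary:logistic",
}
dtrain = xgb.DMatrix(X_train, label=y_train)
model = xgb.train(params, dtrain, num_boost_round=100)
```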

Key Metrics

Displays the metrics that you mark in the Metrics area. The Metrics area can contain a long list of performance metrics. You can mark some of those metrics as key metrics to access them easily; all of the marked metrics display in this area.

Metrics

Lists the performance metrics and their values for the current model version. The list can be long and includes metrics such as Accuracy, Recall, Precision, and F1-score. The Mark as Key Metric option allows you to mark key metrics so that they display in the Key Metrics area.

Common performance metrics are listed below; the sketch after the list shows how each is computed.

  • Accuracy: The ratio of the number of correct predictions to the total number of input samples.
  • Recall: The number of correct positive results divided by the number of all relevant samples (all samples that should have been identified as positive).
  • Precision: The number of correct positive results divided by the number of positive results predicted by the classifier.
  • F1-score: The harmonic mean of precision and recall, with a range of [0, 1]. It reflects both how precise the classifier is (how many of its positive predictions are correct) and how robust it is (how few positive instances it misses).
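These four metrics can be reproduced with scikit-learn. This is a minimal sketch for reference only, with made-up labels for a binary classifier; it is not how ModelOps itself computes the report.

```python
# Minimal sketch: computing the four metrics above with scikit-learn.
# y_true and y_pred are made-up labels for a binary classifier.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # actual labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # classifier predictions

print("Accuracy :", accuracy_score(y_true, y_pred))   # correct / total
print("Recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("F1-score :", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```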

Performance Charts

Displays a number of performance charts based on different metrics, including the confusion matrix, ROC curve, and SHAP feature importance. These charts help you monitor model performance visually and decide whether to mark the model as Champion. The sketch after the following list shows how the underlying quantities can be derived.

  • Confusion Matrix: An N x N matrix used to evaluate model performance, where N is the number of target classes. The matrix compares the actual target values with those predicted by the machine learning model.
  • ROC Curve: Summarizes the trade-off between the true positive rate and the false positive rate for a predictive model at different probability thresholds.
  • SHAP Feature Importance: Based on the magnitude of the SHAP feature attributions.
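As a minimal sketch, assuming a scikit-learn classifier (the model and dataset below are illustrative assumptions, not what ModelOps uses to render the report), the quantities behind the confusion matrix and ROC curve can be derived as follows:

```python
# Minimal sketch: deriving a confusion matrix and ROC curve with scikit-learn.
# The model and dataset are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # probability of the positive class

# Confusion matrix: rows are actual classes, columns are predicted classes.
print(confusion_matrix(y_test, clf.predict(X_test)))

# ROC curve: true/false positive rates across probability thresholds.
fpr, tpr, thresholds = roc_curve(y_test, scores)
print("AUC:", roc_auc_score(y_test, scores))

# SHAP feature importance would come from the separate `shap` package (not shown).
```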

Actions

The model evaluation report allows you to perform a number of actions on the current model version.

  • Approve: Lets you approve the model version. For details, see Approving or Rejecting a Model Version.
  • Reject: Lets you reject the model version. For details, see Approving or Rejecting a Model Version.
  • Mark/Unmark as Champion: Lets you mark or unmark the model version as Champion based on its performance. For details, see Marking a Model as Champion.
  • View Model Drift: Allows you to go to the Model Drift page and monitor the model performance. For details, see Drift Monitoring.