# TD_ROC Function | ROC | Teradata Vantage - TD_ROC - Analytics Database

## Database Analytic Functions

• Deployment: VantageCloud, VantageCore
• Edition: Enterprise, IntelliFlex, VMware
• Product: Analytics Database
• Release Number: 17.20
• Published: June 2022
• Language: English (United States)
• Last Update: 2024-04-06
The TD_ROC (Receiver Operating Characteristic) function accepts a set of prediction-actual pairs for a binary classification model and calculates the following values for a range of discrimination thresholds:
• True-positive rate (TPR)
• False-positive rate (FPR)
• The area under the ROC curve (AUC)
• Gini coefficient

An ROC curve shows how well a model can distinguish between classes. It is a graph showing the performance of a classification model at various classification thresholds, ranging from 0 to 1.

Each prediction by a classifier is either a:
• True Positive (TP, positive prediction that was actually positive)
• True Negative (TN, negative prediction that was actually negative)
• False Positive (FP, positive prediction that was actually negative)
• False Negative (FN, negative prediction that was actually positive)
The curve plots two parameters:
• TPR – The true positive rate, also known as sensitivity, is calculated as TP/(TP+FN). TPR is the probability that an actual positive is predicted as positive by the model.
• FPR – The false positive rate is calculated as FP/(FP+TN). FPR is the probability that an actual negative is predicted as positive by the model.
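Under the definitions above, TPR and FPR follow directly from the four confusion-matrix counts. A minimal Python sketch (the helper name `rates` is illustrative, not part of TD_ROC):

```python
# Compute TPR and FPR from confusion-matrix counts (illustrative helper).
def rates(tp, fn, fp, tn):
    tpr = tp / (tp + fn)  # sensitivity: share of actual positives caught
    fpr = fp / (fp + tn)  # fall-out: share of actual negatives flagged positive
    return tpr, fpr

# 80 of 100 actual positives caught; 10 of 100 actual negatives misflagged.
print(rates(80, 20, 10, 90))  # -> (0.8, 0.1)
```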

The ROC curve shows the tradeoff between sensitivity (or TPR) and specificity (1 – FPR). Typically, a lower decision threshold identifies more positive cases, because you set a lower bar to classify an observation as positive. However, as you classify more observations as positive due to the more lenient threshold, you might misclassify more negative cases as positive as well. A better classifier makes fewer tradeoffs to catch more of both classes correctly. An ROC plot illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.

AUC stands for "Area under the ROC Curve." That is, AUC measures the entire two-dimensional area underneath the ROC curve from (0,0) to (1,1).

AUC provides an aggregate measure of performance across classification thresholds.

An AUC of 1 indicates a perfect classifier, an AUC of 0 indicates a classifier that always predicts the opposite of the actual class, and an AUC of 0.5 indicates a classifier that performs no better than random guessing.
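The computation the section describes can be sketched end to end: sweep a set of thresholds, derive an (FPR, TPR) point at each one, and integrate the resulting curve. This is an illustrative Python sketch, not Teradata's implementation; the function names `roc_points` and `auc` and the sample data are assumptions for the example:

```python
# Illustrative sketch (not Teradata's implementation): build ROC points from
# (predicted probability, actual label) pairs, then compute AUC.

def roc_points(pairs, thresholds):
    """Return one (FPR, TPR) point per discrimination threshold.

    pairs: list of (predicted probability, actual label 0/1).
    """
    pts = []
    for t in thresholds:
        tp = sum(1 for p, y in pairs if p >= t and y == 1)
        fn = sum(1 for p, y in pairs if p < t and y == 1)
        fp = sum(1 for p, y in pairs if p >= t and y == 0)
        tn = sum(1 for p, y in pairs if p < t and y == 0)
        pts.append((fp / (fp + tn), tp / (tp + fn)))
    return sorted(pts)

def auc(points):
    """Area under the ROC curve by the trapezoidal rule, anchored at (0,0) and (1,1)."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    return sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# A perfectly separable sample yields AUC = 1; the Gini coefficient is
# commonly reported as 2 * AUC - 1, which is 1 here.
pairs = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]
print(auc(roc_points(pairs, [0.0, 0.5, 1.0])))  # -> 1.0
```

A classifier that performs like random guessing traces the diagonal from (0,0) to (1,1), and the same trapezoidal integration gives its AUC as 0.5.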