TD_FITMETRICS Function | Teradata Vantage

Database Unbounded Array Framework Time Series Functions

Deployment
VantageCloud
VantageCore
Edition
Enterprise
IntelliFlex
VMware
Product
Teradata Vantage
Release Number
17.20
Published
June 2022
Language
English (United States)
Last Update
2024-10-04

TD_FITMETRICS takes a multivariate series consisting of the original series, the model-predicted series, and the modeling residuals. It combines the multivariate series with the computed mean of the original series to generate the goodness-of-fit metrics associated with the modeling exercise. The function accepts a single multivariate input, referenced by a SERIES_SPEC or by an ART_SPEC that references an ART containing an ARTFITRESIDUALS layer generated by another function.

When building a predictive model, evaluate how well the model fits the data. Some measures of fit for models include the following:

  • Mean squared error (MSE): The average squared difference between the predicted values and the actual values. A lower MSE indicates a better fit.
  • Root mean squared error (RMSE): The square root of MSE that provides an interpretable measure of the error in the same units as the response variable.
  • Mean absolute error (MAE): The average absolute difference between the predicted values and the actual values. Like MSE, a lower MAE indicates a better fit.
  • R-squared (R2): The proportion of variance in the response variable that is explained by the model. A higher value indicates a better fit.
  • Adjusted R-squared: A modified version of R2 that adjusts for the number of predictor variables in the model.
  • Mean absolute percentage error (MAPE): The average of the absolute percentage differences between the predicted values and the actual values.
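The metrics above can be sketched in plain Python to make the formulas concrete. This is an illustrative computation only, not the TD_FITMETRICS implementation; the function name `fit_metrics` and the `n_predictors` parameter are assumptions for the example, and the output column names mirror the metric abbreviations listed above.

```python
import math

def fit_metrics(actual, predicted, n_predictors=1):
    """Illustrative goodness-of-fit metrics for a fitted model.

    Not the TD_FITMETRICS implementation; a plain-Python sketch of the
    standard formulas. `n_predictors` is the number of predictor
    variables, used only by adjusted R-squared.
    """
    n = len(actual)
    residuals = [a - p for a, p in zip(actual, predicted)]

    # MSE: average squared residual; RMSE: its square root, in the
    # same units as the response variable.
    mse = sum(r * r for r in residuals) / n
    rmse = math.sqrt(mse)

    # MAE: average absolute residual.
    mae = sum(abs(r) for r in residuals) / n

    # R-squared: proportion of variance explained by the model.
    mean_actual = sum(actual) / n
    ss_tot = sum((a - mean_actual) ** 2 for a in actual)
    ss_res = sum(r * r for r in residuals)
    r_squared = 1 - ss_res / ss_tot

    # Adjusted R-squared penalizes additional predictor variables.
    adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - n_predictors - 1)

    # MAPE: average absolute percentage error (assumes no actual
    # value is zero).
    mape = 100 * sum(abs(r / a) for r, a in zip(residuals, actual)) / n

    return {"MSE": mse, "RMSE": rmse, "MAE": mae,
            "R2": r_squared, "ADJ_R2": adj_r_squared, "MAPE": mape}
```

For example, `fit_metrics([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])` yields an MSE of 0.025, an MAE of 0.15, and an R2 of 0.98.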

These measures are used to compare the performance of different models and to select the best one.

The fitness statistics must be interpreted in the context of the specific modeling exercise, and may not always provide a complete measure of model performance. Factors such as the interpretability of the model and its computational complexity must also be considered when selecting a prediction model. Use a consistent and appropriate measure of fit when evaluating and comparing different models.