The simplest form of model drift monitoring is to regularly compare predicted results against the ground truth. To do this, you need access to the actual outcomes. This information is not always available in a timely manner (for example, email campaigns or churn), and sometimes not at all. When it is available, drift monitoring becomes an evaluation problem: the performance metrics defined by the Data Scientist are recalculated on the new ground truth datasets and compared against previous results.
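The sketch below illustrates this idea under stated assumptions: it recomputes an evaluation metric (here ROC AUC via scikit-learn) on newly arrived ground truth and compares it to a previously recorded baseline. The function name `check_performance_drift`, the `baseline_auc` value, and the `max_drop` threshold are hypothetical examples, not part of the ModelOps API.

```python
# Minimal sketch of performance-drift checking, assuming scikit-learn is available.
# The baseline value and drop threshold are illustrative placeholders.
from sklearn.metrics import roc_auc_score

def check_performance_drift(y_true, y_pred_proba, baseline_auc, max_drop=0.05):
    """Recompute the evaluation metric on new ground truth and compare it
    against the baseline recorded when the model was last evaluated."""
    current_auc = roc_auc_score(y_true, y_pred_proba)
    drifted = (baseline_auc - current_auc) > max_drop
    return current_auc, drifted

# Example usage with placeholder data:
# current_auc, drifted = check_performance_drift(new_labels, stored_predictions,
#                                                baseline_auc=0.87)
```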
Using ModelOps, you can visualize the information generated from evaluation results over time.
Performance Drift includes the following details: