May 2025 - Teradata VantageCloud Lake

Lake - Updates and Changes

Deployment: VantageCloud
Edition: Lake
Product: Teradata VantageCloud Lake
Release Number:
Published: February 2025

The following lists the fixed and known issues in this release. If you experience any of the following issues, open an incident with Teradata Customer Support and include the Key in your description.

Known Issues

Vantage ModelOps

Key Description
VMO-1832 The evaluation job for BYOM models (except Python/R) with custom metrics fails.

Workaround: BYOM models (except Python/R) cannot be evaluated with custom metrics, but they can be evaluated with the default metrics. The default metrics can be selected for monitoring while importing the model.

Deployments: Lake on Azure | GC | AWS

VMO-1827 The evaluation job for BYOM DataRobot models fails with the error: ValueError: Classification metrics can't handle a mix of binary and unknown targets.

Workaround: None. BYOM DataRobot models cannot be evaluated.

Deployments: Lake on Azure | GC | AWS

VMO-1814 A compute statistics job for a BYOM model in the Demo project (the pre-loaded ModelOps project) fails with the error: categorical_features = [f for f in feature_names if feature_summary[f.lower()] == 'categorical'] KeyError: 'numtimesprg'

Workaround: In the dataset template, change the database to td_modelops, and add td_modelops as the database in all SQL queries (dataset template and datasets). (Applies only to the Demo project.)

Example: change SELECT * FROM pima_patient_features to SELECT * FROM td_modelops.pima_patient_features

Deployments: Lake on Azure | GC | AWS

VMO-1791 Validating a training dataset while importing a BYOM model fails with the error: Cannot read properties of undefined (reading 'dropTableSql').

Workaround: None. The training dataset and prediction expression cannot be validated.

Deployments: Lake on Azure | GC | AWS

VMO-1747 ModelOps provisioning may fail due to a Private DNS timeout after 5 minutes.

Workaround: Delete ModelOps and retry ModelOps provisioning.

Deployments: Lake on AWS

VMO-1716 A compute statistics job for a BYOM model in the Demo project (the pre-loaded ModelOps project) fails with the error: categorical_features = [f for f in feature_names if feature_summary[f.lower()] == 'categorical'] KeyError: 'numtimesprg'

Workaround: Instead of generating a prediction expression, enter the prediction expression manually.

Example: CAST(CAST(json_report AS JSON).JSONExtractValue('$.predicted_HasDiabetes') AS INT). The prediction expression cannot be validated.

Deployments: Lake on Azure | GC | AWS

User Defined Functions

Key Description
UDF-1709 BYOM functions may fail after a database restart. This is due to a problem with UDF containers being out of sync.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

UDF-1427 There are issues in supporting more than one third-party UDF solution, including the following:
  1. If two third-party solutions are installed in the same environment, an upgrade (including a blue/green or in-place upgrade) will not succeed.
  2. If the user makes any changes (for example, an update or a delete), the system may hang.
  3. If the customer installs more than one third-party solution at the same time (a rare scenario), one or both installations may fail.

Workaround: There is no workaround for installing more than one third-party UDF solution until this issue is resolved. If you run into any issues, open a case from Support Portal to engage Teradata Support for assistance.

Deployments: AWS is the only platform on Lake supporting more than one third-party solution.

OptToolsCloud

Key Description
TCOPTT-1012 In VantageCloud Lake, when upgrading to the August 2024 release or later, if a PDCR history table (PDCRDATA.AcctgDtl_Hst, Acctg_Hst, MonitorSession_Hst, TDWMThrottleStats_Hst, or TDWMUtilityStats_Hst) contains data (for example, from a previous migration), the table will not be converted to an OFS table and its corresponding collection job will not run.

Workaround: Run SQL statements to rename the existing table and create a new empty table in OFS. Submit a case from Support Portal to engage Teradata Support for assistance.
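
The rename-and-recreate steps above can be sketched as follows. This is an illustrative sketch only: PDCRDATA.Acctg_Hst is used as the example table, Acctg_Hst_bkp is an assumed backup name, OFS storage options are omitted, and the exact statements should be confirmed with Teradata Support.

```sql
-- Preserve the populated history table under a new name.
RENAME TABLE PDCRDATA.Acctg_Hst TO PDCRDATA.Acctg_Hst_bkp;

-- Create a new, empty table with the same definition so the
-- corresponding collection job can run against it.
CREATE TABLE PDCRDATA.Acctg_Hst AS PDCRDATA.Acctg_Hst_bkp WITH NO DATA;
```

Repeat for each affected PDCR history table that contains data.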

Deployments: Lake on AWS

SQLE Services

Key Description
SQLES-14124 If encryption is enabled for the internal bucket, provisioning of the QueryGrid component can fail.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on Azure | GC | AWS

Open Table Format

Key Description
OTF-3306 Insert Statement Fails with "Index Out of Bounds" for Iceberg/DeltaLake Unity.

Workaround: Use all lowercase for both the table column names and the PARTITIONED BY clause.

Problem query:

create table iceberg_unity.otfdev.tab1(
  AGE BIGINT,
  EMP_ID BIGINT,
  FIRST_NM VARCHAR(30),
  LAST_NM VARCHAR(30))
partitioned by(emp_id);

Workaround for this issue:

create table iceberg_unity.otfdev.tab1(
  AGE BIGINT,
  emp_id BIGINT,
  FIRST_NM VARCHAR(30),
  LAST_NM VARCHAR(30))
partitioned by(emp_id);

Using uppercase column names in the CREATE TABLE statement or the PARTITIONED BY clause results in an error, so always use lowercase column names in both.

Deployments: Lake on Azure | GC | AWS

OTF-3207 This issue occurs when a Teradata Stored Procedure (TDSP) is created prior to release 20.00.25.14 and is then called on a later release; Failure 7551 may be returned. It can also occur when a table with a Partitioned Primary Index (PPI) is created prior to 20.00.25.14 and is then used in a SELECT statement on a later release.

Workaround: For a TDSP, recompile the TDSP before calling it. For a PPI table, revalidate the table before using it in DML.
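
The two workaround steps above can be sketched as SQL; my_proc and my_ppi_table are hypothetical names standing in for the affected objects, and the exact statements should be verified against your release's documentation:

```sql
-- Recompile a stored procedure created before 20.00.25.14
-- before calling it on the later release.
ALTER PROCEDURE my_proc COMPILE;

-- Revalidate a PPI table created before 20.00.25.14
-- before using it in DML on the later release.
ALTER TABLE my_ppi_table REVALIDATE;
```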

Deployments: Lake on Azure | GC | AWS

OTF-3206 DeltaLake Glue query fails with TD_OTFDB.TD_DELTA_READ: Provider for class javax.xml.stream.XMLInputFactory cannot be created.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

OTF-3171 Intermittent failures with mixed workload on AWS OTF queries.

Workaround: Restart Java OTF UDF Server. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

OTF-3023 Intermittent issue where OTF query cannot open the input stream.

Workaround: Restart Java OTF UDF Server. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

Data Insights
Key Description
DINSIGHTS-1273 ASK API requests may intermittently fail with a 400 Bad Request due to an underlying 429: Rate limit is exceeded error from the Azure OpenAI service.

Workaround: Retry the request after a short delay (e.g., 1 second).

Deployments: Lake on Azure

Fixed Issues

HARM

Key Description
HARM-5668 When using dot notation to shred a JSON value in a result set, a 3807 error can be returned. This error occurs only while upgrades are in progress.

Workaround: Avoid using dot notation to shred the JSON object.

Deployments: Lake on Azure | GC | AWS

COG

Key Description
COG-10689 Analytic COGs could get stuck in the "resuming" state.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on Azure | GC | AWS

Cloud Control Plane

Key Description
CCP-9545 When a Full System Restore or Failover fails, an incorrect error message is displayed on the console.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

CCP-9544 A retried Failover job may not run properly.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS