August 2025 - Teradata VantageCloud Lake

Lake - Updates and Changes

Deployment: VantageCloud
Edition: Lake
Product: Teradata VantageCloud Lake
Release Number:
Published: February 2025

The following sections list the known and fixed issues in this release. If you experience any of these issues, open an incident with Teradata Customer Support and include the Key in your description.

Known Issues

Vantage Ecosystem Languages
Key Description
VEL-2559 When a vector store is created with non-English data, accuracy issues may occur when using the ask or similarity search APIs on that vector store.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

User Defined Functions

Key Description
UDF-1709 BYOM functions may fail after a database restart. This is caused by UDF containers being out of sync.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

UDF-1427 There are issues in supporting more than one third-party UDF solution, including the following:
  1. If two third-party solutions are installed in the same environment, an upgrade (including a blue/green or in-place upgrade) may not succeed.
  2. If the user makes any changes (for example, an update or delete), the system may hang.
  3. If the customer installs more than one third-party solution at the same time (a rare scenario), one or both installations may fail.

Workaround: There is no workaround for installing more than one third-party UDF solution until this issue is resolved. If you run into any issues, open a case from the Support Portal to engage Teradata Support for assistance.

Deployments: AWS is the only Lake platform supporting more than one third-party solution.

TDGSS
Key Description
TDGSS-11273 The Power BI job failed after the TLS connection, established through the .NET data provider, was unexpectedly disconnected during execution.

Workaround: Add proper serialization of TLS read/write operations to prevent connection instability during Power BI jobs.

Deployments: Lake on Azure | GC | AWS

SQLE Services
Key Description
SQLES-14124 If encryption is enabled for the internal bucket, provisioning of the QueryGrid component can fail.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on Azure | GC | AWS

Open Table Format

Key Description
OTF-3749 Query failures may be observed when the OTF Java engine hits Out of Memory exceptions.

Workaround: The memory issue occurs when the OTF engine is subjected to a high volume of concurrent queries over an extended period. While typically short-lived, in rare cases this can cause the Java-based OTF engine to enter a stale state, disrupting communication with AMP processes. When this happens, restarting the Java engine usually resolves the problem. To initiate a restart, open a support ticket.

Deployments: Lake on Azure | GC | AWS

OTF-3516 Failure during synchronization of Iceberg metadata in Databricks Unity catalog using Spark SQL.

Workaround: The issue occurs when a Spark job is submitted on the Databricks Spark cluster to perform the Iceberg metadata sync operation. This is an intermittent issue; the current workaround is to rerun the query.

Deployments: Databricks Unity/Iceberg write operations only, on all CSPs.

HARM
Key Description
HARM-6898 An incorrect error is returned to the user, causing confusion.

Workaround: None. For now, all 4500 memory errors should be treated as an LSN-not-found error, which means the system was restarted and no reconnect is allowed.

Deployments: Lake on Azure | GC | AWS

Cloud Control Panel
Key Description
CCP-11041 If the user excludes objects from a database that has more than 10,000 objects, the restore job will not restore objects beyond the first 10,000 in that database.

Workaround: Do not exclude tables from a database that has more than 10,000 tables; instead, restore the complete database.

Deployments: Lake on AWS

Bring Your Own Analytics
Key Description
BYOA-3179 A timeout error is observed when an org admin user activates the Anaconda feature from the VantageCloud Lake Console.

Workaround: None. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on GC

Fixed Issues

Vantage ModelOps
Key Description
VMO-1832 The evaluation job for BYOM (except Python/R) with custom metrics fails.

Workaround: BYOM models (except Python/R) cannot be evaluated with custom metrics, but they can be evaluated using the default metrics. While importing, the default metrics can be selected for monitoring.

Deployments: Lake on Azure | GC | AWS

VMO-1827 The evaluation job fails for BYOM DataRobot with the error: ValueError: Classification metrics cannot handle a mix of binary and unknown targets.

Workaround: None. Cannot evaluate BYOM DataRobot.

Deployments: Lake on Azure | GC | AWS

VMO-1814 While running a compute statistics job for a BYOM model in the Demo project (the pre-loaded ModelOps project), it fails with the error: categorical_features = [f for f in feature_names if feature_summary[f.lower()] == 'categorical'] KeyError: 'numtimesprg'

Workaround: Change the database to td_modelops in the dataset template, and add td_modelops as the database in all SQL queries (dataset template and datasets). (Only for the Demo project.)

Example: Change SELECT * FROM pima_patient_features to SELECT * FROM td_modelops.pima_patient_features

Deployments: Lake on Azure | GC | AWS

VMO-1747 ModelOps provisioning could fail due to Private DNS timeout after 5 minutes.

Workaround: Delete ModelOps and retry ModelOps provisioning.

Deployments: Lake on AWS

VMO-1716 While importing a BYOM and generating a prediction expression, the expressions are not listed and a loading icon is shown. The error message may be 'Cannot convert undefined or null to object'.

Workaround: Instead of generating a prediction expression, the user can manually enter the prediction expression.

Example: CAST(CAST(json_report AS JSON).JSONExtractValue('$.predicted_HasDiabetes') AS INT). The prediction expression cannot be validated.

Deployments: Lake on Azure | GC | AWS

OptToolsCloud
Key Description
TCOPTT-1012 In VantageCloud Lake, when upgrading to the August 2024 release or later, if the PDCR history table PDCRDATA.AcctgDtl_Hst, Acctg_Hst, MonitorSession_Hst, TDWMThrottleStats_Hst or TDWMUtilityStats_Hst contains data (e.g., from a previous migration), then it will not be converted to an OFS table and the table's corresponding collection job will not run.

Workaround: Run SQL statements to rename the existing table and create a new empty table in OFS. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS
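For one of the affected tables (AcctgDtl_Hst), the rename-and-recreate workaround might look like the following sketch. The backup table name and the exact statements, including any OFS storage clause your system requires, are assumptions; confirm the precise SQL with Teradata Support before running it.

```sql
-- Hypothetical sketch only; confirm exact statements with Teradata Support.
-- Preserve the populated history table under a backup name.
RENAME TABLE PDCRDATA.AcctgDtl_Hst TO PDCRDATA.AcctgDtl_Hst_bkp;

-- Recreate the table empty (in OFS) so its collection job can run.
CREATE TABLE PDCRDATA.AcctgDtl_Hst AS PDCRDATA.AcctgDtl_Hst_bkp
WITH NO DATA;
```

The same pattern would apply to each affected history table (Acctg_Hst, MonitorSession_Hst, TDWMThrottleStats_Hst, TDWMUtilityStats_Hst).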

Open Table Format
Key Description
OTF-3207 The issue occurs when a Teradata Stored Procedure (TDSP) created prior to 20.00.25.14 is called on a later release; in this case, Failure 7551 may be returned. The issue can also occur when a table with a Partitioned Primary Index (PPI) created prior to 20.00.25.14 is used in a SELECT statement on a later release.

Workaround: For a TDSP, recompile it before calling it. For a PPI table, revalidate the table before using it in a DML statement.

Deployments: Lake on Azure | GC | AWS

OTF-3171 Intermittent failures with mixed workload on AWS OTF queries.

Workaround: Restart Java OTF UDF Server. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

OTF-3023 Intermittent issue where OTF query cannot open the input stream.

Workaround: Restart Java OTF UDF Server. Submit a case from Support Portal to engage Teradata Support for assistance.

Deployments: Lake on AWS

HARM
Key Description
HARM-6749 Some data connections could fail, causing a load job to hang for long periods before failing. Now load sessions are more likely to succeed, and if they still fail, they fail immediately.

Data Insights
Key Description
DINSIGHTS-1273 ASK API requests may intermittently fail with a 400 Bad Request due to an underlying 429: Rate limit is exceeded error from the Azure OpenAI service.

Workaround: Retry the request after a short delay (e.g., 1 second).

Deployments: Lake on Azure
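The retry workaround above can be sketched as a small client-side helper. The send_request callable, the response shape, and the status values here are assumptions for illustration, not part of the ASK API:

```python
import time

def ask_with_retry(send_request, max_attempts=3, delay_seconds=1.0):
    """Call the ASK API via send_request, retrying on a 400 response.

    send_request is a hypothetical caller-supplied function that performs
    one ASK API request and returns a dict with a "status" key.
    """
    response = None
    for attempt in range(max_attempts):
        response = send_request()
        if response.get("status") != 400:
            return response  # success, or an error retrying will not fix
        if attempt < max_attempts - 1:
            time.sleep(delay_seconds)  # short delay before the next attempt
    return response  # still failing after all attempts
```

With the suggested one-second delay, a transient 429-induced 400 is typically absorbed within a retry or two without the caller noticing.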