The OTF engine is a JVM-based infrastructure that establishes connections to the Iceberg catalog and object store to read and process table records. This includes scanning and filtering data files, as well as converting data file records to Teradata records. For optimal performance, the JVM requires a dedicated amount of memory to execute these operations efficiently. The memory allocated to the JVM is determined by the number of concurrent OTF queries it must support. By default, the JVM is configured with a maximum heap size of 15 GB, which supports up to 3 concurrent queries on systems with 24 or 36 AMPs per node.
The OTF engine does not start automatically with a database start or restart; it is initiated only when the first OTF query is executed on the system. Once active, it remains running until the database is shut down. It is therefore crucial to allocate a portion of node-level memory to ensure the OTF infrastructure starts up and functions properly. This can be achieved either by reducing the FSGCache percentage or by replacing some existing high-memory workloads (such as NOS) with OTF workloads.
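As a rough illustration of this node-level budgeting, the sketch below checks whether an example memory split leaves enough headroom for the default 15 GB heap. All figures here are hypothetical examples, not Teradata sizing guidance:

```python
# Hypothetical node-level memory budget check (illustrative numbers only).
NODE_MEMORY_GB = 256   # total physical memory on the node (example)
FSG_CACHE_PCT = 70     # FSGCache percentage (example; often reduced for OTF)
OS_AND_OTHER_GB = 40   # OS, database, and other workloads (example)
JVM_MAX_HEAP_GB = 15   # default OTF JVM maximum heap (-Xmx15g)

fsg_cache_gb = NODE_MEMORY_GB * FSG_CACHE_PCT / 100
free_for_jvm = NODE_MEMORY_GB - fsg_cache_gb - OS_AND_OTHER_GB

print(f"FSGCache: {fsg_cache_gb:.1f} GB, left for JVM: {free_for_jvm:.1f} GB")
if free_for_jvm < JVM_MAX_HEAP_GB:
    print("Not enough headroom: reduce FSGCache or other workloads")
else:
    print("Headroom OK for the default OTF JVM heap")
```

With these example numbers, 179.2 GB goes to FSGCache, leaving 36.8 GB, which comfortably covers the default heap; lowering the available memory or raising the FSGCache percentage shows how the JVM can be squeezed out.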
When OTF query processing requires more than the maximum allowed JVM memory, you may experience the following:
- Significant query performance impacts due to high levels of memory swaps.
- An out-of-memory exception being returned.
- A UDF secure mode process error due to a communication breakdown between the database and the OTF Java server.
- In the worst case, the OTF infrastructure can enter a stale state that requires a JVM restart. Restart the JVM by running the following commands in the specified order:

```sql
call SQLJ.ServerControl('JAVAOTF', 'disable', a);
call SQLJ.ServerControl('JAVAOTF', 'shutdown', a);
call SQLJ.ServerControl('JAVAOTF', 'status', a);
call SQLJ.ServerControl('JAVAOTF', 'enable', a);
```
The following table suggests the JVM heap memory needed to support different numbers of concurrent OTF queries with the default JVM memory configuration:
| Number of Concurrent OTF Queries (Read/Write) | System with 24 AMPs/Node | System with 36 AMPs/Node | System with 48 AMPs/Node |
|---|---|---|---|
| 1 | 3.5 GB | 5 GB | 7 GB |
| 2 | 7 GB | 10 GB | 12.7 GB |
| 3 | 10 GB | 15 GB | 20 GB |
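As a convenience, the published figures above can be tabulated and looked up programmatically. This is only an illustrative sketch of the table's numbers, not an official sizing formula:

```python
# Suggested JVM heap (GB) per concurrency level, taken from the table above,
# keyed by AMPs per node and then by number of concurrent OTF queries.
HEAP_GB = {
    24: {1: 3.5, 2: 7.0, 3: 10.0},
    36: {1: 5.0, 2: 10.0, 3: 15.0},
    48: {1: 7.0, 2: 12.7, 3: 20.0},
}

def required_heap_gb(amps_per_node: int, concurrent_queries: int) -> float:
    """Look up the suggested heap for a configuration covered by the table."""
    try:
        return HEAP_GB[amps_per_node][concurrent_queries]
    except KeyError:
        raise ValueError("configuration outside the published table") from None

print(required_heap_gb(36, 3))  # → 15.0, matching the default -Xmx of 15 GB
```

Note that a 48 AMPs/node system at 3 concurrent queries already needs 20 GB, more than the default 15 GB heap, which is why the adjustment procedure below exists.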
The JVM memory allocation can be adjusted with the cufconfig utility to support higher concurrency. HybridServer2JVMOptions is the cufconfig field that must be updated to change the JVM heap size. Follow these steps from a TPA node:
- Create a jvm.txt file under the /tmp directory with the content below. The -Xmx value is what needs to be adjusted; in this example, the memory limit is increased to 30 GB:

```
HybridServer2JVMOptions: -Xms2g -Xmx30g -Dtdjvmtype=otf -Djava.security.properties=/etc/opt/teradata/tdotf/java_override.security
```
- Run the following cufconfig command:
```shell
# cufconfig -f /tmp/jvm.txt
```
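The jvm.txt content can also be generated programmatically. The helper below is a hypothetical convenience script (not a Teradata tool) that renders the HybridServer2JVMOptions line for a chosen -Xmx value, keeping the other options from the example above unchanged:

```python
# Hypothetical helper: render the cufconfig input line for a given max heap.
def render_jvm_options(max_heap_gb: int, min_heap_gb: int = 2) -> str:
    """Build the HybridServer2JVMOptions line for /tmp/jvm.txt."""
    return (
        "HybridServer2JVMOptions: "
        f"-Xms{min_heap_gb}g -Xmx{max_heap_gb}g "
        "-Dtdjvmtype=otf "
        "-Djava.security.properties=/etc/opt/teradata/tdotf/java_override.security"
    )

# Write /tmp/jvm.txt for a 30 GB heap, matching the example above.
with open("/tmp/jvm.txt", "w") as f:
    f.write(render_jvm_options(30) + "\n")
```

The new heap size is read when the OTF JVM starts, so it typically takes effect the next time the JVM is (re)started, for example via the ServerControl sequence shown earlier.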