Any number of users and applications can simultaneously access data stored in a Teradata database.
The Lock Manager imposes concurrency control in Teradata Database by managing the locks on the database objects each transaction accesses and by releasing those locks when the transaction either commits or rolls back its work. This control ensures that the data remains consistent for all users. Note that with the exception of global deadlock detection, locks in Teradata Database are not managed globally, but locally by each individual AMP.
For information about global deadlock detection, see Deadlock.
While the Parser defines the locks for a request automatically, you can upgrade, and in some cases downgrade, locks explicitly by using the SQL LOCKING request modifier. See “LOCKING Request Modifier” in SQL Data Manipulation Language for more information.
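As a sketch of both cases (the table and column names are illustrative), the LOCKING request modifier can downgrade the READ lock that a SELECT would normally place to a less restrictive ACCESS lock, accepting that the result may include uncommitted data, or upgrade the default WRITE lock for an UPDATE to an EXCLUSIVE lock:

```sql
-- Downgrade: replace the default READ lock with an ACCESS lock so
-- the SELECT neither blocks, nor is blocked by, concurrent WRITE locks.
LOCKING TABLE cust_rate FOR ACCESS
SELECT cust_id, cust_rating
FROM cust_rate;

-- Upgrade: request an EXCLUSIVE lock rather than the default WRITE
-- lock for an UPDATE that will touch most of the table.
LOCKING TABLE cust_rate FOR EXCLUSIVE
UPDATE cust_rate SET cust_rating = 'FAIR';
```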
Teradata Database always attempts to lock database objects at the least restrictive level and severity that ensure database integrity while maintaining maximum concurrency.
(This is not strictly true for HUT locks, where the same locking levels and severities are always used for a given Archive/Recovery utility command. For more information about HUT locks, see Teradata Archive/Recovery Utility Reference.)
Users have no control over when locks are released because of the way two-phase locking works (see Database Locks, Two-Phase Locking, and Serializability).
When determining whether to grant a lock, the Lock Manager takes into consideration both the requested locking severity and the object to be locked. For example, a READ lock requested at the table level cannot be granted if a WRITE or EXCLUSIVE lock already exists on any of the following database objects:
- The database that owns the table
- The table itself
- Any partitions or rows in the table
A WRITE lock requested at the row hash level cannot be granted if a READ, WRITE, or EXCLUSIVE lock already exists on any of the following database objects:
- The owner database for the table
- The parent table for the row
- A partition of the table
- The row hash itself
In each case, the request is queued until the conflicting lock is released.
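As a concrete sketch of the first case (names are illustrative), a table-level WRITE lock placed by one session queues a later table-level READ request from another session:

```sql
-- Session 1: an UPDATE whose WHERE clause does not specify the
-- primary index places a table-level WRITE lock on cust_rate.
UPDATE cust_rate SET cust_rating = 'POOR' WHERE cust_rating IS NULL;

-- Session 2: the default table-level READ lock for this full-table
-- SELECT conflicts with the WRITE lock above, so the request is
-- queued until session 1 commits or rolls back.
SELECT * FROM cust_rate;
```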
It is possible to exhaust Lock Manager resources: there is a per-AMP limit on the number of locks that can be placed at one time. The Transaction Manager aborts any transaction that requests a lock when Lock Manager resources are exhausted. In such cases, you can disable rowhash-level locking for DDL requests.
If an application begins a transaction and then performs a large number of single-row updates without closing that transaction, at least one AMP lock table eventually fills to its maximum capacity. This typically occurs for, but is not limited to, transactions in ANSI session mode.
Such an AMP lock table overflow also affects other transactions because no request can begin without first acquiring one or more locks on its underlying database objects. Eventually, transactions that are unable to acquire any locks abort, and the condition can even cause the system to crash.
You can control the maximum number of rowhash-level locks that the Lock Manager allows for each transaction using the DBS Control field MaxRowHashBlocksPercent (see Utilities for details).
The default for this field is 50 percent of the total number of control blocks that an AMP lock table can support. You can set MaxRowHashBlocksPercent to any value up to 100 percent to accommodate the number of rowhash-level locks that your system workloads require.
Teradata Database automatically aborts a transaction when the number of rowhash-level locks it acquires exceeds the threshold defined by MaxRowHashBlocksPercent. No other transactions are affected when this occurs. This control over the number of rowhash-level locks held by a transaction applies only to ANSI session mode and explicit Teradata session mode transactions. Implicit Teradata mode transactions are not affected because each implicit Teradata mode request is a transaction in itself, and the system releases its locks immediately after it commits or rolls back.
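To illustrate the last point (in Teradata session mode, with illustrative names), the same updates count against the MaxRowHashBlocksPercent threshold only when they run inside an explicit transaction:

```sql
-- Implicit transaction: this single-statement request is a
-- transaction by itself, so its rowhash-level WRITE lock is released
-- as soon as the request commits. MaxRowHashBlocksPercent does not
-- apply.
UPDATE cust_rate SET cust_rating = 'FAIR' WHERE cust_id = 456;

-- Explicit transaction: every rowhash-level WRITE lock acquired
-- between BT and ET is held until ET, so the locks accumulate and
-- count against the MaxRowHashBlocksPercent threshold.
BT;
UPDATE cust_rate SET cust_rating = 'FAIR' WHERE cust_id = 456;
UPDATE cust_rate SET cust_rating = 'GOOD' WHERE cust_id = 789;
ET;
```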
You can review all the active locks and determine which locks held by other users are blocking your transactions using the Lock Display (lokdisp) utility (see Utilities) or the Lock Viewer Viewpoint portlet.
Once Placed, Locks Are Not Released Until a Transaction Completes
A database lock placed as part of a transaction is held for the duration of the transaction and is not released until one of the following events occurs:
- The transaction commits.
- The transaction aborts and has completed its rollback.
ABORT/ROLLBACK statements, asynchronous aborts, failures, timeouts, logoffs, and system restarts can cause a transaction to abort.
This lock release occurs at the end of the transaction regardless of when you receive the response to a request, because the response spool can persist after the transaction ends. Completion of the transaction also drops its Transient Journal rows and closes any open cursors.
During system restart, only update transactions that were in progress at the time of the crash need to be aborted and rolled back. WRITE and EXCLUSIVE locks remain in place for those transactions until they are rolled back.
Unlike the case for database transaction locks, you must release HUT locks explicitly. For more information about HUT locks, see Teradata Archive/Recovery Utility Reference.
Using Multistatement Requests to Minimize How Long Locks Are Held
When possible, you should group individual requests that access the same set of tables or views together using multistatement requests. For example, suppose you want to insert a row into a table, update another row in the same table, and then delete a third row from that table.
You could do this using the following five individual requests, which insert a row into cust_rate, update an existing row in cust_rate, delete a third row from cust_rate, execute a macro that operates on cust_rate, and then update the cust_rate row just touched by the init_cust macro:
     INSERT INTO cust_rate VALUES (123, 'GOOD');

     UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=456;

     DELETE FROM cust_rate WHERE cust_id=789;

     EXECUTE init_cust (9999);

     UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=9999;
In this case, you are submitting only one SQL statement per request, which does little to minimize request blocking.
If you were instead to submit multistatement requests that executed the identical SQL, Teradata Database could impose the same rowhash-level WRITE lock on cust_rate just twice, freeing the table for faster access by other concurrently running transactions. The two multistatement requests used to do that would be optimally sequenced in the following order:
     INSERT INTO cust_rate VALUES (123, 'GOOD')
     ;UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=456
     ;DELETE FROM cust_rate WHERE cust_id=789;

     EXECUTE init_cust (9999)
     ;UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=9999;
It is possible to combine these two multistatement requests into an even more efficient explicit multirequest transaction that sets only one rowhash-level WRITE lock on cust_rate.
     BEGIN TRANSACTION
     ;INSERT INTO cust_rate VALUES (123, 'GOOD')
     ;UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=456
     ;DELETE FROM cust_rate WHERE cust_id=789
     ;EXECUTE init_cust (9999)
     ;UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=9999
     ;END TRANSACTION;