Any number of users and applications can simultaneously access data stored in a database.
The Lock Manager imposes concurrency control on Vantage by managing the locks on the database objects being accessed by each transaction and releasing those locks when the transaction either commits or rolls back its work. This control ensures that the data remains consistent for all users. With the exception of global deadlock prevention and detection, locks are not managed globally, but locally by each individual AMP.
For information about global deadlock detection, see Deadlock.
Although the Parser defines the default locks for a request, you can explicitly upgrade locks, and in some cases downgrade them, by using the SQL LOCKING request modifier. See LOCKING Request Modifier.
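For example, a minimal sketch of such a downgrade: the default READ lock for a SELECT is replaced with an ACCESS lock, which lets the read proceed alongside concurrent updates (the cust_rate table is borrowed from the examples later in this section):

LOCKING TABLE cust_rate FOR ACCESS
SELECT cust_id, cust_rating
FROM cust_rate;

Because an ACCESS lock does not conflict with concurrent WRITE locks, the result can include uncommitted changes, which is the trade-off for the added concurrency.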
Locking Considerations
Vantage tries to lock database objects at the least restrictive level and severity that preserve database integrity while maintaining maximum concurrency.
Users have no control over when locks are released because of the way two-phase locking works (see Database Locks, Two-Phase Locking, and Serializability).
When Vantage processes a request that requires a table-level lock, it checks for conflicting locks on:
- The database that owns the table
- The table itself
- Any partitions or rows in the table
When Vantage processes a request that requires a rowhash-level lock, it checks for conflicting locks on:
- The owner database for the table
- The parent table for the row
- A partition of the table
- The row hash itself
In each case, the request is queued until the conflicting lock is released.
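For illustration, the following sketch assumes two concurrent sessions in Teradata session mode and reuses the cust_rate table from the examples later in this section; it shows a rowhash-level request queuing behind a table-level lock:

/* Session 1: hold a table-level WRITE lock for the duration of an
   explicit transaction. */
BEGIN TRANSACTION;
LOCKING TABLE cust_rate FOR WRITE
SELECT COUNT(*) FROM cust_rate;

/* Session 2: this request needs only a rowhash-level WRITE lock, but the
   conflict check finds the table-level WRITE lock held by session 1, so
   the request is queued until that lock is released. */
UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=456;

/* Session 1: ending the transaction releases the table-level lock, and the
   queued request in session 2 can proceed. */
END TRANSACTION;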
Lock Manager resources are finite, and there is a per-AMP limit on the number of locks that can be placed at a time. When Lock Manager resources are exhausted, the Transaction Manager stops any transaction that requests a lock. In such cases, you can disable rowhash-level locking for DDL requests.
If an application begins a transaction and then performs a large number of single-row updates without closing that transaction, the lock table on at least one AMP can fill to its maximum capacity. This typically occurs with, but is not limited to, transactions in ANSI session mode.
Such an AMP lock table overflow also affects other transactions because no request can begin without first acquiring one or more locks on its underlying database objects. After a while, transactions that are unable to acquire any locks stop, and can even cause the system to stop responding.
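A minimal sketch of the pattern to avoid, assuming an ANSI-mode session and a hypothetical cust_acct table:

-- ANSI session mode: each request joins the current open transaction, so every
-- single-row UPDATE adds another rowhash-level WRITE lock to an AMP lock table.
UPDATE cust_acct SET balance = balance + 10 WHERE cust_id = 1;
UPDATE cust_acct SET balance = balance + 10 WHERE cust_id = 2;
-- ... many thousands of similar single-row updates ...
COMMIT;  -- only here are the accumulated locks released

Committing more frequently, or folding the updates into fewer requests, bounds the number of locks the transaction holds at any one time.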
Locks Are Not Released until a Transaction Completes
Vantage does not release the locks held by a transaction until one of the following occurs:
- The transaction commits.
- The transaction aborts and completes its rollback.
ABORT/ROLLBACK statements, asynchronous aborts, failures, timeouts, logoffs, and system restarts can cause a transaction to abort.
Locks are released at the end of the transaction regardless of when you receive the response to a request, because the spooled response can persist after the transaction ends. Each of these completion events also discards the Transient Journal entries for the transaction and closes any open cursors.
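For example, in ANSI session mode (again using the cust_rate table from the examples below), the lock placed by a rolled-back update is held until the rollback completes:

-- Places a rowhash-level WRITE lock on the row for cust_id 456.
UPDATE cust_rate SET cust_rating='POOR' WHERE cust_id=456;
-- The lock is released only after the rollback of the transaction completes,
-- not at the moment the ROLLBACK statement is submitted.
ROLLBACK;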
During system restart, only update transactions that were in progress at the time the system stopped responding need to be stopped and rolled back. WRITE and EXCLUSIVE locks remain in place for those transactions until those transactions are rolled back.
Using Multiple-Statement Requests to Minimize How Long Locks Are Held
When possible, group individual requests that access the same set of tables or views into multiple-statement requests. For example, suppose you want to insert a row into a table, update another row in the same table, delete a third row from that table, run a macro against the table, and then update the row that the macro touched.
You can do this with the following individual requests, which insert a row into cust_rate, update an existing row in cust_rate, delete a third row from cust_rate, run the init_cust macro against cust_rate, and then update the cust_rate row touched by init_cust:
INSERT INTO cust_rate VALUES (123, 'GOOD');
UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=456;
DELETE FROM cust_rate WHERE cust_id=789;
EXECUTE init_cust (9999);
UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=9999;
Because you submit only one SQL statement per request, this approach is inefficient from the perspective of minimizing request blocking.
If you submit a multiple-statement request that runs the identical SQL, Vantage can place the same rowhash-level WRITE lock on cust_rate only twice, freeing the table for faster access by other concurrently running transactions. The multiple-statement requests used to do that are optimally sequenced as follows:
INSERT INTO cust_rate VALUES (123, 'GOOD')
;UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=456
;DELETE FROM cust_rate WHERE cust_id=789
;EXECUTE init_cust (9999)
;UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=9999;
In Teradata session mode, the same request can also be enclosed in an explicit transaction:
BEGIN TRANSACTION;
INSERT INTO cust_rate VALUES (123, 'GOOD')
;UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=456
;DELETE FROM cust_rate WHERE cust_id=789
;EXECUTE init_cust (9999)
;UPDATE cust_rate SET cust_rating='FAIR' WHERE cust_id=9999;
END TRANSACTION;