Proxy Locks | Types of Lock | Transaction Processing | Teradata Vantage - Advanced SQL Engine - Teradata Database

SQL Request and Transaction Processing

Product
Advanced SQL Engine
Teradata Database
Release Number
17.05
17.00
Published
June 2020
Language
English (United States)
Last Update
2021-01-24
dita:id
B035-1142
Product Category
Teradata Vantageā„¢

Proxy locks provide a mechanism for queueing locks to avoid the global deadlocks that can otherwise occur when Teradata places a lock on each of the AMPs in a parallel system (see Deadlock).

When you make an all-AMP request for a READ, WRITE, or EXCLUSIVE lock, or for an ACCESS lock on a load-isolated table (see Load Isolation), the system automatically places a proxy lock on a single AMP before it places the all-AMP lock. Think of the proxy lock as an intention lock; that is, it indicates that the request intends to place a lock on each of the AMPs.

About Proxy Locks

A proxy lock enables sequential locking on database objects that span multiple AMPs in a parallel database architecture. Without a proxy lock, if multiple users simultaneously submit all-AMP requests on the same table, a deadlock is almost certain to occur: requests sent in parallel are likely to arrive at the AMPs that hold the table rows in different orders, so each request locks the rows of that table on different AMPs first, which creates a deadlock situation.

For example, suppose a request from user_1 locks table rows on AMP 3, while user_2 locks the table rows on AMP 4 first. When the user_1 request attempts to lock table rows on AMP 4, or when the user_2 request attempts to lock table rows on AMP 3, a global deadlock occurs. A proxy lock prevents such deadlocks from occurring.
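The interleaving in this example can be sketched with a small, hypothetical model (plain Python, not Teradata internals); the `run_lock_schedule` function and its circular-wait check are illustrative assumptions, not the database's actual deadlock detection:

```python
# Hypothetical sketch (not Teradata internals): replay an interleaving of
# per-AMP lock attempts and detect the circular wait described above.
def run_lock_schedule(schedule):
    """schedule is a list of (request, amp) lock attempts in arrival order.
    Returns the set of requests caught in a circular wait (a deadlock)."""
    holder = {}   # amp -> request currently holding that AMP's lock
    waiting = {}  # request -> amp it is blocked on
    for req, amp in schedule:
        if amp in holder and holder[amp] != req:
            waiting[req] = amp        # blocked behind the current holder
        else:
            holder[amp] = req         # lock granted on this AMP
    # Deadlocked requests: blocked on an AMP whose holder is itself blocked.
    return {req for req, amp in waiting.items() if holder[amp] in waiting}

# user_1 locks AMP 3 first and user_2 locks AMP 4 first; then each tries
# the AMP the other already holds:
blocked = run_lock_schedule(
    [("user_1", 3), ("user_2", 4), ("user_1", 4), ("user_2", 3)])
assert blocked == {"user_1", "user_2"}   # the global deadlock
```

With a proxy lock, the two requests would first contend for a single lock on one AMP, so only one of them would proceed to the per-AMP locks at a time and no circular wait could form.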

Proxy locking has the following properties:

  • Each table has a system-assigned table ID that can be associated with a unique hash value. This hash value identifies a single AMP, called the gatekeeper AMP, on which to place the proxy lock for this table.
  • The table ID hash values are evenly distributed across the AMPs so that proxy locks are not concentrated on a single AMP. For a given table, however, the gatekeeper AMP is always the same.
  • An all-AMP step for a READ, WRITE, or EXCLUSIVE lock (or an ACCESS lock for load-isolated tables) is always preceded by a single-AMP step that is sent to the relevant gatekeeper AMP to place a proxy lock.
  • The gatekeeper AMP places a rowhash lock on the table by using a reserved hash code value of 0xFFFFFFFF. This reserved hash value is a value that cannot be generated by the hashing function.
  • For a row-partitioned table, a proxy lock may be placed on a partition with a reserved internal partition number of 0xFFFFFFFFFFFFFFFF. This reserved internal partition number is a value that cannot be generated for actual partitions. The reserved partition is used to prevent more than one concurrent writer on a table.
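The first two properties above can be illustrated with a short sketch. The hash function (`zlib.crc32`) and the 8-AMP system size are assumptions chosen for illustration; they are not Teradata's actual table-ID hashing:

```python
# Illustrative model (assumed, not Teradata's hash function): a table ID
# hashes to one AMP, the gatekeeper AMP, which holds that table's proxy lock.
import zlib

NUM_AMPS = 8  # assumed system size for this sketch

def gatekeeper_amp(table_id: int, num_amps: int = NUM_AMPS) -> int:
    """Map a table ID to the single AMP where its proxy lock is placed."""
    return zlib.crc32(table_id.to_bytes(8, "little")) % num_amps

# The same table always maps to the same gatekeeper AMP ...
assert gatekeeper_amp(1001) == gatekeeper_amp(1001)
# ... while many different tables spread their proxy locks across the AMPs.
amps_used = {gatekeeper_amp(t) for t in range(1, 200)}
```

The even distribution of table ID hash values is what keeps one AMP from becoming a bottleneck for all proxy locking, while the deterministic mapping guarantees that concurrent requests on the same table always meet at the same gatekeeper AMP.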

Table-Level Proxy Locking

Consider the following scenario:

  1. User_1 submits an all-AMPs request.
  2. The request-originating PE sends a message to the gatekeeper AMP for the table.
  3. The gatekeeper AMP places a rowhash lock on the table using the reserved hash value.
  4. Because the table is not currently locked, the user_1 request obtains the requested reserved rowhash lock and proceeds to obtain a lock on each of the AMPs.
  5. Meanwhile, user_2 submits an all-AMP request for the same table.
  6. The request-originating PE sends a message to the gatekeeper AMP for the table.
  7. The gatekeeper AMP attempts to place a rowhash lock on the table using the reserved hash value.
  8. Because user_1 already holds the reserved rowhash lock for the table, the request from user_2 must wait in a queue until the request submitted by user_1 releases its locks on the table. Because the user_2 request was next in sequence to request a proxy lock on the table, it is next in the queue to lock the table for processing.

Teradata Database handles all-AMP lock requests as follows:

  1. The PE that processes an all-AMPs request uses the table ID hash value to determine the gatekeeper AMP where the proxy lock is to be set for the table.
  2. The first request to place a proxy lock acquires locks on the table across all AMPs.
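The queueing behavior in the scenario above can be sketched as a first-in, first-out lock on the gatekeeper AMP (an assumed model; the `ProxyLock` class and its methods are illustrative, not Teradata internals):

```python
# Assumed model of proxy-lock queueing on the gatekeeper AMP (not Teradata
# internals): one holder at a time, later requests wait in arrival order.
from collections import deque

class ProxyLock:
    """FIFO proxy lock held on the gatekeeper AMP for one table."""
    def __init__(self):
        self.holder = None       # request currently holding the proxy lock
        self.queue = deque()     # requests waiting in arrival order

    def request(self, user):
        if self.holder is None:
            self.holder = user   # granted; the all-AMP locks can follow
        else:
            self.queue.append(user)  # queue behind earlier requests

    def release(self):
        done, self.holder = self.holder, None
        if self.queue:
            self.holder = self.queue.popleft()  # next in sequence proceeds
        return done

lock = ProxyLock()
lock.request("user_1")           # step 4: user_1 obtains the proxy lock
lock.request("user_2")           # step 8: user_2 must wait in the queue
assert lock.holder == "user_1"
lock.release()                   # user_1 releases its locks on the table
assert lock.holder == "user_2"   # user_2 is next in the queue
```

Because every all-AMP request on the table must pass through this single queue before fanning out, the per-AMP locks are always acquired in the same global order.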

The following example is an EXPLAIN report for a simple SELECT from the nonpartitioned table named t4.

EXPLAIN SELECT * FROM t4;
Explanation
-----------------------------------------------------------------------
  1) First, we lock PLS.t4 for read on a reserved rowHash to prevent
     global deadlock.
  2) Next, we lock PLS.t4 for read.
  3) We do an all-AMPs RETRIEVE step from PLS.t4 by way of an all-rows
     scan with no residual conditions into Spool 1 (group_amps), which
     is built locally on the AMPs. The size of Spool 1 is estimated
     with low confidence to be 8 rows (344 bytes). The estimated time
     for this step is 0.15 seconds.
  4) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1. The total estimated time is 0.15 seconds.

Step 1 is a single-AMP lock step that is sent to the gatekeeper AMP for table t4. The gatekeeper AMP is determined by the hash value of the table ID of table t4. The gatekeeper AMP places a rowhash lock using the reserved rowhash on table t4.

Step 2 is an all-AMP lock step that places a table-level lock on table t4 on each AMP.

In the above example, step 1 is a typical proxy lock that is associated with a subsequent table-level lock on the actual table in step 2. This form is always used for a single table. If multiple tables need to be locked by proxy, Teradata Database places a separate single-AMP proxy lock for each of them before placing the table-level locks. Note that unlike the proxy locks that are placed in individual steps, the table-level locks on multiple tables are all placed in one all-AMP step.

The following examples use a row-partitioned table named t5, which was created as follows:

CREATE TABLE t5 (a INTEGER, b INTEGER)
PRIMARY INDEX (a)
PARTITION BY RANGE_N (b BETWEEN 1 AND 100 EACH 1);

The following example is an EXPLAIN report for a simple SELECT from table t5. The relevant EXPLAIN text is in the locking steps, steps 1 and 2.

EXPLAIN SELECT * FROM t5;
Explanation
-----------------------------------------------------------------------
  1) First, we lock PLS.t5 for read on a reserved rowHash in all
     partitions to prevent global deadlock.
  2) Next, we lock PLS.t5 for read.
  3) We do an all-AMPs RETRIEVE step from PLS.t5 by way of an all-rows
     scan with no residual conditions into Spool 1 (group_amps), which
     is built locally on the AMPs. The size of Spool 1 is estimated
     with low confidence to be 8 rows (344 bytes). The estimated time
     for this step is 0.15 seconds.
  4) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1. The total estimated time is 0.15 seconds.

Step 1 is a single-AMP lock step that is sent to the gatekeeper AMP for table t5. The gatekeeper AMP is determined by the hash value of the table ID of table t5. The gatekeeper AMP places a rowhash lock in all partitions using the reserved rowhash on table t5.

Step 2 is an all-AMP lock step that places a table-level lock on table t5 on each AMP.

The following example is an EXPLAIN report for a SELECT from table t5 with a condition on the partitioning column such that only a single partition needs to be accessed.

EXPLAIN SELECT * FROM t5 WHERE b = 4;
Explanation
-----------------------------------------------------------------------
  1) First, we lock PLS.t5 for read on a reserved rowHash in a single
     partition to prevent global deadlock.
  2) Next, we lock PLS.t5 for read on a single partition.
  3) We do an all-AMPs RETRIEVE step from a single partition of PLS.t5
     with a condition of ("PLS.t5.B = 4") with a residual condition of
     ("PLS.t5.B = 4") into Spool 1 (group_amps), which is built locally
     on the AMPs. The size of Spool 1 is estimated with no confidence
     to be 1 row (43 bytes). The estimated time for this step is 0.15
     seconds.
  4) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1. The total estimated time is 0.15 seconds.

Step 1 is a single-AMP lock step that is sent to the gatekeeper AMP for table t5. The gatekeeper AMP is determined by the hash value of the table ID of table t5. The gatekeeper AMP places a rowhash lock in a single partition using the reserved rowhash on table t5.

Step 2 is an all-AMP lock step that places a partition lock on table t5 on each AMP.

The following example is an EXPLAIN report for a SELECT from table t5 with a condition that limits access to a range of partitions.

EXPLAIN SELECT * FROM t5 WHERE b BETWEEN 3 AND 5;
Explanation
-----------------------------------------------------------------------
  1) First, we lock PLS.t5 for read on a reserved rowHash in all
     partitions to prevent global deadlock.
  2) Next, we lock PLS.t5 for read.
  3) We do an all-AMPs RETRIEVE step from 3 partitions of PLS.t5 with a
     condition of ("(PLS.t5.B <= 5) AND (PLS.t5.B >= 3)") into Spool 1
     (group_amps), which is built locally on the AMPs. The size of
     Spool 1 is estimated with no confidence to be 2 rows (86 bytes).
     The estimated time for this step is 0.15 seconds.
  4) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1. The total estimated time is 0.15 seconds.

Step 1 is a single-AMP lock step that is sent to the gatekeeper AMP for table t5. The gatekeeper AMP is determined by the hash value of the table ID of table t5. The gatekeeper AMP places a rowhash lock in all partitions (since more than one partition must be locked) using the reserved rowhash on table t5.

Step 2 is an all-AMP lock step that places a table-level lock on table t5 on each AMP.