Data Placement to Support Parallel Processing

Balanced Workloads

The key to parallel processing is balancing the workload of the database management system. SQL Engine balances its workload by distributing table rows evenly across the AMPs and by giving each AMP sole responsibility for the data it owns.

Row Distribution

Database rows are distributed across the AMPs by hashing: the value of the primary index or primary AMP index columns serves as the hash key, and the resulting row hash determines the AMP that owns the row. This does not apply to the rows of NoPI tables.
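As a minimal sketch, assuming a hypothetical customer table, the following SQL uses the Teradata hash functions HASHROW, HASHBUCKET, and HASHAMP to trace how a primary index value maps to its owning AMP:

CREATE TABLE customer (
  customer_id   INTEGER NOT NULL,
  customer_name VARCHAR(60)
)
UNIQUE PRIMARY INDEX (customer_id);

-- HASHROW computes the 32-bit row hash of the index value,
-- HASHBUCKET maps the row hash to a hash bucket, and HASHAMP
-- returns the number of the AMP that owns that bucket.
SELECT customer_id,
       HASHROW(customer_id) AS row_hash,
       HASHBUCKET(HASHROW(customer_id)) AS hash_bucket,
       HASHAMP(HASHBUCKET(HASHROW(customer_id))) AS owning_amp
FROM customer;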

Vantage also uses the row hash of a primary index or primary AMP index to retrieve a row. The following graphic shows how a row is hashed and assigned to an AMP:


How a row is hashed and assigned to an AMP

By carefully choosing the primary index or primary AMP index for each table, you can ensure that rows that are frequently joined hash to the same AMP, eliminating the need to redistribute the rows across the BYNET in order to join them.
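For example, the following sketch, which reuses the hypothetical customer table above, gives an orders table the same primary index column, so customer and order rows with equal customer_id values hash to the same AMP and the join can be performed AMP-locally:

CREATE TABLE orders (
  order_id    INTEGER NOT NULL,
  customer_id INTEGER NOT NULL,
  order_total DECIMAL(10,2)
)
PRIMARY INDEX (customer_id);

-- Matching customer and orders rows are co-located on the same AMP,
-- so this join requires no row redistribution across the BYNET.
SELECT c.customer_name, o.order_total
FROM customer c
JOIN orders o
  ON c.customer_id = o.customer_id;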

The following graphic shows how you can place rows from commonly joined tables on the same AMP to ensure that a join operation avoids redistributing rows across the BYNET:


Commonly joined tables on the same AMP

Shared-Nothing Architecture

The term shared-nothing architecture describes a multiprocessor database management system in which neither memory nor disk storage is shared among the processors. The PE and AMP vprocs in the SQL Engine architecture share neither memory nor disk across CPU units; Vantage therefore uses a shared-nothing database architecture. This architecture is what gives Teradata systems their scalability.

AMP Ownership of Data

Because of its shared-nothing architecture, each AMP in a Teradata system exclusively controls its own virtual disk space. As a result, each row is owned by exactly one AMP. That AMP is the only one in the system that can create, read, update, or lock its data. The AMP-local control of logging and locking not only enhances system parallelism, but also reduces BYNET traffic significantly. The following graphic shows how local autonomy provides each AMP (AMP 92 in this particular example) with total accountability for its own data.


Example of local autonomy on each AMP
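As a hedged illustration of this single-AMP autonomy, again assuming the hypothetical customer table above, a request that supplies the primary index value is routed to the one AMP that owns the row, and only that AMP locks and updates it:

-- Primary index access: Vantage hashes 1001, sends the request to the
-- owning AMP, and places a rowhash-level write lock on that AMP only.
UPDATE customer
SET customer_name = 'Acme Corporation'
WHERE customer_id = 1001;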

Other Applications of Hashing for Parallel Processing

SQL Engine employs row hashing for tasks beyond simple row distribution and retrieval.

Several types of join processing hash on join column values to determine which AMP is to handle the join of the particular rows. This enables Vantage to balance the join load across AMPs, with each doing an even subset of the total work.
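As a sketch, assuming hypothetical employee and department tables in which dept_no is not the primary index of either table, a join such as the following typically causes rows to be hashed on the dept_no join values and redistributed so that matching rows meet on the same AMP; prefacing the request with EXPLAIN shows the strategy the Optimizer actually chooses:

-- Rows are hashed on dept_no and redistributed (or the smaller table
-- is duplicated) across the AMPs; each AMP then joins its own subset.
SELECT e.emp_name, d.dept_name
FROM employee e
JOIN department d
  ON e.dept_no = d.dept_no;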

Similarly, the row hash match scan method sorts the rows to be joined on their row hash values and then scans both tables in parallel. Because both scans proceed in row hash order, the scan of one table can skip ahead to the row hash at which the other table is already positioned, as the following graphic indicates:

Row hash match scan example
In this example, the scan of Table A reads row 1 and joins it to row 1 of Table B. The scan then obtains the row hash of the next row in Table B, row 9, skips directly ahead to the matching row hash in Table A, and joins those rows. Unmatched rows are skipped without being read, because each scan jumps ahead to the next highest candidate row hash for which a join is possible. This skipping of candidate rows can significantly shorten the time required to perform the join. See Teradata Vantage™ - SQL Request and Transaction Processing, B035-1142 for more information about this join method.

Aggregate processing also takes advantage of hashing to build its totals. For example, the following graphic indicates how each AMP aggregates its rows locally as the parallel first step in the global parallel aggregation process:


Example of AMP aggregation on local rows

Following that, the values of the GROUP BY key are hashed, and the resulting hash bucket for each distinct value identifies the AMP responsible for building the global aggregate for that group.
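As a sketch, assuming a hypothetical daily_sales table, both phases apply to an ordinary aggregate query such as this one:

-- Phase 1: each AMP builds partial sums for the region values found
-- in its own rows (local aggregation).
-- Phase 2: each distinct region value is hashed, and the partial sums
-- are sent to the AMP that owns the resulting hash bucket, which
-- produces the global total for that group.
SELECT region, SUM(sales_amount) AS total_sales
FROM daily_sales
GROUP BY region;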

The illustration shows only one AMP per node for simplicity. A real Teradata system typically has multiple AMPs per node, each performing its own role in producing the global aggregate totals.