Job Scheduling Around Peak Utilization

Teradata Vantage™ - Database Administration

Product: Teradata Database / Teradata Vantage NewSQL Engine
Release Number: 16.20
Published: March 2019
Language: English (United States)
Last Update: 2019-05-03

Rescheduling Jobs

Once you determine your peak system utilization times, you can recommend that some jobs be moved to other time slots.

For example, if peak periods are 9 A.M. to 5 P.M., you can schedule batch and load jobs overnight so they do not interfere with peak daytime demand.
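One way to locate those peak periods is to aggregate DBQL data by hour of day. The following is a minimal sketch, assuming query logging is enabled and that the DBC.QryLogV view with its AMPCPUTime and TotalIOCount columns is available (column names can vary by release); the date range is illustrative only:

  -- Summarize logged CPU and I/O by hour of day to locate peak periods.
  SELECT EXTRACT(HOUR FROM StartTime) AS Hour_Of_Day,
         COUNT(*)                     AS Query_Count,
         SUM(AMPCPUTime)              AS Total_CPU_Seconds,
         SUM(TotalIOCount)            AS Total_Logical_IO
  FROM   DBC.QryLogV
  WHERE  CAST(StartTime AS DATE) BETWEEN DATE '2019-04-01' AND DATE '2019-04-30'
  GROUP  BY 1
  ORDER  BY Total_CPU_Seconds DESC;

The hours with the highest CPU and I/O totals are the ones to protect when rescheduling batch and load work.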

Bound Jobs

Because some jobs tend to be CPU-bound and others I/O-bound, it is a good idea to determine which category each job falls into. You can make this determination by analyzing AMPUsage data.

You can then schedule a CPU-bound job alongside an I/O-bound job so that the resource underutilized by one job is used more fully by the other.
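As a sketch of that AMPUsage analysis, the following query ranks users and accounts by CPU seconds consumed per logical I/O, which separates CPU-bound work (high ratio) from I/O-bound work (low ratio). It assumes the DBC.AMPUsageV view with CpuTime and DiskIO columns; names and units may differ by release:

  -- Rank accounts by CPU seconds per logical I/O.
  SELECT UserName,
         AccountName,
         SUM(CpuTime)                           AS Total_CPU,
         SUM(DiskIO)                            AS Total_Disk_IO,
         SUM(CpuTime) / NULLIFZERO(SUM(DiskIO)) AS CPU_Per_IO
  FROM   DBC.AMPUsageV
  GROUP  BY UserName, AccountName
  ORDER  BY CPU_Per_IO DESC;

Jobs near the top of the ranking are candidates for pairing with jobs near the bottom. If account string expansion (ASE) is in use, the expanded AccountName values also let you break these totals down by date and hour.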

TASM and Concurrency

TASM throttle rules can be useful in controlling the concurrency of certain types of queries during peak utilization times.

In addition, TASM filter rules can prevent queries with certain characteristics from even starting to execute during specific windows of time. This can help keep utilization levels under control at times of high contention.
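TASM throttle and filter rules themselves are defined through Viewpoint Workload Designer rather than SQL, but DBQL can show what actually runs inside a candidate window and therefore where a throttle or filter would have the most effect. A minimal sketch, again assuming DBC.QryLogV and a 9 A.M. to 5 P.M. window:

  -- Profile the work that runs during the peak window.
  SELECT UserName,
         StatementType,
         COUNT(*)          AS Query_Count,
         SUM(AMPCPUTime)   AS Total_CPU_Seconds,
         SUM(TotalIOCount) AS Total_Logical_IO
  FROM   DBC.QryLogV
  WHERE  EXTRACT(HOUR FROM StartTime) BETWEEN 9 AND 16
  GROUP  BY UserName, StatementType
  ORDER  BY Total_CPU_Seconds DESC;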

Managing I/O-Intensive Workloads

Below are suggestions for balancing resource usage when the system is I/O-bound:

  • Identify I/O-intensive portions of the total work using AMPUsage reports and DBQL.
  • Reschedule I/O-intensive work to off-hours.
  • Look for query or database tuning opportunities (several of these are illustrated in the sketch after this list), including:
    • Collecting/refreshing statistics on all join and selection columns
    • Adding indexes, join indexes, or sparse indexes
    • Using multivalue compression (MVC) to reduce row size and fit more rows per block
    • Using a partitioned primary index (PPI)
    • Increasing data block sizes
    • Using a 3NF data model to obtain narrower rows, more rows per block, and fewer I/Os, then denormalizing as needed
    • Increasing node memory to expand the size of the FSG cache
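Several of these options can be combined in a single table definition. The following is a sketch only: the Sales_Fact table, its columns, and the compression value lists are hypothetical, and the block size and partitioning choices depend entirely on the workload.

  -- Hypothetical table illustrating MVC, PPI, and a larger data block size.
  CREATE TABLE Sales_Fact
      ,MAXIMUM DATABLOCKSIZE                        -- larger blocks: more rows per I/O
      ( Sale_Id   BIGINT NOT NULL,
        Sale_Date DATE NOT NULL,
        Store_Id  INTEGER,
        Region_Cd CHAR(2) COMPRESS ('NE','SE','MW','SW','NW'),  -- MVC on frequent values
        Sale_Amt  DECIMAL(12,2) COMPRESS (0.00)                 -- MVC on a common value
      )
  PRIMARY INDEX ( Sale_Id )
  PARTITION BY RANGE_N ( Sale_Date BETWEEN DATE '2019-01-01'
                                       AND DATE '2019-12-31'
                                      EACH INTERVAL '1' MONTH,
                         NO RANGE );                            -- PPI on the date column

  -- Refresh statistics on join and selection columns used by the workload.
  COLLECT STATISTICS
      COLUMN ( Sale_Date ),
      COLUMN ( Store_Id )
  ON Sales_Fact;

Join indexes and sparse join indexes are defined separately with CREATE JOIN INDEX, and FSG cache size is a node memory and DBS Control consideration rather than a table-level setting, so neither is shown here.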