Job Scheduling Around Peak Utilization

Rescheduling Jobs

Once you determine your peak system utilization times, you can recommend that some jobs be moved to other time slots.

For example, if peak periods are 9 A.M. to 5 P.M., you can schedule batch and load jobs overnight so they do not interfere with peak daytime demand.
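As a sketch of how you might locate those peak windows, the following query summarizes logged CPU and I/O by hour of day from the DBQL view DBC.QryLogV. It assumes query logging is enabled for the workloads of interest; the date range shown is illustrative.

    SELECT EXTRACT(HOUR FROM StartTime) AS HourOfDay,
           SUM(AMPCPUTime)              AS TotalCPUSeconds,
           SUM(TotalIOCount)            AS TotalLogicalIO,
           COUNT(*)                     AS QueryCount
    FROM DBC.QryLogV
    WHERE CAST(StartTime AS DATE)
          BETWEEN DATE '2022-06-01' AND DATE '2022-06-30'
    GROUP BY 1
    ORDER BY 1;

Hours with the highest CPU and I/O totals mark your peak window. Jobs that consistently run inside that window without a business reason to do so are candidates for rescheduling.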

Bound Jobs

Because some jobs tend to be CPU-bound and others I/O-bound, it is a good idea to determine which jobs fall into each category. You can make this determination by analyzing AMPUsage data.
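One way to sketch that classification is to compare accumulated CPU seconds to logical disk I/Os per account in DBC.AMPUsageV. This assumes each job runs under its own account string, a common account-string setup; what counts as a "high" or "low" ratio depends on your platform, so treat the ordering as relative.

    SELECT AccountName,
           SUM(CpuTime) AS TotalCPUSeconds,
           SUM(DiskIO)  AS TotalLogicalIO,
           SUM(CpuTime) / NULLIFZERO(SUM(DiskIO)) AS CPUSecondsPerIO
    FROM DBC.AMPUsageV
    GROUP BY 1
    ORDER BY 4 DESC;

Accounts near the top of the result are relatively CPU-bound; those near the bottom are relatively I/O-bound.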

You can schedule a CPU-bound job alongside an I/O-bound job so that the resource underutilized by one job can be used more fully by the other.

TASM and Concurrency

Teradata Active System Management (TASM) throttle rules can be useful in controlling the concurrency of certain types of queries during peak utilization times.

In addition, TASM filter rules can prevent queries with certain characteristics from even starting to execute during specific windows of time. This can help keep utilization levels under control at times of high contention.
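Throttle and filter rules themselves are defined through workload management tooling such as Viewpoint Workload Designer rather than SQL, but you can observe their effect in DBQL. As a sketch, assuming the DelayTime column is populated for queries held in the delay queue, the following query summarizes delays during an assumed 9 A.M. to 5 P.M. peak window.

    SELECT UserName,
           COUNT(*)       AS DelayedQueries,
           AVG(DelayTime) AS AvgDelaySeconds,
           MAX(DelayTime) AS MaxDelaySeconds
    FROM DBC.QryLogV
    WHERE DelayTime > 0
      AND EXTRACT(HOUR FROM StartTime) BETWEEN 9 AND 16
    GROUP BY 1
    ORDER BY 3 DESC;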

Managing I/O-Intensive Workloads

The following suggestions can help balance resource usage when the system is I/O-bound:
  • Identify I/O-intensive portions of the total work using AMPUsage reports and DBQL.
  • Reschedule I/O-intensive work to off-hours.
  • Look for query or database tuning opportunities (several are sketched in the example after this list), including:
    • Collecting or refreshing statistics on all join and selection columns
    • Adding indexes, join indexes, or sparse indexes
    • Using multivalue compression (MVC) to reduce row size and fit more rows per block
    • Using a partitioned primary index (PPI)
    • Increasing data block sizes
    • Using a 3NF data model to obtain narrower rows, more rows per block, and fewer I/Os, and then denormalizing as needed
    • Increasing node memory to expand the size of the FSG cache
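The sketch below illustrates several of the tuning options above in Teradata SQL. The Sales.Orders table, its columns, the compression value list, the partitioning range, and the DATABLOCKSIZE value are all hypothetical; adapt them to your own schema and platform.

    -- Collect statistics on hypothetical join and selection columns.
    COLLECT STATISTICS
        COLUMN (CustomerID),
        COLUMN (OrderDate)
    ON Sales.Orders;

    -- Rebuild the table with MVC on a low-cardinality column, a monthly
    -- PPI, and a larger data block size (values shown are illustrative).
    CREATE TABLE Sales.Orders_New,
        DATABLOCKSIZE = 130560 BYTES
        (
          OrderID    INTEGER NOT NULL,
          CustomerID INTEGER NOT NULL,
          OrderDate  DATE NOT NULL,
          Region     CHAR(8) COMPRESS ('NORTH', 'SOUTH', 'EAST', 'WEST')
        )
        PRIMARY INDEX (OrderID)
        PARTITION BY RANGE_N(OrderDate BETWEEN DATE '2021-01-01'
                                           AND DATE '2022-12-31'
                             EACH INTERVAL '1' MONTH);

    -- Sparse join index limited to recent rows so it stays small.
    CREATE JOIN INDEX Sales.Orders_JI AS
        SELECT CustomerID, OrderDate, OrderID
        FROM Sales.Orders_New
        WHERE OrderDate >= DATE '2022-01-01'
        PRIMARY INDEX (CustomerID);

The sparse join index uses a WHERE clause so that only the rows most queries touch are maintained, which keeps its maintenance I/O low relative to a full join index.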