15.11 - run_job - Data Stream Architecture

Teradata Data Stream Architecture (DSA) User Guide
December 2016


The run_job command runs a job as soon as all necessary resources are available. The DSC system limit is set at 20 concurrent running jobs, and up to 20 jobs can queue above that limit. The DSC also queues jobs if the defined target media is not available before the job starts.


run_job -n|-name JobName -b|-backup_type BackupType -p|-preview -r|-runtime -f|-file File -w|-wait -q|-query_status -u|-user_authentication User -I|-original_job_execution_ID ID


dsc run_job -n job1 -b cumulative -p -f file1.xml

dsc run_job -n job3 -b cumulative -I 13
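As a further sketch (job and user names here are illustrative), the -w and -q parameters can be combined to run a job synchronously and report detailed progress, and -u supplies the Viewpoint user when security management is enabled:

```shell
# Run job1 as a full backup, wait for completion, and report
# detailed status (percentage of completion and elapsed time).
dsc run_job -n job1 -b full -w -q

# When security management is enabled, supply the Viewpoint user;
# the command then prompts for the password.
dsc run_job -n job1 -b full -u vpuser
```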


-n|-name JobName
The name of the job on which to perform the action. Must be unique for each job.
-b|-backup_type BackupType
Enter the type of backup: full, delta, or cumulative.
-p|-preview
[Optional] Generates an XML file that lists the job plan and settings. When the -r parameter is also used, the job plan includes only systems and media servers that are online. If the -r parameter is not used, the job plan includes all systems and media servers, even if they are not online.
-r|-runtime
[Optional] Checks whether any of the media servers or systems are down, then generates a job plan that includes only the online media servers and systems. To use the -r parameter, you must also specify the -p parameter.
-f|-file File
[Optional] If you are previewing the job, this is the file path and file name of the output file in which to save the job plan.
-w|-wait
[Optional] Waits until the job has run, then displays a brief status, such as COMPLETED_ERRORS. Add the -q parameter for a more detailed status.
-q|-query_status
[Optional] Returns full status after the job has run, including the percentage of completion and the elapsed time of the job run. To use this parameter, you must also specify the -w parameter.
-u|-user_authentication User
Required when security management is enabled. Supplies the command with the Viewpoint user and triggers a password prompt for authentication.
-I|-original_job_execution_ID ID
The job execution ID of the original full backup job.
[Optional] Runs a backup or restore job with the objects that were skipped in the original full backup job. The new job contains only the skipped objects. The job execution ID must be from:
  • A job that completed with errors or warnings
  • A job with skipped objects
  • The job specified with the -n option
The save set containing the skipped objects is not a base for delta or cumulative backup jobs; that is, the next backup for the job must be a full backup.
To obtain a job execution ID, you can use the list_job_history command.
The run_job command with the -I option is not intended to be mixed with incremental backup job executions (delta or cumulative). Running incremental operations after a backup that uses the -I option results in incremental backups that cannot be restored or that result in an incomplete restore, including loss of data in the restored objects.
If you run the backup job using the -I option and the job completes with errors, you can use the original save set and run the job again with the -I option. The save set that results from rerunning includes the objects that were skipped in the execution that completed with errors. The newly generated save sets, together with the original save sets, are correlated, and all of them are required when restoring any object defined in the backup job definition.
dsc run_job -n LV500001 -I 13
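A typical skipped-objects workflow, sketched here with illustrative job names and execution IDs, first looks up the execution ID of the original full backup, then reruns only the skipped objects:

```shell
# List prior executions of the job to find the execution ID of the
# original full backup that completed with errors or warnings.
dsc list_job_history -n LV500001

# Rerun only the objects skipped in execution ID 13.
dsc run_job -n LV500001 -I 13

# The resulting save set is not a base for delta or cumulative
# backups, so the next backup of this job must be a full backup.
dsc run_job -n LV500001 -b full
```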

Usage Notes

The run_job command cannot be used successfully for a retired job.

XML File Example

This command does not take an XML file as input. When you preview a job with the -p parameter, you must supply a file name and location (with the -f parameter) to which the XML job plan is exported as output.
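As a sketch (the job name and output path are illustrative), a preview run writes the job plan to the file named with -f, and the exported XML can then be inspected with any XML tool:

```shell
# Generate a job plan limited to online systems and media servers
# (-r requires -p) and export it to the given XML file.
dsc run_job -n job1 -b cumulative -p -r -f /tmp/job1_plan.xml

# Inspect the exported plan; xmllint is one option for pretty-printing.
xmllint --format /tmp/job1_plan.xml
```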