Purpose
The start command starts a job that was created with the create command. You can specify job variable values that differ from those originally used by entering them on the command line at runtime. You can also modify job variable values and the list of objects to copy by supplying an updated parameters.xml file. If the daemon does not have sufficient resources to run the job immediately, the job is queued.
Syntax
See Data Mover XML Schemas.
Parameters
See Parameter Order.
- data_streams
- [Optional] Number of data streams to use between the source and target databases. Applies to jobs that use Teradata DSA and TPT API (to and from Teradata). All other protocols use a single data stream.
- db_client_encryption
- [Optional] Set to true if the data needs to be encrypted during transfer.
- dm.rest.endpoint
- [Optional] Enter a Data Mover REST server URL to override the default value specified in the commandline.properties file, in order to connect to a different REST server (and therefore a different daemon) at runtime.
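As a sketch, the default endpoint entry in commandline.properties might look like the following (the host name and port shown here are placeholders, not product defaults):

```properties
# Hypothetical entry in commandline.properties; replace the host
# and port with the values for your Data Mover REST server.
dm.rest.endpoint=https://dm-rest-host:1443/datamover
```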
- force_utility
- [Optional] Forces the Data Mover daemon to use a specific utility for all copy operations.
Valid Values
- dsa
- jdbc
- tptapi
- tptapi_load
- tptapi_stream
- tptapi_update
- T2T
If this value is not specified, the Data Mover daemon determines which Teradata utility is the best to use for the job.
Copying data to an older version of Analytics Database using Teradata DSA is not valid. You cannot use Teradata DSA if the source and target TDPIDs are the same.
- job_name
- Name of the job to be started.
- job_priority
- [Optional] Specifies the execution priority for the job. Supported values are HIGH, MEDIUM, LOW, and UNSPECIFIED. If no value is specified, the default of MEDIUM is used at runtime.
- log_level
- [Optional] Log level for log file output.
Valid Values
- 0
- 1
- 2
- 99
- max_agents_per_task
- [Optional] Maximum number of Data Mover agents to use in parallel when moving tables or databases.
- netrace
- [Optional] CLI netrace parameter. Any value greater than or equal to 0 generates a CLI trace log. A valid CLI value must be provided.
- netrace_buf_len
- [Optional] CLI netrace_buf_len parameter. Any value greater than or equal to 0 generates a CLI trace log. A valid CLI value must be provided.
- online_archive
- [Optional] Allows read and write access to the source tables while the tables are being copied with Teradata DSA. Updates that occur to the source table during the copy are not transferred to the target table. After a successful copy, the data in the target table matches the data that was in the source table at the beginning of the copy.
Valid Values
- True: Enables online archive
- False: Disables online archive
- Unspecified: Default; the value is set to the value in the Data Mover daemon configuration file
- overwrite_existing_objects
- [Optional] Job overwrites objects that already exist on the target. If the parameter is not specified, the value is set to the overwrite_existing_objects parameter value in the Data Mover daemon configuration file. If the parameter is specified as true or false, that value takes precedence over the value in the daemon configuration file.
Valid Values
- True: Enables overwriting
- False: Disables overwriting
- Unspecified: Default; the value is set to the value in the Data Mover daemon configuration file
- query_band
- [Optional] A semicolon-separated set of name-value pairs that uniquely identifies Teradata sessions or transactions for the source and target. To use a query band to identify the job payroll, the user ID aa100000, and the job session number 1122, define the query band as follows:
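A sketch of that query band in the parameters XML, using the semicolon-separated name-value pair form (the key names Job, UserID, and Session are illustrative):

```xml
<query_band>Job=payroll;UserID=aa100000;Session=1122;</query_band>
```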
- response_timeout
- [Optional] Amount of time, in seconds, to wait for a response from the Data Mover daemon.
- save_changes
- [Optional] Saves the changed job variable values and uses them to replace the values originally defined when the job was created.
- security_password
- [Optional] Password for the super user or authorized Viewpoint user.
- security_password_encrypted
- [Optional] Encrypted password for the super user.
- security_username
- [Optional] User ID of the super user or authorized Viewpoint user. The user ID of the super user is dmcl_admin and cannot be changed.
- source_account_id
- [Optional] Logon account ID for source database.
- source_logon_mechanism
- [Optional] Logon mechanism for source system. To log on to a source system, the user must provide at least one of the following:
- source_user and source_password
- source_logon_mechanism
Logon mechanisms are not supported for Teradata DSA jobs. Use logon mechanisms only for Teradata PT API and Teradata JDBC jobs. If -source_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -source_logon_mechanism with Teradata DSA specified for -force_utility results in an error.
- source_logon_mechanism_data
- [Optional] Additional parameters needed for the logon mechanism of the source system.
- source_password
- [Optional] Source Teradata logon password.
- source_sessions
- [Optional] Number of sessions per data stream on the source database.
- source_tdpid
- [Optional] Source Teradata Database.
- source_user
- [Optional] Source Teradata logon ID.
- sync
- [Optional] Waits for the job to complete, then returns an exit code that indicates if the job completed successfully. An exit code of 0 indicates successful job completion. An exit code other than 0 indicates an error with the job or with the command.
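In a script, the -sync exit code can be checked as in this sketch (job1 is a placeholder job name):

```shell
#!/bin/sh
# Start job1 and wait for it to finish; with -sync, the command's
# exit code reflects whether the job completed successfully.
datamove start -job_name job1 -sync
if [ $? -eq 0 ]; then
    echo "job1 completed successfully"
else
    echo "job1 failed or the command returned an error" >&2
fi
```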
- target_account_id
- [Optional] Logon account ID for target database.
- target_logon_mechanism
- [Optional] Logon mechanism for target system. To log on to a target system, the user must provide at least one of the following:
- target_user and target_password
- target_logon_mechanism
Teradata DSA does not support logon mechanisms. Use logon mechanisms only with Teradata PT API and Teradata JDBC jobs. If -target_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -target_logon_mechanism with Teradata DSA specified for -force_utility results in an error.
- target_password
- [Optional] Target Teradata logon password.
- target_sessions
- [Optional] Number of sessions per data stream on the target database.
- target_tdpid
- [Optional] Target database.
- target_user
- [Optional] Target Teradata logon ID.
- tpt_debug
- [Optional] TPT API trace debug log parameter. Any value greater than or equal to 0 generates a TPT API trace log. A valid TPT API value must be provided.
- uowid
- [Optional] Alternate ID or name for the batch of work associated with the job. If you provide a value for this parameter, Data Mover reports it as the unit of work ID when sending events to Teradata Ecosystem Manager or to its internal TMSMEVENT table. If you do not specify this parameter, Data Mover uses a default unit of work ID composed of the job execution name and the current timestamp. For example, if the job execution name is sales_table, the default unit of work ID is sales_table-20211110122330.
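To supply an explicit unit of work ID instead, the start command might look like this sketch (sales_unit01 is a placeholder value):

```shell
# Start the job with an explicit unit of work ID; Data Mover then
# reports sales_unit01 in events instead of the default
# job-execution-name-plus-timestamp value.
datamove start -job_name sales_table -uowid sales_unit01
```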
Usage Notes
Each time a job starts, a new job instance is created with a name composed of the job name and the date and time of execution. This name appears in the status command output and in the logs to identify this specific execution of the job. Use the status command to monitor the job status after the job has been started.
If an instance of a job has not completed all specified steps, a new instance of the job cannot be started. If the daemon does not have adequate resources to run the job immediately, the job is placed in a queue. Copying the same objects to the same target as another job that is running or queued is not allowed.
- Specify the value for the job variable directly in the command line, just as you can using the create command. For example, to set the logging level for job1 to a value of 99, type: datamove start -job_name job1 -log_level 99
- Specify the new value for the job variable by modifying the XML file and submitting it to the start command, just as you can using the create command: datamove start -job_name job1 -f job1.xml
If you want to modify the list of objects to copy, you must do so by modifying and submitting the XML file. Be sure that the XML file contains all of the objects you want to copy, not only those with name changes. If an object that was originally specified to be copied is no longer included in the list, it is not included in the job execution.
- At the command line prompt, type: datamove start -job_name job1 -log_level 99 -save_changes
- In the XML file, specify: <saveChanges>true</saveChanges>
The save_changes parameter syntax differs between the command line and the XML file, as shown in the example.
- If security is enabled, a user must have write permission to be able to set the save changes parameter to true.
- A user who is not the job owner must supply source and target Teradata system user names and passwords when specifying the object list in the XML and submitting it using the start command.
- If security is enabled and job_security is specified in the modified XML to change job permissions, the user must be dmcl_admin or the job owner, and the user must provide all of the permissions, not just the modified permissions. If job_owner is specified in job_security and the user wants to change the job owner, the user must be dmcl_admin.
XML File Example
For the start command, type datamove start -f parameters.xml.
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<dmCreate xmlns="http://schemas.teradata.com/dataMover/v2009"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://schemas.teradata.com/unity/datamover.xsd">
<job_name>testStart</job_name>
<source_tdpid>dmdev</source_tdpid>
<source_user>dmuser</source_user>
<source_password>dbc</source_password>
<target_tdpid>dm-daemon2</target_tdpid>
<target_user>dmuser</target_user>
<target_password>dbc</target_password>
<data_streams>1</data_streams>
<source_sessions>1</source_sessions>
<target_sessions>1</target_sessions>
<overwrite_existing_objects>TRUE</overwrite_existing_objects>
<freeze_job_steps>true</freeze_job_steps>
<force_utility>DSA</force_utility>
<log_level>1</log_level>
<online_archive>false</online_archive>
<database selection="unselected">
<name>testdb</name>
<table selection="included">
<name>fmt_inf</name>
<validate_row_count>ALL</validate_row_count>
</table>
<table selection="included">
<name>test1</name>
<compare_ddl>true</compare_ddl>
</table>
</database>
</dmCreate>
The following example uses a dmEdit document to modify the job's variable values and object list when submitting it with the start command:
<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<dmEdit xmlns="http://schemas.teradata.com/dataMover/v2009"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://schemas.teradata.com/unity/DataMover.xsd">
<job_name>testStart</job_name>
<source_user>dmuser</source_user>
<source_password>dbc</source_password>
<target_user>dmuser</target_user>
<target_password>dbc</target_password>
<database selection="unselected">
<name>testdb</name>
<table selection="included">
<name>fmt_inf</name>
<validate_row_count>ALL</validate_row_count>
<compare_ddl>true</compare_ddl>
<sql_where_clause>
<![CDATA[ where i = 2]]>
</sql_where_clause>
<key_columns>
<key_column>i</key_column>
</key_columns>
</table>
<table selection="included">
<name>test3</name>
<validate_row_count>ALL</validate_row_count>
<compare_ddl>true</compare_ddl>
</table>
</database>
</dmEdit>