Purpose
You need a valid cloud staging area to create a cloud staging copy job. To create a cloud staging copy job, set force_utility to DSA and add a cloud_staging_area element.
Command: create -job_name DMCS2Job -f create.xml
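The two elements that make this a cloud staging copy job can be sketched as the following fragment of the job XML; csareaname is a placeholder for a cloud staging area already defined in the Data Mover daemon:

```xml
<!-- Fragment of a cloud staging copy job definition.
     "csareaname" is a placeholder for an existing cloud staging area. -->
<force_utility>DSA</force_utility>
<cloud_staging_area>
    <name>csareaname</name>
</cloud_staging_area>
```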
Syntax
Data Mover XML Schemas
Parameters
See Parameter Order.
The following list describes the additional DSA parameters added to the XML.
- data_streams
- [Optional] Number of data streams to use between the source and target databases. Applies to jobs that use Teradata DSA and TPT API (to and from Teradata). All other protocols use a single data stream.
- db_client_encryption
- [Optional] Set to true if the job needs to be encrypted during data transfer.
- dm.rest.endpoint
- [Optional] Enter a Data Mover REST server URL to override the default value specified in the commandline.properties file, in order to connect to a different REST server (and therefore a different daemon) at runtime.
- execute_permission
- [Optional] Defines the username[s] and role[s] that have execute permission for the created job.
- force_utility
- [Optional] Forces the Data Mover daemon to use a specific utility for all copy operations.
Valid Values
- dsa
- jdbc
- tptapi
- tptapi_load
- tptapi_stream
- tptapi_update
- T2T
If this value is not specified, the Data Mover daemon determines which Teradata utility is the best to use for the job.
Copying data to an older version of Analytics Database using Teradata DSA is not valid. You cannot use Teradata DSA if the source and target TDPIDs are the same.
- freeze_job_steps
- [Optional] Freezes the job steps so that the steps are not recreated each time the job is started. Set to true only if the source and target environments do not change after the job is created.
Valid Values
- true - the job steps are not recreated each time the job is started
- false - the job steps are recreated each time the job is started
- Unspecified (default) - the value is set to false
- job_name
- [Optional] Name for this job. The name must be unique and up to 32 characters.
- job_priority
- [Optional] Specifies the execution priority for the job. Supported values are HIGH, MEDIUM, LOW, and UNSPECIFIED. If no value is specified, the default of MEDIUM is used at runtime.
- job_security
- [Optional] Defines the access parameters for the created job.
- log_level
- [Optional] Log level for log file output.
Valid Values
- 0
- 1
- 2
- 99
- log_to_event_table
- [Optional] Specifies the event table to be used for this job. For more information, see Using Event Tables.
- cloud_staging_area
- [Optional] Specifies the cloud staging area to be used for this job. A cloud staging area is required to run a cloud staging copy job. If cloud_staging_area is specified and force_utility is not used, Teradata DSA is used by default.
- max_agents_per_task
- [Optional] Maximum number of Data Mover agents to use in parallel when moving tables or databases.
- netrace
- [Optional] CLI netrace parameter. Any value greater than or equal to 0 generates a CLI trace log. A valid CLI value must be provided.
- netrace_buf_len
- [Optional] CLI netrace_buf_len parameter. Any value greater than or equal to 0 generates a CLI trace log. A valid CLI value must be provided.
- online_archive
- [Optional] Allows read and write access to the source tables while the tables are being copied with Teradata DSA. Updates occur to the source table during the copy, but are not transferred to the target table. After a successful copy, the data contained in the target table matches the data that was in the source table at the beginning of the copy.
Valid Values
- true - enables online archive
- false - disables online archive
- Unspecified (default) - the value is set to the value in the Data Mover daemon configuration file
- overwrite_existing_objects
- [Optional] Job overwrites objects that already exist on the target. If the parameter is not specified, the value is set to the overwrite_existing_objects parameter value in the Data Mover daemon configuration file. If the parameter is specified as true or false, that value takes precedence over the parameter value in the Data Mover daemon configuration file.
Valid Values
- true - enables overwriting
- false - disables overwriting
- Unspecified (default) - the value is set to the value in the Data Mover daemon configuration file
- owner_name
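As a sketch, the two options above could be enabled explicitly in the job XML as follows; both elements are optional, and when omitted the Data Mover daemon configuration file values apply:

```xml
<!-- Enable online archive and object overwriting for this job.
     When these elements are omitted, the daemon configuration
     file values are used instead. -->
<online_archive>true</online_archive>
<overwrite_existing_objects>true</overwrite_existing_objects>
```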
- [Optional] User who created the job.
- read_permission
- [Optional] Defines the username[s] and role[s] that have read permission for the created job.
- response_timeout
- [Optional] Amount of time, in seconds, to wait for response from the Data Mover daemon.
- source_account_id
- [Optional] Logon account ID for source database.
- source_logon_mechanism
- [Optional] Logon mechanism for source system. To log on to a source system, the user must provide at least one of the following:
- source_user and source_password
- source_logon_mechanism
Logon mechanisms are not supported for Teradata DSA jobs. Use logon mechanisms only for Teradata PT API and Teradata JDBC jobs. If -source_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -source_logon_mechanism with Teradata DSA specified for -force_utility results in an error.
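As a sketch, a Teradata PT API job that logs on with a mechanism instead of source_user/source_password might include elements like the following; the mechanism name ldap and its data string are illustrative assumptions, not values taken from this document:

```xml
<!-- Hypothetical example: source logon via a mechanism instead of
     source_user/source_password. "ldap" and the data string are
     placeholder values; force_utility must not be set to dsa. -->
<source_logon_mechanism>ldap</source_logon_mechanism>
<source_logon_mechanism_data>authcid=source_user password=source_password</source_logon_mechanism_data>
<force_utility>tptapi</force_utility>
```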
- source_logon_mechanism_data
- [Optional] Additional parameters needed for the logon mechanism of the source system.
- source_password
- [Optional] Source Teradata logon password.
- source_password_encrypted
- [Optional] Source Teradata encrypted logon password.
- source_sessions
- [Optional] Number of sessions per data stream on the source database.
- source_tdpid
- Source database.
- source_user
- [Optional] Source Teradata logon id.
- source_userid_pool
- [Optional] Job pulls the user from the specified credential pool. Available for any job type. Must use the same credential pool as target_userid_pool if specifying both parameters in the same job definition.
- table
- [Optional] Table to be copied.
- target_account_id
- [Optional] Logon account ID for target database.
- target_logon_mechanism
- [Optional] Logon mechanism for target system. To log on to a target system, the user must provide at least one of the following:
- target_user and target_password
- target_logon_mechanism
Teradata DSA does not support logon mechanisms. Use logon mechanisms only with Teradata PT API and Teradata JDBC jobs. If -target_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -target_logon_mechanism with Teradata DSA specified for -force_utility results in an error.
- target_logon_mechanism_data
- [Optional] Additional parameters that are needed for the target system's logon mechanism.
- target_password
- [Optional] Target Teradata logon password.
- target_password_encrypted
- [Optional] Target Teradata encrypted logon password.
- target_sessions
- [Optional] Number of sessions per data stream on the target database.
- target_tdpid
- [Optional] Target database.
- target_user
- [Optional] Target Teradata logon id.
- target_userid_pool
- [Optional] Job pulls the user from the specified credential pool. Available for any job type. Must use the same credential pool as source_userid_pool if specifying both parameters in the same job definition.
- tpt_debug
- [Optional] TPT API trace debug log parameter. Any value greater than or equal to 0 generates a TPT API trace log. A valid TPT API value must be provided.
- write_permission
- [Optional] Defines the username[s] and role[s] that have write permission for the created job.
Usage Notes
Type datamove create -f create.xml to create a job. The job name is displayed on the screen when the create command completes.
The create command does not start the job. Use the start command to start the job, or the edit command to review the job scripts.
XML File Example
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dmCreate xmlns="http://schemas.teradata.com/dataMover/v2009"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://schemas.teradata.com/dataMover/v2009/DataMover.xsd">
    <source_tdpid>sourceSys</source_tdpid>
    <source_user>source_user</source_user>
    <source_password>source_password</source_password>
    <target_tdpid>targetSys</target_tdpid>
    <target_user>target_user</target_user>
    <target_password>target_password</target_password>
    <data_streams>1</data_streams>
    <source_sessions>1</source_sessions>
    <target_sessions>1</target_sessions>
    <freeze_job_steps>false</freeze_job_steps>
    <force_utility>DSA</force_utility>
    <log_level>99</log_level>
    <cloud_staging_area>
        <name>csareaname</name>
    </cloud_staging_area>
    <dsa_options>
        <target_group_name>my_target_group</target_group_name>
        <parallel_builds>1</parallel_builds>
    </dsa_options>
    <database selection="unselected">
        <name>db1</name>
        <table selection="included">
            <name>tb1</name>
        </table>
    </database>
</dmCreate>