Create a Cloud Staging Copy Job | Commands | Teradata Data Mover

Teradata® Data Mover User Guide - 20.01


Purpose

You need a valid cloud staging area before you can create a cloud staging copy job. To create one, set force_utility to DSA and add a cloud_staging_area element to the job XML.

Command: create -job_name DMCS2Job -f create.xml
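The following fragment is a minimal sketch of the two elements that make a job a cloud staging copy job; csareaname is a placeholder for a staging area already defined to Data Mover, and the complete job definition appears under XML File Example at the end of this section.

<force_utility>DSA</force_utility>
<cloud_staging_area>
    <name>csareaname</name>
</cloud_staging_area>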

Parameters

See Parameter Order.

The following example shows the additional DSA parameters added to the XML.

data_streams
[Optional] Number of data streams to use between the source and target databases. Applies to jobs that use Teradata DSA and TPT API (to and from Teradata). All other protocols use a single data stream.
Example: 4
The default value is dynamically calculated by Data Mover.
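As an illustrative sketch, the stream and session counts are plain elements in the job XML; the values below are placeholders rather than tuning advice, because Data Mover calculates suitable defaults dynamically.

<data_streams>4</data_streams>
<source_sessions>4</source_sessions>
<target_sessions>4</target_sessions>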
db_client_encryption
[Optional] Set to true if the data must be encrypted during transfer.
dm.rest.endpoint
[Optional] Enter a Data Mover REST server URL to override the default value specified in the commandline.properties file, in order to connect to a different REST server (and therefore a different daemon) at runtime.
Example: https://dm-server1:1443/datamover
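To change the default instead of overriding it at runtime, set the value in commandline.properties. This sketch assumes the property key matches the parameter name:

dm.rest.endpoint=https://dm-server1:1443/datamover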
execute_permission
[Optional] Defines the username[s] and role[s] that have execute permission for the created job.
force_utility
[Optional] Forces the Data Mover daemon to use a specific utility for all copy operations.

Valid Values

  • dsa
  • jdbc
  • tptapi
  • tptapi_load
  • tptapi_stream
  • tptapi_update
  • T2T

If this value is not specified, the Data Mover daemon determines which Teradata utility is the best to use for the job.

Copying data to an older version of Analytics Database using Teradata DSA is not valid. You cannot use Teradata DSA if the source and target TDPIDs are the same.
Example: dsa
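As a sketch, the utility can also be forced when issuing the create command, assuming the -parameter value convention used for the other options in this section:

datamove create -job_name DMCS2Job -f create.xml -force_utility dsa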
freeze_job_steps
[Optional] Freezes the job steps so that the steps are not recreated each time the job is started. Only set to true if the source and target environments do not change after the job is created.
Valid Values
  • true - the job steps are not recreated each time the job is started
  • false - the job steps are recreated each time the job is started
  • Unspecified (default) - the value is set to false
Example: true
job_name
[Optional] Name for this job. The name must be unique and up to 32 characters.
If you do not specify a name, it is automatically generated using the format: <source tdpid>_<target tdpid>_<date time year>.
job_priority
[Optional] Specifies the execution priority for the job. Supported values are HIGH, MEDIUM, LOW, and UNSPECIFIED. If no value is specified, the default of MEDIUM is used at runtime.
Example: MEDIUM
job_security
[Optional] Defines the access parameters for the created job.
log_level
[Optional] Log level for log file output.

Valid Values

  • 0
  • 1
  • 2
  • 99
Example: 2
The default value is 0.
log_to_event_table
[Optional] Specifies the event table to be used for this job. For more information, see Using Event Tables.
cloud_staging_area
[Optional] Specifies the cloud staging area to be used for this job. A cloud staging area is required to run a cloud staging copy job.
If -cloud_staging_area is specified and -force_utility is not used, Teradata DSA is used by default.
max_agents_per_task
[Optional] Maximum number of Data Mover agents to use in parallel when moving tables or databases.
Example: 4
The default value is dynamically calculated by Data Mover.
netrace
[Optional] CLI netrace parameter. Any value greater than or equal to 0 generates a CLI trace log. A valid CLI value must be provided.
netrace_buf_len
[Optional] CLI netrace_buf_len parameter. Any value greater than or equal to 0 generates a CLI trace log. A valid CLI value must be provided.
online_archive
[Optional] Allows read and write access to the source table(s) while the tables are being copied with Teradata DSA. Updates occur to the source table during the copy, but are not transferred to the target table. After a successful copy, the data contained in the target table matches the data that was in the source table at the beginning of the copy.
Valid Values
  • true - enables online archive
  • false - disables online archive
  • Unspecified (default) - the value is set to the value in the Data Mover daemon configuration file
Example: true
overwrite_existing_objects
[Optional] Job overwrites objects that already exist on the target.
Valid Values
  • true - enables overwriting
  • false - disables overwriting
  • Unspecified (default) - the value is set to the overwrite_existing_objects parameter value in the Data Mover daemon configuration file
If the parameter is specified as true or false, that value takes precedence over the value in the Data Mover daemon configuration file.
Example: true
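A brief sketch of how these two boolean options appear in the job XML; both values are illustrative:

<online_archive>true</online_archive>
<overwrite_existing_objects>true</overwrite_existing_objects>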
owner_name
[Optional] User who created the job.
Example: owner
Set this parameter only if daemon security is off or if the user is the super user (dmcl_admin); otherwise, the value is overwritten with the actual logged-on user.
read_permission
[Optional] Defines the username[s] and role[s] that have read permission for the created job.
response_timeout
[Optional] Amount of time, in seconds, to wait for response from the Data Mover daemon.
Example: 60
source_account_id
[Optional] Logon account ID for source database.
Spaces in the source or target account ID cause the job to fail.
source_logon_mechanism
[Optional] Logon mechanism for source system. To log on to a source system, the user must provide at least one of the following:
  • source_user and source_password
  • source_logon_mechanism

Logon mechanisms are not supported for Teradata DSA jobs. Use logon mechanisms only for Teradata PT API and Teradata JDBC jobs. If -source_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -source_logon_mechanism with Teradata DSA specified for -force_utility results in an error.

Example: KRB5
source_logon_mechanism_data
[Optional] Additional parameters needed for the logon mechanism of the source system.
Example: joe@domain1@@mypassword
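A sketch of the source logon mechanism elements in the job XML, using the example values above; recall that this combination results in an error if force_utility is set to dsa.

<source_logon_mechanism>KRB5</source_logon_mechanism>
<source_logon_mechanism_data>joe@domain1@@mypassword</source_logon_mechanism_data>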
source_password
[Optional] Source Teradata logon password.
Example: 123456789
Not a valid parameter if -source_password_encrypted is also specified. If you do not specify a password for this parameter, the command prompts you to enter it interactively. Input is masked with a set number of asterisks, regardless of the length of the password.
source_password_encrypted
[Optional] Source Teradata encrypted logon password.
Example: 17894cc84b5637a88e36fa37a010e3662d18f64b8ce204bef8d63868ad417810
Not a valid parameter if -source_password is also specified.
source_sessions
[Optional] Number of sessions per data stream on the source database.
Example: 4
The default value is dynamically calculated by Data Mover.
source_tdpid
Source database.
Example: Checks
source_user
[Optional] Source Teradata logon id.
Example: TD_API_user
If you do not specify a logon id for this parameter, the command prompts you to enter it interactively.
Spaces in the source or target user name cause the job to fail.
source_userid_pool
[Optional] Job pulls the user from the specified credential pool. Available for any job type. Must use the same credential pool as target_userid_pool if specifying both parameters in the same job definition.
Example: POOL-1
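For example, a job definition that draws both the source and target credentials from the same pool; the pool name is a placeholder:

<source_userid_pool>POOL-1</source_userid_pool>
<target_userid_pool>POOL-1</target_userid_pool>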
table
[Optional] Table to be copied.
Example: DB1.TABLE
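In the job XML, a table is selected inside its parent database element, as in the full example at the end of this section; db1 and tb1 are placeholders:

<database selection="unselected">
    <name>db1</name>
    <table selection="included">
        <name>tb1</name>
    </table>
</database>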
target_account_id
[Optional] Logon account ID for target database.
Spaces in the source or target account ID cause the job to fail.
target_logon_mechanism
[Optional] Logon mechanism for target system. To log on to a target system, the user must provide at least one of the following:
  • target_user and target_password
  • target_logon_mechanism

Teradata DSA does not support logon mechanisms. Use logon mechanisms only with Teradata PT API and Teradata JDBC jobs. If -target_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -target_logon_mechanism with Teradata DSA specified for -force_utility results in an error.

Example: KRB5
target_logon_mechanism_data
[Optional] Additional parameters that are needed for the target system's logon mechanism.
Example: my@domain2@@mypassword
target_password
[Optional] Target Teradata logon password.
Example: 212133344
Not a valid parameter if -target_password_encrypted is also specified. If you do not specify a password for this parameter, the command prompts you to enter it interactively. Input is masked with a set number of asterisks, regardless of the length of the password.
target_password_encrypted
[Optional] Target Teradata encrypted logon password.
Example: 30e458fce484cefef07724653f5046095208f69fcfbf76bf7290b8576192c2fe
Not a valid parameter if -target_password is also specified.
target_sessions
[Optional] Number of sessions per data stream on the target database.
Example: 4
The default value is dynamically calculated by Data Mover.
target_tdpid
[Optional] Target database.
Example: Leo
target_user
[Optional] Target Teradata logon id.
Example: TD_tar_User
If you do not specify a logon id for this parameter, the command prompts you to enter it interactively.
Spaces in the source or target user name cause the job to fail.
target_userid_pool
[Optional] Job pulls the user from the specified credential pool. Available for any job type. Must use the same credential pool as source_userid_pool if specifying both parameters in the same job definition.
Example: POOL-1
tpt_debug
[Optional] TPT API trace debug log parameter. Any value greater than or equal to 0 generates a TPT API trace log. A valid TPT API value must be provided.
write_permission
[Optional] Defines the username[s] and role[s] that have write permission for the created job.
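The read_permission, write_permission, and execute_permission parameters accept lists of users and roles. The following is purely an illustrative sketch; the element names are assumptions, not the verified schema, so consult DataMover.xsd for the actual structure.

<job_security>
    <!-- Hypothetical structure; see DataMover.xsd for the real element names -->
    <read_permission><user>analyst1</user><role>dm_readers</role></read_permission>
    <write_permission><user>etl_admin</user></write_permission>
    <execute_permission><role>dm_operators</role></execute_permission>
</job_security>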

Usage Notes

Type datamove create -f create.xml to create a job. The job name is displayed on the screen when the create command completes.

The create command does not start the job. Use the start command to start the job, or the edit command to review the job scripts.
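For example, to create and then start the job defined in this section (the start syntax assumes the same -job_name convention as create):

datamove create -job_name DMCS2Job -f create.xml
datamove start -job_name DMCS2Job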

XML File Example

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dmCreate xmlns="http://schemas.teradata.com/dataMover/v2009"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://schemas.teradata.com/dataMover/v2009/DataMover.xsd">
      <source_tdpid>sourceSys</source_tdpid> 
      <source_user>source_user</source_user>
      <source_password>source_password</source_password>
      <target_tdpid>targetSys</target_tdpid>
      <target_user>target_user</target_user>
      <target_password>target_password</target_password>
      <data_streams>1</data_streams>
      <source_sessions>1</source_sessions>
      <target_sessions>1</target_sessions>
      <freeze_job_steps>FALSE</freeze_job_steps>
      <force_utility>DSA</force_utility>
      <log_level>99</log_level>
      <cloud_staging_area>
          <name>csareaname</name>
      </cloud_staging_area>
      <dsa_options>
          <target_group_name>my_target_group</target_group_name>
          <parallel_builds>1</parallel_builds>
      </dsa_options>
      <database selection="unselected">
          <name>db1</name>
          <table selection="included">
          <name>tb1</name>        
          </table>
      </database>
</dmCreate>