Purpose
The create command creates a job on the daemon from the supplied parameters and the object list of databases and tables produced by the query command. Together, the parameters and the object list make up the job definition.
Parameters
- job_name
- [Optional] Name for this job. The name must be unique and up to 32 characters.
- source_tdpid
- Source Teradata Database.
- source_user
- [Optional] Source Teradata logon id.
- source_password
- [Optional] Source Teradata logon password.
- source_password_encrypted
- [Optional] Source Teradata encrypted logon password.
- source_logon_mechanism
- [Optional] Logon mechanism for source system. To log on to a source Teradata Database system, the user must provide at least one of the following:
- source_user and source_password
- source_logon_mechanism
Logon mechanisms are not supported for Teradata ARC jobs. Use logon mechanisms only for Teradata PT API and Teradata JDBC jobs. If -source_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -source_logon_mechanism with Teradata ARC specified for -force_utility results in an error.
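As a sketch, a source logon that uses a mechanism instead of a user/password pair might appear in a job definition as follows; the mechanism name and its data value are illustrative assumptions, not values taken from this reference:

```xml
<!-- Hypothetical fragment: log on to the source with a mechanism.
     "KRB5" and "user@REALM" are illustrative placeholders only. -->
<source_tdpid>floyd</source_tdpid>
<source_logon_mechanism>KRB5</source_logon_mechanism>
<source_logon_mechanism_data>user@REALM</source_logon_mechanism_data>
```

Because -force_utility is not set in this fragment, the daemon would default to Teradata PT API, per the rule above.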
- source_logon_mechanism_data
- [Optional] Additional parameters that are needed for the source system's logon mechanism.
- source_account_id
- [Optional] Logon account ID for source database.
- source_userid_pool
- [Optional] Job pulls the user from the specified credential pool. Available for any job type. Must use the same credential pool as target_userid_pool if specifying both parameters in the same job definition.
- target_tdpid
- [Optional] Target Teradata Database.
- target_user
- [Optional] Target Teradata logon id.
- target_password
- [Optional] Target Teradata logon password.
- target_password_encrypted
- [Optional] Target Teradata encrypted logon password.
- target_logon_mechanism
- [Optional] Logon mechanism for target system. To log on to a target Teradata Database system, the user must provide at least one of the following:
- target_user and target_password
- target_logon_mechanism
Logon mechanisms are not supported for Teradata ARC jobs. Use logon mechanisms only for Teradata PT API and Teradata JDBC jobs. If -target_logon_mechanism is specified and -force_utility is not used, Teradata PT API is used by default. Specifying -target_logon_mechanism with Teradata ARC specified for -force_utility results in an error.
- target_logon_mechanism_data
- [Optional] Additional parameters that are needed for the target system's logon mechanism.
- target_account_id
- [Optional] Logon account ID for target database.
- target_userid_pool
- [Optional] Job pulls the user from the specified credential pool. Available for any job type. Must use the same credential pool as source_userid_pool if specifying both parameters in the same job definition.
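A minimal sketch of drawing both logons from a credential pool; the pool name pool1 is a hypothetical placeholder. Note that both parameters name the same pool, as required when both appear in one job definition:

```xml
<!-- Hypothetical: both logons come from the same credential pool.
     The pool name "pool1" is a placeholder. -->
<source_tdpid>floyd</source_tdpid>
<target_tdpid>dmdev</target_tdpid>
<source_userid_pool>pool1</source_userid_pool>
<target_userid_pool>pool1</target_userid_pool>
```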
- job_priority
- [Optional] Specifies the execution priority for the job. Supported values are HIGH, MEDIUM, LOW, and UNSPECIFIED. If no value is specified, the default of MEDIUM is used at runtime.
- data_streams
- [Optional] Number of data streams to use between the source and target databases. Applies to jobs that use Teradata ARC, jobs that use Teradata PT API, and Aster (to and from Teradata) jobs. All other protocols use a single data stream.
- source_sessions
- [Optional] Number of sessions per data stream on the source database.
- target_sessions
- [Optional] Number of sessions per data stream on the target database.
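The three tuning parameters above combine per stream: with 5 data streams and 1 session per stream on each side, the job opens roughly 5 sessions against each system. A sketch of the relevant fragment:

```xml
<!-- 5 parallel data streams, 1 session per stream on each database,
     so roughly 5 sessions total per system -->
<data_streams>5</data_streams>
<source_sessions>1</source_sessions>
<target_sessions>1</target_sessions>
```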
- response_timeout
- [Optional] Amount of time, in seconds, to wait for response from the Data Mover daemon.
- overwrite_existing_objects
- [Optional] Job overwrites objects that already exist on the target.
Valid Values
- true - enables overwriting
- false - disables overwriting
- unspecified (default)
- freeze_job_steps
- [Optional] Freezes the job steps so that they are not recreated each time the job is started. Should only be set to true if the source and target environments do not change after the job is created.
Valid Values
- true - the job steps are not recreated each time the job is started
- false - the job steps are recreated each time the job is started
- unspecified (default) - the value is set to false
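For instance, a job that overwrites existing target objects and freezes its steps between runs (appropriate only when the environments are stable, per the caveat above) would include:

```xml
<!-- Overwrite existing target objects; reuse job steps across runs.
     Freeze steps only if source/target environments do not change. -->
<overwrite_existing_objects>true</overwrite_existing_objects>
<freeze_job_steps>true</freeze_job_steps>
```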
- force_utility
- [Optional] Forces the Data Mover daemon to use a specific utility for all copy operations.
Valid Values
- arc
- jdbc
- tptapi
- tptapi_load
- tptapi_stream
- tptapi_update
- T2T
Copying data to an older version of the Teradata Database using Teradata ARC is not valid. You cannot use Teradata ARC if the source and target TDPIDs are the same.
- log_level
- [Optional] Log level for log file output.
Valid Values
- 0
- 1
- 2
- 99
- online_archive
- [Optional] Allows read and write access to the source table(s) while the tables are being copied with Teradata ARC. Updates occur to the source table during the copy, but are not transferred to the target table. After a successful copy, the data contained in the target table matches the data that was in the source table at the beginning of the copy.
Valid Values
- true - enables online archive
- false - disables online archive
- unspecified (default) - the value is set to the value in the Data Mover daemon configuration file
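A sketch combining the utility, logging, and online-archive settings; forcing Teradata ARC is what makes online_archive applicable, since that feature is specific to ARC copies:

```xml
<!-- Force Teradata ARC, raise the log level, and keep the source
     table(s) readable and writable during the copy -->
<force_utility>arc</force_utility>
<log_level>2</log_level>
<online_archive>true</online_archive>
```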
- table
- [Optional] Table to be copied.
- max_agents_per_task
- [Optional] Maximum number of Data Mover agents to use in parallel when moving tables, databases, or journals.
- broker.url
- [Optional] You may enter a broker URL to override the default value specified in the commandline.properties file, in order to connect to a different ActiveMQ server (and therefore a different daemon) at runtime.
- broker.port
- [Optional] You may enter a broker port to override the default value specified in the commandline.properties file, in order to connect to a different ActiveMQ server (and therefore a different daemon) at runtime.
- job_security
- [Optional] Defines the access parameters for the created job.
- owner_name
- [Optional] User who created the job.
- read_permission
- [Optional] Defines the usernames and roles that have read permission for the created job.
- write_permission
- [Optional] Defines the usernames and roles that have write permission for the created job.
- execute_permission
- [Optional] Defines the usernames and roles that have execute permission for the created job.
- additional_arc_parameters
- [Optional] Specifies the additional ARC parameters that will be appended when executing each ARC task. There is a 2048-character limit.
Usage Notes
To create a job using the object list that was created with the query command, type datamove create -f parameters.xml. The job name is displayed on the screen when the create command completes. Remember the job name to use in other commands, such as the stop and start commands.
The create command does not start the job. Use the start command to start the job, or the edit command to review the job scripts.
XML File Example
For the create command, type datamove create -f parameters.xml.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<dmCreate xmlns="http://schemas.teradata.com/dataMover/v2009"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://schemas.teradata.com/unity/datamover.xsd">
    <job_name>floyd_dmdev_create</job_name>
    <source_tdpid>floyd</source_tdpid>
    <source_user>dmguest</source_user>
    <source_password>please</source_password>
    <target_tdpid>dmdev</target_tdpid>
    <target_user>dmguest</target_user>
    <target_password>please</target_password>
    <data_streams>5</data_streams>
    <source_sessions>1</source_sessions>
    <target_sessions>1</target_sessions>
    <force_utility>arc</force_utility>
    <log_level>0</log_level>
    <database selection="unselected">
        <name>dmguest</name>
        <table selection="included">
            <name>test1</name>
        </table>
        <table selection="included">
            <name>test2</name>
        </table>
        <table selection="included">
            <name>test3</name>
        </table>
    </database>
    <query_band>Job=payroll;Userid=aa1000000;Jobsession=1122;</query_band>
    <job_security>
        <owner_name>owner</owner_name>
        <read_permission>
            <username>read_user1</username>
            <username>read_user2</username>
            <role>read_role1</role>
            <role>read_role2</role>
        </read_permission>
        <write_permission>
            <username>write_user1</username>
            <username>write_user2</username>
            <role>write_role1</role>
            <role>write_role2</role>
        </write_permission>
        <execute_permission>
            <username>execute_user1</username>
            <username>execute_user2</username>
            <role>execute_role1</role>
            <role>execute_role2</role>
        </execute_permission>
    </job_security>
</dmCreate>