You can select advanced job save options from the Job Settings tab. Click Advanced to access job performance settings. Data Mover provides default values for these settings.
- Teradata Systems
- For Teradata systems, you can select these advanced job save options:
- Data Streams
- Specifies the number of data streams to use between the source and target databases for Teradata PT API jobs. For DSA jobs, specify the number of streams per database node. All other utilities use a single data stream.
- Source Sessions
- Specifies the number of sessions per data stream for the source system.
- Target Sessions
- Specifies the number of sessions per data stream for the target system.
- Max Agents per Task
- Specifies the maximum number of agents that Data Mover allocates at the same time to one task in jobs that use the Teradata PT API. If multiple agents are installed in the Data Mover environment, you can enter an integer value greater than one to improve performance for a job that copies large amounts of data. If you do not provide a value for Max Agents per Task, Data Mover dynamically calculates a value at runtime.
- Force Utility
- Forces Data Mover to use a specific Teradata utility or API operator for the copy job. By default, Data Mover automatically selects the best utility for the job; use this option to override that selection.
- Source Character Set
- Specifies the session character set that is used to communicate with the source system.
- Target Character Set
- Specifies the session character set that is used to communicate with the target system.
- Target Group Name
- Specifies a shared pipe target group to run DSA jobs instead of having Data Mover automatically select one. If the specified target group does not exist, the job fails.
- Parallel Builds
- Specifies the number of tables with indexes that can be built concurrently when using DSA. The maximum number of concurrent builds is 5, which is also the default.
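The interaction between Data Streams, Source Sessions, and Target Sessions can be illustrated with a small sketch: because the session settings are defined per data stream, the total number of sessions a job opens on each system grows multiplicatively. The function below is purely illustrative; its names are not part of the Data Mover API.

```python
# Illustrative sketch only: estimates the total database sessions a
# Teradata PT API copy job would open, given the advanced save options
# above. The function name is hypothetical, not a Data Mover API call.

def estimate_sessions(data_streams: int, source_sessions: int,
                      target_sessions: int) -> dict:
    """Source/Target Sessions are per data stream, so totals multiply."""
    return {
        "source_total": data_streams * source_sessions,
        "target_total": data_streams * target_sessions,
    }

# Example: 4 data streams with 2 sessions per stream on each side
# opens 8 sessions on the source system and 8 on the target system.
print(estimate_sessions(4, 2, 2))
```

This is why raising Data Streams alone can multiply the session load on both systems: each added stream brings its full complement of source and target sessions with it.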
- Teradata and Hadoop Systems
- For Teradata-to-Hadoop and Hadoop-to-Teradata jobs, you can select these advanced job save options:
- Force Utility
- Forces Data Mover to use a specific utility for Hadoop copy operations. By default, the Data Mover daemon uses SQL-H to move the table; if SQL-H cannot move it, Teradata Connector for Hadoop (TDCH) is used instead.
- Transfer Method
- Specifies the method that Teradata Connector for Hadoop uses to transfer data from Teradata to Hadoop. These options are supported:
- Default
- Data Mover selects AMPs by default if a transfer method is not specified.
- Hash
- The underlying Hadoop connector retrieves rows in a given hash value range of a specified split-by column from a source table in Teradata and writes those records into a target file in HDFS.
- Value
- The underlying Hadoop connector retrieves rows in a given value range of a specified split-by column from a source table in Teradata and writes those records into a target file in HDFS.
- Partition
- The underlying Hadoop connector creates a staging PPI table on the source database if the source table is not a PPI table.
- Amp
- The underlying Hadoop connector retrieves rows from one or more AMPs of a source table in Teradata and writes those records into a target file in HDFS. The Amp option is supported only if the Teradata Database is version 14.10 or later.
- Number of Mappers
- Specifies the number of mappers that Teradata Connector for Hadoop uses to pull data from the Teradata Database.
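The Value transfer method and the Number of Mappers setting work together: the value range of the split-by column is divided into contiguous sub-ranges, one per mapper, so the mappers can pull rows in parallel. The sketch below shows the general idea of such range splitting; it is a conceptual illustration under stated assumptions, not TDCH's actual implementation.

```python
# Conceptual sketch only: divides an integer split-by column's value
# range [lo, hi] into one contiguous, non-overlapping sub-range per
# mapper, as a "Value" style transfer method might. Not TDCH code.

def value_splits(lo: int, hi: int, mappers: int) -> list:
    """Return `mappers` near-equal (start, end) ranges covering [lo, hi]."""
    span = hi - lo + 1
    base, extra = divmod(span, mappers)
    splits, start = [], lo
    for i in range(mappers):
        size = base + (1 if i < extra else 0)  # spread remainder evenly
        splits.append((start, start + size - 1))
        start += size
    return splits

# Example: a split-by column spanning 1..100 shared by 4 mappers.
print(value_splits(1, 100, 4))  # [(1, 25), (26, 50), (51, 75), (76, 100)]
```

Each mapper would then retrieve only the rows whose split-by column falls in its assigned sub-range, which is why choosing a split-by column with evenly distributed values matters for balanced parallel transfer.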