16.20 - Teradata Data Mover Components Overview (Aster content excluded) - Teradata Data Mover

Teradata® Data Mover User Guide

Product
Teradata Data Mover
Release Number
16.20
Published
November 2021
Content Type
User Guide
Publication ID
B035-4101-107K-CHS
Language
Simplified Chinese
The following figure shows the main components of Data Mover:
  • Data Mover daemon
  • Data Mover agents
  • User interfaces:
    • Graphical (the Data Mover portlet)
    • Command-line
    • RESTful API
When you specify a job action through one of these interfaces, the Data Mover daemon generates the job steps. The daemon then distributes a task for each job step to the Data Mover agents, which run the job tasks. Data Mover can be configured to report job status to Teradata Ecosystem Manager for monitoring and controlling Data Mover jobs.
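The daemon/agent division of labor described above can be sketched as a minimal simulation. All names here (`generate_steps`, `daemon_submit`, and so on) are illustrative placeholders, not actual Data Mover APIs, and a plain in-process queue stands in for the JMS bus:

```python
from queue import Queue

# Hypothetical sketch of the flow above; not the Data Mover implementation.
def generate_steps(job_spec):
    """The daemon turns one job specification into discrete job steps."""
    return [f"copy {obj}" for obj in job_spec["objects"]]

task_queue = Queue()          # stands in for the JMS bus

def daemon_submit(job_spec):
    """Daemon side: distribute one task per job step."""
    for step in generate_steps(job_spec):
        task_queue.put(step)

def agent_run_all():
    """Agent side: pick up and run each queued task."""
    done = []
    while not task_queue.empty():
        done.append(task_queue.get())
    return done

daemon_submit({"objects": ["db1.orders", "db1.customers"]})
print(agent_run_all())  # → ['copy db1.orders', 'copy db1.customers']
```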
Components in Data Mover
Figure: Data Mover components, showing the data movement architecture and flow between the supported interfaces.

Data Mover Deployment Options

Data Mover can be deployed on the following platforms:
  • Teradata Multi-Purpose Server (TMS)
  • A virtual machine on a Consolidated Teradata Multi-Purpose Server (CTMS)
  • IntelliCloud
  • Amazon Web Services
  • Microsoft Azure

Data Mover Data Transfer Options

Data Mover supports data transfer between the following systems:

Source System                                  Target System
On-premises Teradata                           On-premises Teradata
On-premises Teradata                           On-premises Hadoop
On-premises Hadoop                             On-premises Teradata
Teradata in a public cloud (AWS and Azure)     Teradata in a public cloud (AWS and Azure)
On-premises Teradata                           Teradata in a public cloud (AWS and Azure)
Teradata in a public cloud (AWS and Azure)     On-premises Teradata
On-premises Hadoop                             Teradata in a public cloud (AWS and Azure)
Teradata in a public cloud (AWS and Azure)     On-premises Hadoop
Teradata IntelliCloud                          Teradata in a public cloud (AWS and Azure)
Teradata in a public cloud (AWS and Azure)     Teradata IntelliCloud
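The matrix above can be encoded as a set of (source, target) pairs with a lookup helper. The short labels are shorthand for this sketch only, not Data Mover identifiers:

```python
# Illustrative encoding of the supported transfer matrix above.
SUPPORTED = {
    ("teradata-onprem", "teradata-onprem"),
    ("teradata-onprem", "hadoop-onprem"),
    ("hadoop-onprem", "teradata-onprem"),
    ("teradata-cloud", "teradata-cloud"),
    ("teradata-onprem", "teradata-cloud"),
    ("teradata-cloud", "teradata-onprem"),
    ("hadoop-onprem", "teradata-cloud"),
    ("teradata-cloud", "hadoop-onprem"),
    ("intellicloud", "teradata-cloud"),
    ("teradata-cloud", "intellicloud"),
}

def is_supported(source, target):
    """Direction matters: each pair is a distinct source-to-target route."""
    return (source, target) in SUPPORTED

print(is_supported("hadoop-onprem", "teradata-cloud"))  # → True
print(is_supported("hadoop-onprem", "hadoop-onprem"))   # → False
```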

Data Mover Daemon

The main component of Teradata® Data Mover is the Data Mover daemon, which performs the following actions:

  • Processes incoming requests from one of the Data Mover interfaces
  • Queries the Teradata Database systems for information about the specified objects
  • Generates a plan for copying objects from the source system to the target system, based on the job specification
  • Distributes the work for each job to the Data Mover agents
  • Captures and stores job information, such as the job plan and the output generated by each job instance
  • Sends status information to Teradata Ecosystem Manager and Server Management

The Data Mover daemon uses a scaled-down Teradata Database to store job information and metadata. All communication between the Data Mover daemon and the Data Mover command-line interface, Data Mover agents, or other external components travels over a Java Message Service (JMS) bus. All messages exchanged between these components are in XML format.

One of the Data Mover daemon's tasks is to send job status information to Teradata Ecosystem Manager and Server Management, so that critical failures can be reported to Teradata immediately. The daemon generates Teradata Ecosystem Manager event information for jobs started through the portlet and the command-line interface, and then sends the status to Teradata Ecosystem Manager (if that product is present and the user has configured Data Mover to do so).
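To illustrate the XML-over-JMS message style described above, the following sketch builds a small job-request document with Python's standard library. The element and attribute names are hypothetical, not the actual Data Mover message schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML job-request message; element names are illustrative only.
job = ET.Element("job", name="daily_copy")
src = ET.SubElement(job, "source", system="prod1")
ET.SubElement(job, "target", system="dr1")
ET.SubElement(src, "table").text = "sales.orders"

# Serialize to the kind of XML text that would travel over the JMS bus.
xml_text = ET.tostring(job, encoding="unicode")
print(xml_text)
```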

Data Mover Daemon Properties File

The daemon.properties file contains the configuration properties for the Data Mover daemon.

# Copyright (C) 2009-2019  by Teradata Corporation.
# All Rights Reserved.
# TERADATA CORPORATION CONFIDENTIAL AND TRADE SECRET
#----------------------------------------------------------------------------------------------
# File: "daemon.properties"
#
# Purpose: This file contains all of the properties used by the DM Daemon.
#
# Caution: Do not modify the property names/values in this file unless absolutely sure 
# of the implications to the DM Daemon.
#
# Note: Any line of text in this document that begins with a '#' character is a comment and 
# has no effect on the DM Daemon. However, comments should not be modified.
#
# All properties under LOGGING comment are used for logging purposes
#
#----------------------------------------------------------------------------------------------

# Purpose: The hostname or IP address of the machine running the 
# Java Message Service (JMS) Message Broker.
# Default: localhost
# Other examples include:
# broker.url=10.0.1.199
# broker.url=hostname
# broker.url=[fe80::114:23ff:fe1d:32fb]
broker.url=localhost

# Purpose: The port number on the machine in which the
# Java Message Service (JMS) Message Broker is listening.
# Default: 61616
broker.port=61616

# Purpose: When set to true, a connection to a 
# Secondary Java Message Service (JMS) Message Broker can be
# established in case the Primary Java Message Service (JMS) Broker fails.
# Default: false
cluster.enabled=false

# Purpose: The message time to live in milliseconds; zero is unlimited.
# Be SURE to synchronize the clocks of all DM hosts so messages do not expire unintentionally,
# causing failed or hanging requests to the Daemon.
# Default: 1 hr
jms.response.timetolive=3600000

# Purpose: A long-lived server port on the machine running the DM Daemon, which is used for inbound
# socket connections from DM Agents.  
# Default: 25168
arcserver.port=25168

# Purpose: When set to true, Fatal Error messages can be sent to TVI
# Default: true
tvi.useLogger=true

# Purpose: Scripts in this directory are run to collect TVI diagnostic bundle files.
# DO NOT change this directory location.
tvi.diagnosticbundle.script.dir=/opt/teradata/datamover/support/diagnosticbundle

# Purpose: TVI diagnostic bundle files are saved to this directory.
# This value needs to be the same as SUPPORT_BUNDLE_DIR in /etc/opt/teradata/sm3g/sm3gnode.conf
tvi.diagnosticbundle.dir=/var/opt/teradata/datamover/support/diagnosticbundle

# Purpose: The maximum number of jobs allowed to run on the daemon at the same time.
# Additional jobs are placed on the queue and executed when slots become available
# Default: 20
jobExecutionCoordinator.maxConcurrentJobs=20

# Purpose: The maximum number of jobs allowed in the job queue.
# Additional jobs are placed in a higher level memory queue until slots are available in the job queue.
# Default: 20
jobExecutionCoordinator.maxQueuedJobs=20

# Purpose: The hostname or IP address for the ViewPoint Authentication server.
# Default: http://localhost
viewpoint.url=http://dm-vp1

# Purpose: The port number for the ViewPoint Authentication server.
# Default: 80
viewpoint.port=80

# Purpose: The hostname or IP address for the QueryGrid Manager servers.
# Supports clustered QueryGrid managers; can have up to 2 URLs, separated by a comma.
# e.g. querygrid.manager.urls=https://host1:9443,https://host2:9443
# Default: https://localhost:9443
querygrid.manager.urls=https://localhost:9443

#----------------------LOGGING-------------------------------

# Purpose: Sets the logging level (info by default). The user has 6 options,
# from most logging to least: trace < debug < info < warn < error < fatal
rootLogger.level=info

# Purpose: Informs the logging application to use a specific appender and its properties.  DO NOT CHANGE 
appender.rolling.type=RollingFile
appender.rolling.name=RollingFile
appender.rolling.layout.type=PatternLayout
appender.rolling.layout.pattern=%d [%t] %-5p %c{3}(%L) - %m%n
appender.rolling.policies.type=Policies
appender.rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.rolling.strategy.type=DefaultRolloverStrategy
logger.rolling.name=com.teradata
logger.rolling.appenderRef.rolling.ref=RollingFile

# Purpose: Allows the user to change the location of the log file.
# If changing the log file location, give the absolute path of the file;
# for example, /var/log/dmDaemon.log
# For Windows OS, use forward slashes instead of backslashes;
# for example: C:/Program Files/Teradata/Log/dmDaemon.log 
appender.rolling.fileName=/var/opt/teradata/datamover/logs/dmDaemon.log
appender.rolling.filePattern=/var/opt/teradata/datamover/logs/dmDaemon.log.%i

# Purpose: The max size of the logging file before being rolled over to backup files.
appender.rolling.policies.size.size=20MB
# Purpose: The number of backup log files kept; after the max number is reached, the oldest file is erased.
appender.rolling.strategy.max=5

service_user=dmuser

# Purpose: Data Mover Rest Interface for DSA job status notification
dm.rest.endpoint=https://localhost:1443/datamover

# Purpose: DSA Rest Interface for DSA job status notification 
dsa.rest.endpoint=https://localhost:9090/dsa
#------------------------------------------------------------
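A `.properties` file like the one above is a flat list of `key=value` lines with `#` comments. A minimal parser can be sketched as follows, assuming simple `key=value` lines only (no Java escape sequences or line continuations):

```python
def parse_properties(text):
    """Parse simple 'key=value' lines, skipping blanks and '#' comments.
    Does not handle Java escapes or line continuations."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                       # ignore comments and blank lines
        key, sep, value = line.partition("=")
        if sep:                            # keep only well-formed pairs
            props[key.strip()] = value.strip()
    return props

sample = """
# JMS broker settings
broker.url=localhost
broker.port=61616
"""
props = parse_properties(sample)
print(props["broker.port"])  # → 61616
```

In Java itself, `java.util.Properties.load` performs the equivalent parsing, including the escape handling this sketch omits.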

Data Mover Agents

The Data Mover daemon uses Data Mover agents to perform the work of each Data Mover job, so at least one agent is required. Multiple agents can run tasks in parallel, improving performance. For maximum performance, Teradata recommends placing each agent on its own server rather than on the Data Mover daemon's server. However, this is not required; a Data Mover agent and the Data Mover daemon can run on the same server.

The Data Mover daemon splits a unit of work into multiple tasks, which are placed on a JMS queue to wait for the next available Data Mover agent. A Data Mover agent executes each task using one of the following utilities, then listens on the queue for the next task:

  • Teradata Archive/Recovery (ARC)
  • Data Stream Architecture (DSA)
  • Teradata Parallel Transporter (TPT) API
  • Teradata JDBC Driver
  • QueryGrid
  • Teradata Connector for Hadoop (TDCH)
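Choosing among the utilities listed above amounts to a dispatch on task type. The sketch below shows that pattern; the handler functions are placeholders that stand in for real utility invocations, and the type keys are invented for illustration:

```python
# Illustrative dispatch of a task to one of several utilities.
def run_with_tpt(task):   return f"TPT moved {task}"
def run_with_jdbc(task):  return f"JDBC moved {task}"
def run_with_dsa(task):   return f"DSA moved {task}"

HANDLERS = {"tpt": run_with_tpt, "jdbc": run_with_jdbc, "dsa": run_with_dsa}

def execute(task_type, task):
    """Route a task to the registered handler for its type."""
    try:
        handler = HANDLERS[task_type]
    except KeyError:
        raise ValueError(f"no utility registered for {task_type!r}")
    return handler(task)

print(execute("dsa", "db1.orders"))  # → DSA moved db1.orders
```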

Data Mover Agent Properties File

The agent.properties file contains the configuration properties for the Data Mover agent.

# Copyright (C) 2009-2019 by Teradata Corporation.
# All Rights Reserved.
# TERADATA CORPORATION CONFIDENTIAL AND TRADE SECRET
#----------------------------------------------------------------------------------------------
# File: "agent.properties"
#
# Purpose: This file contains all of the properties used by the DM Agent.
#
# Caution: Do not modify the property names/values in this file unless absolutely sure 
# of the implications to the DM Agent.
#
# Note: Any line of text in this document that begins with a '#' character is a comment and 
# has no effect on the DM Agent. However, comments should not be modified.
#
# All properties under LOGGING Comment are used for logging purposes
#
#----------------------------------------------------------------------------------------------

# Purpose: The Agent Identifier
# Default: Agent1
agent.id=Agent1

# Purpose: The hostname or IP address of the machine running the 
# Java Message Service (JMS) Message Broker.
# Default: localhost
broker.url=localhost

# Purpose: The port number on the machine in which the
# Java Message Service (JMS) Message Broker is listening.
# Default: 61616
broker.port=61616

# Purpose: When set to true, a connection to a 
# Secondary Java Message Service (JMS) Message Broker can be
# established in case the Primary Java Message Service (JMS) Broker fails.
# Default: false
cluster.enabled=false

# Purpose: Port that will be used by ARC
# Default: 25268
arc.port=25268

# Purpose: The maximum number of tasks allowed to run on this agent at the same time.
# This property has the side-effect of reducing parallelism among multiple Agents if
# set too high, because one Agent will grab all the tasks on the queue
# Default: 5
agent.maxConcurrentTasks=5

# Purpose: When set to true, Fatal Error messages can be sent to TVI
# Default: true
tvi.useLogger=true

# Purpose: Scripts in this directory are run to collect TVI diagnostic bundle files.
# DO NOT change this directory location.
tvi.diagnosticbundle.script.dir=/opt/teradata/datamover/support/diagnosticbundle

# Purpose: TVI diagnostic bundle files are saved to this directory.
# This value needs to be the same as SUPPORT_BUNDLE_DIR in /etc/opt/teradata/sm3g/sm3gnode.conf
tvi.diagnosticbundle.dir=/var/opt/teradata/datamover/support/diagnosticbundle

#----------------------LOGGING-------------------------------

# Purpose: Sets the logging level (info by default). The user has 6 options,
# from most logging to least: trace < debug < info < warn < error < fatal
rootLogger.level=info

# Purpose: Informs the logging application to use a specific appender and its properties.  DO NOT CHANGE 
appender.rolling.type=RollingFile
appender.rolling.name=RollingFile
appender.rolling.layout.type=PatternLayout
appender.rolling.layout.pattern=%d [%t] %-5p %c{3}(%L) - %m%n
appender.rolling.policies.type=Policies
appender.rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.rolling.strategy.type=DefaultRolloverStrategy
logger.rolling.name=com.teradata
logger.rolling.appenderRef.rolling.ref=RollingFile

# Purpose: Allows the user to change the location of the log file.
# If changing the log file location, give the absolute path of the file;
# for example, /var/log/dmAgent.log
# For Windows OS, use forward slashes instead of backslashes;
# for example: C:/Program Files/Teradata/Log/dmAgent.log 
appender.rolling.fileName=/var/opt/teradata/datamover/logs/dmAgent.log
appender.rolling.filePattern=/var/opt/teradata/datamover/logs/dmAgent.log.%i

# Purpose: The max size of the logging file before being rolled over to backup files.
appender.rolling.policies.size.size=10MB
# Purpose: The number of backup log files kept; after the max number is reached, the oldest file is erased.
appender.rolling.strategy.max=3

service_user=dmuser
#------------------------------------------------------------

Data Mover Portlet

The Data Mover portlet lets you set up, start, edit, and monitor jobs that copy database objects between Teradata Database systems from a graphical user interface. Because the portlet's interface is friendly and intuitive, copying with Data Mover is as simple as browsing the source system's database hierarchy to the objects you want to copy and creating a job definition. With this portlet, there is no need to edit XML parameter files or memorize command syntax. To use the Data Mover portlet, Teradata Viewpoint must be installed.

Command-Line Interface

The command-line interface provides setup commands for setting up and configuring the Data Mover system (including the daemon and agents), as well as job action commands for creating, running, monitoring, updating, editing, and deleting jobs. The setup commands correspond to the functionality of the Data Mover Setup portlet; the job management commands correspond to the functionality of the Data Mover portlet. For a list of commands and valid parameters, see About Data Mover Commands.

Each command in the interface requires a set of parameters for its operation, which can be listed as command-line arguments or specified in an XML document. If the same parameter is defined both on the command line and in the XML file, the command line takes precedence. Multiple instances of the command-line interface can be used.
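The precedence rule above (command line overrides XML) is a simple overlay merge. A minimal sketch, with hypothetical parameter names:

```python
# Sketch of the precedence rule: command-line values override the same
# parameter defined in an XML parameter file.
def resolve_params(xml_params, cli_params):
    merged = dict(xml_params)   # start with the XML file values
    merged.update(cli_params)   # command-line values take precedence
    return merged

xml_params = {"source_tdpid": "prod1", "target_tdpid": "dr1"}
cli_params = {"target_tdpid": "dr2"}  # overrides the XML value
print(resolve_params(xml_params, cli_params)["target_tdpid"])  # → dr2
```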

RESTful API

The RESTful API provides functionality similar to the command-line interface. It can be accessed from any third-party or Teradata server using any standard REST client. The API provides a set of programmatic methods to create, run, monitor, update, edit, stop, clean up, and delete jobs. For more information, see Data Mover RESTful API.
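A standard-library REST client call can be sketched as below. The `/jobs` path and the payload fields are hypothetical, not the documented Data Mover RESTful API; the base URL matches the `dm.rest.endpoint` default shown in daemon.properties. The sketch only constructs the request rather than sending it, since no server is assumed:

```python
import json
import urllib.request

# Hypothetical REST call; resource path and payload fields are illustrative.
BASE = "https://localhost:1443/datamover"

def build_create_job_request(job_name, source, target):
    """Build (but do not send) a POST request to create a job."""
    payload = {"name": job_name, "source": source, "target": target}
    return urllib.request.Request(
        f"{BASE}/jobs",                      # hypothetical resource path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_create_job_request("daily_copy", "prod1", "dr1")
print(req.get_method(), req.full_url)  # → POST https://localhost:1443/datamover/jobs
```

Sending it would be `urllib.request.urlopen(req)`, or the equivalent in any other REST client.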