Usage Notes - Teradata Data Mover

Teradata Data Mover User Guide

Product: Teradata Data Mover
Release Number: 16.00
Published: December 2016
Language: English (United States)
Last Update: 2018-03-29

You can use host names or IP addresses as values in failover.properties; an illustrative sketch appears below. The table that follows describes what happens in certain scenarios when using the automatic failover feature:
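This guide does not list the property keys that go in failover.properties, so the snippet below is only an illustrative sketch with hypothetical key names (master.server, slave.server, and monitor.server are placeholders, not documented keys); it shows that host names and IP addresses are interchangeable as values:

    # Hypothetical failover.properties sketch -- key names are placeholders only.
    # Either a host name or an IP address works as a value.
    master.server=dm-daemon1.example.com
    slave.server=10.25.16.42
    monitor.server=dm-monitor.example.com

Whichever form you use, the servers listed must be reachable over SSH as dmuser; otherwise the SSH setup fails, as noted in the scenarios below.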

Scenario: The default password for dmuser has been changed on any of the servers specified in failover.properties.
Result: SSH setup fails, which prevents the master-slave components from starting correctly.

Scenario: Invalid host names are specified in failover.properties.
Result: The SSH setup fails.

Scenario: Config is run as a user other than root.
Result: The SSH setup fails.

Scenario: The JMS brokers on the master and slave daemon servers have not been set up in a network-of-brokers configuration.
Result: When the primary broker goes down, the daemon, agents, and command line automatically connect to the secondary broker; however, messages are not consumed from the secondary broker.

Scenario: The broker.url value is not modified in daemon.properties, agent.properties, and commandline.properties.
Result: The daemon, agent, and command line cannot connect to the secondary JMS broker when the primary JMS broker goes down.

Scenario: The master.host and master.port values in sync.properties are not set correctly on both the local and remote sync servers.
Result: The sync service does not start correctly.

Scenario: The slave sync service goes down after failover has been enabled.
Result: Updates made on the master are applied to the slave when the slave sync service is restarted using the dmcluster setslave command.

Scenario: The master sync service goes down after failover monitoring has been enabled.
Result: A warning message is displayed in the monitor log on the monitoring server. Any updates that occurred on the master while the master sync service was down are not sent to the slave. The master sync service can be restarted in master mode using the dmcluster setmaster command from the master daemon server.

Scenario: Both JMS brokers go down.
Result: No jobs can be run until at least one JMS broker is started correctly.

Scenario: Both daemon servers go down after failover monitoring has been enabled.
Result: The monitoring service detects the master daemon server failure and initiates a failover sequence on the slave daemon server. If it cannot connect to the slave daemon server, it exits, and the components must be restarted using the dmcluster setmaster command from the master daemon server once the servers are back up.

Scenario: A master daemon goes down, a failover has been initiated, and the daemon is then restarted using the dmdaemon service script.
Result: The monitoring service notices two daemons running and stops the daemon on the server that is not the current master. To properly restore the daemon that went down, use the dmcluster setslave command followed by the dmcluster setmaster command.

For more information, see Failing Back to the Old-Restored Master.

Scenario: A slave daemon is started using the dmdaemon service script while the master daemon is still running.
Result: The monitoring service notices two daemons running and stops the daemon on the server that is not the current master. To properly set the components in slave mode, use the dmcluster setslave command from the slave daemon server.

Scenario: The JMS broker on the master or slave daemon server goes down.
Result: Jobs can be run as long as one JMS broker is running on either the master or the slave daemon server. The Data Mover components automatically reconnect to the active JMS broker. The JMS broker can be restarted using the service script (/etc/init.d/tdactivemq start).
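The scenarios above mention a small set of recovery commands. The sketch below simply collects them in one place; no arguments or options are documented in this section, so none are shown, and each command should be run on the server indicated in the scenario that applies:

    # Restart a downed JMS broker on the affected daemon server:
    /etc/init.d/tdactivemq start

    # Rejoin a restored daemon server as the slave (run from the slave daemon server):
    dmcluster setslave

    # Promote the components back to master mode (run from the master daemon server):
    dmcluster setmaster

See Failing Back to the Old-Restored Master for the exact order of these steps when restoring a failed-over master.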