About Logging SQL Updates on the Master Server

Teradata Data Mover User Guide

Product: Teradata Data Mover
Release Number: 16.10
Published: June 2017
Language: English (United States)
Last Update: 2018-03-29

When the master synchronization service starts, it creates dmSyncMaster.json in the path specified as the value of sql.log.directory in sync.properties. By default, the service writes the SQL updates file to /var/opt/teradata/datamover/logs/dmSyncMaster.json. Triggers installed on the repository tables regenerate the INSERT statements, while the daemon regenerates the UPDATE and DELETE statements. All of this data is written to the DMAuditLog table, from which the master synchronization service reads it and inserts it into the dmSyncMaster.json file. Another process running on the master server reads the SQL statements from dmSyncMaster.json and sends them to the slave server.
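
For reference, the directory is controlled by the sql.log.directory property in sync.properties. The following is a minimal sketch, assuming a standard Java-style properties format and using the default path described above; any other entries in the file are omitted:

    # Directory where the master synchronization service writes dmSyncMaster.json
    # and the per-slave .lastread files
    sql.log.directory=/var/opt/teradata/datamover/logs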

When a slave server connects to the master server, the master synchronization service creates slave_<clientName>.lastread in the path specified as the value of sql.log.directory in sync.properties. The slave_<clientName>.lastread file tracks all SQL statements sent to that slave server, and the service creates a separate .lastread file for each slave server that connects to the master server. For example, if your Teradata Data Mover environment has two slave servers, repos_bu1 and repos_bu2, and you run the synchronization service on each slave server, the service writes the following files (illustrated in the directory sketch after this list):

  • /var/opt/teradata/datamover/logs/slave_repos_bu1.lastread
  • /var/opt/teradata/datamover/logs/slave_repos_bu2.lastread
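
Combined with dmSyncMaster.json, the default log directory on the master server then holds one .lastread entry per slave in addition to the SQL updates file. The following is an illustrative sketch of the directory contents only; sizes, timestamps, and any other files that may be present are omitted:

    /var/opt/teradata/datamover/logs/
        dmSyncMaster.json            SQL updates written by the master synchronization service
        slave_repos_bu1.lastread     statements already sent to slave repos_bu1
        slave_repos_bu2.lastread     statements already sent to slave repos_bu2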