An Intermediate File Logging job reads transactional data from MQ or JMS and performs continuous INSERT, UPDATE, and DELETE operations in a Teradata Database table. Simultaneously, it loads a duplicate data stream into an external flat file that can serve as an archive or backup of the loaded data.
Intermediate File Logging requires use of multiple APPLY clauses, one for the operator writing to Teradata Database and one for the operator writing to the external flat file.
The DataConnector operator is used twice in the job script:
- A DataConnector producer operator reads data from a transactional data source, either the JMS or MQ access module.
- A DataConnector consumer operator receives the data stream (a duplicate of what is being written to Teradata Database) from the DataConnector producer and writes it to an external flat file.
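The dual APPLY structure described above might be sketched as follows. This is a minimal, hypothetical outline, not the actual sample script; the operator names (MQ_READER, FILE_WRITER, STREAM_OPERATOR), the table, and the DML are placeholders, and their full definitions are omitted:

```
APPLY
  /* First APPLY: continuous DML into the Teradata Database table
     (hypothetical INSERT shown; the sample also performs UPDATEs and DELETEs) */
  'INSERT INTO Trans_Table VALUES (:Col1, :Col2);'
  TO OPERATOR (STREAM_OPERATOR[1]),

APPLY
  /* Second APPLY: duplicate data stream to the flat-file archive */
  TO OPERATOR (FILE_WRITER[1])     /* DataConnector consumer */

SELECT * FROM OPERATOR (MQ_READER[1]);  /* DataConnector producer */
```

Because both APPLY clauses draw from the same SELECT, the producer's data stream is duplicated: one copy is applied to the database, the other is written to the external flat file.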
Note that, beyond the common required attributes, the two DataConnector operator definitions differ in content:
- The producer version requires specification of the following:
- Use the AccessModuleName and AccessModuleInitStr attributes to interface with the access module providing the transactional data.
- Set the OpenMode attribute to 'read'.
- The consumer version requires specification of the following:
- Use the DirectoryPath attribute to specify the destination directory.
- Set the OpenMode attribute to 'write'.
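Taken together, the two definitions might look like the following sketch. All names and attribute values here are hypothetical illustrations (the access module name, initialization string, schema, and file paths are not those of the actual sample); see the Reference for the authoritative attribute lists:

```
/* Producer: reads transactional data through an access module (MQ shown) */
DEFINE OPERATOR MQ_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA TRANS_SCHEMA
ATTRIBUTES
(
  VARCHAR AccessModuleName    = 'libmqsc',       /* hypothetical module name */
  VARCHAR AccessModuleInitStr = '-qnm TRANS_Q',  /* hypothetical init string */
  VARCHAR OpenMode            = 'read'
);

/* Consumer: writes the duplicate stream to an external flat file */
DEFINE OPERATOR FILE_WRITER
TYPE DATACONNECTOR CONSUMER
SCHEMA TRANS_SCHEMA
ATTRIBUTES
(
  VARCHAR DirectoryPath = '/archive/trans',      /* hypothetical destination */
  VARCHAR FileName      = 'trans_backup.dat',
  VARCHAR OpenMode      = 'write'
);
```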
For a complete list of key DataConnector operator attributes, see the Teradata Parallel Transporter Reference (B035-2436).
For a typical application of Intermediate File Logging, see Example 5C in Job Example 5: Continuous Loading of Transactional Data from JMS or MQ.
For the sample script that corresponds to this job, see the following script in the sample/userguide directory:
PTS00011: Intermediate File Logging Using Multiple APPLY Clauses with Continuous Loading of Transactional Data from Different Access Modules.