list_configuration RESTful API - Teradata Data Mover

Teradata Data Mover User Guide

Product
Teradata Data Mover
Release Number
16.10
Published
June 2017
Language
English (United States)
Last Update
2018-03-29
dita:mapPath
kmo1482331935137.ditamap
dita:ditavalPath
ft:empty
dita:id
B035-4101
lifecycle
previous
Product Category
Analytical Ecosystem

Purpose

The list_configuration API displays configuration and performance settings for the daemon, including settings for the daemon's stored procedures and table-driven interface.

There are two variations of the list_configuration RESTful API, which display information for the following:
  • A specified property
  • All properties, returned in the response body
The list_configuration RESTful API uses the following URL and method:
URL

/datamover/daemonProperties/{propertyName}

/datamover/daemonProperties

Method

GET
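Either variation can be invoked with a plain HTTP GET. The following Python sketch builds the request URL and decodes the JSON response body; the host and port (dm-server:1080) are placeholders for illustration, not documented defaults.

```python
# Sketch of calling the list_configuration RESTful API.
# The host/port ("dm-server", 1080) are assumptions, not documented defaults.
import json
from urllib.request import urlopen

BASE = "http://dm-server:1080/datamover"

def daemon_properties_url(property_name=None):
    """Build the URL for one property, or for all properties when name is None."""
    if property_name is None:
        return f"{BASE}/daemonProperties"
    return f"{BASE}/daemonProperties/{property_name}"

def list_configuration(property_name=None):
    """Issue the GET request and decode the JSON response body."""
    with urlopen(daemon_properties_url(property_name)) as resp:  # HTTP GET
        return json.load(resp)
```

Calling list_configuration("tmsm.mode") targets the single-property URL, while list_configuration() with no argument targets the all-properties URL.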

Request Parameters

The list_configuration RESTful API does not require specific request parameters.

Response Parameters

propertyName
Description: Configuration property name
JSON Data Type: String
Required: Yes
values
Description: Property values per system
JSON Data Type: JSON Array (valueType Object)
Required: Yes
unit
Description: Unit of the value
JSON Data Type: String
Required: No
description
Description: Additional information
JSON Data Type: String
Required: Yes
systemPairs
Description: System pairs used to force job direction
JSON Data Type: JSON Array (systemPairType Object)
Required: No
groupPools
Description: User group pools
JSON Data Type: JSON Array (userGroupType Object)
Required: No
targetUserPools
Description: Systems for target user pool
JSON Data Type: JSON Array (systemType Object)
Required: No
neverTargetSystems
Description: Systems never used as the target system
JSON Data Type: JSON Array (String)
Required: No
defaultDatabases
Description: Databases used as default target or staging databases at the system level
JSON Data Type: JSON Array (systemLevelDatabaseType Object)
Required: No

Status Codes

If the command executes without error, the API returns status code 200 in the response header and the requested configuration settings in JSON format in the response body.

If an error occurs during execution, the API returns a non-200 status code in the response header and an error message in JSON format in the response body.

The status codes apply to all variations of the RESTful API.
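On the client side, this contract can be checked with a small helper. This is a minimal sketch; the guide only states that an error body is a JSON-format error message, so the helper surfaces the raw body rather than assuming specific field names.

```python
# Minimal status-code handling for list_configuration responses.
import json

def check_response(status_code, body_text):
    """Return the decoded payload on 200; raise with the error body otherwise."""
    if status_code == 200:
        return json.loads(body_text) if body_text else None
    # Non-200: the body carries an error message in JSON format.
    raise RuntimeError(f"list_configuration failed ({status_code}): {body_text}")
```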

Response Examples

This example shows the response for a specified property.
{
    "propertyName": "agentCollector.agentHeartbeatWaitMillis",
    "values":
    [
        {
            "value": "600000",
            "system": "ALL"
        }
    ],
    "description": "Purpose: To set the amount of time to wait for an Agent heartbeat before assuming it has gone out of service in milliseconds. Default: 600000"
}
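The per-system entries in the values array can be read back as a mapping. A sketch using the single-property payload above (a value for system "ALL" applies daemon-wide):

```python
# Parse a single-property list_configuration response.
import json

response_text = """
{
    "propertyName": "agentCollector.agentHeartbeatWaitMillis",
    "values": [{"value": "600000", "system": "ALL"}],
    "description": "Purpose: To set the amount of time to wait for an Agent heartbeat. Default: 600000"
}
"""

prop = json.loads(response_text)
# Map each system to its configured value; "ALL" applies daemon-wide.
values_by_system = {v["system"]: v["value"] for v in prop["values"]}
```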
This example shows the response body when all properties are requested.
[
    {
        "propertyName": "agentCollector.agentHeartbeatWaitMillis",
        "values":
        [
            {
                "value": "600000",
                "system": "ALL"
            }
        ],
        "description": "Purpose: To set the amount of time to wait for an Agent heartbeat before assuming it has gone out of service in milliseconds. Default: 600000"
    },
    {
        "propertyName": "blocked.job.maxAllowedLimit",
        "values":
        [
            {
                "value": "5",
                "system": "dm-agent4"
            },
            {
                "value": "10",
                "system": "dm-agent5"
            }
        ],
        "description": "The maximum number of jobs that can be marked as BLOCKED and re-tried. If a job is detected as blocked when the blocked.job.maxAllowedLimit has already reached, then the job is added to the Job Queue. The value cannot be greater than 25% of the maximum concurrent job limit. Default is 5."
    },
    {
        "propertyName": "blocked.job.retry.enabled",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Detect any locks on the source/target objects being moved and retry running the job after a specified interval. Default is false."
    },
    {
        "propertyName": "blocked.job.retry.interval",
        "values":
        [
            {
                "value": "1",
                "system": "ALL"
            }
        ],
        "unit": "HOURS",
        "description": "Purpose: An interval to retry running any jobs blocked due to locks on source/target objects. Time unit can be specified as HOURS or MINUTES. Default is 1 Hour."
    },
    {
        "propertyName": "blocked.job.retry.maxInterval",
        "values":
        [
            {
                "value": "1",
                "system": "ALL"
            }
        ],
        "unit": "HOURS",
        "description": "The maximum interval for attempting to start any jobs blocked due to locks on source/target objects. Jobs will be marked as FAILED after this interval is exceeded if they are still blocked. Time unit can be specified as HOURS or MINUTES. Default is 1 Hour."
    },
    {
        "propertyName": "daemon.default.compareDDL.enabled",
        "values":
        [
            {
                "value": "unspecified",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Enable/Disable the default compareDDL behavior at the daemon level. Default value unspecified."
    },
    {
        "propertyName": "databaseQueryService.useBaseViewsOnly",
        "values":
        [
            {
                "value": "true",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Set all data dictionary queries on Teradata source and target systems to use the base views instead of X or VX views. Default: true"
    },
    {
        "propertyName": "deadlock.retry.enabled",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If a SQL query execution fails with DBS error (2631) due to a deadlock, then retry executing the query after a specified interval. Default is false."
    },
    {
        "propertyName": "deadlock.retry.interval",
        "values":
        [
            {
                "value": "1",
                "system": "ALL"
            }
        ],
        "unit": "MINUTES",
        "description": "Purpose: An interval to retry executing a SQL query that fails with a DBS deadlock error (2631). Time unit can be specified as SECONDS or MINUTES. Default is 1 Minute."
    },
    {
        "propertyName": "deadlock.retry.maxAttempts",
        "values":
        [
            {
                "value": "10",
                "system": "ALL"
            }
        ],
        "description": "Purpose: The maximum number of attempts to retry executing a SQL query that fails with a DBS deadlock error (2631). Default is 10."
    },
    {
        "propertyName": "different.session.charsets.enabled",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Determines whether or not specifying different source and target session character sets in a job is allowed. Default value false means this is not allowed."
    },
    {
        "propertyName": "event.table.default",
        "values":
        [
            {
                "value": "null",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Messages will be saved to this event table, unless the messages come from a different event table or the Job Definition explicitly overrides this parameter. Default: NULL."
    },
    {
        "propertyName": "hadoop.connector.max.task.slot",
        "values":
        [
            {
                "value": "2",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Specify the maximum number of concurrent hadoop connector tasks executed by DataMover.  Default is 2."
    },
    {
        "propertyName": "hadoop.default.mapper.export",
        "values":
        [
            {
                "value": "8",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Specify the number of mappers for Hadoop to Teradata jobs.  This property will only be used when hadoop.default.mapper.type is DataMover.  Default is 8."
    },
    {
        "propertyName": "hadoop.default.mapper.import",
        "values":
        [
            {
                "value": "20",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Specify the number of mappers for Teradata to Hadoop jobs.  This property will only be used when hadoop.default.mapper.type is DataMover.  Default is 20."
    },
    {
        "propertyName": "hadoop.default.mapper.type",
        "values":
        [
            {
                "value": "DataMover",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Determine which product will decide the default number of mappers for a Hadoop system.  Valid Values are TDCH and DataMover.  Default is DataMover."
    },
    {
        "propertyName": "hanging.job.check.enabled",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If enabled, an internal process will awaken periodically and review active jobs to see if any are hanging. Disabled by default."
    },
    {
        "propertyName": "hanging.job.check.rate",
        "values":
        [
            {
                "value": "1",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Rate at which to check for hanging jobs (in hours). Default is 1 hour."
    },
    {
        "propertyName": "hanging.job.timeout.acquisition",
        "values":
        [
            {
                "value": "1",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for acquisition phase. Default is 1 hour."
    },
    {
        "propertyName": "hanging.job.timeout.in.minutes",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Internal use only. If enabled, causes all hanging.job parameters to use minutes instead of hours. Default is disabled."
    },
    {
        "propertyName": "hanging.job.timeout.large.apply",
        "values":
        [
            {
                "value": "8",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for TPTAPI apply phase for large object. Default is 8 hour."
    },
    {
        "propertyName": "hanging.job.timeout.large.build",
        "values":
        [
            {
                "value": "8",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for ARC build phase for large object. Default is 8 hour."
    },
    {
        "propertyName": "hanging.job.timeout.large.initiate",
        "values":
        [
            {
                "value": "8",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for initiate phase for large object. Default is 8 hour."
    },
    {
        "propertyName": "hanging.job.timeout.medium.apply",
        "values":
        [
            {
                "value": "4",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for TPTAPI apply phase for medium object. Default is 4 hour."
    },
    {
        "propertyName": "hanging.job.timeout.medium.build",
        "values":
        [
            {
                "value": "4",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for ARC build phase for medium object. Default is 4 hour."
    },
    {
        "propertyName": "hanging.job.timeout.medium.initiate",
        "values":
        [
            {
                "value": "4",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for initiate phase for medium object. Default is 4 hour."
    },
    {
        "propertyName": "hanging.job.timeout.range.large.min",
        "values":
        [
            {
                "value": "10",
                "system": "ALL"
            }
        ],
        "unit": "GB",
        "description": "Purpose: Defines minimum size (in MB, GB, TB, default GB if unit not provided) for an object to be considered a large object. Default is 10 GB"
    },
    {
        "propertyName": "hanging.job.timeout.range.small.max",
        "values":
        [
            {
                "value": "5",
                "system": "ALL"
            }
        ],
        "unit": "MB",
        "description": "Purpose: Defines maximum size (in MB, GB, TB, default MB if unit not provided) for an object to be considered a small object. Default is 5 MB."
    },
    {
        "propertyName": "hanging.job.timeout.small.apply",
        "values":
        [
            {
                "value": "2",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for TPTAPI apply phase for small object. Default is 2 hour."
    },
    {
        "propertyName": "hanging.job.timeout.small.build",
        "values":
        [
            {
                "value": "2",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for ARC build phase for small object. Default is 2 hour."
    },
    {
        "propertyName": "hanging.job.timeout.small.initiate",
        "values":
        [
            {
                "value": "2",
                "system": "ALL"
            }
        ],
        "description": "Purpose: If new job progress not reported within this period (in hours), job will be aborted. Timeout specifically for initiate phase for small object. Default is 2 hour."
    },
    {
        "propertyName": "job.allowCommandLineUser",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: When set to true, Daemon will always allow CommandLine requests when the security level is Daemon. Default: false"
    },
    {
        "propertyName": "job.databaseClientEncryption",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: When set to true, utilities such as ARC, JDBC, and TPTAPI will initiate encrypted sessions to both the source and target database systems. Default: false. Note: There is a performance hit trade-off for the gain of encryption."
    },
    {
        "propertyName": "job.default.queryband",
        "values":
        [
            {
                "value": "ApplicationName=DM;Version=15.00;",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Provide a set of name/value pairs for the default query_band feature"
    },
    {
        "propertyName": "job.default.queryband.enabled",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Enable/Disable the default queryband feature. Default value: false"
    },
    {
        "propertyName": "job.force.direction",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: To force direction of data movement from source to target system."
    },
    {
        "propertyName": "job.never.target.system",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Prevent certain database systems from ever being a target system in a Data Mover job. Default: false."
    },
    {
        "propertyName": "job.onlineArchive",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: When set to true, online archiving is utilized for objects that merit the use of ARC. Default: false. Note: There a is a performance hit trade-off for the gain of object availability."
    },
    {
        "propertyName": "job.overwriteExistingObjects",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: When set to true, objects that already exist on the target database system are overwritten. Default: false."
    },
    {
        "propertyName": "job.securityMgmtLevel",
        "values":
        [
            {
                "value": "job",
                "system": "ALL"
            }
        ],
        "description": "Purpose: The level of security management enabled. Valid choices are Daemon and Job.  Default: Job."
    },
    {
        "propertyName": "job.useGroupUserIdPool",
        "values":
        [
            {
                "value": "true",
                "system": "ALL"
            }
        ],
        "groupPools":
        [
            {
                "poolName":"dm4",
                "systems":
                [
                    {
                        "systemName": "system1",
                        "users":
                        [
                            {
                                "userName":"user1",
                                "password":"password1"
                            }
                        ]
                    },
                    {
                        "systemName": "system2",
                        "users":
                        [
                            {
                                "userName":"user1",
                                "password":"password1"
                            }
                        ]
                    }
                ]
            },
            {
                "poolName":"System15",
                "systems":
                [
                    {
                        "systemName": "system3",
                        "users":
                        [
                            {
                                "userName":"user2",
                                "password":"password2"
                            }
                        ]
                    },
                    {
                        "systemName": "system4",
                        "users":
                        [
                            {
                                "userName":"user3",
                                "password":"password3"
                            }
                        ]
                    }
                ]
            }
        ],
        "description": "Purpose: Use a source or target user from the pool of users. This enables changing the password in a single place."
    },
    {
        "propertyName": "job.useSecurityMgmt",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: When set to true, some Data Mover commands will require the admin username and password to be specified when executing the command.  Refer to the User Guide for a complete list of commands affected by this parameter. Default: false"
    },
    {
        "propertyName": "job.useSyncService",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: To record any changes to the Data Mover repository tables (inserts/updates/deletes) in an audit log table. The value must be set to true in order to use the Sync service. Default: false."
    },
    {
        "propertyName": "job.useUserIdPool",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Use a target user from the pool of users. This enables running multiple arc tasks at the same time"
    },
    {
        "propertyName": "repository.purge.definition.enabled",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Enable/Disable the repository purge job definition feature. Default value: false"
    },
    {
        "propertyName": "repository.purge.enabled",
        "values":
        [
            {
                "value": "true",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Enable/Disable the repository purge feature."
    },
    {
        "propertyName": "repository.purge.history.unit",
        "values":
        [
            {
                "value": "days",
                "system": "ALL"
            }
        ],
        "description": "Purpose: The unit for job history data to kept in the repository before purging should occur. The current supported values are days, weeks, months, and years. Default value: days."
    },
    {
        "propertyName": "repository.purge.history.unitcount",
        "values":
        [
            {
                "value": "60",
                "system": "ALL"
            }
        ],
        "description": "Purpose: The number of units for job history data to kept in the repository before purging should occur. This value is combined with the value for repository.purge.history.unit to determine the amount of time before purging should occur for old jobs (for example, 60 days, 3 years, or 10 months). Default value: 60. The value of -1 will disable the purging by time"
    },
    {
        "propertyName": "repository.purge.hour",
        "values":
        [
            {
                "value": "1",
                "system": "ALL"
            }
        ],
        "description": "Purpose: The hour when the daily repository purging should start. Default value 1 means 1am"
    },
    {
        "propertyName": "repository.purge.minute",
        "values":
        [
            {
                "value": "0",
                "system": "ALL"
            }
        ],
        "description": "Purpose: The minute when the daily repository purging should start. Default value 0."
    },
    {
        "propertyName": "repository.purge.percent",
        "values":
        [
            {
                "value": "50",
                "system": "ALL"
            }
        ],
        "description": "Purpose: The percentage of repository permspace that needs to be available to determine when purging should occur. Default value 50 means the repository should be purged when more than 50% of the available permspace is in use. The value of -1 will disable the purging by percentage"
    },
    {
        "propertyName": "sqlh.max.task.slot",
        "values":
        [
            {
                "value": "2",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Specify the maximum number of concurrent SQL-H tasks executed by DataMover.  Default is 2."
    },
    {
        "propertyName": "system.default.database.enabled",
        "values":
        [
            {
                "value": "false",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Enable/Disable the default target/staging databases at the system level. Default value false means disabled."
    },
    {
        "propertyName": "target.system.load.slots",
        "values":
        [
            {
                "value": "5",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Controls maximum number of load slots that Data Mover can use at one time on target Teradata systems. Default: 5."
    },
    {
        "propertyName": "tmsm.frequency.bytes",
        "values":
        [
            {
                "value": "2147483647",
                "system": "ALL"
            }
        ],
        "unit":"BYTES",
        "description": "Purpose: Controls frequency of sending messages when using byte-based utilities (for example, ARC). Default: 2147483647 bytes."
    },
    {
        "propertyName": "tmsm.mode",
        "values":
        [
            {
                "value": "NONE",
                "system": "ALL"
            }
        ],
        "description": "Purpose: Controls how Data Mover directs messages. When set to BOTH, messages will be written to the TDI event tables. Default: NONE."
    }
]
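An all-properties response like the one above is a JSON array, so settings are easily indexed by property name. The sketch below uses a trimmed stand-in for the payload above and collects per-system overrides, that is, values for a specific system rather than "ALL" (as with blocked.job.maxAllowedLimit on dm-agent4 and dm-agent5):

```python
# Index an all-properties list_configuration response by property name.
import json

# A trimmed stand-in for the all-properties response shown above.
all_props_text = """
[
  {"propertyName": "blocked.job.maxAllowedLimit",
   "values": [{"value": "5", "system": "dm-agent4"},
              {"value": "10", "system": "dm-agent5"}],
   "description": "The maximum number of jobs that can be marked as BLOCKED."},
  {"propertyName": "tmsm.mode",
   "values": [{"value": "NONE", "system": "ALL"}],
   "description": "Purpose: Controls how Data Mover directs messages."}
]
"""

props = {p["propertyName"]: p for p in json.loads(all_props_text)}

def overrides(name):
    """Per-system values that differ from the daemon-wide 'ALL' entry."""
    return {v["system"]: v["value"]
            for v in props[name]["values"] if v["system"] != "ALL"}
```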