IMPORT Command

Basic Teradata® Query Reference

Product: Basic Teradata Query
Release Number: 17.10
Published: February 2022
Language: English (United States)
Last Update: 2022-02-03
Product Category: Teradata Tools and Utilities

Purpose

This command specifies a file, and its format, from which BTEQ reads record data for the USING modifiers of subsequent SQL requests.

Syntax

The following figures show the syntax for mainframe-attached and workstation-attached systems.


IMPORT syntax (mainframe-attached systems)


IMPORT syntax (workstation-attached systems)

where:

AXSMOD
This element specifies that data is to be imported using an access module. This option is valid only for DATA and INDICDATA types on workstation-attached systems.
When an import file is opened, BTEQ sets one or more of the following attributes for the Access Module, depending on the session character set used:
  • CHARSET_NAME
  • CHARSET_NUMBER
  • BYTE_ORDER
character
This element is an optional field delimiter, which defines the character used to separate columns within a VARTEXT record. It must be enclosed in single quotation marks. A multi-byte character is allowed for Unicode sessions on workstation-attached systems. If a multi-byte delimiter is needed for Unicode sessions on mainframe-attached systems, use a HEX sequence.

If a delimiter is not specified, BTEQ assumes all VARTEXT fields are separated by a vertical bar '|' character, whose ASCII equivalent is 0x7C, TD_EBCDIC equivalent is 0x6A, and Unicode equivalent is U+007C.
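
For example, a minimal sketch of specifying a comma delimiter follows; the file name contacts.txt, table name contacts, and column layout are illustrative only:

.IMPORT VARTEXT ',' FILE = contacts.txt
.REPEAT *
USING (c1 VARCHAR(30), c2 VARCHAR(30))
INSERT INTO contacts VALUES (:c1, :c2);
.IMPORT RESET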

DATA
INDICDATA
REPORT
The format of the data to be read by BTEQ.

If DATA or INDICDATA is specified, BTEQ requires the incoming data to be in FastLoad format, such as that produced by a BTEQ EXPORT DATA or BTEQ EXPORT INDICDATA command, or by FastExport.

The INDICDATA option of the IMPORT command is not a substitute for the SET INDICATORMODE or SET INDICDATA command.

If the REPORT format is specified, BTEQ requires the incoming data to be in the format produced by a BTEQ EXPORT REPORT command.

The REPORTWIDE keyword is also valid. This keyword specifies that the REPORT format for the data should be used, and changes the WIDTH setting to 32765. Since this keyword is deprecated, a SET WIDTH command should be used instead.
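
As an illustration of how the formats pair up, a sketch of an EXPORT DATA followed by a matching IMPORT DATA is shown below; the file name emp.data and the table names are placeholders only:

.EXPORT DATA FILE = emp.data
SELECT c1, c2 FROM emp_source;
.EXPORT RESET

.IMPORT DATA FILE = emp.data
.REPEAT *
USING (c1 INTEGER, c2 INTEGER)
INSERT INTO emp_copy VALUES (:c1, :c2);
.IMPORT RESET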

ddname
The name of the z/OS JCL DD statement that defines the mainframe-attached system file from which BTEQ reads data.
The keyword FILE can be used instead of the keyword DDNAME on mainframe-attached systems, but use it with caution: the implementation follows the DDNAME concept, not the FILE concept.
DEFERCOLS
The number of Large Data Object (LDO) columns which are defined with AS DEFERRED in the USING clause. This number must not include LDO columns defined with AS DEFERRED BY NAME in the USING clause.

The columns defined with AS DEFERRED must be listed first in the USING clause.

The maximum value that can be supplied for the DEFERCOLS option is based on the maximum number of columns that the database supports for the USING clause.

The keyword LOBCOLS is an alias for DEFERCOLS. Both are identical in functionality.

DEFERLIMITS
To enable potential performance improvements when using Teradata Database 15.10 or later, BTEQ can transfer an LDO non-deferred even when it is defined with an AS DEFERRED qualifier in the USING clause. The DEFERLIMITS option adjusts the limit settings that control how and when BTEQ attempts to override deferred transfer for a column. It is up to the user to specify the limits that yield the best performance.

This option indicates the limits for sending LDOs to the database through a deferred transfer versus a non-deferred transfer. This option takes two arguments. The first argument defines the limit (in bytes) for an individual LDO. The second argument defines the cumulative limit of all non-deferred LDO bytes for the current record. For an LDO to be sent as a non-deferred transfer it must be within both limits. If either limit is exceeded, a deferred transfer will occur.

Valid values for both DEFERLIMITS arguments are 0..7340032. The second value cannot be less than the first. The maximum value is based on the database's request message limit.

Non-deferred LDO transfers can be turned off completely by using values of "0,0".

If this option is not supplied, but the DEFERCOLS option is defined, then the following default values will be used for DEFERLIMITS:
  • 0,0 – when a PACK factor exists which is greater than one. This disables non-deferred LDO transfers so that the average number of packed records is not affected.
  • 1000,10000 – when a PACK factor is zero or one. For a non-deferred LDO transfer to occur, its size must be 1000 bytes or less, and the total number of bytes for all non-deferred LDO transfers (including itself) must be 10000 bytes or less in a given record.

To use DEFERLIMITS, the DEFERCOLS option is also required and must be specified first in the IMPORT command.

filename
The name of the workstation-attached system file from which BTEQ reads data.

If the name of the file includes a comma, semicolon, or space character, enclose the entire file name in either single or double quotation marks.

File names are case-sensitive on UNIX systems and case-insensitive on Windows systems.

Whenever other command options are specified following the file name in the Windows environment, BTEQ requires the file name to be enclosed by either single or double quotation marks.

The keyword DDNAME is a valid substitute for the keyword FILE for workstation-attached systems. Note when doing so that the implementation is for the FILE concept, not the DDNAME concept.

'HEX sequence'xb
Provides a hexadecimal representation of a single character delimiter that is used to separate columns within a VARTEXT record for Unicode sessions only. It must be enclosed in single quotation marks and include a trailing "xb" or "XB". The hex digits (0-9, a-f, A-F) must represent a valid character within the session character set encoding. This is especially useful on mainframe-attached systems, where all input is EBCDIC based, even for Unicode sessions. For example, 'FFE5'xb can be used to represent the delimiter as a Fullwidth Yen Sign ('¥') when the session character set is UTF-16BE.

If a delimiter is not specified, BTEQ assumes all VARTEXT fields are separated by a vertical bar '|' character, whose ASCII equivalent is 0x7C, TD_EBCDIC equivalent is 0x6A, and Unicode equivalent is U+007C.
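
For example, for a UTF-16BE Unicode session on a mainframe-attached system, a sketch like the following specifies the Fullwidth Yen Sign as the delimiter; the DD name INDATA and the table are illustrative only:

.IMPORT VARTEXT 'FFE5'XB DDNAME = INDATA
.REPEAT *
USING (c1 VARCHAR(20), c2 VARCHAR(20))
INSERT INTO yen_table VALUES (:c1, :c2);
.IMPORT RESET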

'init-string'
The name of the optional initiation string that is passed to the access module at the start of the import process. The string must be enclosed within single quotes.
Messages=RECORDNUMBERS
When used, BTEQ will emit messages that reflect which import file records were accepted and which were rejected during the import. The record number included in each message is a 1-based offset that represents its sequential position within the import file. These messages may therefore be useful for determining the location of rejected records which need follow-up actions.

When generation of record messages has been requested for an import and an SQL statement associated with a USING clause is successful, BTEQ will generate an "accepted" format record number message. For example...

*** Insert completed. One row added.
*** Accepted Import Record# 4

But if the statement is instead unsuccessful, BTEQ will generate a "rejected" format record number message. For example...

*** Statement Error 2802 Duplicate row error in MyUser.MyTable.
               Statement# 1, Info =0 
*** Rejected Import Record# 1 

If the statement is unsuccessful due to a request-level failure or error, more than one import record was packed for the request's USING data, and all of those records must be rejected, then each packed record subsequently has its own record number message generated. For example...

*** Failure 2673 The source parcel length does not match data that was defined.
               Statement# 3, Info =4
*** Rejected Import Record# 6
*** Warning: All packed records for the request were rejected.
             The other record number message(s) follow.
*** Rejected Import Record# 4
*** Rejected Import Record# 5
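
A minimal sketch of requesting these messages follows; the file and table names are illustrative, and the option placement follows the workstation syntax diagram shown earlier:

.IMPORT DATA FILE = import1.dat, MESSAGES=RECORDNUMBERS
.REPEAT *
USING (c1 INTEGER, c2 CHAR(10))
INSERT INTO MyTable VALUES (:c1, :c2);
.IMPORT RESET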
modname
The access module library name. If the name contains embedded spaces, enclose it within single quotes. If the library cannot be found or an incorrect version is being loaded, use a fully qualified path name.
For example: '/opt/teradata/client/xx.xx/lib/np_axsmod.so'.
When using 64-bit BTEQ, the access module also needs to be built in 64-bit mode. For compiler options to be used to build in 64-bit mode, see "Building an Exit Module" in the NOTIFY command description.
RECORDLENGTH
Defines the record size limit for the current import. This overrides the current import-related RECORDLENGTH command setting, for the life of the import.

Valid arguments for the RECORDLENGTH option are:

  • MAX64 – Limits record sizes to 64,768 bytes. For DATA/INDICDATA imports, it is assumed each record within the import file begins with a 2-byte record length.
  • MAX1MB – Limits record sizes to 1,024,000 bytes. For DATA/INDICDATA imports, it is assumed each record within the import file begins with a 4-byte record length.

A DATA/INDICDATA import file must contain record lengths of the same size. Mixing records with 2-byte lengths and records with 4-byte lengths will result in unexpected behavior.

This option does not affect REPORT/VARTEXT imports, since they do not involve record lengths.

When exported data is subsequently imported, the RECORDLENGTH option settings must match, otherwise the import will fail.

If this option is not specified, BTEQ will take the value from the current import-related RECORDLENGTH command setting. If the RECORDLENGTH command was not previously submitted, then a default value of MAX64 will be used.

Submit the SHOW CONTROLS IMPORT command to see whether a RECORDLENGTH override is in effect for the current import. Otherwise, submit the SHOW CONTROLS RECORDLENGTH command to see the parent settings.

RESET
Terminates the import. The import file is closed.
SKIP=n
The number of lines skipped from the beginning of the file before reading data. The value is zero (0) by default. The range of valid numbers is 0 to 2147483647.
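
For example, to bypass a single header line at the top of a delimited file before reading data (the file and table names are placeholders):

.IMPORT VARTEXT ',' FILE = survey.csv, SKIP=1
.REPEAT *
USING (c1 VARCHAR(20), c2 VARCHAR(20))
INSERT INTO survey VALUES (:c1, :c2);
.IMPORT RESET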
TOTCOLS
The total number of columns referred to in the request's USING clause. This includes all LDO and non-LDO columns. The TOTCOLS keyword is valid only for INDICDATA-formatted import data records containing LDOs that are defined with AS DEFERRED in the USING clause.

The maximum value that can be supplied for the TOTCOLS option is based on the maximum number of columns that the database supports for the USING clause.

VARTEXT
The record format of variable length character fields, with each field separated by a delimiter character.

Usage Notes

For mainframe-attached systems, BTEQ supports all sequential files with fixed, variable, or undefined length record formats.

When the record length of data used during an import operation is greater than 252 bytes, BTEQ displays a message similar to the following, indicating that the buffer size has been increased:

Growing buffer to 32000

The DATA and INDICDATA keywords specify the format of the data to be sent to a database.

DATA format is used in Record Mode, which does not send explicit null indicator bits. INDICDATA format is used in Indicator Mode, which does send explicit null indicator bits.

For the character sets KANJISJIS_0S, KANJIEUC_0U, TCHBIG5_1R0, and SDTCHBIG5_3R0, the default is Record (DATA) Mode, not Field Mode.

Do not use the IMPORT command in an SQL macro.

It is not recommended to specify a file that is already opened from a previous EXPORT command. Unexpected results might occur due to records being written and read at the same time.

  • It is the user’s responsibility to correctly describe the data records to be imported by way of the SQL request’s USING clauses.
  • If a USING clause is used in a multi-statement request, it must start the first statement. Otherwise, BTEQ does not recognize the USING clause and does not send the data parcel with the request, causing a database syntax error.
  • By default, a request that includes a USING clause can process up to n records, where n depends on the current REPEAT and PACK factors and on whether the import file is open and contains unread records. Refer to the PACK and REPEAT commands for more information.
  • Use the SHOW CONTROLS command to identify the current record format of the IMPORT command.
  • For Unicode sessions, the encoding of import files must match the encoding of the session character set.

Usage Notes for MESSAGES=RECORDNUMBERS

A single-statement SQL request is iterated as many times as the number of USING data import records that were packed with the request. Therefore, when record number messages are generated for iterated requests, each resulting statement's message reflects consumption of a different record number. But when record number messages are generated for a multi-statement request, the messages generated for the set of statements all reflect consumption of the same import record. If the active REPEAT factor is greater than 1 and there are still import records to consume, the request is resubmitted with the next USING data import record. The subsequent statements' messages again all reflect the same record number being consumed, but it is a different record from what the prior request's messages reflected.

The format and wording of the record number messages are not subject to change, as it is understood that the text may be used as part of post-processing actions.

The maximum value for a record number is 18446744073709551615. After that number has been used to track a record, BTEQ will start over using the number "1". A warning will be produced whenever a rollover occurs.

DATA and INDICDATA Keywords

Workstation-Attached Systems

When using the keywords DATA or INDICDATA, each record must be in the following format:
  • The record begins with either a two-byte or four-byte length field, that contains an unsigned binary integer whose value is the number of bytes in the record (not including the bytes in the length field and not including the byte(s) in the end-of-record field). For two-byte record length fields, the RECORDLENGTH option value must be MAX64. For four-byte record length fields, the RECORDLENGTH option value must be MAX1MB.
  • The record follows with binary-formatted fields that contain the column values of the returned row in record or indicator format.
  • The record ends with an end-of-record (line) indicator, appropriate for the workstation and operating system. For example, for a UNIX system, the indicator is a new line. For Windows, the indicator is a carriage return/line feed pair. In all cases, the end-of-record character, or characters, are in the bit-format appropriate for the workstation. For example, in ASCII for an ASCII workstation.
Numeric data fields are corrupted if the data is reimported using a REPORT format export file.

When using a file originally exported by BTEQ's EXPORT command as the source for an IMPORT command on a different platform type, ensure that the endianness of both platforms is the same. This can be verified using the SHOW CONTROLS BYTE ORDER command.

Mainframe-Attached Systems

When using the keywords DATA or INDICDATA, a record must consist of binary formatted fields that contain the column values of the selected row in record or indicator format. It does not contain a record length field or an end-of-record indicator, since that information is inherent in the way z/OS stores records.

For both workstation-attached and mainframe-attached systems a BOM is not valid at the beginning of a DATA (or INDICDATA) import file when a Unicode session character set is being used.

Importing Large Data Objects

LDOs (Large Data Objects) consist of CLOB, BLOB, XML, and JSON data types. LDOs can be sent from BTEQ to the database either inline, as deferred transfers, or as non-deferred transfers. How they are sent is dependent upon the use of the DEFERCOLS, TOTCOLS, and DEFERLIMITS options.

The following bulleted sections compare how LDOs can be transferred:

LDOs for Inline Transfer
  • Are not described with AS DEFERRED or AS DEFERRED BY NAME qualifiers in the SQL request's USING clause.
  • Are sent up front to the database along with the SQL request. Their actual value is stored in the main import data file along with non-LDO column values.
  • Are limited by the FastLoad format's overall length indicator, or less if more than one column is being transferred for each import record. The overall length indicator is limited to 64KB or 1MB, depending upon the current import RECORDLENGTH setting. Each LDO must still have an 8-byte (rather than 2-byte) length indicator present in the import record before its value. Otherwise, an alternate data type that requires a 2-byte length indicator must be used in the USING clause, for example, VARCHAR rather than CLOB.
  • Do not require use of any transfer-related IMPORT command options, but can only be provided using the DATA or INDICDATA FastLoad format options.
LDOs for Deferred Transfer
  • Are described with AS DEFERRED or AS DEFERRED BY NAME qualifiers in the SQL request's USING clause.
  • Have their data stored in their own separate LDO files. The names for these files must be placed in the main import data file using a VARCHAR type value. The maximum size for an LDO file name is 1024 bytes. Each LDO gets separately elicited by the database after the SQL request has been received.
  • Are limited to 2GB size.
  • For AS DEFERRED columns, the DEFERCOLS (and possibly TOTCOLS) option need to be used. No transfer-related IMPORT command option need be used for AS DEFERRED BY NAME columns.
LDOs for Possible Non-Deferred Transfer
  • Can only be described with an AS DEFERRED qualifier and must be listed first in the SQL request's USING clause.
  • Have their data stored in their own separate LDO files. The names for these files must be placed in the main import data file using a VARCHAR type value. The maximum size for an LDO file name is 1024 bytes. LDOs will be sent up front to the database along with the SQL request, rather than being separately elicited by the database after the SQL request has been received, assuming sufficient space is available in the import request and the DEFERLIMITS values have not been exceeded. This reduces the number of parcels being sent between BTEQ and the database, which may improve performance.
  • The combined size of non-deferred LDOs and non-LDO data is limited to approximately 1MB per import request.
  • The combination of the DEFERCOLS option and the DEFERLIMITS import setting, or the combination of the DEFERCOLS option and the PACK factor, dictates which LDOs will be transferred non-deferred. The TOTCOLS option must be specified if using the INDICDATA format.

A single import can include LDOs which are both defined with AS DEFERRED and AS DEFERRED BY NAME qualifiers. When this is the case, do not include the AS DEFERRED BY NAME columns in the DEFERCOLS value. For example, if an import includes three AS DEFERRED columns and two AS DEFERRED BY NAME columns, use DEFERCOLS=3.

LDOs defined with an AS DEFERRED qualifier must be listed first within the USING clause. AS DEFERRED BY NAME columns may occur anywhere in the USING clause after the AS DEFERRED columns.

BTEQ supports deferred and non-deferred LDO imports with the DATA, INDICDATA, and VARTEXT formats, but not the REPORT format. If exporting LDOs, deferred and non-deferred exports are only allowed with the INDICDATA format.

When using LDOs defined with the AS DEFERRED qualifier, BTEQ replaces VARCHAR file name values with 4-byte integer tokens before submitting the request to the database. When a single-byte file name is supplied, three bytes (two for the length of the VARCHAR value and one for the file name value) will get replaced by four bytes. This replacement means that the overall size of the import data record might become too large to successfully contain all generated tokens.

For deferred and non-deferred LDO imports using z/OS BTEQ, import data records must contain fully qualified dataset names, which can be sequential data sets or PDS members, such as USERID.CLOB.DATA1 or USERID.CLOB(DATA).

Non-deferred LDO transfers (through the DEFERLIMITS option) may change the effectiveness of packed records, when the PACK setting is greater than one. Non-deferred LDOs are passed to the database along with packed records, whereas deferred LDOs are sent at a later time. However, importing small LDOs through non-deferred transfers may offer better performance compared with deferred transfers, as determined by elapsed times. Some testing/tuning may be needed to determine the optimal way to import records containing LDOs. If performance is negatively affected, non-deferred transfers can be completely disabled by entering "DEFERLIMITS=0,0".

The following table shows which DEFERLIMITS values will be used in relation to the PACK setting:

PACK Setting | DEFERLIMITS Option (default or specified) | Actual Records Packed | Maximum Bytes for an Individual LDO | Cumulative LDO Bytes for a Record
0 or 1       | Default (on)                              | 1                     | 1,000                               | 10,000
0 or 1       | User specified                            | 1                     | User defined                        | User defined
2+           | Default (off)                             | 1..PACK               | 0                                   | 0
2+           | User specified                            | 1..PACK               | User defined                        | User defined

Using the DEFERLIMITS option does not guarantee that an LDO will be sent as a non-deferred transfer. It depends upon the arguments supplied, the size of the LDO files, and whether the LDO columns are defined with AS DEFERRED or AS DEFERRED BY NAME within the USING clause.

Following is an example showing how this works. Assume 6 CLOBs are listed in a USING clause and 3 records will be imported. The DEFERLIMITS option is set to "2000,5000". Notice how BTEQ treats each LDO based on the USING clause qualifier used and the LDO size.

Record 1
   AS DEFERRED          (1000 bytes)   Non-deferred
   AS DEFERRED          (500 bytes)    Non-deferred
   AS DEFERRED          (1000 bytes)   Non-deferred
   AS DEFERRED          (500 bytes)    Non-deferred
   AS DEFERRED          (1000 bytes)   Non-deferred
   AS DEFERRED BY NAME  (500 bytes)    Deferred
Record 2
   AS DEFERRED          (3000 bytes)   Deferred
   AS DEFERRED          (1000 bytes)   Non-deferred
   AS DEFERRED          (6000 bytes)   Deferred
   AS DEFERRED          (2000 bytes)   Non-deferred
   AS DEFERRED          (5000 bytes)   Deferred
   AS DEFERRED BY NAME  (1000 bytes)   Deferred
Record 3
   AS DEFERRED          (1000 bytes)   Non-deferred
   AS DEFERRED          (2000 bytes)   Non-deferred
   AS DEFERRED          (2000 bytes)   Non-deferred
   AS DEFERRED          (1000 bytes)   Deferred
   AS DEFERRED          (2000 bytes)   Deferred
   AS DEFERRED BY NAME  (1200 bytes)   Deferred

When the DEFERLIMITS option is specified, BTEQ will display the number of deferred and non-deferred LDO transfers for each import. The message will look similar to:

*** Total LDOs: 50, Non-deferred: 18, Deferred: 32

Multiple sessions can be used to take advantage of the database's ability to handle requests in parallel.

BTEQ can assign up to 200 sessions for use. However, each session requires a dedicated file handle which may exceed a system's resources. If BTEQ is not able to open a file, the associated request is aborted. Therefore, the appropriate number of sessions to use should be determined in advance of running LDO import jobs for production environment purposes.
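
For instance, a session count can be requested before logging on, since the SESSIONS command must precede LOGON. In the sketch below the session count, logon placeholders, and import details are illustrative only; the import file is assumed to carry the LDO file name as a VARCHAR value, as described above:

.SET SESSIONS 8
.LOGON xxx/yyy,zzz
.IMPORT DATA DEFERCOLS=1 FILE = ldo_import.dat
.REPEAT *
USING (c1 CLOB(1000) AS DEFERRED, c2 INTEGER)
INSERT INTO ldo_table VALUES (:c1, :c2);
.IMPORT RESET
.LOGOFF
.QUIT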

Notes:
  • The use of Unicode sessions to transfer AS DEFERRED columns is not supported.
  • The database currently does not support the JSON keyword within a USING clause. When importing JSON data, define the column as a CLOB type instead, within the USING clause.
  • For z/OS BTEQ, text LDOs which are imported in deferred or non-deferred mode will not contain end-of-record indicators (newline characters).

REPORT Keyword

If the keyword REPORT is used, each record must be in the format generated by the REPORT keyword of the BTEQ EXPORT command, except that certain records must be removed. For instance, title text and titledashes can be deleted manually using an editor, or they can be programmatically ignored by using the SKIP option.
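
For example, if the export file begins with a title line and a dashed underline, a sketch like the following (file and table names are illustrative) skips those two lines instead of editing the file:

.IMPORT REPORT FILE = report.out, SKIP=2
.REPEAT *
USING (c1 CHAR(10), c2 CHAR(10))
INSERT INTO report_copy VALUES (:c1, :c2);
.IMPORT RESET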

For workstation-attached systems, each record must end with an end-of-record (line) indicator that is appropriate for the session character set being used and the platform on which BTEQ is running.

For REPORT format, BTEQ uses Field Mode to return all data values, including numeric data, in character format. Undesirable results might occur with numeric data fields when data that was exported in REPORT format is reimported.
  • The REPORT parameter is not supported under the Kanji character sets KANJISJIS_0S or KANJIEUC_0U, or the Chinese character sets TCHBIG5_1R0 or SDTCHBIG5_3R0.
  • On workstation-attached systems, a BOM is optional at the beginning of a UTF-8 or UTF-16 REPORT import file when a Unicode session character set is being used.

VARTEXT Keyword

BTEQ supports VARTEXT records by converting each delimited data item in the input record into a VARCHAR data item. An empty data item causes the data item's corresponding field to be nulled.

BTEQ interprets the following as empty data items:
  • Two adjacent delimiters
  • A delimiter as the very first character of the record
  • A delimiter as the very last character of the record
  • A record containing only a newline
  • A Unicode data file that only contains a Byte Order Mark
A character other than the default “|” can be specified as the data item delimiter. The following rules for specifying a valid optional character apply:
  • No control character other than TAB can be a delimiter.
  • Any character that appears in the data cannot be a delimiter.
  • Delimiters can only be a single-character sequence.
  • A multibyte character is allowed only for Unicode sessions.
  • The delimiter must represent a valid character within the VARTEXT data file, based on the session character set encoding.
Rules for All Platforms
  • The only acceptable data types for VARTEXT records are VARCHAR, VARBYTE, and LONG VARCHAR. Undesirable results occur if other data types are used.
  • Because VARTEXT files are opened in text mode, unexpected results might occur if VARTEXT data contains embedded control characters such as null (0x00), linefeed (0x0A) or sub (0x1A). Control characters are accepted for DATA and INDICDATA imports.
  • The number of data items in the input record must be at least equal to the number of fields defined in the USING clause. The database ignores any extra data items, as long as the bytes for the indicator bits do not increase.
  • If the number of bytes for the indicator bits that the database receives for a particular record is different from what is calculated for the USING clause, the record will fail. Typically, this is indicated by a 2673 error with a message stating the source parcel length does not match the data defined. A separate byte is added for every set of 8 fields encountered to provide the indicator bits for NULL values, with an extra indicator bits byte added for any remaining fields.
  • Each trailing delimiter at the end of an input record represents a null value. If the null increases the number of bytes of indicator bits such that it differs from what the USING clause implies, that record will fail. There are two alternative ways to solve this problem so the expected number of bytes of indicator bits matches what BTEQ sends. The first alternative is to not use a trailing delimiter. The second alternative is to add a dummy last column to the USING clause.
  • Two consecutive delimiter characters specify that the corresponding field should be nulled. If the record starts with a delimiter, the first field is nulled.
  • There should always be one less delimiter than the total number of columns specified in the USING clause.
A BOM is optional at the beginning of a UTF-8 or UTF-16 VARTEXT import file when a Unicode session character set is being used.

Rules for Workstation Platform

Each record must end with an end-of-record (line) indicator, appropriate for the workstation and operating system. For example, for a UNIX system, the indicator must be a line feed character. For Windows, the indicator must be a carriage return/line feed pair. In all cases, the end-of-record character, or characters, must be in the bit-format appropriate for the workstation.

Rules for Mainframe Platform
  • If a space character is used as delimiter, no empty trailing fields are valid in the record. If one or more fields are null at the end of a record, and the delimiter character is a space, the format of the record is not preserved. The trailing nulls and spaces are truncated, resulting in a database error when the record is inserted into a table.
  • To make trailing spaces part of the data, end the record with a delimiter.
  • When using a Unicode session charset, the delimiter character defined within a z/OS BTEQ EBCDIC script must represent a code point equivalent to the actual Unicode delimiter found in the VARTEXT data file. There are certain EBCDIC characters that need special consideration to ensure the code points match. For example, EBCDIC has two vertical bar characters:
    • The EBCDIC solid bar character (0x4F) maps to 0x008D in UTF-16BE.
    • The EBCDIC broken bar character (0x6A) maps to 0x007C in UTF-16BE.

Use SET REPEATSTOP ON to stop importing data if an error is encountered while processing a VARTEXT record. By default, BTEQ rejects the record and continues.
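
Putting the preceding VARTEXT rules together, the following workstation-side sketch uses only VARCHAR columns, relies on adjacent delimiters to produce nulls, and stops on the first bad record; the file, delimiter, and table names are illustrative only:

.SET REPEATSTOP ON
.IMPORT VARTEXT '|' FILE = items.txt
.REPEAT *
USING (item_id VARCHAR(10),
	item_name VARCHAR(30),
	item_price VARCHAR(10))
INSERT INTO items VALUES (:item_id, :item_name, :item_price);
.IMPORT RESET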

Opening and Closing Import Files

BTEQ opens the import file in response to an IMPORT command. BTEQ closes the import file whenever one of the following conditions occurs:
  • BTEQ encounters an end-of-file condition.
  • Another IMPORT command is issued.
  • The AXSMOD option was used and a LOGOFF command is issued.
  • An EXIT or QUIT command is issued.
  • BTEQ encounters an error while reading data from the import file.

For more information see I/O Errors and Abends.

IMPORT Examples

Example 1 – IMPORT

Use the following command to read from a data set defined by NEWDAT, with an expected data format of INDICDATA.

.IMPORT INDICDATA DDNAME=NEWDAT

Example 2 – IMPORT

Data records from the same import file can be remapped as needed to accomplish updates to tables that each have differing row layouts.

.import data file = test.data;
using (c1 integer, c2 integer)
insert into table1 (c1, c2)
values (:c1, :c2);
   
using (c1 integer, c2 integer)
insert into table2 (c1, c3)
values (:c1, :c2);
   
using (c1 integer)
update table3 set c1 = :c1 where c2 = :c1;

Note that the import file test.data should contain at least three records so that one record is processed by each of the three requests having a USING clause.

where:

Table Definition
Table1 'ct table1 (c1 integer, c2 integer)'
Table2 'ct table2 (c1 integer, c2 char(1), c3 integer)'
Table3 'ct table3 (c1 integer, c2 integer, c3 integer)'

Example 3 – IMPORT

.PACK 3
.IMPORT data file = sample.data;
   
using (x integer, y integer)
insert into table1 (a,b)
values (:x, :y);
   
using (x integer, y integer)
insert into table2 (a,b)
values (:x, :y);

Assume that the import file sample.data contains 10 records. With the PACK factor set to 3 and the value of the REPEAT command being 1, the number of records processed by each of the two requests is three. As a result, a total of six records is processed, and the import file has four records remaining.

How using the REPEAT command with the PACK clause affects record processing:

.IMPORT data file = sample.data;
.REPEAT 2 PACK 3
   
using (x integer, y integer)
insert into table1 (a,b)
values (:x, :y);
   
using (x integer, y integer)
insert into table2 (a,b)
values (:x, :y);

Because the value of the REPEAT command is 2 and the value of the PACK clause is 3, six records are inserted into table1 and one record is inserted into table2. Note that the REPEAT command applies only to the request that immediately follows it. The import file has three records remaining.

How using the PACK command and the REPEAT command affects record processing:

.SET PACK 3
.IMPORT data file = sample.data;
.REPEAT 2
   
using (x integer, y integer)
insert into table1 (a,b)
values (:x, :y);
   
using (x integer, y integer)
insert into table2 (a,b)
values (:x, :y);

The PACK command affects both requests containing a USING clause, and the REPEAT command affects only the first request. As a result, six records are inserted into table1, three records are inserted into table2, and the import file has one remaining record.

Table Definition:
table1 ‘ct table1 (c1 integer, c2 integer)’
table2 ‘ct table2 (c1 integer, c2 integer)’

Example 4 – IMPORT

This example reads data from a file named EXPORT.DAT with an expected data format of DATA, through an Access Module (library name is MYMOD). Note that an init-string is not used.

.IMPORT DATA FILE = 'EXPORT.DAT' AXSMOD MYMOD

Example 5 – IMPORT

The following shows both deferred and non-deferred transfers. If the test5.dat import file contains 3 records, then 9 individual LDO files must also exist (3 files per LDO column).

Since c1 and c2 are always 100 bytes or less, they will always be sent as non-deferred transfers because they will always be within the specified DEFERLIMITS values. The last column will always be transferred in deferred mode because it is defined with AS DEFERRED BY NAME.

Notice that the DEFERCOLS value only includes the LDO columns defined with AS DEFERRED, and does not include the one LDO column defined with AS DEFERRED BY NAME.

.IMPORT DATA DEFERCOLS=2 DEFERLIMITS=500,800 FILE=test5.dat
.REPEAT *
USING (c1 CLOB(100) AS DEFERRED,
	c2 BLOB(100) AS DEFERRED,
	c3 INTEGER,
	c4 CHAR(10),
	c5 XML AS DEFERRED BY NAME)
INSERT INTO table1 VALUES (:c1, :c2, :c3, :c4, :c5);
.IMPORT RESET

Example 6 – IMPORT

The following shows how the PACK setting can affect the importing of LDOs. Since the import format below is INDICDATA, the TOTCOLS option must be used. Since the DEFERLIMITS option is not specified and the PACK value is greater than one, DEFERLIMITS will be turned off by default, and all LDOs will be sent to the database as deferred transfers.

.PACK 10 
.IMPORT INDICDATA DEFERCOLS=3 TOTCOLS=6 FILE=test6.dat 
.REPEAT * 
USING (c1 CLOB(100) AS DEFERRED,
	c2 CLOB(500) AS DEFERRED,
	c3 CLOB(5000) AS DEFERRED,
	c4 SMALLINT,
	c5 CHAR,
	c6 VARCHAR(10))
INSERT INTO table2 VALUES (:c1, :c2, :c3, :c4, :c5, :c6);
.IMPORT RESET

Example 7 – IMPORT

With z/OS BTEQ, a DDNAME must be defined for the main import data set, but not for individual LDOs. To import LDOs in deferred or non-deferred mode on z/OS, the main import data set must contain actual data set names for each LDO (defined as VARCHAR types). Since the PACK command is being used, the DEFERLIMITS option is included to ensure non-deferred transfers are considered. LDOs that are 500 bytes or less will be imported non-deferred, while all other LDOs will be imported in deferred mode.

//BTEQ 			EXEC PGM=BTQMAIN
//DAT1 			DD DSN=USERID.BTEQ.EXPFILE,DISP=SHR
//SYSPRINT 			DD DSN=USERID.BTEQ.OUTPUT,DISP=MOD
//SYSABEND 			DD SYSOUT=*
//SYSOUT 			DD SYSOUT=*
//SYSTERM 			DD SYSOUT=*
//SYSIN 			DD DATA,DLM=##
.LOGON xxx/yyy,zzz
.PACK 10
.QUIET ON
.IMPORT INDICDATA DEFERCOLS=5 TOTCOLS=7 DEFERLIMITS=500,2000 DDNAME=DAT1
.REPEAT *
USING (
	myclob1 CLOB(300) AS DEFERRED,
	myclob2 CLOB(1000) AS DEFERRED,
	myclob3 CLOB(1000) AS DEFERRED,
	myclob4 CLOB(1000) AS DEFERRED,
	myclob5 CLOB(1000) AS DEFERRED,
	mychar VARCHAR(10),
	myint INTEGER)
INSERT INTO my_table VALUES (:myclob1, :myclob2, :myclob3,
				:myclob4, :myclob5, :mychar, :myint);
.IMPORT RESET
.LOGOFF
.QUIT
##

Example 8 – IMPORT

The following shows how to override the RECORDLENGTH command settings with IMPORT's RECORDLENGTH option.

.RECORDLENGTH=MAX64
/* No override; Using 2-byte record lengths */
.IMPORT DATA FILE=test8a.dat
USING (c1 CHAR(8))
INSERT INTO test8_table VALUES (:c1);
.IMPORT RESET

/* Override record length; Using 4-byte record lengths */
.IMPORT DATA RECORDLENGTH=MAX1MB FILE=test8b.dat
USING (c1 CHAR(8))
INSERT INTO test8_table VALUES (:c1);
.IMPORT RESET

A hex-dump of the test8a.dat file might look like the following:

08006161 61616161 61610A            **aaaaaaaa*

The test8b.dat file might contain the same column value, but looks slightly different due to the larger record length field:

08000000 61616161 61616161 0A       ****aaaaaaaa*