USING Request Modifier - Teradata Database

SQL Data Manipulation Language

Product
Teradata Database
Release Number
15.00
Language
English (United States)
Last Update
2018-09-28
dita:id
B035-1146
lifecycle
previous
Product Category
Teradata® Database

USING Request Modifier

Purpose  

Defines one or more variable parameter names used to import data to Teradata Database from a client system or to export data from Teradata Database to a client system. Also specifies how imported or exported LOB data is to be handled.

Syntax  

where:

 

Syntax Element …

Specifies …

using_variable_name

the name of a USING variable that is to be referenced as a parameter in the SQL request associated with the modifier.

Each name specified must be unique.

During processing, each using_variable_name in the SQL request is replaced by a constant value read from the client application.

data_type

the data type of the constant value substituted for using_variable_name.

UDT types are not supported directly. See “USING Support for UDTs” on page 533 for details and information about conversion workarounds.

This can be used to set a default value.

See SQL Data Types and Literals for a list of data types.

data_type_attribute

one of the data type attributes listed below.

The only data type attributes that affect the handling of values imported by USING are the following:

  • CASESPECIFIC
  • NOT CASESPECIFIC
  • UPPERCASE

    The server character set attribute (for example, CHARACTER SET LATIN) cannot be part of data_type_attribute.

    Type attributes are Teradata extensions to the ANSI SQL:2011 standard.

    AS DEFERRED

    that LOB data is to be sent to Teradata Database from the client application when it is elicited, and separately from other row data.

    This option is only valid for LOB data.

    See “USING Support for Large Objects” on page 535 for more information.

    BY NAME

    to interpret the data in the USING row as a client file name. When the system detects the BY NAME option following AS DEFERRED, the following events occur in the order indicated:

    1 Teradata Database sends the file name back to the client.

    2 The client opens the file.

    3 The client sends the data in the file to Teradata Database.

    AS LOCATOR

    that the client application passes a locator for the LOB, generated from a previous SELECT request for the session, to Teradata Database.

    This option is only valid for LOB data.

    See “USING Support for Large Objects” on page 535 for more information.

    SQL_request

    the SQL request with which the USING request modifier is associated.

    This can be a multistatement request, any non-DDL single‑statement request, or an explicit transaction.

    You can pass the value for n in a TOP n operator in an SQL request by specifying a USING‑defined parameter. See “Example 11” on page 549.

    All USING variable names in the SQL request must be preceded by a COLON character.
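    For example, the n value of a TOP n operator might be supplied as a USING parameter like this (a minimal sketch; the table and column names are illustrative):

         USING (n INTEGER)
         SELECT TOP :n *
         FROM orders
         ORDER BY order_total DESC;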

    ANSI Compliance

    USING is a Teradata extension to the ANSI SQL:2011 standard.

    Required Privileges

    None.

    USING Is Not Supported for Embedded SQL

    Embedded SQL does not support the USING request modifier.

    Declare and use input host variables in your embedded SQL applications where you would use the USING request modifier interactively.

    Actions Performed by USING

    USING imports data rows from a client system as follows:

    1 Each value is retrieved from a client system‑resident data source, such as a disk file, that you must specify using a client utility such as BTEQ.

    For example, if you submit a USING‑modified request through BTEQ, you must precede the request with something like the following BTEQ command:

         .IMPORT DATA FILE=c:\temp\t1

    See Basic Teradata Query Reference for documentation on the .IMPORT command.

    2 The values are passed to Teradata Database as a data parcel along with the text of the request. Iterated requests can pack multiple data parcels with the text of a request.

    3 A value is substituted for each parameter name when the modified SQL request is processed.
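    As a sketch, the complete import flow might look like this in BTEQ (the file name, table, and field names are illustrative):

         .IMPORT DATA FILE=c:\temp\emp.dat
         USING (emp_name CHARACTER(30), emp_number INTEGER)
         INSERT INTO employee (:emp_name, :emp_number);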

    You can import data from a client system by preceding any of the following SQL statements with a USING request modifier:

  • DELETE
  • INSERT
  • MERGE
  • SELECT
  • UPDATE

    For exporting Teradata Database data to a client, the process is essentially the reverse of that used for data import.

    You can export data to a client system by using a client utility such as BTEQ: specify the BTEQ .EXPORT command, then precede the SELECT request that retrieves the data from Teradata Database with an appropriately specified USING request modifier.

    See Basic Teradata Query Reference for documentation on the .EXPORT command.
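    As a sketch, a BTEQ export might pair the .EXPORT command with a USING‑modified SELECT (the file name, table, and columns are illustrative, and the USING values themselves still arrive from an imported data source):

         .EXPORT DATA FILE=c:\temp\emp.out
         USING (dept_num INTEGER)
         SELECT emp_name FROM employee WHERE dept = :dept_num;
         .EXPORT RESET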

    USING Rows Size Restriction

    USING rows do not support the 1 MB spool row size.

    USING Variables

    The maximum number of USING variables you can specify in a USING request modifier is 2,550.

    Each USING variable name must be unique in a USING request modifier.

    The USING variable construct (:using_variable_name) is valid anywhere a character, numeric, or byte constant is valid. The constant values in the client system data record must match, item for item, the USING description of those values. Also, the constants in the data record must appear in the order in which they are defined in the USING clause.

    You cannot use a USING variable to specify a column position in a GROUP BY or ORDER BY clause.

    One USING clause can only modify one request. If several requests require USING variables, each request must be associated with its own USING row identifier.

    USING Support for Scalar UDFs

    You can pass USING variables in the arguments of a scalar UDF during invocation (see “Example 14: USING and SQL UDF Invocation” on page 550) when the data types of the USING variables match those of the UDF parameters, whether natively or after being explicitly cast. If the data types do not match, Teradata Database aborts the request and returns an error.

    You can avoid this problem by casting the data types of the USING variables to the data types of the UDF parameters.
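    For example, an explicit cast in the invocation might look like this (a sketch; the UDF name and parameter type are illustrative):

         USING (price_val INTEGER)
         SELECT roundup_udf(CAST(:price_val AS DECIMAL(8,2)));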

    Valid Request Types

    A USING modifier can be associated with one of the following types of requests:

  • Any single‑statement request except a DDL or DCL statement. An iterated request is a special case of this query type and must always have an associated USING modifier.
  • A multistatement request.
  • An explicit transaction. If the modifier is associated with an explicit transaction, the USING keyword must appear as the first keyword in that transaction. If the first statement or request references a USING variable, the USING clause should immediately precede the BEGIN TRANSACTION statement (see “Example 3” on page 547).
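    As a sketch, a USING modifier on an explicit transaction is structured like this (the table is illustrative):

         USING (x INTEGER)
         BEGIN TRANSACTION;
         INSERT INTO tab1 (:x, 1);
         END TRANSACTION;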

    USING and the EXPLAIN Request Modifier

    If you specify both a USING request modifier and an EXPLAIN request modifier (see “EXPLAIN Request Modifier” on page 572) for the same request, the EXPLAIN request modifier must precede the USING request modifier.
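    For example, the two modifiers combine in this order (the table and column are illustrative):

         EXPLAIN
         USING (x INTEGER)
         SELECT * FROM tab1
         WHERE col1 = :x;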

    USING Support for UDTs

    USING does not support UDTs explicitly. Client applications must transfer UDT data to the Teradata platform in its external type form, which is always a predefined data type. You can then use either an implicit or explicit cast to import UDTs by means of the USING request modifier.

    Implicit casts are invoked during assignment operations, including INSERT, UPDATE, and parameter passing operations.

    When both of the following conditions are true, the system automatically converts the predefined data type value from the client to the appropriate UDT value on the Teradata platform by means of an implicit casting operation:

  • A host variable is involved in an assignment operation and is assigned to a UDT target.
  • An implicit cast is defined. The implicit cast can be either of the following:
    • A system‑generated implicit cast.
    • An explicitly created implicit cast.

    See “CREATE CAST” in SQL Data Definition Language for details.

    If no applicable implicit cast has been defined, you must perform explicit conversions using constructor methods, casts, UDFs, and so on, as indicated by the following examples:

  • The following example shows an implicit cast conversion from a client predefined data type to a Teradata platform distinct UDT:

         USING (europe_sales DECIMAL(9,2))
         INSERT INTO tab1 VALUES (:europe_sales, 1);

  • The following example shows an explicit conversion and construction operation using a constructor method:

         USING (street VARCHAR(20), zip CHARACTER(5))
         INSERT INTO tab2 VALUES (NEW address(:street, :zip), 2);

  • The following example shows conversion by means of method invocation:

         USING (price DECIMAL(6,2))
         UPDATE tab1
         SET column1 = CAST(:price AS euro).roundup(0);

    Best practices for UDT design recommend that you duplicate all transform group functionality to enable optimum support for the USING request modifier. You can do this by coding equivalent predefined external data type‑to‑UDT and UDT‑to‑predefined external data type casts. The simplest way to accomplish this is to reference the same routines in the equivalent CREATE CAST and CREATE TRANSFORM statements. For details about how to use these statements, see “CREATE CAST” and “CREATE TRANSFORM” in SQL Data Definition Language.

    If you follow these guidelines, then iterated array processing is also supported. UDTs cannot be referenced for array processing directly in a USING request modifier, but can be populated by using the implicit cast (from predefined data type‑to‑UDT) mechanism.

    Distinct and structured UDTs differ in these requirements:

  • For a distinct UDT, if you plan to use only its system‑generated functionality, no work is required because the transform and implicit casting functionality has already been defined.
  • For a structured UDT, you must define the transform and implicit casting functionality explicitly with CREATE CAST and CREATE TRANSFORM statements, as indicated by the following examples for a UDT named address:
  • The following request creates the transform group:

         CREATE TRANSFORM FOR address client_io (
           TO SQL WITH SPECIFIC FUNCTION SYSUDTLIB.stringToAddress,
           FROM SQL WITH SPECIFIC METHOD toString);

  • The following request creates an implicit cast from VARCHAR(100) to address that duplicates the tosql functionality of the client_io transform group:

         CREATE CAST ( VARCHAR(100) AS address )
           WITH SPECIFIC FUNCTION SYSUDTLIB.stringToAddress
           AS ASSIGNMENT;

  • The following request creates an implicit cast from address to VARCHAR(100) that duplicates the fromsql functionality of the client_io transform group:

         CREATE CAST (address AS VARCHAR(100))
           WITH SPECIFIC METHOD ToString AS ASSIGNMENT;

    USING Support for Large Objects

    The USING request modifier supports passing LOBs to the Teradata platform. The documentation for each Teradata Database‑supported API specifies how this functionality is presented to the application.

    You can specify three different modes for handling large objects by the USING request modifier:

  • Inline
  • Deferred
  • Locator

    Deferred and locator modes defer the transfer of data between client and server, transmitting non‑LOB data separately from LOB data. The appropriate deferred mode depends on the application. Neither deferred nor locator mode passes the entire LOB field with the non‑LOB data from a row.

    Inline Mode

    With inline mode, an entire LOB field is passed along with the other fields in the row. Inline mode is the default.

    Deferred Mode

    Portions of the LOB are passed sequentially by the client application to Teradata Database until the entire LOB has been transmitted.

    Deferred mode means that the system passes a 4-byte integer token identifying the LOB with the non‑LOB data in the initial request. Then while in MultiPart Request mode, Teradata Database elicits the LOB data from the client with an ElicitData parcel, providing the token to identify the particular LOB data it wants. A maximum‑sized LOB can be passed in this way without it being necessary to know the LOB data size in advance.

    However, simple deferred mode does not work well with client utilities that might not know the location of the LOB data on the client. Because of this, it is best to use the BY NAME phrase whenever you specify the AS DEFERRED option.

    When you specify the AS DEFERRED BY NAME option, the LOB data is represented in the data sent with the request as a VARCHAR value limited to 1,024 bytes in length. Teradata Database processes AS DEFERRED BY NAME in the same manner as a simple AS DEFERRED, except that when the AMP wants to elicit the LOB data from the client, it uses an ElicitDataByName parcel, which identifies the LOB by the name the client passed in the initial request, rather than an ElicitData parcel, which identifies the LOB by the token originally passed by the client.

    Locator Mode

    A locator passes a LOB value reference from a client to the server application or user‑defined function without passing the LOB value itself. Its purpose is to minimize message traffic, enhancing system performance in the process. Queries that request LOB data are serviced with intermediate results that contain locators to LOB data, not the LOB data itself.

    When you use a locator as a parameter in a subsequent request, the result is exactly as if the associated LOB value had been used directly. Locators can be used in almost any context where a LOB value is valid.

    Locators are instantiated in the result set of an SQL query and bound to an application program or UDF variable by an application‑specific method.

    Locators exist for a limited time within the session in which they are created. Their precise life span depends on the specific application.

    LOB Transfer Mode Comparison

    The following table lists what the system passes in the various LOB modes:

     

    IN this mode …

    The data passed is …

    Inline

    the entire LOB along with all non‑LOB data.

    This is limited by the maximum request size of 1 MB, but even if that limit is not exceeded, the size of the transmitted LOB cannot exceed 64 Kbytes.

    Deferred

    a 64 Kbyte portion of the LOB.

    64 Kbyte portions of the LOB are passed sequentially by the client application to Teradata Database until the entire LOB has been transmitted.

    Consult the API manual for your application to determine its parcel size limit for a deferred LOB. Some APIs support parcels as large as 1 MB.

    Locator

    a value reference to the LOB on the Teradata platform.

    Locators are generated by a previous SELECT request within the current session and passed to the client application, which later passes them back to the Teradata platform in a subsequent CLIv2 request.

    The following table summarizes the uses for each mode:

     

    Mode

    Description

    Inline

    Also known as non‑deferred mode.

    Generally useful for small LOBs only because the entire LOB is transferred along with non‑LOB data when transmitted to or from Teradata Database.

    Inline mode transfers LOBs in the same way it transfers other field values. This mode does not require a special clause in the USING text.

    You can use inline mode to transfer multiple rows.

    The maximum total size of the request, when transmitted from a client application to Teradata Database, is 1 MB, which is the system maximum request size. However, the size of the transmitted LOB cannot exceed 64 Kbytes.

    You cannot transmit more than 64 Kbytes of data from a client application to the Teradata platform because that is the maximum row length for Teradata Database.

    There is no restriction on the size of LOB data that can be transmitted from Teradata Database to a client application.

    Deferred

    Specify this mode with a required AS DEFERRED clause.

    Transfers LOBs sequentially from client‑to‑server in 64 Kbyte fragments. After Teradata Database receives each LOB fragment, it sends an acknowledgement to the application, which then either sends the next LOB fragment or, if all the data has been sent, terminates the request.

    You can only transmit single rows in deferred mode.

    There is no limit on the size of the request.

    In this mode, the Teradata platform requests the application to transmit the LOB after validating, but before completing, the SQL request. The Teradata platform returns a failure if an SQL request that contains a deferred LOB is performed on every AMP in the system (usually, but not always, excluding single AMP systems). Note that this refers only to all-AMP requests. The Teradata platform also prevents overlapping deferred LOB requests within one query.

    If there are multiple LOBs identified as being transferred in deferred mode, the Teradata platform might not request them in the order in which they are specified in the USING clause.

    You can append a BY NAME option to an AS DEFERRED clause, which provides a smoother interface for handling deferred data.

    Locator

    Specify this mode with a required AS LOCATOR clause.

    Transfers non‑LOB data plus a locator to the Teradata platform‑resident LOB data from client-to-server.

    You can transmit multiple rows in locator mode.

    Once instantiated, a locator can be used by other requests made within the same session.

    Client Application Restrictions on USING With Large Objects

    The following client applications support USING with large objects with no restrictions:

  • CLIv2
  • Archive/Recovery

    The following client applications support USING with large objects with the noted restrictions:

     

    Application

    Restrictions

    BTEQ

    Inline mode only.

    Teradata Parallel Transporter

    Deferred and Locator modes for these operators only:

  • DataConnector
  • SQL Inserter
  • SQL Selector

    The following Teradata Tools and Utilities applications do not support USING with large objects:

  • FastLoad
  • FastExport
  • MultiLoad
  • Embedded SQL

    Teradata Database stored procedures do not support USING with large objects.

    Consult the appropriate documentation for additional restrictions on LOB use with Teradata features.

    USING and DateTime System Functions

    In non-ANSI Teradata Database applications, when you access the system functions DATE or TIME, the values are obtained directly from the operating system and placed in a USING row for access by the query being performed. In this situation, the value for DATE is stored with type DATE and the value for TIME is stored with type REAL. These values are stored in known fixed fields in the USING row.

    The ANSI system functions CURRENT_DATE, CURRENT_TIME, and CURRENT_TIMESTAMP behave differently:

  • CURRENT_DATE is stored in the field previously used exclusively for DATE.
  • CURRENT_TIME is not stored in the field previously used for TIME because both its storage format and its content are different and are not interchangeable with those of TIME. Another system-value field is added to the USING row. Notice that CURRENT_TIME includes TIME ZONE data in addition to the simpler TIME data, so the differences between the two extend beyond storage format.
  • CURRENT_TIMESTAMP is also stored as an additional separate field in the USING row.

    Both the legacy Teradata Database DATE and TIME functionality and the ANSI DateTime functionality are supported for the USING request modifier.

    ANSI DateTime Considerations

    Other than DATE values in INTEGERDATE mode, external values for DateTime and Interval data are expressed as fixed length CharFix character strings (in logical characters) in the designated client character set for the session.

    The type provided in a USING request modifier for any client data to be used as a DateTime value can be defined either as the appropriate DateTime type or as CHARACTER(n), where n is the correct length for the external form of the DateTime or Interval data type and the character string is used internally to represent a DateTime or Interval value.

    When a client creates USING data without being aware of the ANSI DateTime data types, those fields are typed as CHARACTER(n). Then when those USING values appear in the assignment lists for INSERTs or UPDATEs, the field names from the USING phrase can be used directly.

    An example follows:

         USING (TimeVal  CHARACTER(11), 
                NumVal   INTEGER,
                TextVal  CHARACTER(5)) 
         INSERT INTO TABLE_1 (:TimeVal, :NumVal, :TextVal);

    When you import ANSI DateTime values with a USING request modifier and the values are to be used for actions other than an INSERT or UPDATE, you must explicitly CAST them from the external character format to the proper ANSI DateTime type.

    An example follows:

         USING (TimeVal CHARACTER(11), 
                NumVal INTEGER)
         UPDATE TABLE_1 
         SET TimeField=:TimeVal, NumField=:NumVal
         WHERE CAST(:TimeVal AS TIME(2)) > TimeField;

    While you can use TimeVal CHARACTER(11) directly for assignment in this USING request modifier, you must explicitly CAST it as TIME(2) in order to compare it with the field TimeField in the table, because TimeField is an ANSI TIME column defined as TIME(2).

    You can use both DateTime and Interval declarations to allow a USING request modifier to directly indicate that an external character string value is to be treated as a DateTime or Interval value. To import such values, you import their character strings directly into the USING request modifier.

    If you move values into a USING request modifier and the character string data cannot be converted into valid internal DateTime or Interval values as indicated by their type definitions, then the system returns an error to the application.

    Example of ANSI DateTime and Interval With USING

    The following USING request modifier expects to see an 11 character field containing a string such as '21:12:57.24'.

         USING ( TimeVal TIME(2), ... )

    You could import the same value using the following USING request modifier; however, it is unclear that the data is a time value with two decimal places:

         USING ( TimeVal CHARACTER(11), ... )

    ANSI DateTime and Parameterized Requests

    A CLIv2 StatementInfo parcel is used in both “Prepare then Execute” and “Parameterized Query” CLI modes to describe fields for export and import and to build the equivalent of a USING request modifier without placing one in the request text.

    In this mode of operation, the system observes the following rules:

  • DateTime and Interval exported values are seen by the client as CharFix data, with the exception of DATE values when the session is in INTEGERDATE mode.
  • Imported values that are to be DateTime or Interval values are seen by the system as CharFix values, with the exception of DATE values when the session is in INTEGERDATE mode.
  • When used in an INSERT or UPDATE request, CharFix values can be implicitly cast to the appropriate DateTime or Interval value. In this case, the system ensures that the value being assigned is CharFix and that it has the right length to represent the DateTime or Interval value to which it is being assigned.

    The system casts the value and accepts the result if the contents are a valid representation of a value of the indicated data type. In other cases, you must explicitly cast the USING parameter as the appropriate DateTime or Interval data type.

    Array Considerations for Specifying TOP n as a USING Parameter

    You cannot specify the n value of a TOP n specification as a USING parameter for arrays. If you attempt to do so, the request aborts and returns an error to the requestor. See “Example 12: Non-Support for Iterated Requests With TOP n” on page 550 for an example.

    Character String Definitions in a USING Request Modifier

    Default case specificity for character string comparisons depends on the mode defined for the current session.

     

    IF the session is in this mode …

    THEN the values default to this declaration for string comparisons …

    ANSI

    CASESPECIFIC

    Teradata

    NOT CASESPECIFIC

    You can add the explicit attribute NOT CASESPECIFIC to the definition of a CHARACTER or VARCHAR field in the USING phrase to override the default.

    The purpose of the NOT CASESPECIFIC attribute is to ease the transition to an ANSI session mode of operation. You should instead use the ANSI SQL:2011‑compliant UPPER function to perform case‑blind comparisons.
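    For example, a case‑blind comparison using UPPER might look like this (the table and column are illustrative):

         USING (name VARCHAR(30))
         SELECT *
         FROM employee
         WHERE UPPER(last_name) = UPPER(:name);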

    data_type Considerations for a Japanese Character Site

    Be very careful with Japanese character data because the information in the data parcel is sent in client form-of-use. Consider the following example, which illustrates the issues involved.

         USING (emp_name   CHARACTER(40) UPPERCASE, 
                emp_number INTEGER)
         INSERT INTO employee (:emp_name, :emp_number);

    This request specifies that the data parcel contains 40 bytes of CHARACTER data for emp_name and four bytes of INTEGER data for emp_number.

    Because the data_type describes the layout of the data parcel in terms of client form-of-use, this means that CHARACTER(40) indicates 40 bytes of CHARACTER information rather than 40 characters.

    For single‑byte character external data sets such as ASCII and EBCDIC, byte counts and character counts are identical.

    For character sets containing mixed single‑byte and multibyte characters (such as KanjiEUC), or only 2-byte characters, however, the numbers of bytes and numbers of characters are not identical.

    Notice that the GRAPHIC and VARGRAPHIC data types have character counts that are always half the number of bytes.

    For Kanji character sets containing only GRAPHIC multibyte characters, such as KanjiEBCDIC, character and byte units are identical.

    You can specify GRAPHIC USING data by specifying a [VAR]CHAR(n) CHARACTER SET GRAPHIC attribute for the data_type. This is identical to specifying the data_type as [VAR]GRAPHIC(n).

    There is no equivalent extension to CHARACTER SET syntax to account for CHARACTER data, because the data is in client rather than internal form‑of‑use.

    Non-GRAPHIC Character Data Representation

    Non-GRAPHIC CHARACTER(n) and VARCHAR(n) data types in a USING request modifier are defined in terms of bytes, where the maximum value of n is 64K.

    In all other SQL declarations, such as a column definition within a CREATE TABLE statement, non-GRAPHIC CHARACTER(n) and VARCHAR(n) data types are defined as follows:

     

    Server character set    Defined in terms of    Maximum value of n

    Latin                   characters             64K

    Unicode                 characters             32K

    Kanji1                  bytes                  64K

    Kanjisjis               bytes                  32K

    Caution:

    In accordance with Teradata internationalization plans, KANJI1 support is deprecated and is to be discontinued in the near future. KANJI1 is not allowed as a default character set; the system changes the KANJI1 default character set to the UNICODE character set. Creation of new KANJI1 objects is highly restricted. Although many KANJI1 queries and applications may continue to operate, sites using KANJI1 should convert to another character set as soon as possible.

    For details on the process of importing character data with a USING request modifier, see “Character Data Import Process” on page 545.

    Character String Assignment and GRAPHIC Columns

    The following table describes the behavior of character strings assigned to GRAPHIC and non-GRAPHIC columns.

    Under no circumstances can you import non-GRAPHIC data into character columns typed as GRAPHIC, nor can you import GRAPHIC data into character columns that are not typed as GRAPHIC.

     

    IF USING describes a character string for assignment to a column of this type …

    THEN it must describe the USING string as this type …

    GRAPHIC

  • GRAPHIC(n)
  • VARGRAPHIC(n)
  • CHARACTER(n) CHARACTER SET GRAPHIC
  • VARCHAR(n) CHARACTER SET GRAPHIC

    LATIN, UNICODE, KANJISJIS, or KANJI1

  • CHARACTER(n)
  • VARCHAR(n)

    CHARACTER and GRAPHIC Server Character Set Limitations for INSERT Operations

    This table describes the data limits on INSERT operations for various CHARACTER and GRAPHIC server character sets.

     

    The table columns are: Client Character Set Type, USING Data Type, Maximum Size for USING Data, Server Character Set, and Maximum Column Size for INSERT (Characters).

    Single-byte client character sets, USING data type CHARACTER, maximum size for USING data 64,000 bytes:

  • LATIN: maximum column size 64,000.
  • KANJI1: maximum column size 64,000.
  • UNICODE: maximum column size 32,000. If the USING data row contains more than 32K logical characters (or more than 64K bytes when mixed single‑byte-multibyte strings are involved), the characters that exceed that limit are truncated. Note that this includes Unicode data in both UTF-8 and UTF-16 sessions.
  • KANJISJIS: maximum column size 32,000. If the USING data row contains more than 32K logical characters (or more than 64K bytes when mixed single‑byte-multibyte strings are involved), the characters that exceed that limit are truncated.

    Multibyte client character sets, USING data type CHARACTER, maximum size for USING data 64,000 bytes:

  • LATIN: maximum column size 32,000. If the USING data row contains more than 32K logical characters (or more than 64K bytes when mixed single‑byte-multibyte strings are involved), the system rejects the row. Although a Latin field can be defined as 64K, you cannot insert more than 32K multibyte characters into it because Teradata Database uses Unicode internally, and Unicode has a 32K limit. The load utilities are similarly limited.
  • KANJI1: maximum column size 64,000.
  • UNICODE: maximum column size 32,000. If the USING data row contains more than 32K logical characters (or more than 64K bytes when mixed single‑byte-multibyte strings are involved), the characters that exceed that limit are truncated. Note that this includes Unicode data in both UTF-8 and UTF-16 sessions.
  • KANJISJIS: maximum column size 32,000. If the USING data row contains more than 32K logical characters (or more than 64K bytes when mixed single‑byte-multibyte strings are involved), the characters that exceed that limit are truncated.

    Multibyte client character sets, USING data type GRAPHIC, maximum size for USING data 32,000 characters:

  • GRAPHIC: maximum column size 32,000.
  • LATIN: if the USING data row contains more than 32K logical characters (or more than 64K bytes when mixed single‑byte-multibyte strings are involved), the system rejects the row. Although a Latin field can be defined as 64K, you cannot insert more than 32K multibyte characters into it because Teradata Database uses Unicode internally, and Unicode has a 32K limit. The load utilities are similarly limited.
  • KANJI1
  • UNICODE: if the USING data row contains more than 32K logical characters (or more than 64K bytes when mixed single‑byte-multibyte strings are involved), the characters that exceed that limit are truncated. Note that this includes Unicode data in both UTF-8 and UTF-16 sessions.
  • KANJISJIS: if the USING data row contains more than 32K logical characters (or more than 64K bytes when mixed single‑byte-multibyte strings are involved), the characters that exceed that limit are truncated.

    Character Data Import Process

    Consider the following table definition:

         CREATE TABLE table_1 (
           cunicode  CHARACTER(10) CHARACTER SET UNICODE,
           clatin    CHARACTER(10) CHARACTER SET LATIN,
           csjis     CHARACTER(10) CHARACTER SET KANJISJIS,
           cgraphic  CHARACTER(10) CHARACTER SET GRAPHIC,
           cgraphic2 CHARACTER(10) CHARACTER SET GRAPHIC);

    Suppose the session character set is KanjiShift-JIS and you submit the following request:

         USING (
          cunicode  (CHARACTER(10)),
          clatin    (CHARACTER(10)),
          csjis     (CHARACTER(10)),
          cgraphic  (GRAPHIC(10)),
          cgraphic2 (CHARACTER(10) CHARACTER SET GRAPHIC))
         INSERT INTO table_1(:cunicode, :clatin, :csjis, :cgraphic,
                             :cgraphic2);

    The USING request modifier indicates that the data parcel contains the following:

  • 10 KanjiShift-JIS bytes for the Unicode column
  • 10 bytes for the Latin column
  • 10 bytes for the KanjiSJIS column
  • 20 bytes for the first GRAPHIC column
  • 20 bytes for the second GRAPHIC column
    Individual fields in the data parcel are converted from the client form-of-use to the server form-of-use. After conversion, the first three fields are treated as Unicode literals and the last two fields are treated as GRAPHIC literals (see SQL Data Types and Literals).

    The column data is then converted to the target fields using implicit translation according to the rules listed in SQL Functions, Operators, Expressions, and Predicates.

    The conversion process is described by the following table.

     Stage   Bytes Converted to       Treated as This    Converted to This      Stored in
             Server Form-of-Use       Kind of Literal    Character Data Type    This Column
     -----   ----------------------   ----------------   --------------------   -----------
     1       first 10 (1 - 10)        UNICODE            none                   cunicode
     2       next 10 (11 - 20)        UNICODE            LATIN                  clatin
     3       next 10 (21 - 30)        UNICODE            KANJISJIS              csjis
     4       next 20 (31 - 50)        GRAPHIC            GRAPHIC                cgraphic
     5       last 20 (51 - 70)        GRAPHIC            GRAPHIC                cgraphic2

    Note that the meaning of the USING clause is independent of the session character set.
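    The byte positions in the stages above follow directly from the USING declaration: three 10-byte CHARACTER fields followed by two 20-byte GRAPHIC fields (two bytes per GRAPHIC character). A minimal Python sketch of that offset arithmetic (the helper name is hypothetical):

```python
# Field widths follow the USING declaration in the example above:
# three CHARACTER(10) fields (10 bytes each in this parcel) and
# two GRAPHIC(10) fields (2 bytes per character, so 20 bytes each).
fields = [
    ("cunicode", 10),
    ("clatin", 10),
    ("csjis", 10),
    ("cgraphic", 20),
    ("cgraphic2", 20),
]

def parcel_layout(field_widths):
    """Return {name: (first_byte, last_byte)} using 1-based positions."""
    layout, pos = {}, 1
    for name, width in field_widths:
        layout[name] = (pos, pos + width - 1)
        pos += width
    return layout

print(parcel_layout(fields)["cgraphic"])  # (31, 50)
```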

    UPPERCASE Option and Character Parameter Definition in USING

    When you specify UPPERCASE for a character parameter in a USING request modifier, it applies only to the single‑byte characters of any mixed single‑byte‑multibyte character set.

    To ensure uniform formatting as uppercase, do one of the following:

  • Define the column with the UPPERCASE attribute in CREATE TABLE or ALTER TABLE (see “ALTER TABLE” and “CREATE TABLE” in SQL Data Definition Language and SQL Data Types and Literals for details).
  • Use the UPPER function to convert the lowercase characters to uppercase (see SQL Functions, Operators, Expressions, and Predicates).
    For related information, see “Character Data Import Process” on page 545.
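    As a rough illustration of the rule that UPPERCASE affects only single-byte characters, the following Python sketch uppercases only characters that encode to one byte and leaves multibyte characters unchanged (the helper name is hypothetical, and UTF-8 stands in for the session character set):

```python
def uppercase_single_byte(s: str) -> str:
    """Uppercase only single-byte characters, leaving multibyte
    characters untouched, mirroring how UPPERCASE applies only to the
    single-byte characters of a mixed single-byte/multibyte set."""
    return "".join(
        c.upper() if len(c.encode("utf-8")) == 1 else c
        for c in s
    )

print(uppercase_single_byte("abc日本語def"))  # ABC日本語DEF
```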

    Example  

    In this example, the USING request modifier establishes three variable parameters whose constant values are used both for data input and as WHERE clause predicates in a multistatement request:

         .SET RECORDMODE ON
         .IMPORT DATA FILE = r13sales.dat;
     
         USING (var_1 CHARACTER, var_2 CHARACTER, var_3 CHARACTER)
         INSERT INTO testtabu (c1) VALUES (:var_1)
        ;INSERT INTO testtabu (c1) VALUES (:var_2) 
        ;INSERT INTO testtabu (c1) VALUES (:var_3) 
        ;UPDATE testtabu 
           SET c2 = c1 + 1 
           WHERE c1 = :var_1 
        ;UPDATE testtabu 
           SET c2 = c1 + 1 
           WHERE c1 = :var_2 
        ;UPDATE testtabu 
           SET c2 = c1 + 1 
           WHERE c1 = :var_3;

    Example  

    In this example, the USING request modifier defines the variables :emp_name and :emp_number as a CHARACTER constant and an INTEGER numeric constant, respectively. The USING variables are replaced by values from a client system data record when the system processes the accompanying INSERT request.

         .SET RECORDMODE ON
         .IMPORT DATA FILE = r13sales.dat;
     
         USING (emp_name   CHARACTER(40), 
                emp_number INTEGER) 
         INSERT INTO employee (name, empno)
         VALUES (:emp_name, :emp_number);

    The INSERT request (in Record Mode on an IBM mainframe) is transmitted to Teradata Database with an appended 44‑byte data record consisting of a 40‑byte EBCDIC character string followed by a 32‑bit integer.
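    A sketch of how a client might assemble that 44-byte record in Python; the employee values are hypothetical, and cp500 is assumed as the EBCDIC code page (the actual code page depends on the mainframe environment):

```python
import struct

def build_record(emp_name: str, emp_number: int) -> bytes:
    """Build the 44-byte record: a 40-byte, blank-padded EBCDIC
    character string followed by a big-endian 32-bit integer."""
    name_field = emp_name.ljust(40).encode("cp500")  # EBCDIC encoding
    return name_field + struct.pack(">i", emp_number)

rec = build_record("SMITH J", 10024)
print(len(rec))  # 44
```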

    Example  

    In this example, the USING request modifier defines a variable parameter for use with an explicit transaction that reads character strings from a disk file and inserts them in signed zoned decimal format.

    The USING request modifier precedes the BEGIN TRANSACTION statement, while the BEGIN TRANSACTION statement and the request associated with the USING clause are entered as one multistatement request.

         .SET RECORDMODE ON
         .IMPORT DATA FILE = r13sales.dat;
     
         USING (zonedec CHARACTER(4)) 
         BEGIN TRANSACTION 
        ;INSERT INTO dectest
            (colz = :zonedec (DECIMAL(4), FORMAT '9999S')) ;
     
         USING (zonedec CHARACTER(4)) 
         INSERT INTO dectest
            (colz = :zonedec (DECIMAL(4), FORMAT '9999S')) ;
     
         USING (zonedec CHARACTER(4)) 
         INSERT INTO dectest
            (colz = :zonedec (DECIMAL(4), FORMAT '9999S')) ;
         END TRANSACTION;
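    The FORMAT '9999S' phrase expects a signed zoned decimal string in which the final character carries the sign as an overpunch. A minimal Python sketch of that encoding, assuming the common EBCDIC overpunch convention ({ and A-I for +0 through +9, } and J-R for -0 through -9):

```python
def to_zoned(value: int, width: int = 4) -> str:
    """Encode an integer as a signed zoned decimal string of the given
    width, with the sign overpunched into the last digit:
    { A-I for +0..+9, } J-R for -0..-9."""
    digits = str(abs(value)).rjust(width, "0")
    last = int(digits[-1])
    overpunch = ("{ABCDEFGHI" if value >= 0 else "}JKLMNOPQR")[last]
    return digits[:-1] + overpunch

print(to_zoned(1234))   # 123D
print(to_zoned(-1234))  # 123M
```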

    In BTEQ applications, you can combine USING request modifiers with the .REPEAT command to perform multiple insertions automatically.

    Example : Inline Mode Processing of a Large Object

    The following example passes the BLOB values for column b inline:

         .SET INDICDATA ON
         .IMPORT DATA FILE=mar08sales.dat
     
         USING (a INTEGER, 
                b BLOB(60000)) 
         INSERT INTO mytable VALUES (:a, :b);

    Example : Deferred Mode Processing of a Large Object

    The following example passes the CLOB values for column b in deferred chunks:

         .SET INDICDATA ON
         .IMPORT DATA FILE=mar08sales.dat
     
         USING (a INTEGER, 
                b CLOB AS DEFERRED) 
         INSERT INTO mytable VALUES (:a, :b);

    Example : Deferred Mode Processing of CLOBs Using the DEFERRED BY NAME Phrase

    The following example shows the use of the BY NAME phrase for deferred mode processing of large objects using BTEQ and CLOBs.

         .SET INDICDATA ON
     
         CREATE TABLE tabf (
           i1 INTEGER, 
           v1 VARCHAR(256));
     
         INSERT INTO tabf (1, 'c:\temp\vconfig.txt');
     
         .EXPORT INDICDATA FILE=scg0720.dat
     
         SELECT * 
         FROM tabf;
     
         .EXPORT reset
     
         CREATE TABLE tabc (
         i1 INTEGER, 
         c1 CLOB);
     
         .IMPORT INDICDATA FILE=scg0720.dat
     
         USING (a INTEGER, 
                b CLOB AS DEFERRED BY NAME) 
         INSERT INTO tabc (:a, :b);

    Example : Deferred Mode Processing of BLOBs Using the DEFERRED BY NAME Phrase

    The following example shows the use of the BY NAME phrase for deferred mode processing of large objects using BTEQ and BLOBs.

         CREATE TABLE tabf2 (
           i1 INTEGER, 
           v1 VARCHAR(256));
     
         INSERT INTO tabf2 (1, 'c:\temp\data.dat');
     
         .SET INDICDATA ON
         .EXPORT INDICDATA FILE=scg0720.dat
     
         SELECT * 
         FROM tabf2;
     
         .EXPORT RESET
     
         CREATE TABLE tabb (
           i1 INTEGER, 
           c1 BLOB);
     
         .IMPORT INDICDATA FILE=scg0720.dat
     
         USING (a INTEGER, 
                b BLOB AS DEFERRED BY NAME) 
         INSERT INTO tabb (:a, :b);

    Example : Locator Mode Processing of a Large Object

    The first example shows how locator b copies an existing, Teradata platform-resident BLOB from its current base table location into a base table named mytable without transferring any data to the client.

         USING (a INTEGER, 
                b BLOB AS LOCATOR) 
         INSERT INTO mytable VALUES (:a, :b);

    The second example shows the BLOB data identified by locator b being returned to the client.

         .SET INDICDATA ON
         .EXPORT INDICDATA FILE=udbsales.dat
     
         USING (b BLOB AS LOCATOR) 
         SELECT :b;
     
         .EXPORT RESET

    Example : Using a Locator Multiple Times Within a Session

    This example shows the same locator being used in more than one request within a session:

         USING (a CLOB AS LOCATOR)
         SELECT :a;
     
         USING (a INTEGER, 
                b CLOB AS LOCATOR)
         INSERT INTO tab2 (:a, :b);

    Example : Iterated Requests

    The following example shows one way of performing an iterated request in BTEQ:

         .IMPORT DATA FILE = r13sales.dat;
         .REPEAT RECS 200 PACK 100
     
         USING (pid INTEGER, pname CHAR(12))
         INSERT INTO ptable VALUES(:pid, :pname);

    The .REPEAT command specifies that BTEQ should read up to 200 data records and pack a maximum of 100 data records with each request.
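    The RECS and PACK interaction can be sketched as a simple batching computation in Python (the function is hypothetical; BTEQ performs the actual packing):

```python
def pack_requests(records, max_recs=200, pack=100):
    """Take up to max_recs records and group them into batches of at
    most pack records each, mirroring .REPEAT RECS 200 PACK 100."""
    taken = records[:max_recs]
    return [taken[i:i + pack] for i in range(0, len(taken), pack)]

batches = pack_requests(list(range(250)))
print([len(b) for b in batches])  # [100, 100]
```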

    Example  

    This example passes the value of n for the TOP n operator in its SELECT request through the INTEGER parameter a defined in the USING request modifier.

         .SET RECORDMODE ON
         .IMPORT DATA FILE=employee.dat
     
         USING (a INTEGER) 
         SELECT TOP :a * 
         FROM employee;

    Example : Non-Support for Iterated Requests With TOP n

    The following request shows that a TOP n value cannot be specified as a parameterized variable in a USING request modifier for iterated (array) requests.

         .REPEAT 1 PACK 5
         BTEQ -- Enter your DBC/SQL request or BTEQ command:
     
         USING (a INTEGER, b INTEGER, c INTEGER)
         SELECT TOP :a * 
         FROM t1;
     
         *** Starting Row 0 at Tue Aug 05 11:46:07 2008
         *** Failure 6906 Iterated request: Disallowed statement type (TOP N).
                       Statement# 1, Info = 0
         *** Total elapsed time was 1 second.

    Example : Dynamic UDT Expressions

    The following example shows one way to use a dynamic UDT expression in a table UDF with a USING request modifier:

         .IMPORT INDICDATA FILE lobudt003.data
     
         USING(p1 INTEGER, p2 INTEGER, p3 BLOB)
         SELECT * 
         FROM TABLE(dyntbf003(:p2, 
                    NEW VARIANT_TYPE(:p2 AS a,:p3 AS b))) AS t1 
         ORDER BY 1;
     
          *** Query completed. 2 rows found. 3 columns returned.
          *** Total elapsed time was 1 second.
     
                R1         R3                                          R2
         ---------    -------    ----------------------------------------
                10          0    1111111111111111111111111111111111111110
                11          1    1111111111111111111111111111111111111110

    Example : USING and SQL UDF Invocation

    This example invokes the SQL UDF value_expression in a USING clause‑based SELECT request.

         USING (a INTEGER, b INTEGER)
         SELECT test.value_expression(:a, :b) AS cve 
         FROM t1 
         WHERE t1.a1 = :a 
         AND   t1.b1 = :b;

    This example invokes the SQL UDF value_expression in a USING clause‑based DELETE request.

         USING (a INTEGER, b INTEGER)
         DELETE FROM t1 
         WHERE test.value_expression(:a, :b) > t1.a1;

    This example, which invokes the SQL UDF value_expression in a USING clause‑based SELECT request with mismatches of the data types of the arguments, aborts and returns an error.

         USING (a CHARACTER(10), b CHARACTER(10))
         SELECT test.value_expression(:a, :b) AS cve
         FROM t1 
         WHERE t1.a1 = :a 
         AND   t1.b1 = :b;

    Related Topics

  • Basic Teradata Query Reference
  • Teradata SQL Assistant for Microsoft Windows User Guide