MULTIFILE - Analytics Database - Teradata Vantage

Database Utilities
Release 17.20, June 2022

The MULTIFILE command instructs the Dump Unload/Load utility (dul) to generate multiple split files of a defined maximum size during an UNLOAD operation, or to read from those split files during a LOAD operation.

Syntax

MULTIFILE { ON [ n ] | OFF }

Syntax Elements

OFF
Multiple split files are not generated as part of an UNLOAD operation. By default, unloaded information is not split into multiple files. Use MULTIFILE OFF if you have previously used MULTIFILE ON and no longer want the unloaded information split into multiple files.
ON
Multiple split files are generated as part of an UNLOAD operation. The split files are numbered FN, FN1, FN2, FN3, and so on. During a LOAD operation, split files are read if they exist with this numbering (FN, FN1, FN2, FN3...).
n
An optional argument that defines the maximum size, in megabytes, of the split files created during an UNLOAD operation. n is an integer from 1 through 2000. If n is not specified with the MULTIFILE ON command, 2000 is assumed by default (a byte-level illustration of these limits follows this list).
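The byte limits that dul reports for a given n can be reproduced with simple arithmetic. The following Python sketch is illustrative only (it is not part of the utility) and assumes binary megabytes, which matches the sizes shown in the example later in this section:

MB = 1024 * 1024  # binary megabyte; matches the sizes dul reports

def max_split_bytes(n_mb=2000):
    """Maximum split-file size in bytes for MULTIFILE ON n (default n is 2000)."""
    if not 1 <= n_mb <= 2000:
        raise ValueError("n must be an integer from 1 through 2000")
    return n_mb * MB

print(max_split_bytes(2))   # 2097152 bytes, as reported for MULTIFILE ON 2
print(max_split_bytes())    # 2097152000 bytes, the default maximum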

Usage Notes

  • The MULTIFILE command is helpful when the unloaded file would otherwise be extremely large. It splits the information into several smaller files, which can be transmitted more easily to the Teradata Support Center.
  • When you use the MULTIFILE ON command, the split files generated during an UNLOAD operation can differ in size, but each is always smaller than the maximum that has been set (either through the n option or the 2000 MB default). A sketch of the resulting file-naming pattern follows this list.
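As a rough illustration (this is not part of dul, which creates and reads the split files itself), the names follow the FN, FN1, FN2... pattern, with any compression suffix kept at the end, as in the multi_unload.gz, multi_unload1.gz, ... names shown in the example below:

def split_file_names(base, count, suffix=".gz"):
    """First file keeps the base name; later files append 1, 2, 3, ... before the suffix."""
    return [f"{base}{'' if i == 0 else i}{suffix}" for i in range(count)]

print(split_file_names("multi_unload", 5))
# ['multi_unload.gz', 'multi_unload1.gz', 'multi_unload2.gz',
#  'multi_unload3.gz', 'multi_unload4.gz']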

Example: Using MULTIFILE to split large amounts of crashdump data

To perform the initial LOAD operation from a single unloaded file:

Dump Unload/Load - Enter your command:
.LOAD crash_20000606_143623_01 FILE = single_unload.gz;
.LOAD crash_20000606_143623_01 FILE = single_unload.gz;
**** Creating table ‘crash_20000606_143623_01’.
**** Table has been created.
**** Loading data into ‘crash_20000606_143623_01’
**** Logging on Amp sessions.
**** Growing Buffer to 4118
**** Starting Row 10000 at Sun Jul 26 22:58:12 2015

**** Starting Row 20000 at Sun Jul 26 22:58:12 2015

**** Starting Row 30000 at Sun Jul 26 22:58:12 2015

**** END LOADING phase...Please stand by...

Loading data into crash_20000606_143623_01 completes successfully.

To perform the UNLOAD operation using MULTIFILE ON 2 (so that each split file's maximum size is 2 MB):

Dump Unload/Load - Enter your command:
.MULTIFILE ON 2
.MULTIFILE ON 2
*** Multi-file flag is ON
Maximum split file size is 2097152

Dump Unload/Load - Enter your command:
.UNLOAD crash_20000606_143623_01 FILE = multi_unload;
.UNLOAD crash_20000606_143623_01 FILE = multi_unload;
*** Logging on Amp sessions.
*** All processors selected ...
*** Returned data consists of 2098 blocks
*** Unloading data from crash_20000606_143623_01
the number of blocks received 1000

the node number is 30720
the instigating node is 33
the time the error occurred is Tue Jun 23 06:23:30 2015

the event is 6649854, severity is 15 and category 0
Severity = (15)
Category = None

the number of blocks received 2000
*** Unloading Crashdumps table completed.
*** The number of blocks unloaded: 2098

Split files after the UNLOAD operation, with sizes in bytes (a quick size check follows the listing):
--------------------------------------------------------------------------------------------------
multi_unload.gz 2070423
multi_unload1.gz 2068977
multi_unload2.gz 2069766
multi_unload3.gz 2072556
multi_unload4.gz 1287700
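
To confirm that every generated split file stays under the configured cap before sending the files to the Teradata Support Center, a check along the following lines can help. This is an illustrative sketch only, run outside dul; the multi_unload*.gz pattern and the 2 MB cap come from the example above:

from pathlib import Path

MAX_BYTES = 2 * 1024 * 1024  # 2097152 bytes, the cap set by MULTIFILE ON 2

for path in sorted(Path(".").glob("multi_unload*.gz")):
    size = path.stat().st_size
    status = "OK" if size <= MAX_BYTES else "TOO LARGE"
    print(f"{path.name:20} {size:>10} {status}")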

To load the data back from the multiple split files:

Dump Unload/Load - Enter your command:
.MULTIFILE ON
.MULTIFILE ON
*** Multi-File flag is ON
Maximum split file size is 2097152000 bytes

Dump Unload/Load - Enter your command:
.LOAD testreload FILE = multi_unload;
.LOAD testreload FILE = multi_unload;

*** Creating table ‘testreload’.
*** Table has been created.
*** Loading data into ‘testreload’
*** Logging on Amp sessions.
*** Growing Buffer to 4118
*** Starting Row 10000 at Sun Jul 26 23:05:05 2015

*** Starting Row 20000 at Sun Jul 26 23:05:06 2015

*** Starting Row 30000 at Sun Jul 26 23:05:07 2015

*** END LOADING phase ... Please stand by ...
Loading data into testreload completes successfully.