Block-Level Compression Usage Notes
When you use BLC, Teradata recommends setting the CREATE TABLE or ALTER TABLE DATABLOCKSIZE option for each affected table to the maximum setting for your system. Specifying the maximum DATABLOCKSIZE yields effective compression with the fewest compressed data blocks and therefore the fewest required compression/decompression operations.
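As a sketch of the recommendation above, the table definition, column list, and table name below are illustrative only; the DATABLOCKSIZE and BLOCKCOMPRESSION options are standard Teradata CREATE TABLE/ALTER TABLE syntax, but verify the exact form and limits against the documentation for your release:

```sql
-- Hypothetical table created with the maximum data block size
-- and block-level compression enabled.
CREATE TABLE sales_history,
    MAXIMUM DATABLOCKSIZE,
    BLOCKCOMPRESSION = MANUAL
    (
        sale_id   INTEGER,
        sale_dt   DATE,
        amount    DECIMAL(18,2)
    )
PRIMARY INDEX (sale_id);

-- For an existing table, raise the block size; IMMEDIATE repacks
-- existing data blocks rather than waiting for subsequent writes.
ALTER TABLE sales_history, MAXIMUM DATABLOCKSIZE IMMEDIATE;
```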
Block-level compression can cause some operations on compressed tables to use considerably more CPU (for example, queries, inserts and updates, archives and restores, and the Reconfiguration and CheckTable utilities). Unless the system is very CPU rich, these operations will impact other workloads and could lengthen elapsed response times.
Use BLC only for large tables, for example, those that, in uncompressed form, are more than 5 times the size of system memory. Although you can use BLC on smaller tables, the CPU cost may outweigh the space benefits, depending on your system load and capability.
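One hedged way to identify candidate tables by the sizing guideline above is to query the DBC.TableSizeV dictionary view, which reports per-AMP permanent space (CurrentPerm) for each table. The 500 GB figure below is a placeholder for your system's actual memory; substitute your own value:

```sql
-- Sketch: find tables whose total permanent space exceeds
-- 5 x system memory (500e9 bytes here is an assumed placeholder).
SELECT DatabaseName,
       TableName,
       SUM(CurrentPerm) AS TotalPermBytes
FROM DBC.TableSizeV
GROUP BY DatabaseName, TableName
HAVING SUM(CurrentPerm) > 5 * 500e9
ORDER BY TotalPermBytes DESC;
```

Note that CurrentPerm reflects space as stored, so for tables already compressed it understates the uncompressed size the guideline refers to.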
BLC can reduce the I/O demand of I/O-intensive DSS queries on compressed tables. This may be useful when CPU is available for the decompression and workload management can hold the I/O-intensive DSS queries to an appropriate level of consumption.
To improve the degree of compression for tables targeted for BLC, Teradata recommends defining these tables to use the maximum supported data block size. For more information on large data blocks, see: Carrie Ballinger, 1 MB Data Blocks, Teradata Database Orange Book 541-0010379A02, 2014.
Note: To restore a Data Stream Architecture archive made on a source system with hardware-based block-level compression, install the driver package for the hardware compression cards, teradata-expressdx, on the target system. Do this even if the target system is not set up for hardware compression, so that it can read the compressed archive.