Normalization as a Logical Process
It is important to understand that database normalization is a logical process; the term denormalization is therefore a misnomer when it is applied to the physical design of a database rather than to its logical derivation. Large-scale denormalization of the physical database model is not necessary to support an enterprise data warehouse based on Teradata Database. You might find this statement confusing, because most advocates of data warehousing begin with the assumption that a denormalized physical database design is a necessity. The reason for this assumption is that most commercially available database engines cannot deliver the performance required to support massive data warehouses built on normalized physical design schemas. Because of its innately parallel architecture, Teradata Database easily supports multiterabyte databases with a physical design derived directly from a fully normalized logical design (see “Born To Be Parallel” on page 25).
Because database management systems map logical relations directly to physical tables, it can sometimes be difficult to separate the logical model from its physical realization. Nevertheless, you should always design a fully normalized logical model first, and denormalize the physical design only if that is necessary to achieve performance levels that cannot otherwise be realized.
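To make the distinction concrete, consider a hypothetical order-entry schema (the table and column names below are illustrative and do not come from this manual). The normalized physical design maps the logical relations directly to tables; a denormalized physical variant folds customer attributes into the order table to avoid a join:

```sql
-- Normalized physical design, mapped directly from the logical model:
-- each fact is stored once, in the relation where it belongs.
CREATE TABLE customer (
  customer_id   INTEGER NOT NULL PRIMARY KEY,
  customer_name VARCHAR(60) NOT NULL,
  region        VARCHAR(30)
);

CREATE TABLE sales_order (
  order_id     INTEGER NOT NULL PRIMARY KEY,
  customer_id  INTEGER NOT NULL REFERENCES customer(customer_id),
  order_date   DATE NOT NULL,
  order_total  DECIMAL(12,2)
);

-- Denormalized physical variant: customer_name and region are copied
-- into every order row to avoid a join at query time, at the cost of
-- redundant storage and update anomalies.
CREATE TABLE sales_order_denorm (
  order_id      INTEGER NOT NULL PRIMARY KEY,
  customer_id   INTEGER NOT NULL,
  customer_name VARCHAR(60) NOT NULL,
  region        VARCHAR(30),
  order_date    DATE NOT NULL,
  order_total   DECIMAL(12,2)
);
```

The logical model is identical in both cases; only its physical realization differs, which is why denormalization is properly a physical design decision, not a change to the logical model.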
Because of the widespread usage of the term, this manual often refers to denormalization in the context of physical database design. This is done with the understanding that the logical and physical models of a database are independent of one another, that the physical model applies only to implementation, and that the term denormalization strictly applies only to logical modeling. In spite of this, industry usage of the term nearly always refers to the physical design.