The goal of common factor analysis, also called classical factor analysis, is to account for the maximum amount of covariance or correlation in the original input variables by means of the new factors. In the common factor model, each of the original input variables is expressed in terms of hypothetical common factors plus a unique factor that accounts for the remaining variance in that variable. The user must specify the desired number of common factors to look for in the model. This type of model represents factor analysis in the fullest sense. Teradata Warehouse Miner offers maximum likelihood factors (MLF) for estimating common factors, using expectation maximization (EM) as the method to determine the maximum likelihood solution.
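Teradata Warehouse Miner performs this estimation internally. Purely as an illustration of the idea, the sketch below fits the common factor model x = Lz + e (factors z with identity covariance, unique variances psi on the diagonal) by EM in NumPy. The function name `fa_em` and all numerical choices are hypothetical and are not the product's implementation.

```python
import numpy as np

def fa_em(X, k, n_iter=300):
    """Estimate loadings L (p x k) and unique variances psi (length p)
    for the common factor model x = L z + e by EM iteration."""
    n, p = X.shape
    X = X - X.mean(axis=0)            # work with centered data
    S = X.T @ X / n                   # sample covariance matrix
    L = np.linalg.svd(S)[0][:, :k]    # crude initial loadings
    psi = np.full(p, 0.5)             # initial unique variances
    for _ in range(n_iter):
        # E-step: posterior moments of the factors given current L, psi
        G = np.linalg.inv(np.eye(k) + (L.T / psi) @ L)  # posterior covariance
        B = G @ (L.T / psi)           # E[z | x] = B x
        Ez = X @ B.T                  # n x k posterior factor means
        Ezz = n * G + Ez.T @ Ez       # sum over rows of E[z z' | x]
        # M-step: update loadings and unique variances
        L = (X.T @ Ez) @ np.linalg.inv(Ezz)
        psi = np.clip(np.diag(S - L @ (Ez.T @ X) / n), 1e-6, None)
    return L, psi

# Demo on synthetic data drawn from a 2-common-factor model
rng = np.random.default_rng(0)
true_L = rng.normal(size=(6, 2))
X = rng.normal(size=(1000, 2)) @ true_L.T + rng.normal(scale=0.5, size=(1000, 6))
L_hat, psi_hat = fa_em(X, 2)
```

The iterative character of the procedure is visible here: there is no closed-form solution, so the loadings and unique variances are refined repeatedly until they stabilize.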
A potential benefit of common factor analysis is that it may reduce the original set of variables into fewer factors than principal components analysis would. It may also produce new variables that have more fundamental meaning. A drawback is that, because the common factor model has no unique solution, the factors can only be estimated with iterative techniques that require more computation. The same is true of common factor scores, which must likewise be estimated.
As with principal components and principal axis factors, the derived factors are orthogonal or independent of each other, but in this case by design (Teradata Warehouse Miner utilizes a technique to ensure this). The same is not necessarily true of the factor scores, however. Refer to Factor Scores for more information.
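The distinction between orthogonal factors and non-orthogonal factor scores can be made concrete for one common scoring approach, the regression (Thomson) method. Under the model, those scores have covariance I - G, where G is the posterior covariance of the factors given the data, and I - G is generally not diagonal. The sketch below uses made-up loadings for five unit-variance variables on two orthogonal factors to show the nonzero off-diagonal term; it is an illustration of the general point, not a description of how the product computes scores.

```python
import numpy as np

# Hypothetical loadings for 5 unit-variance variables on 2 orthogonal factors
L = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.7],
              [0.1, 0.8],
              [0.5, 0.5]])
psi = 1.0 - (L ** 2).sum(axis=1)      # unique variances
A = (L.T / psi) @ L                   # L' Psi^{-1} L
G = np.linalg.inv(np.eye(2) + A)      # posterior covariance of the factors
score_cov = np.eye(2) - G             # covariance of regression factor scores
print(np.round(score_cov, 3))         # off-diagonal entries are nonzero
```

Even though the two model factors are uncorrelated by construction, the estimated scores for them are correlated, which is the caveat noted above.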
Together, these three types of factor analysis give the data analyst the choice of modeling the original variables in their entirety (principal components), modeling them with hypothetical common factors alone (principal axis factors), or modeling them with both common factors and unique factors (maximum likelihood common factors).