MCA examines the relationships between several categorical independent variables and a single dependent variable, and determines the effects of each predictor before and after adjustment for its inter-correlations with other predictors in the analysis. It also provides information about the bivariate and multivariate relationships between the predictors and the dependent variable.
The dependent variable must be measured on an interval scale or be a dichotomy. Predictor variables must be categorical, preferably with six or fewer categories.
For a complete description of the methodology, see Andrews, F. M., J. N. Morgan, J. A. Sonquist, and L. Klem, Multiple Classification Analysis, 2nd ed. (Ann Arbor: Institute for Social Research, The University of Michigan, 1973).
Dependent Variable Statistics: For the dependent variable (Y):
Standard deviation (square root of the unbiased estimator of the population variance)
Sum of Y
Sum of Y-squared
Total sum of squares
Explained sum of squares
Residual sum of squares
Number of cases used in the analysis
The sum of weights
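For the unweighted case (where the sum of weights equals the number of cases), these statistics can be computed directly. The following is a minimal sketch; the function name and data are illustrative, not part of any MCA program:

```python
import math

def dependent_stats(y):
    """Summary statistics for the dependent variable Y, as listed above
    (unweighted case: the sum of weights equals the number of cases)."""
    n = len(y)
    sum_y = sum(y)
    sum_y_sq = sum(v * v for v in y)
    mean = sum_y / n
    # Total sum of squares: squared deviations of Y about its grand mean.
    total_ss = sum((v - mean) ** 2 for v in y)
    # Standard deviation: square root of the unbiased (n - 1) variance.
    sd = math.sqrt(total_ss / (n - 1))
    return {"n": n, "sum_y": sum_y, "sum_y_sq": sum_y_sq,
            "total_ss": total_ss, "sd": sd}

stats = dependent_stats([2.0, 4.0, 4.0, 6.0])
```

The explained and residual sums of squares come from the fitted additive model and partition the total: explained + residual = total.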
Independent Variable Category Statistics: For each category of an independent variable:
The number of cases (raw, weighted, and percentages)
Mean and standard deviation
Deviation of the category mean (unadjusted and adjusted)
Adjusted class mean MCA coefficient
Eta and eta squared
Partial beta and beta-squared coefficients
Unadjusted and adjusted sum of squares
Bivariate frequency tables for every pair of predictors (optional)
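For a single predictor, the category counts, means, unadjusted deviations, and eta can be sketched as follows (unweighted case; names are hypothetical). Eta squared is the between-category sum of squares divided by the total sum of squares:

```python
from collections import defaultdict

def category_stats(y, factor):
    """Per-category n, mean, and unadjusted deviation from the grand mean,
    plus eta squared, for one categorical predictor (unweighted case)."""
    groups = defaultdict(list)
    for yi, cat in zip(y, factor):
        groups[cat].append(yi)
    grand_mean = sum(y) / len(y)
    total_ss = sum((yi - grand_mean) ** 2 for yi in y)
    between_ss = 0.0
    table = {}
    for cat, vals in groups.items():
        mean = sum(vals) / len(vals)
        table[cat] = {"n": len(vals), "mean": mean,
                      "deviation": mean - grand_mean}  # unadjusted
        between_ss += len(vals) * (mean - grand_mean) ** 2
    return table, between_ss / total_ss

table, eta_sq = category_stats([1, 2, 3, 4, 5, 6],
                               ["a", "a", "a", "b", "b", "b"])
```

The adjusted deviations, by contrast, require fitting the full additive model with all predictors at once.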
One-Way Analysis of Variance Summary Statistics: If only one independent variable is specified, the following are printed:
Adjusted eta and eta squared
Total sum of squares
Between-mean sum of squares
Within-groups sum of squares
F value (degrees of freedom are printed)
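These one-way summary quantities satisfy total = between + within, and F is the ratio of the corresponding mean squares. A sketch with hypothetical names and data:

```python
def one_way_anova(groups):
    """One-way ANOVA summary for k groups: total, between-means, and
    within-groups sums of squares, plus F and its degrees of freedom."""
    all_y = [v for g in groups for v in g]
    n, k = len(all_y), len(groups)
    grand_mean = sum(all_y) / n
    total_ss = sum((v - grand_mean) ** 2 for v in all_y)
    between_ss = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    within_ss = total_ss - between_ss          # partition of total SS
    df = (k - 1, n - k)                        # degrees of freedom
    f = (between_ss / df[0]) / (within_ss / df[1])
    return total_ss, between_ss, within_ss, f, df

total, between, within, f, df = one_way_anova([[1, 2, 3], [4, 5, 6]])
```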
The major interpretation in an MCA concerns the adjusted and unadjusted coefficients printed for each subclass. In a population with no correlation among the predictors, the observations in one class of characteristic A would be distributed over the classes of the other characteristics in exactly the same way as those in the other classes of A. Hence the unadjusted mean of Y for each subclass of A would be an unbiased estimate of the effect of belonging to that class of characteristic A. In the real world, however, characteristics are correlated: young people are more likely than older people to be in lower income groups and in higher education groups. The multivariate process is essentially one of adjusting for these "non-orthogonalities." The adjusted means estimate what each group's mean would have been if the group had been exactly like the total population in its distribution over all the other predictor classifications. It is useful not only to have the "pure" effect of each class adjusted for all the other characteristics, but also to see how these adjusted effects differ from the unadjusted effects.
The adjusted coefficients for any predictor may be considered an estimate of the effect of that predictor alone "holding constant" all other predictors in the analysis. Differences between the adjusted and unadjusted coefficients can be analyzed, and explanations for these differences may often be found in the two-way tables of predictors. It is often valuable to compare the coefficients within a predictor to see whether there is a pattern or, possibly, a lack of pattern which is of theoretical interest.
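The adjustment itself is equivalent to fitting an additive dummy-variable regression, Y = grand mean + one effect per predictor category + error. The sketch below uses effect coding, whose unweighted sum-to-zero constraint matches MCA's case-weighted constraint exactly only for balanced data; the function and variable names are illustrative:

```python
import numpy as np

def adjusted_effects(y, factors):
    """Adjusted category effects via effect-coded dummy regression.
    `factors` is a list of predictors, each a list of category labels,
    one label per case."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    cols, index = [np.ones(n)], []
    for f_idx, factor in enumerate(factors):
        cats = sorted(set(factor))
        ref = cats[-1]                 # last category is the reference
        for c in cats[:-1]:
            # Effect coding: +1 in category c, -1 in reference, else 0.
            cols.append(np.array([1.0 if x == c else
                                  (-1.0 if x == ref else 0.0)
                                  for x in factor]))
            index.append((f_idx, c))
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    effects = [dict() for _ in factors]
    for (f_idx, c), b in zip(index, beta[1:]):
        effects[f_idx][c] = b
    for f_idx, factor in enumerate(factors):
        ref = sorted(set(factor))[-1]  # reference effect: minus the rest
        effects[f_idx][ref] = -sum(effects[f_idx].values())
    return beta[0], effects

grand_mean, effects = adjusted_effects(
    [1, 3, 3, 5],
    [["a1", "a1", "a2", "a2"], ["b1", "b2", "b1", "b2"]])
```

Adding each adjusted effect to the grand mean gives the adjusted category mean; comparing it with the unadjusted category mean shows how much of the raw difference was due to the other predictors.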
Presentation of Results
It is most informative to present first the etas and betas, measures of the relative importance of each predictor singly and in competition with the others, and then the unadjusted and adjusted subgroup means, together with a detailed description of what the subclasses represent and the number of cases in each. The number of cases is an indicator of the potential variability of the estimates. Multiple R-squared, unadjusted and adjusted, is also usually reported.
Examples of presentation of MCA results can be found in Barfield and Morgan (1969), Blumenthal, Kahn, Andrews and Head (1972), Johnston and Bachman (1972), Johnston (1973), Katona, Strumpel and Zahn (1971), Morgan, David, Cohen and Brazer (1962), Mueller (1969), and Pelz and Andrews (1966).
Example: Predicting income (V268) from occupation, marital status, and education.