A Review of Higher-Order Factor Analysis Interpretation Strategies

The purpose of the present paper was to summarize exploratory second- and third-order factor analyses and to explain interpretation strategies for the higher-order factors, specifically Gorsuch's product matrix, the Schmid and Leiman solution, and Thompson's orthogonally rotated product matrix solution. Exploratory factor analysis is a multivariate technique for revealing information about latent constructs from measured variables. When researchers choose an oblique rotation, they believe either that their factors are correlated or that the best solution will result from an oblique rotation. Whenever primary factors are correlated, extracting higher-order factors from an inter-factor correlation matrix is vitally important to understanding the data from a different perspective. SAS syntax is provided along with heuristic datasets to assist interested researchers in exploring the techniques. Advantages of each method are discussed.


INTRODUCTION
The concept of factor analysis has been known for over a hundred years. After Spearman (1904) published his seminal study, factor analysis became one of the most widely used statistical techniques. Because of this wide usage, numerous attempts to improve factor analysis techniques have occurred over the years (cf. Henson, Capraro, & Capraro, 2004). For example, several analytical (i.e., empirical) factor rotation techniques provide objective factor analysis results for the same data across methods. Moreover, looking at data from multiple perspectives can provide additional information. All of the different perspectives are components of each other, and "... each is needed to see patterns at a given level of specificity versus generality" (Thompson, 2004, p. 73). While first-order factors (FOFs) give the specific features of the data, higher-order factors (HOFs) provide its general features. FOFs are tightly focused areas of generalization with a great deal of accuracy, whereas second-order factors (SOFs) increase the breadth of generalization at the cost of some degree of accuracy, which can be larger or smaller depending on the data (Gorsuch, 1983).

LOGIC FOR EXTRACTING HIGHER-ORDER FACTORS
The process for extracting HOFs is almost the same as for extracting FOFs. For example, both the principal component and principal axes methods can be used for higher-order factor extraction. The methods used to decide the number of FOFs, namely the eigenvalue-greater-than-one rule (Kaiser, 1960), Cattell's (1966) scree plot, and parallel analysis (Horn, 1965), can also be used to determine the number of HOFs (Gorsuch, 1983).
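The paper's appendix provides SAS syntax; as a language-neutral illustration of one of the methods just named, here is a minimal Python/numpy sketch of Horn's parallel analysis (the function name and its defaults are our own, not from the paper):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    """Retain factors whose observed eigenvalues exceed those of random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues of correlation matrices of same-sized random-normal data.
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    # Keep factors whose eigenvalues exceed the simulated percentile threshold.
    threshold = np.percentile(sim_eigs, percentile, axis=0)
    return int(np.sum(obs_eigs > threshold))
```

Because the input is any correlation-ready data matrix, the same routine applies whether one is counting FOFs from the raw variables or, in principle, higher-order factors from factor scores.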
Researchers new to exploratory factor analysis (EFA) can extract FOFs by using default settings in any of many statistical software packages, such as SPSS and SAS. However, no default settings exist in SPSS or SAS for extracting HOFs; one must use a syntax editor to conduct the analysis if HOFs are needed. Even though researchers new to EFA can extract HOFs by using a syntax editor, they need to know how to interpret the results and how to obtain solutions that ease interpretation, especially when HOFs are extracted.

INTERPRETATION OF HIGHER-ORDER FACTORS
Interpreting EFA results when HOFs are extracted requires some special strategies to avoid misguided or deficient interpretations. The HOFs, by their nature, are not observed variables. FOFs are abstractions of observed variables, and HOFs are abstractions of abstractions. It is important not to interpret SOFs in terms of abstractions (i.e., FOFs); one should instead interpret SOFs in terms of measured variables (Thompson, 2004). When one extracts only FOFs, there is no concern about interpreting them in terms of measured variables, because FOFs are extracted from the inter-variable correlation matrix (IVCM). However, when one extracts SOFs or third-order factors (TOFs), one must use an inter-factor correlation matrix (IFCM) rather than an IVCM, so the ability to interpret those SOFs or TOFs in terms of measured variables becomes a concern. Gorsuch (1983) emphasized the purpose of interpreting HOFs:

To avoid basing interpretations upon interpretations of interpretations, the relationships of the original variables to each level of the higher-order factors are determined.... Interpreting from the variables should improve the theoretical understanding of the data and produce a better identification of each higher-order factor. (pp. 245-246)

Because interpretation of HOFs is vitally important to remaining grounded in reality, the interpretation should be in terms of measured variables. However, this is not a trivial or self-evident process. Generally, there are three strategies for interpreting HOFs in terms of measured variables: Gorsuch's (1983) product matrix (PM), Thompson's (1990) orthogonally rotated PM, and the Schmid and Leiman (1957) solution (SLS).

Gorsuch's (1983) Product Matrix
In Gorsuch's PM, rows represent the measured variables and columns represent the SOFs. This PM is obtained by post-multiplying the first-order factor pattern coefficient matrix by the second-order factor pattern coefficient matrix. The number of columns in the first-order factor pattern coefficient matrix equals the number of rows in the second-order factor pattern coefficient matrix, so multiplication of these matrixes is mathematically admissible and produces a PM. The number of rows in this PM equals the number of rows in the first-order factor pattern coefficient matrix, and the number of columns in the PM equals the number of columns in the second-order factor pattern coefficient matrix. One can interpret the SOF pattern coefficients in terms of measured variables because the multiplication yields a PM whose rows are the measured variables. What this multiplication procedure does is partition the explained variance of the measured variables in terms of SOFs.
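The post-multiplication just described can be sketched numerically (the two pattern matrices below are made-up illustrative numbers, not the paper's tables):

```python
import numpy as np

# Hypothetical first-order pattern matrix: 6 measured variables x 3 FOFs.
P1 = np.array([[.80, .05, .02],
               [.75, .10, .01],
               [.04, .70, .08],
               [.02, .72, .03],
               [.06, .03, .81],
               [.01, .09, .77]])

# Hypothetical second-order pattern matrix: 3 FOFs x 2 SOFs.
P2 = np.array([[.60, .10],
               [.55, .05],
               [.08, .70]])

# Gorsuch's product matrix: rows are the measured variables, columns the SOFs,
# so the SOFs can be read directly in terms of the measured variables.
PM = P1 @ P2
print(PM.shape)   # prints (6, 2)
```

The inner dimensions (3 FOFs) match, which is exactly the admissibility condition stated above.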
Even though the mathematical procedure behind factor analysis is matrix algebra, we can provide an analogy, using simple arithmetic, for what the PM represents in terms of measured variables. Consider a basic math problem: what is 7/10 of 8/10 of 100? Starting with the latter portion, 8/10 of 100 equals 80, and then 7/10 of 80 equals 56, the final solution. In terms of EFA, one can think of 100 as analogous to an IVCM, or all of the variance; the result of the first step (8/10 of 100), 80, is analogous to the variance explained by the FOFs extracted from the IVCM, in terms of measured variables; and the last step (7/10 of 80), 56, is analogous to the variance explained by the SOFs, in terms of measured variables. One can also represent the variance explained by SOFs in terms of FOFs via the same analogy. The variance explained by the FOFs is 80, and the variance explained by the SOFs is 56. If we think of 80 as representing the information in the IFCM, the ratio 56/80 gives the part explained by the SOFs in terms of FOFs, not measured variables. The result is .7, so we can say that 70% of the information in the IFCM can be represented by the SOFs.
When TOFs are extracted from an obliquely rotated second-order IFCM, a triple product matrix (TPM) can be estimated in terms of measured variables. For example, suppose there are 30 variables, 12 FOFs extracted from the 30 × 30 IVCM, 3 SOFs extracted from the 12 × 12 IFCM, and 1 TOF extracted from the 3 × 3 second-order IFCM. The TPM would be:

P(30×1) = P(30×12) × P(12×3) × P(3×1)    (1)

Here P(30×1) represents the TPM (a vector) with 30 rows for the measured variables and 1 column for the TOF; P(30×12) represents the first-order factor pattern coefficient matrix; P(12×3) represents the second-order factor pattern coefficient matrix; and P(3×1) represents the third-order factor pattern coefficient matrix (a vector). The resulting TPM can be interpreted in terms of measured variables for the TOFs. Even though only one TOF appears in this heuristic situation, the number of TOFs might be more than one. Thompson (1990) suggested that the PM can be orthogonally rotated for easier interpretation when more than one SOF is extracted. In EFA, easier interpretation oftentimes indicates simple structure; therefore, it is reasonable that rotation of the PM would also be useful.
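Equation (1) can be sketched directly with matrices of the stated dimensions (random placeholders standing in for real loadings):

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder pattern matrices matching Equation (1)'s dimensions:
P1 = rng.uniform(-1, 1, (30, 12))   # variables x FOFs
P2 = rng.uniform(-1, 1, (12, 3))    # FOFs x SOFs
P3 = rng.uniform(-1, 1, (3, 1))     # SOFs x TOF

# Triple product matrix: 30 measured variables by 1 TOF.
TPM = P1 @ P2 @ P3
print(TPM.shape)   # prints (30, 1)
```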

Thompson's (1990) Orthogonally Rotated Product Matrix
Even though Thompson's criterion was produced for SOFs, rotating a TPM orthogonally can be useful when TOFs are extracted. The procedure for a TPM was illustrated in the section concerning Gorsuch's PM. However, in that section, the TPM had only one column because only one TOF was extracted. To rotate a TPM, one must have at least two TOFs. If one has a TPM with at least two columns, then one of the orthogonal rotation procedures might be applied to provide simple structure.

Schmid and Leiman (1957) Solution
Schmid and Leiman (1957) produced an elegant solution for the interpretation of EFA results by revealing the hierarchical structure of variables when HOFs are extracted from an IFCM. In higher-order factor analysis, HOFs are extracted from inter-factor correlation or covariance matrixes. This procedure implies that the variance in the measured variables explained by factors at any level cannot exceed the variance explained by the factors one level below. For example, if there are three levels in an EFA, TOFs cannot explain more than SOFs, and SOFs cannot explain more than FOFs. The SLS partitions the explained total variance into non-overlapping pieces according to the level of the factors, starting from the highest-order factors and working down to the lowest (first)-order factors. Thompson (2004) explained the procedure:

... Schmid and Leiman (1957) proposed an elegant method for expressing both first-order and the second-order factors in terms of the measured variables, but also residualizing (removing) all variance in the first-order factors that is also present in the second-order factors. (p. 74)

Because the variance is partitioned into non-overlapping pieces by level, the factors at different levels are orthogonal (i.e., at right angles, or uncorrelated) to each other. For example, TOFs are orthogonal to SOFs and FOFs, and SOFs are orthogonal to FOFs. However, the SLS does not imply that factors at a given level are orthogonal to each other. For instance, FOFs might not be orthogonal; they might be correlated, as is possible for SOFs and TOFs (Gorsuch, 1983).
Application of the SLS is also possible when TOFs or factors higher than TOFs are extracted. There are many examples of applications of the SLS in higher-order EFA when SOFs are extracted (e.g., Borrello & Thompson, 1990; Cook, Heath, & Thompson, 2001; Cook & Thompson, 2000; Thompson, Wasserman, & Matula, 1996). Even though researchers can find studies to understand the SLS procedure when SOFs are extracted, finding the procedure for the SLS when TOFs are extracted may not be easy. Thus, we provide two heuristic examples showing how to obtain interpretable solutions when SOFs are extracted (heuristic example 1) and when a TOF is extracted (heuristic example 2).

HEURISTIC EXAMPLES FOR HIGHER-ORDER FACTOR ANALYSIS
Interpretation strategies are illustrated with two heuristic examples. As the number of levels in an EFA increases, obtaining solutions to interpret the results becomes more challenging, so we provide two different examples. In the first example, the highest-level factors extracted were SOFs; in the second example, the highest-level factor extracted was a TOF. In both examples, principal component analysis was used to extract factors, and the number of factors was determined by the researchers rather than by a specific method (e.g., the eigenvalue-greater-than-one rule, parallel analysis, or the scree plot). In all Promax rotations, kappa (k) was set to 4. Results are not interpreted in the subsequent examples; instead, the examples show how to obtain solutions that ease interpretation. SAS syntax is provided (Appendix A).

Heuristic Example 1
Extraction Procedure
For this heuristic example, LibQUAL+™ data (Thompson, 2004, pp. 163-167) were used. In Example 1, FOFs and SOFs were extracted. When one of the oblique rotation strategies (e.g., Promax) is used to rotate factors, correct interpretation requires thinking about pattern and structure coefficients simultaneously (Thompson, 2004). Table 1 presents the first-order pattern and structure coefficients.
FOFs were extracted by using the raw data, but we could also have extracted FOFs by using an IVCM. Both methods provide the same solution in terms of FOFs, because the first step of the principal component method, which was used for extraction, is calculating the correlation matrix for the variables (Thompson, 1984); thus, the results would be exactly the same if enough decimals were used for the correlations in the correlation matrix. Table 2 presents the IFCM for the FOFs. This IFCM was used to estimate the SOFs. The IFCM can also be used for another purpose: post-multiplication of the pattern coefficient matrix (Table 1) by the IFCM (Table 2) produces the structure coefficients (Table 1) of the factors at a given level. Thus, if a computer program does not provide a structure coefficient matrix, we can compute the structure coefficients for the FOFs ourselves. The communality coefficient for a variable equals the sum of the squared coefficients across SOFs (e.g., 0.805 = 0.897² + (−0.002)²).
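The pattern-times-IFCM computation just described can be sketched as follows (the pattern matrix and inter-factor correlation are hypothetical numbers, not the paper's Tables 1 and 2; the communality line uses the standard pattern-by-structure formula for oblique factors):

```python
import numpy as np

# Hypothetical pattern matrix: 4 variables x 2 correlated FOFs.
P = np.array([[ .85,  .03],
              [ .78, -.05],
              [ .02,  .80],
              [-.04,  .83]])

# Hypothetical inter-factor correlation matrix (IFCM).
R_ff = np.array([[1.00, .45],
                 [ .45, 1.00]])

# Structure coefficients: pattern matrix post-multiplied by the IFCM.
S = P @ R_ff

# With oblique factors, h^2 is the row-wise sum of pattern x structure.
h2 = np.sum(P * S, axis=1)
```

Only when the factors are uncorrelated (R_ff = I) do pattern and structure coefficients coincide, which is why both must be examined after a Promax rotation.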
Table 3 gives the SOFs and communality coefficients (h²) extracted from the IFCM (Table 2). The aim of the present paper is to illustrate interpretation strategies; as indicated before, the researchers, and not one of the usual methods (e.g., the eigenvalue-greater-than-one rule, scree plot, or parallel analysis), determined the number of factors at each level. Had the usual methods been used, the number of extracted factors likely would have been different. As with traditional exploratory factor analysis, one examines the observed variables that load onto a factor to determine the name for that factor. When moving one step up to second-order factors, one must consider both the previous factor names and the variables contained within each first-order factor. It is therefore possible for the researcher to learn that discrete factors at level one are really subsumed within a higher-order structure that was difficult to detect without HOF analysis. In Example 1, first-order factors 1, 2, and 3 are subsumed under the first second-order factor, while the fourth first-order factor contributes to the second second-order factor alone. While this example is not an ideal condition, it does illustrate a practically important nuance of what can happen during HOF analysis.
For heuristic example 1, FOFs and SOFs were extracted, and now interpretation strategies can be provided based upon these factors.

Gorsuch's Product Matrix
A PM was obtained by post-multiplying the FOF pattern coefficient matrix (Table 1) by the SOF pattern coefficient matrix (Table 3). In this way, the SOFs can be interpreted in terms of measured variables.

Thompson's Orthogonally Rotated Product Matrix
Quartimax was chosen to rotate the PM orthogonally. When interpretation of a PM is challenging, rotating the PM orthogonally may provide simple structure (Thompson, 1990). Table 4 shows both Gorsuch's PM and Thompson's orthogonally rotated PM. Note: the product matrix (first two columns) was orthogonally rotated by using Quartimax (last two columns).
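Statistical packages do not generally rotate an externally supplied product matrix directly, so it is worth seeing how such a rotation can be computed. Below is a minimal sketch of the standard SVD-based orthomax algorithm (the function name and defaults are our own); gamma = 0 gives Quartimax, the criterion used above, and gamma = 1 gives Varimax:

```python
import numpy as np

def orthomax(L, gamma=0.0, n_iter=200, tol=1e-10):
    """Orthomax rotation of loading matrix L (gamma=0: Quartimax; 1: Varimax)."""
    n, k = L.shape
    R = np.eye(k)          # start from the unrotated solution
    crit = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        # Gradient of the orthomax criterion at the current rotation.
        grad = L.T @ (Lr**3 - (gamma / n) * Lr * np.sum(Lr**2, axis=0))
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt         # projection onto the orthogonal group
        if s.sum() - crit < tol:
            break
        crit = s.sum()
    return L @ R, R        # rotated loadings and the rotation matrix
```

Because R is orthogonal, each row's sum of squared loadings (its communality) is unchanged by the rotation; only the distribution of the loadings across columns changes, toward simple structure.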

Schmid and Leiman Solution
To find the SLS, we need to create a new matrix, called the augmented matrix (A). The augmented matrix (Table 5) includes the SOF pattern coefficients in its first columns and the square roots of the uniquenesses of the FOFs in the later columns. Uniqueness is the variance remaining after the communality coefficient is subtracted (i.e., uniqueness = 1 − h²). The square roots of the uniquenesses are placed on the diagonal of the uniqueness portion of the matrix.
After calculating the augmented matrix, the SLS matrix can be estimated by post-multiplying the first-order pattern coefficient matrix by the augmented matrix. This solution provides non-overlapping quantities at the different levels. The first two columns of the SLS (Table 6) are exactly equal to those of the PM (Table 4); hence the SLS provides more information than the PM. If the primary interest is to interpret the SOFs in terms of measured variables, and not the variance explained by the FOFs that remains after the SOFs are extracted, interpreting the PM or the SLS matrix makes no difference, because the first two columns of the SLS matrix do not differ from those of the PM. If one would like to interpret both the SOFs and the remaining information in the FOFs, we suggest interpreting the more complete SLS rather than the PM.

Heuristic example 1 can be represented visually, for one variable, in Figure 1. The largest rectangle represents the variation of that variable. The remaining six rectangles show the parts of the variable's variation explained by the FOFs and SOFs. It is important to notice that SOFs I and II explain variation that is also accounted for by the FOFs; this reflects the fact that the SOFs are extracted from the first-order IFCM. A second point is that the FOFs overlap each other, which means they are correlated, so we know they were rotated obliquely. The SOFs do not overlap, so they are uncorrelated. The figure also shows visually what the SLS does: as explained previously, the solution makes the SOFs and FOFs orthogonal to each other, because the SOFs capture the variation in the variables that is also explained by the FOFs. The variance remaining in the first-order IFCM after the SOFs are extracted is accounted for by the FOFs in the SLS.
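The augmented-matrix step and the final multiplication can be sketched end to end (the two pattern matrices are hypothetical numbers, not the paper's tables):

```python
import numpy as np

# Hypothetical first-order pattern matrix: 6 variables x 3 FOFs.
P1 = np.array([[.80, .05, .02],
               [.75, .10, .01],
               [.04, .70, .08],
               [.02, .72, .03],
               [.06, .03, .81],
               [.01, .09, .77]])

# Hypothetical second-order pattern matrix: 3 FOFs x 1 SOF.
P2 = np.array([[.70],
               [.65],
               [.60]])

# Communality of each FOF from the SOF(s), and the sqrt uniquenesses.
h2 = np.sum(P2**2, axis=1)
U = np.diag(np.sqrt(1.0 - h2))

# Augmented matrix: SOF pattern columns, then sqrt uniquenesses on a diagonal.
A = np.hstack([P2, U])

# Schmid-Leiman solution: first-order pattern post-multiplied by A.
SLS = P1 @ A

# The leading column(s) of the SLS reproduce Gorsuch's product matrix exactly.
assert np.allclose(SLS[:, :P2.shape[1]], P1 @ P2)
```

The remaining columns of `SLS` hold the residualized FOF loadings, i.e., what each FOF explains after the SOF variance has been removed.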

Heuristic Example 2
Extraction Procedure
For this heuristic example, 15 variables (t1 to t15) from the Holzinger and Swineford (1939) data set (pp. 81-91) were used. In this example, FOFs, SOFs, and a TOF were extracted. The variable labels used in this example are given in Appendix A.
FOFs were extracted from the raw data and rotated by Promax. Based on the Promax rotation, one needs to interpret both pattern and structure coefficients (Table 7) because the factors were rotated obliquely (Thompson, 2004). The oblique rotation (i.e., Promax) allowed the initially uncorrelated FOFs to become correlated, so SOFs should be extracted from the IFCM among the FOFs (Gorsuch, 1983). Table 8 provides this IFCM. The SOFs (Table 9) were correlated with each other because they were rotated by Promax; thus, the IFCM of the SOFs can be used to extract TOFs. Table 10 presents the IFCM of the SOFs. To extract the TOF, this correlation matrix of the SOFs was used. Only one TOF was extracted, so there was no possibility of rotating the TOF; accordingly, there was only one matrix of factor coefficients, called the factor pattern/structure coefficients. Table 11 shows the TOF pattern/structure coefficients.

There are three hierarchical levels in heuristic example 2, so interpretation of the factors might be more challenging than in EFA studies with two hierarchical levels. The logic of the interpretation strategies for the HOFs in this example is the same as in the previous example, which had SOFs at the highest level; however, heuristic example 2 requires more work to obtain interpretable solutions. In the subsequent sections, we explain how to obtain interpretable solutions when there is a TOF in the analysis. The TOF analysis indicates that a single higher factor exists for the data, raising the possibility of altering the theoretical framework of the study. By recognizing that a single TOF subsumes all prior factors, the theoretical framework could be revisited to accommodate the findings, which could provide transformative insights into the nexus of the applied research and the theoretical data-analytic strategy.

Gorsuch's Product Matrix
When TOFs are extracted, two separate PMs can be calculated: one representing the TOFs in terms of observed variables and another for interpreting the SOFs in terms of observed variables. The matrix that represents the TOFs in terms of measured variables can be called a TPM, because the pattern coefficients from all three levels are multiplied, respecting the principles of matrix multiplication. In this heuristic example, the number of FOFs was six (P(15×6)), the number of SOFs was three (P(6×3)), and the number of TOFs was one (P(3×1)). The TPM was as follows:

P(15×6) × P(6×3) × P(3×1)    (2)

This matrix multiplication produces the P(15×1) matrix (a vector), in which the measured variables are the rows and the TOF is the column.
The other PM is similar to the PM in the previous example. To interpret the SOFs in terms of measured variables, the FOF pattern matrix can be post-multiplied by the SOF pattern matrix:
P(15×6) × P(6×3)    (3)

However, this second-order PM does not provide a complete picture of the dynamics of the SOFs in terms of measured variables, because the TOF was extracted from the second-order IFCM. Interpretation of this matrix may provide some indication about the SOFs in terms of measured variables, but it does not take into account the extraction of the TOF from these SOFs. To account for the TOF in the interpretation of the SOFs, the SLS is more appropriate than the PM.

Thompson's Orthogonally Rotated Product Matrix
In this heuristic example, only one TOF was extracted, so rotation of the TPM (P(15×1)) was not possible. However, if there were more than one TOF, orthogonal rotation might provide easier interpretation of the TOFs in terms of measured variables.
A second-order PM can also be rotated orthogonally for easier interpretation. Table 12 provides the TPM, the second-order PM, and Thompson's orthogonal solution to the second-order PM. The table includes three different matrixes: the first column is the triple product matrix, and the other two matrixes are, respectively, the second-order product matrix and the orthogonally (i.e., Varimax) rotated second-order product matrix.

Schmid and Leiman Solution
When a TOF is extracted, estimation of the SLS requires a few more steps. The first step is to create an augmented matrix (Table 13) for the TOF. In the second step, the second-order factor pattern coefficient matrix is post-multiplied by this Augmented Matrix I to make the TOF and SOFs orthogonal to each other. This multiplication removes from the SOFs all variance that is also present in the TOF; that is, the variance common to both the TOF and the SOFs is assigned to the higher-level factor (i.e., the TOF). Table 14 provides the result of this multiplication. The third step is to create another augmented matrix for the SOFs. In the present example, the SOFs were rotated obliquely before the TOF was extracted; thus, to estimate the communality coefficients needed for the uniquenesses in the augmented matrix, we used both the pattern and structure coefficients of the SOFs. Table 9 contains the pattern, structure, and communality coefficients of the SOFs. A communality coefficient is estimated by summing, across the factors, the products of the pattern and structure coefficients (e.g., 0.570 = (0.074 × 0.348) + (0.728 × 0.751) + (−0.031 × 0.077)).
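The communality computation just quoted can be checked directly (the three pattern-structure pairs are the values cited in the text for one row of Table 9):

```python
# Communality as the sum across factors of pattern x structure coefficients,
# using the values quoted in the text.
pattern = [0.074, 0.728, -0.031]
structure = [0.348, 0.751, 0.077]
h2 = sum(p * s for p, s in zip(pattern, structure))
print(round(h2, 3))   # prints 0.57
```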
The Augmented Matrix II can now be created by using the estimated communalities: the matrix in Table 14 (Orthogonalized Third- and Second-Order Factors) is augmented by adding the square roots of the uniquenesses of the FOFs as diagonals on its right side. Table 15 shows the Augmented Matrix II. Note: the square roots of the uniquenesses appear in the last six columns as diagonals; for example, 0.665 is the square root of 1 − 0.570 (from Table 9). The final step is post-multiplying the first-order pattern matrix by the Augmented Matrix II, which yields the SLS (Table 16). The first column is the same as the TPM, so when interpreting the TOF in terms of measured variables one should interpret only the SLS or the TPM, not both (Thompson, 2004). The second and third columns of the SLS matrix are not the same as the second-order PM, because in the SLS the variance explained by the TOF that also appears in the SOFs has been removed from the SOFs. In the second-order product matrix, by contrast, all variance explained by the SOFs is represented by the factors as a set; nothing is subtracted due to the TOF.
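The four steps above can be sketched end to end. The pattern matrices below are random placeholders with the right shapes, not the paper's tables, and for brevity the communalities are computed as if the lower-order factors were orthogonal; with oblique rotations one would use the pattern-by-structure sums shown in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder pattern matrices for a three-level solution:
P1 = rng.uniform(-.2, .6, (8, 4))   # 8 variables x 4 FOFs
P2 = rng.uniform(-.2, .6, (4, 2))   # 4 FOFs x 2 SOFs
P3 = rng.uniform(.3, .8, (2, 1))    # 2 SOFs x 1 TOF

# Step 1: Augmented Matrix I for the TOF.
h2_sof = np.sum(P3**2, axis=1)                       # SOF communalities
A1 = np.hstack([P3, np.diag(np.sqrt(1 - h2_sof))])

# Step 2: orthogonalize the TOF and SOFs.
ortho = P2 @ A1                                      # FOFs x (TOF + SOFs)

# Step 3: Augmented Matrix II adds sqrt uniquenesses of the FOFs.
h2_fof = np.sum(P2**2, axis=1)                       # FOF communalities
A2 = np.hstack([ortho, np.diag(np.sqrt(1 - h2_fof))])

# Step 4: the Schmid-Leiman solution.
SLS = P1 @ A2

# The first column of the SLS equals the triple product matrix.
assert np.allclose(SLS[:, 0], (P1 @ P2 @ P3).ravel())
```

The remaining columns hold, respectively, the residualized SOF loadings (after removing the TOF) and the residualized FOF loadings (after removing both higher levels).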

DISCUSSION
The single most important caveat when extracting HOFs is that if extracting the next higher level of factors results in the same structure, then simple structure has already been achieved and no additional aggregation is possible; that is, the FOFs provide the most plausible condition for the data in hand. In Example 1, three of the four FOFs were contained in the first SOF, and only the fourth FOF defined the second SOF. This implies that the FOF analysis contained three factors that could be subsumed by a SOF, plus one stand-alone FOF. The interpretation can indicate that the three FOFs represent a single higher level of abstraction, while the remaining FOF is not abstractable, given the data, to a higher level. Attempting a yet higher level of abstraction would therefore be unwarranted by the results of the second-level factor analysis.
Whenever HOFs are extracted, their interpretation is important in factor analysis studies for gaining additional insight into the factor structure (Thompson, 2004). Gorsuch's PM, Thompson's orthogonally rotated PM, and the SLS are useful tools for interpreting HOFs in terms of measured variables, and each method has its own advantages. If the primary interest is interpreting HOFs in terms of measured variables, the SLS and the PM give the same solution, because the highest-order columns of the SLS are exactly equal to the highest-order PM. If neither the SLS nor the PM achieves simple structure at the highest order, Thompson's orthogonally rotated PM will be the more easily interpreted solution (Thompson, 1990).
The SLS provides more information than the other two methods when the interest is not solely in interpreting HOFs in terms of measured variables. While the SLS provides the solution for interpretation in terms of measured variables, it also gives the independent contributions of the lower-order factors after taking the HOFs into account. Thus, the SLS provides additional perspectives on the data that cannot be obtained from the PM or from Thompson's orthogonally rotated PM.
Results obtained by EFA can be used in confirmatory factor analysis (CFA) of the theoretical construct. If the existence of HOFs is shown in an EFA, researchers will most likely need to model these HOFs in their CFA models. Thus, analyzing and interpreting HOFs are important both for the researchers' own studies and for other researchers who conduct CFA based on the primary EFA results. As opposed to EFA, which does not require prior assumptions about the nature of the constructs, CFA requires researchers to have background knowledge or prior assumptions about the nature of the construct, such as the number of factors, which variables reflect given factors, and whether the factors are correlated (Thompson, 2004). The information obtained by EFA enables researchers to test hierarchical relations between constructs (Kline, 1998) so that they can build realistic models.
Finally, applied researchers can be "data blind" to their own work. The term "data blind" refers to the myopia associated with being so close to one's own data that one fails to recognize the possibility that unobserved variation can play important roles. Higher-order factors are one such possibility that can be overlooked by researchers who become so deeply committed to their observed variables that they do not consider the potential of HOFs to provide new and possibly transformative insights into the phenomena under investigation. Unfortunately, once a study is published, HOFs are lost to the community, and there is no current strategy for estimating HOFs if they were not considered in the original manuscript, with one exception: when the authors report a correlation matrix. This has two implications. First, authors should report the correlation matrix for all the observed variables, because with this information all HOFs can be extracted in ex post facto analyses. Second, if authors do not report the correlation matrix for all their observed variables, then they should report the correlation matrix for the first-order (obliquely rotated) factors, from which one additional higher-order level can be extracted in ex post facto analyses.

ISSN: 1309-6575 · Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi (Journal of Measurement and Evaluation in Education and Psychology)

Table 1 .
Promax Rotated First-Order Factor Pattern Coefficients

Table 2 .
First-Order Inter-Factor Correlations

Table 4 .
Gorsuch's Product Matrix and Thompson's Orthogonally Rotated Product Matrix

Table 5 .
Augmented Matrix
Note: The number of uniqueness columns equals the number of first-order factors. For example, 0.441 = SQRT(1 − 0.805); the value 0.805 represents a communality coefficient from Table 3, and the remaining unexplained variance by the factors as a set is the uniqueness.

Table 7 .
First-Order Factor Pattern and Structure Coefficients

Table 8 .
First-Order Inter-Factor Correlation Matrix
Note: SOFs were extracted from the first-order IFCM and rotated by Promax; Table 9 displays the pattern and structure coefficients of these SOFs.

Table 9 .
Promax Rotated Second-Order Pattern, Structure, and Communality Coefficients

Table 11 .
Third-Order Factor Pattern/Structure Coefficients and Communalities
Note: Communality coefficients (h²) equal the squared TOF pattern coefficients.

Table 12 .
Triple Product Matrix, Second-Order Product Matrix, and Orthogonally Rotated Second-Order Product Matrix

Table 13 .
Augmented Matrix I
Note: The number of uniquenesses equals the number of SOFs. Uniquenesses are estimated as SQRT(1 − h²); the h² values are given in Table 11.

Table 14 .
Orthogonalized Third-and Second-Order Factors

Table 15 .
Augmented Matrix II

Table 16 .
Schmid and Leiman Solution for Heuristic Example II