
Permutation Randomization Methods for Testing Measurement Equivalence and Detecting Differential Item Functioning in Multiple-Group Confirmatory Factor Analysis

Abstract: In multigroup factor analysis, different levels of measurement invariance are accepted as tenable when researchers observe a nonsignificant (Δ)χ² test after imposing certain equality constraints across groups. Large samples yield high power to detect negligible misspecifications, so many researchers prefer alternative fit indices (AFIs). Fixed cutoffs have been proposed for evaluating the effect of invariance constraints on change in AFIs (e.g., Chen, 2007; Cheung & Rensvold, 2002; Meade, Johnson, & Braddy, 2008). We demonstrate that all of these cutoffs have inconsistent Type I error rates. As a solution, we propose replacing χ² and fixed AFI cutoffs with permutation tests. Randomly permuting group assignment results in average between-groups differences of zero, so iterative permutation yields an empirical distribution of any fit measure under the null hypothesis of invariance across groups. Our simulations show that the permutation test of configural invariance controls Type I error rates better than χ² or AFIs when the model contains parsimony error (i.e., negligible misspecification) but the factor structure is equivalent across groups (i.e., the null hypothesis is true). For testing metric and scalar invariance, Δχ² and permutation yield similar power and nominal Type I error rates, whereas ΔAFIs yield inflated errors in smaller samples. Permuting the maximum modification index among equality constraints controls familywise Type I error rates when testing multiple indicators for lack of invariance, while providing power similar to a Bonferroni adjustment. An applied example and syntax for software are provided.
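The permutation logic described in the abstract can be illustrated with a minimal sketch. This is not the authors' provided software syntax; the function name fit_multigroup_model, the choice of fit measure, and all parameter values are illustrative assumptions. The sketch only shows the core idea: shuffling group labels makes the null hypothesis of invariance true by construction, so refitting the model many times under permuted labels yields an empirical null distribution against which the observed fit measure can be compared.

```python
# Minimal sketch of a permutation test for group invariance, assuming a
# user-supplied function that fits the constrained multigroup model and
# returns a scalar fit measure (larger = worse fit). All names here are
# placeholders, not the syntax provided with the article.
import numpy as np

def permutation_test(data, groups, fit_multigroup_model, n_perm=1000, seed=1):
    """Return the permutation p-value and null distribution for a fit measure."""
    rng = np.random.default_rng(seed)

    # Fit measure for the actual group assignment.
    observed = fit_multigroup_model(data, groups)

    null_distribution = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly reassigning cases to groups removes any true between-group
        # differences, so the refitted fit measure reflects only sampling
        # error (plus any parsimony error shared by all groups).
        permuted_groups = rng.permutation(groups)
        null_distribution[i] = fit_multigroup_model(data, permuted_groups)

    # One-sided p-value: proportion of permuted fit measures at least as
    # extreme as the observed one (+1 correction keeps the test valid).
    p_value = (np.sum(null_distribution >= observed) + 1) / (n_perm + 1)
    return p_value, null_distribution
```

Any scalar fit measure can be plugged in this way (e.g., a chi-square statistic or a change in an AFI), which is why the same machinery supports tests of configural, metric, and scalar invariance.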

Keywords: Type I error; factor analysis; invariance; permutation

Journal Title: Psychological Methods
Year Published: 2018
