My name is Leland Lockhart, and I am a graduate student at the University of Texas at Austin and a research assistant at ACT, Inc.'s National Center for Educational Achievement (NCEA). The NCEA is a department of ACT, Inc., a not-for-profit organization committed to helping people achieve education and workplace success. NCEA builds the capacity of educators and leaders to create educational systems of excellence for all students. We accomplish this by providing research-based solutions and expertise in higher-performing schools, school improvement, and best-practice research that lead to increased levels of college and career readiness.
In applied research, unfamiliarity with advanced procedures often leads researchers to conduct inappropriate analyses. More specifically, unfamiliarity with the cross-classified family of random effects models frequently causes researchers to avoid this approach in favor of less complicated methods. The resulting estimates are frequently biased, leading to incorrect statistical inferences. This has direct implications for the field of program evaluation, as inaccurate conclusions can spell doom for both a program and an evaluator.
Hot Tip: Use cross-classified random effects models (CCREMs) when lower-level units are identified by some combination of higher-level factors. For example, students are nested within neighborhoods, but neighborhoods often feed students into multiple high schools. In this scenario, because neighborhoods are not perfectly nested within high schools, students are cross-classified by neighborhood and high school designations. Use the following steps to diagnose and model cross-classified structures:
1) Examine the data structure. Is a lower-level unit nested within higher-level units? If so, what is the relationship between the higher-level units? If the higher-level units are not perfectly nested within one another, use a cross-classified random effects model.
2) Include the appropriate classifications. Many applied researchers simply avoid cross-classified analyses by ignoring one of the cross-classified factors. This severely limits the generalizability of your results and drastically alters statistical inferences.
3) Provide parameter interpretations. Properly specified CCREMs are analogous to regression analyses. Interpret the parameters in the same fashion, being sure to provide non-technical interpretations for lay audiences.
4) Have software do the heavy lifting. Fitting CCREMs is incredibly easy in a variety of statistical packages. HLM6 provides a user-friendly point-and-click interface, while SAS provides more flexibility for the programming-savvy.
5) Use previously applied CCREMs. Peer-reviewed methodological journals are rife with exemplar CCREMs and the procedures used to estimate them. When in doubt, follow the steps outlined in the methods section of a relevant journal article.
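Step 1 above, examining the data structure, can be automated. The sketch below, using hypothetical student records invented for illustration, checks whether any neighborhood feeds students into more than one high school; if so, the two factors are crossed rather than nested, and a CCREM is called for:

```python
# A minimal sketch of step 1: diagnosing whether one higher-level
# factor (neighborhood) is nested within, or crossed with, another
# (high school). The records below are hypothetical illustration data.
from collections import defaultdict

# (student, neighborhood, high_school) records -- hypothetical
records = [
    ("s1", "elm",  "north"),
    ("s2", "elm",  "north"),
    ("s3", "oak",  "north"),
    ("s4", "oak",  "south"),  # oak feeds two schools -> crossing
    ("s5", "pine", "south"),
]

schools_per_neighborhood = defaultdict(set)
for student, neighborhood, school in records:
    schools_per_neighborhood[neighborhood].add(school)

# If any neighborhood feeds more than one school, neighborhoods are
# not perfectly nested within schools: the factors are crossed.
crossed = any(len(s) > 1 for s in schools_per_neighborhood.values())
print("crossed" if crossed else "nested")  # prints "crossed"
```

The same check works for any pair of classifying factors: swap in whichever higher-level identifiers your data contain.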
Rad Resource: Beretvas, S. N. (2008). Cross-classified random effects models. In A. A. O’Connell & D. B. McCoach (Eds.), Multilevel modeling of educational data (pp. 161-198). Charlotte, NC: Information Age Publishing. This chapter provides an excellent introduction to CCREMs for those familiar with multiple regression analyses.
This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.
Multilevel Rasch represents another application of CCREMs, whereby persons and items are treated as crossed random effects in order to estimate person abilities and item difficulties on the same theta metric. In contrast, marginal maximum likelihood calibration treats items as fixed effects, even though responses are nested in items sampled from a larger population of items. See the following entries for more information and an example of multilevel Rasch with R:
http://blog.lib.umn.edu/moor0554/canoemoore/2010/02/multilevel_rasch.html
http://blog.lib.umn.edu/moor0554/canoemoore/2010/02/multilevel_rasch_estimation_r.html
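The crossed person-by-item structure behind multilevel Rasch can be made concrete with a short simulation. In the sketch below (a hedged illustration, not an estimation routine; the abilities and difficulties are simulated values rather than calibrated estimates), every response is identified by one person and one item, so persons and items are crossed, never nested:

```python
# Simulating the crossed person-by-item structure of a Rasch model:
# P(correct) = logistic(theta_person - beta_item), with abilities and
# difficulties drawn on the same theta metric. Illustration only.
import math
import random

random.seed(1)
abilities    = {p: random.gauss(0, 1) for p in range(5)}   # person thetas
difficulties = {i: random.gauss(0, 1) for i in range(4)}   # item betas

def p_correct(theta, beta):
    """Rasch model probability of a correct response."""
    return 1 / (1 + math.exp(-(theta - beta)))

# Every person responds to every item: a fully crossed design, so each
# response cell is indexed by a (person, item) pair.
responses = {
    (p, i): int(random.random() < p_correct(abilities[p], difficulties[i]))
    for p in abilities for i in difficulties
}
print(len(responses))  # 5 persons x 4 items = 20 crossed cells
```

Treating both dictionaries of effects as random draws from populations, as above, mirrors the CCREM view; marginal maximum likelihood calibration would instead fix the item betas.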