
Quantitative Methods: Theory and Design TIG Week: Method Effects of Keying and Wording in Psychometric Instruments: A Quantitative Method in Evaluation by Lin Ma

Greetings, AEA365 readers! I’m Lin Ma, a Senior Data Analyst at Rocky Vista University in Colorado, where I am responsible for data management and data analysis associated with student assessments and program evaluation measures. I hold a Ph.D. from the University of Denver’s Morgridge College of Education, Research Methods and Statistics Program. With over five years of experience in data analysis within higher education, I have worked extensively with both qualitative and quantitative data to support program evaluation and student success. 


I’ve been examining the method effects of keying and wording in self-report psychometric instruments, building on the work of various researchers over the years (Horan et al., 2003; Kline, 2016; da Silva et al., 2022). A significant issue identified in previous literature (Campbell & Fiske, 1959; Kenny, 1995; Marsh, 1991) is that the method effects of wording and keying can affect the substantive construct of interest. Specifically, negative keying or wording methods have been found to contaminate a single dimension of interest when measuring a latent construct (Carmines & Zeller, 1979; Chen, 2017; Zhang & Savalei, 2016). In this post, I explain how keying and wording affect the latent construct of interest, share lessons learned from my analysis, and offer key insights that evaluators can use when creating or reviewing surveys for their evaluations.
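To make the keying distinction concrete: a negatively keyed item is scored in the opposite direction from the construct, so before analysis such items are typically reverse-scored so that all items point the same way. Here is a minimal sketch in Python (my own hypothetical item names and responses, not data from the study), assuming a 1–5 Likert scale:

```python
# Reverse-score negatively keyed items on a Likert scale.
# For a scale running from scale_min to scale_max, the reversed
# score is (scale_max + scale_min) - raw score.

def reverse_score(raw, scale_max=5, scale_min=1):
    """Flip a Likert response so all items point in the same direction."""
    return scale_max + scale_min - raw

# Hypothetical responses: items 2 and 4 are negatively keyed.
responses = {"item1": 4, "item2": 2, "item3": 5, "item4": 1}
negatively_keyed = {"item2", "item4"}

recoded = {
    item: reverse_score(score) if item in negatively_keyed else score
    for item, score in responses.items()
}
print(recoded)  # item2: 2 -> 4, item4: 1 -> 5
```

Note that reverse-scoring fixes the direction of the responses but does not, by itself, remove the shared method variance that negatively keyed items can carry.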

Cool Tricks

Structural Equation Models

Recognizing a gap in the existing research regarding keying methods, specifically Negative Keying (NK) and Positive Keying (PK) in self-report psychometric assessments, I embarked on a study to examine these two keying methods through several Structural Equation Models (SEMs). Furthermore, I noted a lack of research assessing the combined effects of keying and wording methods—Negative Keying Negative Wording (NKNW) and Positive Keying Positive Wording (PKPW)—in self-report psychometric instruments. Thus, the aim of my study was to assess the impact of method effects of these combined keying and wording approaches and evaluate the construct validity of the psychometric instruments within this context.
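One common way SEMs represent such effects is with a method factor that loads only on same-keyed items, in addition to the substantive trait factor. The simulation below (a hypothetical illustration of mine, not the study's models or data) shows the signature these models target: items sharing a keying method factor correlate more strongly with each other than the trait alone would predict.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # large sample so the correlations are stable

# One substantive trait factor, plus a keying "method" factor that
# loads only on the negatively keyed (NK) items.
trait = rng.standard_normal(n)
method = rng.standard_normal(n)  # nuisance variance shared by NK items

lam, gam = 0.7, 0.4  # trait loading and method loading

def noise():
    return rng.standard_normal(n) * 0.5

pk1 = lam * trait + noise()                 # positively keyed items
pk2 = lam * trait + noise()
nk1 = lam * trait + gam * method + noise()  # NK items absorb method variance
nk2 = lam * trait + gam * method + noise()

r_pk = np.corrcoef(pk1, pk2)[0, 1]
r_nk = np.corrcoef(nk1, nk2)[0, 1]
print(f"PK-PK r = {r_pk:.2f},  NK-NK r = {r_nk:.2f}")
```

The surplus correlation among the NK items is exactly the kind of method variance that, if left unmodeled, contaminates the single dimension of interest.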

Lessons Learned

Keying and Wording

My study enhances our understanding of the method effects of keying and wording by offering a fresh perspective on them as a quantitative approach in evaluation. It provides detailed explanations of item keying and item wording, distinguishing the two concepts more precisely than prior research. By applying several SEMs, I also contribute to the discussion on method factor identification, deepening our understanding of the effects of keying methods (NK and PK) in psychometric assessments, which previous research has either obscured or left unaddressed.

Furthermore, my study explores the differences between keying and wording, examining their effects and assessing the reliability of keying and wording method factors. By evaluating the factorial structure, testing dimensionality, and calculating measurement parameter estimates, my findings aim to bridge the gap in detecting method factors associated with keying and wording, particularly NKNW and PKPW. Ultimately, my findings contribute to evaluating and validating the impact of NKNW and PKPW on the dimensionality of a latent structure of interest using self-report psychometric data.

The findings draw attention to keying and wording methods in self-report psychometric instruments when quantitative methods are used in program evaluation. As evaluators, when creating or reviewing surveys for evaluations, we should rigorously select and test evaluation instruments by assessing keying and wording method factors and calculating measurement parameter estimates. This helps ensure that the chosen evaluation tools are reliable and of high quality.
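As one accessible starting point for such checks, evaluators can estimate internal-consistency reliability after reverse-scoring. The sketch below computes Cronbach's alpha with NumPy (my own illustration with hypothetical responses; the formula is the standard one, alpha = k/(k-1) × (1 − Σ item variances / total-score variance)):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows = respondents, cols = items),
# already reverse-scored so all items point in the same direction.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

A simple coefficient like this is no substitute for the SEM-based assessment of method factors described above, but it is a quick first screen when reviewing an instrument.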


The American Evaluation Association is hosting Quantitative Methods: Theory and Design TIG Week. All contributions this week to AEA365 come from our Quant TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
