
American Journal of Evaluation Week: Meet AJE’s Experimental Methodology Section Editor by Carl Westine

Hello! My name is Carl Westine and I am an associate professor of Educational Research, Measurement, and Evaluation at the University of North Carolina at Charlotte. I'm also the new Section Editor for Experimental Methodology with the American Journal of Evaluation (AJE). Today I want to call your attention to a refreshed scope for this section and to encourage you to submit manuscripts that advance evaluation theory, methods, and practice in the area of experimental methodology.

The Experimental Methodology section was originally conceptualized under the leadership of George Julnes and has been led by Laura Peck since 2020. In founding the section (see AJE Volume 41, Issue 4), Julnes pointed to four conditions (i.e., values) that support the appropriate and effective use of experimental evaluations: potential information value, legal and ethical value, practical value, and portfolio value. In leading this section, my aim is to continue publishing articles that deepen our understanding of these conditions and strengthen evaluators' ability to address threats to these values in practice. I envision contributions coming primarily through articles that directly advance the design and analysis of experimental methods (where participants are randomly assigned to treatment and comparison conditions), but I also want the section to be inclusive of other strong quasi-experimental designs (for example, those recognized by the What Works Clearinghouse) that lead to credible evidence. Because these designs also prioritize causal inference, I see practical value in research on them too: it relates to, and can inform, the practice of randomized experiments.

Evaluations that use an experimental design (with randomization of treatment and control units) are distinctive and involve their own methods of design, analysis, and practice. The Experimental Methodology section provides a forum for scholarly discussion of these methods, drawing both on evaluations that employ these or similar designs and on methodological and empirical research exploring designs that yield causal inference. Examples of issues relevant to evaluation design include sample size and the design's power to detect effects, the costs and optimal allocation of resources associated with these designs, appropriate units of randomization and analysis (and the implications of sample clustering), and the external validity of experimentally designed evaluations. Experiments' distinctive analytic issues include impact estimation and accurate standard error computation, within-study comparisons (under what conditions do various quasi-experiments reproduce experiments' results?), and strategies for getting inside the black box (including analyses of mediators using experimental data). Issues relevant to evaluation practice include integrating randomization into program operations, the opportunities and challenges of implementing multiple-arm or multi-site experiments, strategies for addressing disruptions or pushback that arise during experimental evaluations, and implementing low-cost experiments.

Check it Out!

The Experimental Methodology section aims to appear in one to two issues each year. Manuscripts submitted to this section are typically 20 to 30 pages. Our most recent section appeared in 2023 (see Volume 44, Issue 1); please check it out for examples of some great work in this area. That issue tackled research on planning better designs through informed power analyses.


The American Evaluation Association is hosting American Journal of Evaluation (AJE) week. All posts this week are contributed by evaluators who work for AJE. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association and/or any/all contributors to this site.
