
PD Presenters Week: Mike Trevisan and Tamara Walser on Evaluability Assessment

Hello from Mike Trevisan and Tamara Walser! Mike is Dean of the College of Education at Washington State University, and Tamara is Director of Assessment and Evaluation in the Watson College of Education at the University of North Carolina Wilmington. We’ve published, presented, and conducted workshops on evaluability assessment and are excited about our pre-conference workshop at AEA 2014!

Evaluability assessment (EA) got its start in the 1970s as a pre-evaluation activity to determine the readiness of a program for outcome evaluation. Since then, it has evolved into much more and is currently experiencing a resurgence in use across disciplines and around the globe.

We define EA as the systematic investigation of program characteristics, context, activities, processes, implementation, outcomes, and logic to determine:

  • The extent to which the theory of how the program is intended to work aligns with the program as it is implemented and perceived in the field;
  • The plausibility that the program will yield positive results as currently conceived and implemented; and
  • The feasibility of and best approaches for further evaluation of the program.

EA results lead to decisions about the feasibility of and best approaches for further evaluation and can provide information to fill in gaps between program theory and reality—to increase program plausibility and effectiveness.

Lessons Learned: The following are some things we and others have learned about the uses and benefits of EA. EA can:

  • Foster interest in the program and program evaluation.
  • Result in more accurate and meaningful program theory.
  • Support the use of further evaluation.
  • Build evaluation capacity.
  • Foster understanding of program culture and context.
  • Be used for program development, formative evaluation, developmental evaluation, and as a precursor to summative evaluation.
  • Be particularly useful for multi-site programs.
  • Foster understanding of program complexity.
  • Improve the cost-benefit ratio of evaluation work.
  • Serve as a precursor to a variety of evaluation approaches—it’s not exclusively tied to quantitative outcome evaluation.

Rad Resources:

Our book situates EA in the context of current EA and evaluation theory and practice and focuses on the “how-to” of conducting quality EA.

An article by Leviton, Kettel Khan, Rog, Dawkins, and Cotton describes how EA can be used to translate research into practice and to translate practice into research.

An article by Thurston and Potvin introduces the concept of “ongoing participatory EA” as part of program implementation and management.

An issue of New Directions for Evaluation focuses on the Systematic Screening Method, which incorporates EA for identifying promising practices.

A report by Davies describes the use of EA in international development evaluation in a variety of contexts.

Want to learn more? Register for Evaluability Assessment: What, Why and How at Evaluation 2014.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 
