
Chicagoland Evaluation Association Week: Strengthening Evaluation Practice by Attending to Evaluative Criteria by Ceily Moore, Mikayla Strasser, and Rebecca Teasdale

Welcome to Chicagoland Evaluation Association Week on AEA365! This week’s postings reflect the diversity of our Local Affiliate members and their work, using a lens of culturally responsive evaluation with various types of communities. We are excited to share some of our projects along with lessons learned, hot tips, and rad resources.

Casey Solomon-Filer, Vice President, and Asma Ali, Past President


Pictured: Rebecca Teasdale, Mikayla Strasser, and Ceily Moore

Hello! We are Ceily Moore and Mikayla Strasser, educational psychology doctoral students, and Rebecca Teasdale, assistant professor of educational psychology, at the University of Illinois at Chicago. Our research group has developed a model of evaluative criteria that can be used to make definitions of success more explicit in evaluations and bring a broader range of voices and values into the evaluation process.

To know whether a program is high quality, evaluators need a yardstick that shows what quality or success looks like. Evaluative criteria serve as that yardstick. Evaluators use criteria to represent definitions of success and to determine what questions to ask, what data to collect, and what conclusions to draw. For example, evaluators may focus on a program’s outcomes to see whether it had the desired effects. The same evaluation may also investigate whether stakeholders had a satisfactory experience or whether the program matched participants’ needs. In that case, evaluators would also be using criteria that capture participants’ experience with the program or its relevance to them.

Rad Resource 

While criteria underpin all aspects of the evaluation, they aren’t always explicit. We recently published a framework that outlines the types of criteria available to evaluators and potential sources of those criteria (e.g., program leaders, funders, participants, etc.). We are continuing to refine that framework in our research group. 

Hot Tips

Through refining and applying the framework, we have identified recommendations that can guide evaluation practice: 

  1. Look beyond desired outcomes. Multiple types of criteria can be used to define success for a program. It is our job as evaluators to ensure we are including criteria that best align with the focus of the program and stakeholders’ varying values and represent a holistic picture of program success.
  2. Broaden the sources of criteria. Evaluators often look to program leaders and funders to determine criteria for the evaluation. However, excluding the perspectives of participants and program staff means we also exclude criteria that represent their values. As evaluators, we should ask ourselves, “Whose criteria are being represented in this evaluation? Why? Who is missing?” 
  3. Consider implications for data collection methods. After deciding how to define success for a given program, evaluators have to decide how best to collect data on those criteria. Through our research, we have found that different types of criteria are associated with different data collection methods. For example, evaluators often use surveys or questionnaires to examine outcomes and to investigate stakeholders’ experience with the program. Evaluations that examine stakeholders’ experience often also use interviews and focus groups to collect data, while those that focus on outcomes often analyze documents or secondary data. Incorporating different definitions of success, then, may require different data collection methods.

More Rad Resources

We believe evaluation can be strengthened by attending to evaluative criteria. To learn more about criteria and values, we recommend:


The American Evaluation Association is hosting Chicagoland Evaluation Association (CEA) Affiliate Week. The contributions all this week to AEA365 come from CEA members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
