We are Lori Wingate and Daniela Schroeter from Western Michigan University (WMU). Until recently we both worked at WMU’s Evaluation Center (Daniela is now in the School of Public Affairs and Administration), where we had the opportunity to work on evaluations, conduct research on evaluation, and teach evaluation to broad audiences in many different contexts. We noticed there was a dearth of guidance on evaluation questions. We also observed many problems related to evaluation questions, such as evaluations being conducted without explicit evaluation questions; evaluation questions being asked, but not answered; and evaluation questions being misunderstood as item-level questions. Since we were co-teaching a workshop on evaluation questions, the time was ripe for creating a checklist on evaluation questions. That was almost three years ago, and we are finally ready to share our checklist with the world.
Our checklist identifies criteria for effective and appropriate evaluation questions. We posit that evaluation questions should be evaluative, pertinent, reasonable, answerable, and specific; and that a set of evaluation questions should be complete. We operationalize each of these constructs in terms of how they apply to evaluation questions. For example, for questions to be pertinent, they should be directly relevant to the program’s design, purpose, activities, or outcomes; the purpose of the evaluation; and what evaluation users need to find out from the evaluation. For additional clarity, we also discuss what evaluation questions should not be. For example, we contrast pertinent evaluation questions with those that are peripheral (i.e., about minor, irrelevant, or superficial aspects of the program or stakeholder interests).
Get the checklist here.
Rad Resources: Two alternative evaluation questions checklists are available to the public:
- Maureen Wilce and Sarah Gill’s (2013) Good Evaluation Questions: A Checklist to Help Focus Your Evaluation. We didn’t know this existed until we had already created ours back in 2013, and we find our two checklists to be complementary rather than competitive. For example, this CDC checklist attends to the role of stakeholders in developing questions, whereas ours focuses strictly on the quality of the questions themselves, regardless of the process used to develop them.
- USAID’s (undated) Checklist for Defining Evaluation Questions provides guidance for identifying, prioritizing, and writing evaluation questions by listing potential sources, raising questions regarding the importance of questions, and offering tips for question formulation.
Hot Tip: Our Evaluation Questions Checklist for Program Evaluation identifies six criteria for appropriate and effective evaluation questions. For those who want to learn more, the checklist includes an annotated bibliography of select resources on the topic.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
3 thoughts on “Lori Wingate and Daniela Schroeter on Introducing the Evaluation Questions Checklist for Program Evaluation”
Hello. The link to the checklist doesn’t work.
Can you offer any advice on helping clients and others understand the difference between evaluation questions and item-level questions, i.e., the questions one then asks in a survey or interview? With surprising frequency, clients and even other evaluators assume we’ll just ask the evaluation questions directly in our interviews.
Very helpful information on guides to asking useful evaluation questions, which is crucial for getting useful results.
Here’s a tip sheet with a flowchart for asking useful evaluation questions that we developed at Meaningful Evidence, LLC. The flowchart considers:
1) Is it a question that funders/stakeholders/staff need answered?
2) Does your program theory/logic model support your hypothesis?
3) Is it an unanswered question that people in your field say needs answering?
4) Can we answer the question with available data, time, and budget?