Hi! I’m Sheila B. Robinson, AEA365’s Lead Curator. I’m also an educator with Greece Central School District, and the University of Rochester’s Warner School of Education.
Today, I’ll share lessons learned about evaluation planning and a fabulous way to get ready for summer (learning about evaluation, of course!).
Rudyard Kipling wrote, "I keep six honest serving-men / (They taught me all I knew); / Their names are What and Why and When / And How and Where and Who."
The “5 Ws and an H” have been used by journalists, researchers, police investigators, and teachers (among many others, I’m sure) to understand and analyze a process, problem, or project. Evaluators can use them to frame evaluation planning as well.
Lesson Learned: Use these questions to create an outline of an evaluation plan:
What: What is your evaluand and what is the focus of the evaluation? What aspects of the program (or policy) will and will NOT be evaluated at this time? What programmatic (or policy) decisions might be made based on these evaluation results? What evaluation approach(es) will be used?
Why: Why is the evaluation being conducted? Why now?
When: When will the evaluation begin and end? When will data be collected? When are interim and final reports (or other deliverables) due?
How: How will the evaluation be conducted? How will data be collected and analyzed? How will reports (or other deliverables) be formatted (e.g., formal reports, slides, podcasts), and how will these (and other information) be disseminated?
Where: Where is the program located (not only geographic location, but also where in terms of contexts – political, social, economic, etc.)?
Who: Who is the program’s target population? Who are your clients, stakeholders, and audience? Who will be part of the evaluation team? Who will locate or develop measurement instruments? Who will provide data? Who will collect and analyze data and prepare deliverables? Who are the primary intended users of the evaluation? Who will potentially make decisions based on these evaluation results?
Can you think of other questions? I'm sure there are many more! Please add them in the comments.
Hot Tip: Register for the American Evaluation Association’s Summer Evaluation Institute June 2-5, 2013 in Atlanta, GA to learn more about 20+ evaluation-related topics.
Hot Tip: Want to learn more about evaluation planning? Take my Summer Institute course, "It's Not the Plan, It's the Planning" (read the description here).
Rad Resource: Susan Kistler highlighted a few institute offerings here.
Rad Resource: One course that sounds especially exciting to me is "Every Picture Tells a Story: Flow Charts, Logic Models, LogFrames, Etc. — What They Are and When to Use Them," taught by Thomas Chapel, Chief Evaluation Officer at the Centers for Disease Control and Prevention. Read the description here.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
In the group of Who questions, you lead off with "Who is the program's target population?" Although that is one question to answer, it's not the most important one, which you don't mention: that question is "Who is impacted?", not "Who did the program mean to impact?" Finding side effects is a moral imperative for evaluators, not just good practice.
Thank you for your comment Michael. Your questions are indeed important. This may just be semantics, but I think of “Who is (actually) impacted by the program” and “Who did the program mean to impact” as evaluation questions. I think of developing evaluation questions as part of the evaluation planning process, and didn’t get so far in this post as to include them here. In continuing this list of evaluation planning questions, I would add, “WHAT are the evaluation questions?” and then add those two “WHO” questions under that category.
I think that we should almost always be asking whether a program is reaching the intended population and, if it is not, investigating why that is (e.g., potential barriers to access). A thorough understanding of who is and who is not impacted, and in what ways, is critical to program evaluation.
Thanks for this post. Readers may also be interested in the upcoming Coffee Break webinar series from BetterEvaluation and AEA which will introduce the Rainbow Framework – another useful tool for planning evaluations.
Register here: http://comm.eval.org/coffee_break_webinars/CoffeeBreak/BetterEvalSeries
More info on the framework here: http://betterevaluation.org/plan
cheers,
Simon Hearn
Thank you Simon! I’m thoroughly enjoying Better Evaluation’s tools and resources and hope to participate in the webinar series.