Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Hi! I am Pam Lilleston, Assistant Director of the Office of Research, Evaluation and Reporting at New Jersey’s Department of Children and Families, where I oversee evaluations conducted by our own group of internal evaluators, as well as outside research institutions. In this role, I have learned a lot about how evaluators can make findings actionable for program and policy-focused decision-makers. Here are some hot tips to help ensure the results from your next evaluation have a real-world impact:
Hot Tip #1: Know who the stakeholders are and answer their evaluation questions
Evaluation findings are only useful if they answer questions that are meaningful to the people who have a stake in the program being evaluated. Make sure you identify early on who the key stakeholders are in the evaluation and the questions they need answered. Think about the people being served by the program as well as senior leaders within the implementing agencies.
Hot Tip #2: Align evaluation and decision-making timelines
Ask your program partners what decisions the evaluation’s findings should inform and when. Often, data are needed to make important decisions about a program, such as continued funding or expansion, before an evaluation is complete. Narrow the scope of the evaluation so that data can be collected, analyzed, and shared within the necessary timeframes, or be open to providing interim findings to decision-makers even when they feel “rough” or incomplete. Some information is better than none when a big decision needs to be made.
Hot Tip #3: Set your evaluation up to answer “why?”
Everyone wants to know if their program achieved its intended outcomes. However, when evaluation results are unexpected or even contradict each other, it’s important that decision-makers understand why. Build measurement of processes, short-term outcomes and intermediate outcomes into your evaluation from the beginning. Include a qualitative component to explore implementation of the program and how contextual factors may have influenced the outcomes.
Hot Tip #4: Co-create recommendations
Don’t be afraid to engage evaluation stakeholders in interpreting data and developing recommendations based on the findings. Recommendations shouldn’t be created in a vacuum: they should be relevant to the political, programmatic and policy context, and feasible for stakeholders to implement. These same stakeholders can give you the best lens into what’s possible.
Hot Tip #5: Understand the needs of the audience
Identify who the audiences for your evaluation are and create deliverables that meet their needs. Evaluators can make important findings inaccessible by burying them in long, jargony reports. Consider research briefs, short presentations, updates during regular meetings and blogs delivered in plain language as key deliverables of your evaluation project.
Hot Tip #6: Ask for a seat at the table
Don’t let the submission of your evaluation report mark the end of your relationship with the program stakeholders. Evaluators can play a valuable role in helping decision-makers understand evaluation findings, communicating nuanced results, and providing input into how results might inform program or policy-related considerations. Find out where decisions are being made about the program you evaluated and ask for a seat at the table.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
1 thought on “How to Make Evaluation Results Actionable for Decision-Makers by Pam Lilleston”
Thank you for sharing how evaluators can make findings actionable for program and policy-focused decision-makers.
I am currently enrolled in a course titled Program Inquiry and Evaluation (Queen’s University). The tips that you suggest seem realistic and impactful for stakeholders. Through my experiences as an Education Administrator, I certainly appreciate research briefs, short presentations, or updates during regular meetings over a long, jargon-filled report. Sifting through project findings can be time-consuming and complicated when results are unclear.
I particularly appreciated your suggestion to narrow the scope of the evaluation so that data can be collected, analyzed, and shared within the necessary timeframes, and your point that sharing some information is better than none when a big decision needs to be made. On various occasions, receiving some information was better than none (for example, behaviour management strategies from an occupational therapist, even though the final report was not complete). Implementing a set timeframe supports accountability and focus for program evaluators and stakeholders.
As I learn more about program inquiry and evaluation, it appears there is substantial research on program evaluation itself and perhaps not as much on effectively sharing results. Weiss argues that even when program staff know about findings, understand them, and see their implications for improving the program, many factors can interfere with their using those results for program improvement (Weiss, 1998). I believe that for a program evaluation to be successful and complete, it should be ‘planned, conducted, and reported in ways that encourage follow-through by stakeholders, so that the likelihood that the evaluation will be used is increased’ (Sanders, R.).
When inviting evaluation stakeholders to interpret data and develop recommendations based on the findings, I wonder about the potential for bias or for influencing evaluation results. I agree that stakeholders can give the best lens into what’s possible. How do we ensure results are not skewed during the review process (once the evaluation is complete)?
Thank you for your time,
The Joint Committee on Standards for Educational Evaluation (James R. Sanders, Chair, Ed.). The program evaluation standards (2nd ed.). Thousand Oaks, CA: Sage Publications, pp. 23-24, 63, 81-82, 125-126.
Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19, 21-33.