Hello everyone! Yvonne M. Watson here. I’m a long-time member (almost 15 years) of AEA and a doctoral student at The George Washington University’s Trachtenberg School of Public Policy and Public Administration. I’d like to share a few brief lessons learned on the topic of Evaluation Users and Evaluation Use, one of four focus areas for the 2017 Conference theme Evaluation: From Learning to Action.
Perhaps the greatest thrill of victory and agony of defeat for any evaluator is the use of the evaluation report and findings. Many of the evaluation field’s pioneers, thought leaders, and emerging practitioners have written extensively on this topic. Understanding the many facets of use including evaluation users, uses, barriers and the facilitation of greater use can help evaluators strategically invest their time and resources to ensure the evaluation is designed with the intended use and user in mind. Here are a few things to consider.
Lessons Learned:
Know Your Audience. Understanding the intended user is critical. Evaluation users can include managers and staff responsible for managing and administering federal, state, and local government programs, as well as nonprofit and for-profit organizations. Funders, academic researchers, Congressional members and staff, policy makers, citizens groups, and other evaluators are also intended users of evaluations.
Understand How the Evaluation will be Used. Carol Weiss offered the field four categories of use for evaluation findings. Instrumental use involves using evaluation findings in decision making to influence a specific program or, more broadly, a policy. Conceptual/enlightenment use occurs when evaluation findings generate new ideas and concepts and promote and foster learning about the program. External influence on other institutions and organizations involves the use of evaluation results by entities outside of the organization that commissioned the evaluation. Political use occurs when evaluation findings are used symbolically or politically to “justify preexisting preferences and actions.” Michael Quinn Patton later introduced the use of evaluation findings for accountability, monitoring, and development.
Explore the Potential Barriers to Use. Barriers might limit the use of the evaluation: timeliness (results not available when needed to inform decision-making); insufficient resources (lack of resources to implement recommendations); or the absence of a learning culture (culture of continuous learning and program improvement).
Consider Strategies to Facilitate Use. Design your evaluation with the intended use and user in mind. Michael Quinn Patton introduced the field to Utilization-Focused Evaluation, which emphasizes evaluation design that facilitates use by the intended users. Lastly, clearly communicate evaluation results. Recently, data visualization has emerged as a strategy to promote evaluation use by communicating the research and findings in a way that helps evaluation users understand the results and make decisions.
Rad Resources:
Have We Learned Anything New About the Use of Evaluation?, Carol Weiss
Utilization-Focused Evaluation, Michael Quinn Patton
AEA Data-Visualization and Reporting Topical Interest Group
We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to aea365@eval.org.
Hello Yvonne! I am a Professional Master of Education student at Queen’s University in Ontario, and one of our assignments was to find an article on AEA 365 that we found interesting and to make a connection with the author. I landed upon your article because we have just done a module on evaluation uses and I wanted to learn more about the topic, as I think it is extremely important. In the various other articles I read, users were touched upon; however, I really like how they were at the forefront of your article. I also love how you brought up the fact that the more we know about the users and what the intended use is, the better we can develop an evaluation model that is more suitable. I often think that this is a piece that is not considered to the extent that it should be. If we are creating an evaluation for someone, we should include them in the process, since they are the ones who are going to be implementing the change!

I think a challenge is to ensure that, as evaluators, you are including all necessary stakeholders in the process and providing those involved with the necessary information. Communication can often be our biggest challenge, which is why I am glad it was a lesson you had learned. For one of our readings, I too read the Weiss article where the four categories of use for evaluation findings were discussed. I think that they all have merit and could assist with evaluation use and with the reason we are evaluating in the first place. You brought up many great barriers when it comes to evaluation use and offered some great strategies. A question I have is whether you would recommend that evaluators check back in after the results have been presented, to see if there are additional questions and whether the results are actually being used for their intended purpose? Thank you so much for the great article, and I look forward to reading more of your submissions.
Candice Brown
Article: LAWG Week: Learning about Evaluation Users and Uses by Yvonne M. Watson