Hello everyone! Yvonne M. Watson here. I’m a long-time member (almost 15 years) of AEA and a doctoral student at The George Washington University’s Trachtenberg School of Public Policy and Public Administration. I’d like to share a few brief lessons learned on the topic of Evaluation Users and Evaluation Use, one of four focus areas for the 2017 Conference theme Evaluation: From Learning to Action.
Perhaps the greatest thrill of victory and agony of defeat for any evaluator is the use of the evaluation report and findings. Many of the evaluation field’s pioneers, thought leaders, and emerging practitioners have written extensively on this topic. Understanding the many facets of use including evaluation users, uses, barriers and the facilitation of greater use can help evaluators strategically invest their time and resources to ensure the evaluation is designed with the intended use and user in mind. Here are a few things to consider.
Know Your Audience. Understanding the intended user is critical. Evaluation users can include managers and staff responsible for administering federal, state, and local government programs, as well as non-profit and for-profit organizations. Funders, academic researchers, Congressional members and staff, policy makers, citizens' groups, and other evaluators are also intended users of evaluations.
Understand How the Evaluation Will Be Used. Carol Weiss offered the field four categories of use for evaluation findings. Instrumental use involves using evaluation findings in decision making to influence a specific program or a policy more broadly. Findings that generate new ideas and concepts and promote and foster learning about the program are considered conceptual/enlightenment use. External influence on other institutions and organizations involves the use of evaluation results by entities outside the organization that commissioned the evaluation. Findings used symbolically or politically to “justify preexisting preferences and actions” are considered political use. Michael Quinn Patton later introduced the use of evaluation findings for accountability, monitoring, and development.
Explore the Potential Barriers to Use. Several barriers might limit the use of an evaluation: timeliness (results not available when needed to inform decision making); insufficient resources (lack of resources to implement recommendations); or the absence of a culture of continuous learning and program improvement.
Consider Strategies to Facilitate Use. Design your evaluation with the intended use and user in mind. Michael Quinn Patton introduced the field to Utilization-Focused Evaluation, which emphasizes evaluation design that facilitates use by the intended users. Lastly, clearly communicate evaluation results. Data visualization has recently emerged as a strategy to promote evaluation use by communicating research and findings in a way that helps evaluation users understand results and make decisions.
Have We Learned Anything New About the Use of Evaluation?, Carol Weiss
Utilization-Focused Evaluation, Michael Quinn Patton
We’re looking forward to November and the Evaluation 2017 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to contribute to aea365? Review the contribution guidelines and send your draft post to email@example.com.