Hello! I’m Christy Metzler, Director, Program Evaluation for NeighborWorks America®, a Congressionally chartered community development intermediary. As an internal evaluator, I often work closely with program staff to generate actionable learning about our programs and services. I find that more meaningful participation of the program staff throughout the evaluation process promotes richer strategic conversations, yields actionable and useful recommendations, and ultimately contributes to organizational effectiveness and impact.
Hot Tip #1: Connect to business planning. Work with program staff to identify where they are in their business planning cycle and be intentional in connecting evaluation findings to the business plan. Participatory sense-making sessions can be a natural launch pad for discussing program strategy and business plan priorities. Allow the time and space for these discussions.
Hot Tip #2: Make it inclusive. In designing evaluation efforts, find ways to include program staff across multiple levels of the organizational structure, from senior vice president to line staff. Each position has a unique perspective to offer and can expose challenges that may not be evident to others.
Hot Tip #3: Embed program staff. Where possible, recruit a program operations staff member to play a key role in data collection or other evaluation activities. Not only does involvement in the evaluation effort build evaluation capacity, but it also lends greater credibility to the effort, increases ownership of the process, and can better support program staff in making program improvements after the evaluation is completed.
Lesson Learned: Remain flexible and responsive to program staff. In a recent evaluation effort, what started out as an implementation review expanded, at the staff’s suggestion, to include a review of the business data in regular use and the strategic conversations taking place, in order to identify knowledge gaps and barriers to business plan implementation. As a result, the evaluation was more relevant and useful for business planning efforts.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
5 thoughts on “Cultivating Program Staff’s Inner Action Hero: Participatory Strategies that Promote Evaluation Use by Christy Metzler”
I want to thank you for posting this article; your three tips and lesson learned have helped solidify my knowledge of participatory strategies for promoting evaluation use.
Your tip on connecting to the business planning cycle was particularly insightful. Once I read it, it became obvious that this is something that should be incorporated into all evaluations, yet it seems to be consistently overlooked by many evaluators.
I can appreciate your tip on making the evaluation inclusive, as insights definitely differ between front-line staff and senior management. I would recommend holding separate meetings for each group, since many front-line staff will struggle to speak negatively about a program when senior management is in the room.
With respect to embedding staff in the process, this is another great idea that would definitely increase evaluation use! I would add the further consideration that the staff being embedded should be high-performing staff who are invested in the program and want to see it succeed. This may be difficult, as most programs would find it hard to release high-performing staff from their program responsibilities in order to support the evaluation.
I am a student at Queen’s University in the Professional Masters of Education program. Currently I am taking “Program Inquiry and Evaluation” and have spent the past few weeks learning about different aspects of program evaluation, including designing my own program evaluation for StrongStart BC.
Most recently we have been looking at the importance of evaluation use and how to support it in our own program evaluation designs. Prior to reading your post, my only exposure to evaluation use, and the possible dilemmas associated with it, was through course readings. The importance of including stakeholders in the evaluation process resonated strongly with me, as this was a common theme throughout many of the assigned articles. It is something I focused on when designing my program evaluation, and it was encouraging to see your thoughts on participatory strategies that promote evaluation use.
Your post provided further detail and examples that extended my thinking about how to be inclusive in the evaluation process. In my program evaluation design, I focused heavily on collaboration between the evaluator and the primary intended user; your Hot Tip #2, to “include program staff across multiple levels of the organizational structure,” is excellent advice because it draws in different perspectives and can expose challenges unseen at other levels. Being more inclusive about who is involved in the program evaluation provides a more well-rounded analysis and understanding of how the program is doing.
Additionally, I feel Hot Tip #3 is equally important. Embedding program staff in data collection and other evaluation activities gives staff a purpose in the evaluation, which helps them take ownership when it comes to evaluation use. You also mention that this lends greater credibility, which is an excellent point: the evaluation process then includes those who use the program, not simply an outsider who may be unfamiliar with its ins and outs. This was something I focused on in my program evaluation design prior to reading your post, by involving the primary intended user in data collection, so it is reassuring that you also see value in this approach.
Your post has confirmed the impact collaborative program evaluation can have on evaluation use, and you provided excellent examples that are easy to understand, simple to incorporate, and useful for any program evaluation. I will definitely be using your Hot Tips.
I am currently practicing evaluation for a course I am taking and chose to examine a program at the college where I work. The program’s success depends on support from all faculty.
The program’s goal is to provide added writing support for international students at the postsecondary level. Prior to the program, a small focus group of 8-10 instructors was held to help determine their needs and their students’ needs. However, 8-10 faculty members are a small fraction of the full faculty. The program’s success depends in part on postsecondary faculty suggesting the program to students who may be struggling as ESL learners, and completing a specific needs form for students to bring to a one-to-one writing session. My desire is to build collaboration into my evaluation approach, but that becomes complicated with a very large audience. Other than presentations and various means of communication, what are efficient ways to create buy-in, investment, and empowerment with a large group of stakeholders? Although the faculty are not direct beneficiaries, they are indirect ones.
I believe that collaboration, and even communication, across departments and educational disciplines, as in my example, not only creates a sense of ownership and diminishes feelings of departmental isolation or interdisciplinary separation, but can also strengthen organizational culture.
Thank you for your blog post about collaboration in evaluation use. I am currently doing an evaluation project for an evaluation course I am taking, and the project I chose is an internal one at the postsecondary institution where I am employed. The project depends on all postsecondary faculty suggesting and encouraging international students to use the new Writing Support Program designed specifically for foreign students.
Without buy-in from faculty, the program will not be as successful. They are indirect beneficiaries of the intervention (fewer incidences of misunderstood assignments, extra English assistance, etc.), but also key stakeholders. The dilemma I have encountered is how to involve faculty from across many disciplines and departments in the evaluation process beyond various communications. The program was born from a focus group of approximately 10 postsecondary faculty who tend to have higher numbers of international students, but this is just a small fraction of the teachers. What do you suggest as ways of incorporating and empowering large numbers of staff across different departments?
After reading about different evaluative approaches, I do believe that, whenever possible, collaboration creates ownership, investment, and motivation. It removes the perception of the evaluation as something separate or distinct from the program. The problem arises when the organization is expansive and the stakeholders numerous. Any tips on involving large numbers of staff in the evaluation process while maintaining evaluation integrity?
I appreciated your blog post, which offered practical strategies for ensuring evaluation use. I am currently enrolled in a course entitled “Program Inquiry and Evaluation” as part of a Master’s program. Before taking the course, I had never considered utilization a central component of program evaluation. I always thought that the role of evaluators was limited to gathering, analyzing, and interpreting data; in my mind, it was solely up to the organizations hiring the evaluators to determine how an evaluation’s findings should be utilized. My preconceptions were misguided; Patton (2011) suggests that utilizing the data collected to encourage and support change should be the main objective of the evaluation.
What I really appreciated about your post was that it offered examples of what promoting utilization looks like in practice. Engaging users in the development of the evaluation is an important step that evaluators can take to promote utilization. You noted that it is helpful to find ways to ensure that staff members from multiple levels of the organizational structure contribute to the design of the evaluation. Not only does this enrich the evaluation framework (as we are given a more holistic picture of the challenges we may encounter), it can help protect against participants subverting the evaluation process, as it gives them a personal stake in the evaluation’s success.
Your post also led me to further investigate the usefulness of participatory sense-making sessions in promoting utilization. By allowing stakeholders to participate in these sessions, we can better encourage them to buy into the evaluative process, as they will gain an appreciation of how the evaluation will be used to develop their community (Patton, 2011). I have researched a few approaches to facilitating participatory sense-making sessions and have noted that they can be a helpful tool in designing program evaluations.
I also appreciated the strategy you suggested for maintaining stakeholders’ engagement throughout the evaluation process. By enlisting participants’ help with data gathering and other evaluation activities, stakeholders are not only more engaged as the evaluation is being conducted but are also given an avenue to communicate and collaborate with evaluators during the evaluation. Stakeholders can communicate challenges in implementation, areas for improvement, and strategies for accurately directing the program toward its intended goals. As you mentioned, this will give stakeholders an increased sense of ownership in the evaluation process and can support utilization after the evaluation has been completed.
Thanks for all of the suggestions you offered for increasing stakeholder participation throughout the evaluation process. It was helpful to get some practical suggestions about how use can be cultivated and encouraged throughout the process of evaluation.
Patton, M. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: The Guilford Press.