
Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Fueling Evaluation Momentum Across Multiple Sites with Limited Funding by Heidi Kahle, Tori Pilman, Kirsten Albers, and Jenica Reed


Hi, we are Heidi Kahle, Tori Pilman, Kirsten Albers, and Jenica Reed from Deloitte’s Evaluation and Research for Action Center of Excellence. As evaluation consultants, we work with federal clients to design and implement evaluations and share learnings to inform future action in the field. While we often evaluate across funded programs, funding is at times fragmented or lacking, which can greatly impact the ability to continue evaluation efforts.

When sites are implementing a new intervention or establishing an evidence base, building momentum and engagement for evaluation across sites can be critical to sustaining their efforts, especially when funding is limited. Interventions can take years to fully implement, limiting sites’ ability to evaluate impact and contribute to that evidence base. This is where a formalized community of practice (CoP) or network of sites can play a role to help fill the gap. By creating a research and evaluation-focused working group within a CoP across sites, we have seen increased momentum and collaboration to share challenges, identify solutions, and build the evidence base.  

Lessons Learned

Drawing upon our experience supporting federal health agency programs, we have identified potential strategies to build momentum across sites in a virtual environment, even when there is limited funding available: 

Build participant ownership of the process. We use a participant-driven process to amplify the voices of all sites, build collective agreement on next steps, and enable ownership of the process across sites. Since sites have no common funding mechanism or engagement requirement, we have established structures that allow for meaningful engagement while relying on sites’ volunteerism. This can include low-effort ways of building ownership, such as presenting to peers at virtual meetings or serving in a low-lift leadership role in the CoP.

Create dedicated space and time. We have created space for consistent, virtual meetings of a formalized research and evaluation working group. This has provided the accountability and community-building needed to sustain efforts over time, since regularly convening and engaging can be crucial to sustaining the work. We also provide open discussion focused on peer-to-peer sharing, such as breakout rooms with discussion questions or panel Q&A sessions. The World Bank Group CoP Toolkit has great resources on getting a CoP started and hosting interactive, productive gatherings.

Understand the current state. When the group launched, our team first analyzed the current state of research and evaluation for this intervention to identify strengths, gaps, and future areas of focus. We brought the analysis to the larger group to validate findings, share context from individual sites, and gain buy-in. Across the group, common themes emerged on which aspects of evaluation were creating challenges for sites, allowing for collaborative problem-solving and data-driven prioritization. Analyzing the current state early on has also made it easier to engage the group in developing a plan for pursuing research and evaluation moving forward.  

Get creative with ways of providing feedback and validating information. When engaging a diverse set of sites or audiences, we recommend using prioritization exercises, live polls, and other preference-mapping activities in real time. We also advise sharing materials in advance to allow for more actionable and meaningful feedback.

Building a collaborative culture of evaluation across sites with differing priorities and backgrounds is no easy feat, but we encourage you to consider these approaches to help build momentum for evaluation in a larger community, even when funding is limited. We invite those who are interested in learning more from others about sustaining and driving momentum for evaluation across sites in a CoP to comment or share below.  


The American Evaluation Association is hosting the Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week. The CMME TIG encompasses methodologies and tools for designs that address single interventions implemented at multiple sites, multiple interventions implemented at different sites with shared goals, and the qualitative and statistical treatments of data for these designs, including meta-analyses, statistical treatment of nested data, and data reduction of qualitative data. All contributions this week to AEA365 come from our CMME TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
