Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Synthesizing Meaningful Insights Across Recipients for Large Cooperative Agreements by Aundrea Carter, Molly Linabarger, Lauren Toledo, and Dee Dee Wei


Hi! We are Aundrea Carter, Molly Linabarger, Lauren Toledo, and Dee Dee Wei from the Evaluation and Research for Action Center of Excellence at Deloitte Consulting LLP. We collaborate with clients to develop and implement evaluations of large grants with sites across the United States. 

Often, grantees are required to evaluate and report on their progress and outcomes to funders, who use the information to tell the greater story about the effectiveness and impact of program activities. Over the years, our team of evaluators has had the privilege of supporting funders in these types of cross-site process and outcome evaluations, and we've learned a few lessons along the way.

Our goals with these types of analyses include reviewing and comparing information across funding recipients to:

  • Document progress towards cooperative agreement objectives;
  • Identify facilitators and barriers to program implementation;
  • Recognize program outcomes and accomplishments; and
  • Develop insights into the types of programmatic and evaluation technical assistance needed by funding recipients. 

Funding recipients also benefit from learning about how their grantee counterparts are approaching program implementation, reaching key populations, and facing and overcoming similar challenges and barriers. These insights can be difficult to systematically extract, synthesize, and disseminate in meaningful ways due to the overwhelming amount of information provided by funding recipients and variation in implementation, measurement, and reporting styles. To address these challenges, we have developed a five-step process to facilitate identification of key findings:

Step 1. Work with funders to develop categories of key areas of inquiry to extract from recipient reports. Often, these are based on the reporting guidance provided to recipients but may also be driven by emerging areas of interest. 

Step 2. Develop an extraction form. We often build this in Excel, MAXQDA, or other qualitative analysis software that allows us to categorize and compare similar information across recipients.
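
For teams working in Excel or a scripting environment, here is a minimal sketch of what such an extraction form might look like as a spreadsheet template, built with pandas. The inquiry-area columns are hypothetical examples, not a prescribed set; in practice they would mirror the categories agreed on with the funder in Step 1.

```python
# A minimal sketch of an extraction form as a spreadsheet template.
# Requires pandas and openpyxl (for Excel output).
import pandas as pd

columns = [
    "recipient_id",
    "reporting_period",
    "populations_of_focus",       # hypothetical inquiry area
    "implementation_activities",  # hypothetical inquiry area
    "partnerships",               # hypothetical inquiry area
    "challenges",                 # hypothetical inquiry area
    "outcomes",                   # hypothetical inquiry area
    "extractor_notes",            # free-text notes from the extraction team
]

# One row per recipient per reporting period, filled in during Step 3.
extraction_form = pd.DataFrame(columns=columns)
extraction_form.to_excel("extraction_form.xlsx", index=False)
```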

Step 3. Extract the data. The extraction process is often iterative and requires frequent collaboration and coordination across the extraction team as we compare the different and unstructured ways recipients provide information.

Step 4. Review and summarize the data. In this step, we focus on emerging themes related to key areas of inquiry such as how recipients identify and reach populations of focus, key implementation activities, partnerships, challenges to implementation and data collection/reporting, and short-term, intermediate, and long-term outcomes. 
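
Once extraction is complete, even simple tabulations can surface patterns across recipients. Below is a sketch assuming the hypothetical extraction form above, with the challenges column holding semicolon-delimited theme codes assigned during extraction (an assumed convention, not a required one).

```python
import pandas as pd

# Hypothetical file produced by the Step 2/3 extraction process.
df = pd.read_excel("extraction_form.xlsx")

# Assume 'challenges' holds semicolon-delimited theme codes,
# e.g. "staffing; data collection; partner turnover".
themes = (
    df.assign(challenge=df["challenges"].str.split(";"))
      .explode("challenge")
      .assign(challenge=lambda d: d["challenge"].str.strip())
)

# Count how many distinct recipients reported each challenge theme.
theme_counts = themes.groupby("challenge")["recipient_id"].nunique()
print(theme_counts.sort_values(ascending=False))
```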

Hot Tip

Because recipients are often not required to report on their evaluations in a specific, structured way, it is especially difficult to conduct analyses on program reach and other outcome measures. Instead, we conduct a qualitative assessment on these types of indicators (e.g., the types of populations reached rather than the total number of people reached by the cooperative agreement) and work to triangulate our findings with other primary and secondary data (e.g., recipient-reported performance measures), which are typically standardized across recipients and allow for aggregation. 

Step 5. Report and disseminate findings. Finally comes the task of reporting our findings in a meaningful, easy-to-digest way. We are moving away from long, detailed evaluation reports and turning to visualizations and slide decks that get at the heart of the findings and allow us to disseminate relevant excerpts of information to interested parties, including program outcomes, best practices for implementation, and recommendations for program improvement.
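
As one example of the kind of lightweight visualization that travels well in a slide deck, here is a sketch using matplotlib to chart theme counts like those tabulated in Step 4. The theme names and counts are invented for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical theme counts from a Step 4 tabulation.
themes = ["Staffing", "Data collection", "Partner turnover"]
recipient_counts = [12, 9, 5]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(themes, recipient_counts)
ax.invert_yaxis()  # most frequently reported theme on top
ax.set_xlabel("Number of recipients reporting theme")
ax.set_title("Implementation challenges across recipients")
fig.tight_layout()
fig.savefig("challenge_themes.png", dpi=200)
```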

We’d love to hear from you in the comments about how you have conducted this type of work to synthesize evaluation reports across funding recipients, especially if you’ve explored ways to use GenAI to facilitate extraction and synthesis of unstructured recipient data.


The American Evaluation Association is hosting the Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week. The CMME TIG encompasses methodologies and tools for designs that address single interventions implemented at multiple sites, multiple interventions implemented at different sites with shared goals, and the qualitative and statistical treatments of data for these designs, including meta-analyses, statistical treatment of nested data, and data reduction of qualitative data. All contributions this week to AEA365 come from our CMME TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
