
Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Getting Gritty: Putting in the Work to Collect Quality Data for Multisite Evaluations by Felicia Seibert


Hello! I’m Felicia Seibert, an evaluator and member of the Evaluation and Research for Action Center of Excellence at Deloitte Consulting LLP. I support federal health agencies with data collection, including pulling in large quantities of program implementation data from medical providers nationwide. I work with federal grantees, ensuring that the data systems in place are secure and accessible, and facilitating the collection of clean, complete data. My goals are to bolster the evidence for the programs being implemented and to let the data tell two stories: that of the individuals doing personal work – changing behavior to improve their health outcomes – and that of the medical providers steadfastly facilitating those changes.

Now, I hope to have captured the attention of data-interested readers! To highlight the theme of cluster, multi-site, and multi-level evaluations, I must first pause, take a deep breath, and acknowledge lessons learned from the inherent challenges of collecting data for this type of evaluation.

Lessons Learned

Ensure Data Quality  

How do practitioners collecting and analyzing data from multiple sites ensure that data submitted are clean and complete?  

Data quality can change from one point of collection to the next, even when data are collected multiple times per year in quick succession. Leveraging technology to build out systems that flag erroneous, missing, or incomplete data at the time of submission is a critical first line of defense. Grantees can then work within the system, going metric by metric to update flagged values. The next line of defense is technical assistance (TA): “just in time” resources and toolkits that highlight common data errors and detail step-by-step solutions to improve data quality.
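To make the first line of defense concrete, here is a minimal sketch of submission-time flagging. The metric names, value ranges, and function are hypothetical illustrations, not part of any particular system; a real implementation would draw its validation rules from the program’s data dictionary.

```python
def flag_record(record, rules):
    """Return a list of (metric, issue) flags for one submitted record.

    `rules` maps each expected metric to its allowable (low, high) range;
    metrics that are absent are flagged as missing, and values outside
    their range are flagged as out of range.
    """
    flags = []
    for metric, (low, high) in rules.items():
        value = record.get(metric)
        if value is None:
            flags.append((metric, "missing"))
        elif not (low <= value <= high):
            flags.append((metric, f"out of range ({low}-{high})"))
    return flags

# Hypothetical rules and a submission with one bad value and one gap.
rules = {"participants_enrolled": (0, 10000), "sessions_completed": (0, 52)}
record = {"participants_enrolled": -5}  # sessions_completed not reported
print(flag_record(record, rules))
```

Surfacing flags like these back to the grantee at submission time, metric by metric, lets them correct values inside the system before the data ever reach analysts.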

Plan for Data Management  

Where will data be stored, and how can I access files securely? When should data systems be updated?  

Data management leverages data systems to securely house millions of rows of data and make them accessible to authorized users via secure login. Consider establishing shared protocols with partners that detail data storage, secure file sharing, and a plan for destroying data at a set time after the project ends. Build time into project plans each year for data system updates: upkeep will likely be needed annually as technology advances, and may be facilitated using automation and AI.

At the end of the day, data should be put to work. Ensuring data quality and building out systems to house data securely ultimately facilitate evaluation. The stories of program implementation and its outcomes are only as good as the data that go into them. Laying the groundwork is everything.

Hot Tips

It Takes a Village:  

If your technical skill is building data systems, and your superpower is leveraging cloud-based technologies, connect with practitioners familiar with the pain points of using these systems and work out the kinks before data collection commences.

If your technical skill is statistical analysis and your superpower is drawing insights from quantitative data, connect with skilled qualitative minds to develop a complete evaluation plan, inform the data to be collected, and craft TA resources to continually improve data quality. 

Get Gritty: 

Planning for multisite data collection and analysis requires a certain level of grit. Anticipate challenges, rally colleagues with complementary superpowers, and get deep into the weeds, asking questions like, “How long will it take to build improved data systems?” “Do grantees find collection of certain data points especially burdensome?” “How can we craft data visualizations that produce ‘aha moments?’” There may be opportunities to improve data collection year over year, but aim to tackle tough topics now, and data quality will go from good to better.


The American Evaluation Association is hosting the Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week. The CMME TIG encompasses methodologies and tools for designs that address single interventions implemented at multiple sites, multiple interventions implemented at different sites with shared goals, and the qualitative and statistical treatments of data for these designs, including meta-analyses, statistical treatment of nested data, and data reduction of qualitative data. All contributions this week to AEA365 come from our CMME TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
