
Cluster, Multi-Site, and Multi-Level Evaluation

Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Fueling Evaluation Momentum Across Multiple Sites with Limited Funding by Heidi Kahle, Tori Pilman, Kirsten Albers, and Jenica Reed

Hi, we are Heidi Kahle, Tori Pilman, Kirsten Albers, and Jenica Reed from Deloitte’s Evaluation and Research for Action Center of Excellence. As evaluation consultants, we work with federal clients to design and implement evaluations and share learnings to inform future action in the field. While we often evaluate across funded programs, at times funding …


Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Exploring the Data Utility of Publicly Available Individual-Level Data Sets to Understand Self-Reported Experience with Social Determinants of Health by Michele D Sadler, Hope Gilbert, and Shilpa Londhe

Hi, we are Michele D Sadler and Hope Gilbert from Deloitte’s Evaluation and Research for Action Center of Excellence, along with Shilpa Londhe from New York University. As evaluation consultants, we know that social determinants of health (SDOH) data are critical for identifying and evaluating the scope and magnitude of non-medical experiences that influence health. Most often, researchers use SDOH …


Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Exploring the Potential of NLP in Evaluation: Techniques, Use Cases, and How to Get Started by Juliette Berlin, Amy Shim, and Sarah Bergman

Part 1: Rationale for Using NLP for Evaluation

Hello, we are Juliette Berlin, Amy Shim, and Sarah Bergman, evaluation and data specialists with Deloitte Consulting LLP. With the recent buzz surrounding Generative AI (GenAI), there is increasing interest in how best to leverage emerging technologies to advance evaluation needs. Here, we discuss a fundamental driver …


Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Getting Gritty: Putting in the Work to Collect Quality Data for Multisite Evaluations by Felicia Seibert

Hello! I’m Felicia Seibert, an evaluator and member of the Evaluation and Research for Action Center of Excellence at Deloitte Consulting LLP. I support federal health agencies with data collection, including pulling in large quantities of program implementation data from medical providers nationwide. I work with federal grantees, ensuring that the data systems in place …


Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Synthesizing Meaningful Insights Across Recipients for Large Cooperative Agreements by Aundrea Carter, Molly Linabarger, Lauren Toledo, and Dee Dee Wei

Hi! We are Aundrea Carter, Molly Linabarger, Lauren Toledo, and Dee Dee Wei from the Evaluation and Research for Action Center of Excellence at Deloitte Consulting LLP. We collaborate with clients to develop and implement evaluations of large grants with sites across the United States.  Often, grantees are required to evaluate and report on their …


CMM TIG Week: Google Tools for Multi-site Evaluation by Audrey Rorrer

Hi, I’m Audrey Rorrer and I’m an evaluator for the Center for Education Innovation in the College of Computing and Informatics at the University of North Carolina at Charlotte, where several projects I evaluate operate at multiple locations across the country. Multisite evaluations are loaded with challenges, such as data collection integrity, evaluation training for local project leaders, and the cost of resources. My go-to resource has become Google because it is cost-effective in both efficiency and budget (it’s free). I’ve used it as both a data collection tool and a resource dissemination tool.

CMM TIG Week: Online Activity Logs: Low Cost and High Impact for Multisite Evaluations by Jonathan Margolin

My name is Jonathan Margolin, and I am a senior researcher in the Education Program at American Institutes for Research, where I work primarily in the State and Local Evaluation Center. One common challenge in evaluating the implementation of educational programs is understanding how the program is interpreted and adapted by teachers and schools. This issue is particularly challenging when the program is being implemented in dozens of sites across the country, where it is often not feasible to conduct in-depth case studies or collect other implementation data. One low-cost and highly efficient approach to capturing implementation data is to provide teachers with online logs with which to record classroom activities. We used this approach in our recent evaluation of The CryptoClub, an informal program involving cryptography and mathematics (more information about the program is available here).

CMM TIG Week: Supporting Evaluation Practice in Organizations by Monica Hargraves

My name is Monica Hargraves and I work with Cooperative Extension associations across New York State as part of an evaluation capacity building effort in the Cornell Office for Research on Evaluation (CORE).  My work with Extension is shaped, in part, by insights we gained through a Concept Mapping research project we did in late 2008.  We wanted to explore, from practitioners’ perspectives, what factors contribute to supporting evaluation practice in an organization.

CMM TIG Week: Cross Classified Random Effects Models in Evaluation by Leland Lockhart

My name is Leland Lockhart, and I am a graduate student at the University of Texas at Austin and a research assistant at ACT, Inc.’s National Center for Educational Achievement (NCEA). The NCEA is a department of ACT, Inc., a not-for-profit organization committed to helping people achieve education and workplace success. NCEA builds the capacity of educators and leaders to create educational systems of excellence for all students. We accomplish this by providing research-based solutions and expertise in higher-performing schools, school improvement, and best-practice research that lead to increased levels of college and career readiness.

CMM TIG Week: Using Literature Review in Cluster Evaluation by Mika Yoder Yamashita

My name is Mika Yoder Yamashita. I am the qualitative evaluation lead for the Center for Educational Policy and Practice at the Academy for Educational Development. Our Center has been conducting process and outcome evaluations of the federally funded program Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), which aims to increase college access among disadvantaged students. Because we are evaluating programs implemented in several sites, we are beginning to explore the possibility of conducting a multi-site evaluation. Today I will share my Center’s thoughts on how to effectively approach a multi-site evaluation that uses qualitative data to understand the process of program implementation. Then I will share how we use the literature to guide our data collection and analysis.