Search Results for: data analysis

Case Collaborative Week: Reflective Practice for Teaching and Learning with Cases by David Ensminger and Tiffany Tovey

ENRI Week: A Trust-Based Philanthropy Journey by Adama Brown

My name is Adama Brown, and I’m the Director of Research and Data Analytics at United Way of Rhode Island (UWRI). I lead a team that is interested in reimagining impact and the ways that data and narrative inform philanthropic work, acting as a catalyst for systems change. In recent years, UWRI has embraced the …

Post Program Monitoring: An Entry Point for Localization by Kim Norris

Hi, I’m Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for American Institutes for Research (AIR)’s International Development Division. Part of my role is to lead a MEL practice. As part of our initial strategy, our practice team decided to focus on localizing our work. For us, this means seeking out ways to increase local partnering and leadership in and around MEL efforts – from business development to MEL direction and execution. This involves local team leadership, capacity strengthening, and engagement on local terms.

Applying Digital Development Principles to Locally Contextualize Evaluations by Kim Norris

Hi, I’m Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for American Institutes for Research (AIR)’s International Development Division. Part of my role is to lead a MEL practice. As part of our initial strategy, our practice team decided to focus on localizing our work. For us, this means seeking out ways to increase local partnering and leadership in and around MEL efforts – from business development to MEL direction and execution. This involves local team leadership, capacity strengthening, and engagement on local terms.

Spurious Precision – Leading to Evaluations that Misrepresent and Mislead by Burt Perrin

Sometimes it is helpful to be very precise. In other cases, though, precision can be irrelevant at best and quite likely misleading – destroying, rather than enhancing, the credibility of your evaluation, and of you. Hi, I’m Burt Perrin, and I’d like to discuss what considerations such as these mean for evaluation practice.

If one is undergoing brain surgery, one would hope it would be done with precision, based upon established knowledge about the procedure. But no analysis can be more precise than its underlying data permit, and claiming greater precision than the data support is where too many evaluations go wrong.
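Perrin's point can be made concrete for anyone reporting numbers from data: show no more digits than the uncertainty in the data supports. Below is a minimal Python sketch, not from the post itself; the `report` helper and the survey figures are invented for illustration, and it assumes the margin of error is already known.

    # A minimal sketch (not from Perrin's post): round a reported estimate
    # to the number of decimal places its margin of error actually supports.
    # The helper and the survey figures below are invented for illustration.
    import math

    def report(estimate: float, margin_of_error: float) -> str:
        """Format an estimate with only as many decimals as the error warrants."""
        if margin_of_error <= 0:
            raise ValueError("margin of error must be positive")
        # Keep decimals down to the leading digit of the margin of error
        # (margins of 10 or more still print as whole numbers).
        decimals = max(0, -math.floor(math.log10(margin_of_error)))
        return f"{estimate:.{decimals}f} ± {margin_of_error:.{decimals}f}"

    print(report(47.3182, 4.8))   # "47 ± 5", not a misleading "47.3182"
    print(report(0.4732, 0.048))  # "0.47 ± 0.05"

The same discipline applies in prose: a figure quoted as "47.3182%" from a sample with a five-point margin of error conveys precision the data cannot back up.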

Shifting the Evaluation Lens to Localization – Progress You Can See by Kim Norris

Hi, I’m Kim Norris, Monitoring, Evaluation and Learning (MEL) Director for American Institutes for Research (AIR)’s International Development Division. Part of my role is to lead a MEL practice. As part of our initial strategy, our practice team decided to focus on localizing our work. For us, this means seeking out ways to increase local partnering and leadership in and around MEL efforts – from business development to MEL direction and execution. This involves local team leadership, capacity strengthening, and engagement on local terms.

No More Crappy Survey Reporting – Best Practices in Survey Reporting for Evaluations by Janelle Gowgiel, JoAnna Hillman, Mary Davis, and Christiana Reene

Janelle, JoAnna, Mary, and Christiana here, evaluators from Emory Centers for Public Health Training and Technical Assistance. We had the opportunity to present a session entitled No More Crappy Surveys at last year’s AEA Summer Evaluation Institute. We are on a mission to rid the world of crappy surveys, and are here to share some of our Hot Tips and Rad Resources to do so.

If you haven’t already, check out the first and second blog posts in this series, No More Crappy Surveys – Best Practices in Survey Design for Evaluations and No More Crappy Survey Analysis – Best Practices in Survey Analysis for Evaluations. Today, we’ll follow up with tips on how to report your survey findings to different audiences and on how to engage partners throughout the survey process.
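As a toy illustration of one basic reporting step this kind of post covers, here is a short Python sketch that turns raw responses into the counts and percentages a written report might show. The question text, answer labels, and responses are all hypothetical, not drawn from the Emory team's materials.

    # A hypothetical sketch of a basic survey-reporting step: tabulating raw
    # responses into the counts and percentages a written report would show.
    # The question, labels, and responses are invented for illustration.
    from collections import Counter

    responses = ["Agree", "Strongly agree", "Agree", "Disagree",
                 "Agree", "Neutral", "Strongly agree", "Agree"]

    counts = Counter(responses)
    total = len(responses)

    print(f"Q1. 'The training met my needs' (n = {total})")
    for answer, n in counts.most_common():
        print(f"  {answer:<15} {n:>2}  ({n / total:.0%})")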

Measuring DEI in Our Own Workforce: Lessons from Four Studies Across Two Years by Laura Kim and Brooke Hill

We are Laura Kim (Senior Consultant at the Canopy Lab) and Brooke Hill (Senior Program Manager at Social Impact). Laura is part of the team that works on Canopy’s Inclusion and Leadership series, which explores the forces that influence who gets to advance in international development and why. Brooke is the technical lead for the BRIDGE survey and co-leads the Equity Incubator, a lab studying equity and inclusion through data.

Global Evaluation Initiative (GEI) Week: Lessons Learned from Mapping Evaluation Systems in Brazilian Subnational Governments by Lycia Lima, Gabriela Lacerda, and Lorena Figueiredo

Hi, we are Lycia Lima (Deputy Director), Gabriela Lacerda (Executive Manager), and Lorena Figueiredo (Researcher) of the Center for Learning on Evaluation and Results for Lusophone Africa and Brazil (CLEAR-LAB), which is based at the School of Economics of the Getulio Vargas Foundation and is an Implementing Partner of the Global Evaluation Initiative (GEI).

DRG TIG Week: Made in Africa Evaluation: Applying Tanzanian Indigenous Knowledge Systems by Francis Mwaijande

I am Francis Mwaijande, former Chairman of the Tanzania Evaluation Association (TanEA) and Principal Investigator (PI) of Democratizing Learning and Evaluation – Supporting Evaluation Ecosystems and Opportunities for Contextualized Approaches: Made in Africa Evaluation (MAE). I’m grateful to the African Evaluation Association (AfrEA) and the U.S. Department of State, which provided research support for this project to compile an inventory of scholarship in Africa-rooted evaluation approaches. MAE, “birthed” at the 2007 AfrEA Conference in Niamey, was formally articulated by AfrEA in 2021 in a set of African Evaluation Principles designed to guide evaluation from an African perspective.