
DRG TIG Week: Made in Africa Evaluation: Applying Tanzanian Indigenous Knowledge Systems by Francis Mwaijande

I am Francis Mwaijande, former Chairman of the Tanzania Evaluation Association (TanEA) and Principal Investigator (P.I.) of Democratizing Learning and Evaluation – Supporting Evaluation Ecosystems and Opportunities for Contextualized Approaches: Made in Africa Evaluation (MAE). I’m grateful to the African Evaluation Association (AfrEA) and the U.S. Department of State, which provided research support for this project to take an inventory of scholarship on Africa-rooted evaluation approaches. MAE, “birthed” at the 2007 AfrEA Conference in Niamey, was formally articulated by AfrEA in 2021 in a set of African Evaluation Principles designed to guide evaluation from an African perspective.

DRG TIG Week: Learning How to Evaluate Meaningful Outcomes in Collaborative Human Rights Digital Security Programs by Deanna Kolberg-Shah, Leah Squires, and Megan Guidrey

Hi all! We are MEL specialists at Freedom House (Deanna Kolberg-Shah and Leah Squires) and Internews (Megan Guidrey), and we have been collaborating to develop an evaluation framework for digital security support programs in the DRG space as part of the USAID-funded Human Rights Support Mechanism (HRSM). HRSM is a seven-year Leader with Associates award designed to implement cutting-edge human rights programming globally. Led by Freedom House in partnership with the American Bar Association Rule of Law Initiative, Internews, Pact, and Search for Common Ground, HRSM facilitates cross-project learning through a learning agenda grounded in best practices identified across 37 associate award projects.

DRG TIG Week: From Evidence Review to Practicing with Tools: Insights on Evidence-Informed Dialogues from USIP’s Learning Agenda by David Connolly and Jill Baggerman

Hi, we are David Connolly (Director of Learning, Evaluation, and Research [LER]) and Jill Baggerman (Program Officer, LER) at the United States Institute of Peace (USIP), sharing fresh insights from rolling out USIP’s inaugural Learning Agenda. As part of USIP’s ongoing work to move from evidence to practice, we outline the rationale behind the learning agenda, its cross-programmatic, dialogue-based approach to generating evidence, and key lessons central to both the peacebuilding and democracy, rights, and governance (DRG) fields.

DRG TIG Week: Democracy, Rights & Governance (DRG) Approaches to Learning Agendas by Laura Adams

My name is Laura Adams and I am a learning agenda fanatic (as well as the Co-Chair of the DRG TIG this year), so I’m happy to be introducing the DRG week blogs, all of which touch on learning agendas. Learning agendas have become ubiquitous across United States Government (USG) agencies over the last five years following the passage of the Evidence Act in 2018; they are now required as systematic plans for identifying and addressing the important questions that inform evidence-based decision-making. Our blog posts this week focus on NGOs, examine the slippery concept of “capacity building” in the DRG sector, and share some exciting updates from colleagues who have been part of the Made in Africa Evaluation project.

The Case For A Shared Outcomes Measurement Framework for DEI Initiatives by Quisha Brown

Hi, I’m Quisha Brown, co-founder of Humanistic Care, LLC, an organization offering culturally responsive solutions to tough evaluation challenges. A recent AEA365 blog post, “Applying Rubrics in Evaluation” by Gerard Atkinson, caught my attention with its discussion of the benefits of using rubrics in evaluation. The Progressive Outcomes Scale Logic Model (POSLM) framework I developed in 2020 is one such evaluation model: it uses a staged rubric approach to measure outcomes toward social impact progressively, using a common set of indicators. During my 20+ years working with nonprofits serving marginalized communities, and three years helping them create POSLMs, I have compiled more than 200 common person-centered equity indicators derived from direct feedback shared with me by people most impacted by inequitable practices.

Reassessing and Reshaping our Research Study in Uncertain Times by Will Fisher and Jenny Seelig

Howdy aea365, it’s Will Fisher and Jenny Seelig, Research Scientists with NORC at the University of Chicago. NORC is devoted to objective and dynamic social science research.

As originally planned, our study, Engaging Youth for Positive Change (EYPC): Promoting Community Health Through Civic Education, was a randomized controlled trial carefully designed to evaluate the impact of the EYPC[i] civics curriculum on student health and community well-being in rural Illinois. It was funded by the Robert Wood Johnson Foundation in 2019 and scheduled to run from 2020 to 2023. By spring 2020, we had recruited 18 schools and 18 teachers into control and treatment groups and expected to proceed as planned. However, no one could have predicted the circuitous path our research would take.

Can Evaluation Help Make Bureaucracy More Responsive – or is it Part of the Problem? by Burt Perrin

Hi, I’m Burt Perrin, and I’d like you to think about bureaucracy – its strengths, its weaknesses, and what this means for evaluation.

Bureaucracy is complex. It is essential to democracy – while at the same time presenting many challenges. Evaluation has the potential to help bureaucracies become more responsive and effective – but it also has the potential to exacerbate the situation.