
AEA365 contributor, Curated by Elizabeth Grim

DRG TIG Week: Made in Africa Evaluation: Applying Tanzanian Indigenous Knowledge Systems by Francis Mwaijande

I am Francis Mwaijande, former Chairman of the Tanzania Evaluation Association (TanEA) and Principal Investigator (PI) of Democratizing Learning and Evaluation: Supporting Evaluation Ecosystems and Opportunities for Contextualized Approaches, a Made in Africa Evaluation (MAE) project. I'm grateful to the African Evaluation Association (AfrEA) and the U.S. Department of State, which provided research support for this project to compile an inventory of scholarship on Africa-rooted evaluation approaches. MAE, "birthed" at the 2007 AfrEA Conference in Niamey, was formally articulated by AfrEA in 2021 in a set of African Evaluation Principles designed to guide evaluation from an African perspective.

DRG TIG Week: Learning How to Evaluate Meaningful Outcomes in Collaborative Human Rights Digital Security Programs by Deanna Kolberg-Shah, Leah Squires, and Megan Guidrey

Hi all! We are MEL specialists at Freedom House (Deanna Kolberg-Shah and Leah Squires) and Internews (Megan Guidrey), and we have been collaborating to develop an evaluation framework for digital security support programs in the DRG space as part of the USAID-funded Human Rights Support Mechanism (HRSM). HRSM is a seven-year Leader with Associates award designed to implement cutting-edge human rights programming globally. Led by Freedom House in partnership with the American Bar Association Rule of Law Initiative, Internews, Pact, and Search for Common Ground, HRSM facilitates cross-project learning through a learning agenda grounded in best practices identified across 37 different associate award projects.

DRG TIG Week: From Evidence Review to Practicing with Tools: Insights on Evidence-Informed Dialogues from USIP’s Learning Agenda by David Connolly and Jill Baggerman

Hi, we are David Connolly (Director of Learning, Evaluation, and Research [LER]) and Jill Baggerman (Program Officer, LER) at the United States Institute of Peace (USIP), sharing fresh insights from rolling out USIP's inaugural Learning Agenda. As part of USIP's ongoing effort to move from evidence to practice, we outline the rationale behind the learning agenda, its cross-programmatic, dialogue-based approach to generating evidence, and key lessons central to both the peacebuilding and the democracy, rights, and governance (DRG) fields.

DRG TIG Week: Democracy, Rights & Governance (DRG) Approaches to Learning Agendas by Laura Adams

My name is Laura Adams and I am a learning agenda fanatic (as well as the Co-Chair of the DRG TIG this year), so I'm happy to be introducing the DRG week blogs, all of which touch on learning agendas. Learning agendas have become ubiquitous across United States Government (USG) agencies over the last five years following the passage of the Evidence Act in 2018, which requires them as systematic plans for identifying and addressing the important questions that inform decision-making based on program evaluations. Our blog posts this week focus on NGOs, examine the slippery concept of "capacity building" in the DRG sector, and share some exciting updates from colleagues who have been part of the Made in Africa Evaluation project.

R Without Statistics by David Keyes

I'm David Keyes and I run R for the Rest of Us. Over the years, I've helped hundreds of people learn R through courses and trainings. For a long time, I felt like I wasn't a "real" R user. Real R users, in my mind, used R for hardcore stats; I "only" used R for descriptive stats. I sometimes felt like I was driving a souped-up sports car 20 miles an hour to the grocery store. Eventually, I realized that this framing misses the point. R started out as a tool created by statisticians for other statisticians. But, over a quarter century since its creation, R is as much a tool to improve your workflow as it is a tool for statistics.

A Rapid Cycle Evaluation Approach: Implementing Micro Steps for Program Improvement by Elena Pinzon O’Quinn

Hi, my name is Elena Pinzon O'Quinn, and I am the National Learning and Evaluation Director at LIFT, an economic mobility nonprofit. Back in December 2021, I shared my tips for building a culture of data-driven decision making in nonprofits, which covered simple and efficient ways to share data. But getting data into stakeholders' hands is just one piece of the puzzle in continuous improvement and learning. With efficient client data management systems and real-time dashboards, program teams often have near-constant access to data. At LIFT, we have a strong track record of using data in strategic and long-term planning, but we have struggled with how to use data on a more regular basis to understand program performance and integrate timely, data-informed program improvements.

Needs Assessment TIG Week: Ethics in Evaluation: Identifying and Valuing Human Participants by Sue Hamann

I'm Sue Hamann from the Needs Assessment (NA) TIG. I have worked as an evaluator for more than 40 years and am currently employed at the National Institutes of Health as a Health Scientist and Science Evaluation Officer. I'm writing about how review boards can help evaluators identify and value human program participants (beneficiaries), thereby promoting ethical standards, and about our role in this as professionals working in needs assessment and evaluation.