
DRG TIG Week: From Evidence Review to Practicing with Tools: Insights on Evidence-Informed Dialogues from USIP’s Learning Agenda by David Connolly and Jill Baggerman

Hi, we are David Connolly (Director of Learning, Evaluation, and Research [LER]) and Jill Baggerman (Program Officer, LER) at the United States Institute of Peace (USIP), sharing fresh insights from rolling out USIP’s inaugural Learning Agenda. As part of USIP’s ongoing work to move from evidence to practice, we outline the rationale behind the learning agenda, its cross-programmatic, dialogue-based approach to generating evidence, and key lessons central to both the peacebuilding and democracy, rights, and governance (DRG) fields.

DRG TIG Week: Democracy, Rights & Governance (DRG) Approaches to Learning Agendas by Laura Adams

My name is Laura Adams, and I am a learning agenda fanatic (as well as Co-Chair of the DRG TIG this year), so I’m happy to introduce the DRG week blogs, all of which touch on learning agendas. “Learning agendas” have become ubiquitous across United States Government (USG) agencies over the last five years following the passage of the Evidence Act in 2018, and they are now required as systematic plans for identifying and addressing the important questions that inform decision-making based on project evaluations. Our blog posts this week focus on NGOs, examine the slippery concept of “capacity building” in the DRG sector, and share some exciting updates from colleagues who have been part of the Made in Africa Evaluation project.

The Case For A Shared Outcomes Measurement Framework for DEI Initiatives by Quisha Brown

Hi, I’m Quisha Brown, co-founder of Humanistic Care, LLC, an organization offering culturally responsive solutions to tough evaluation challenges. A recent AEA365 blog post titled “Applying Rubrics in Evaluation” by Gerard Atkinson caught my attention with its discussion of the benefits of using rubrics in evaluation. The Progressive Outcomes Scale Logic Model (POSLM) framework I developed in 2020 is one such model: it uses a stage-model rubric approach to progressively measure outcomes toward social impact using a common set of indicators. During my 20+ years working with nonprofits serving marginalized communities, and 3 years helping them create POSLMs, I’ve compiled more than 200 common person-centered equity indicators derived from direct feedback shared with me by people most impacted by inequitable practices.

Reassessing and Reshaping our Research Study in Uncertain Times by Will Fisher and Jenny Seelig

Howdy, AEA 365! It’s Will Fisher and Jenny Seelig, Research Scientists with NORC at the University of Chicago. NORC is devoted to objective and dynamic social science research.

As originally planned, our study, Engaging Youth for Positive Change (EYPC): Promoting Community Health Through Civic Education, was a randomized controlled trial carefully designed to evaluate the impact of the EYPC civics curriculum on student health and community well-being in rural Illinois. It was funded by the Robert Wood Johnson Foundation in 2019 and scheduled to take place from 2020 to 2023. By Spring 2020, we had recruited 18 schools and 18 teachers into control and treatment groups and expected to proceed steadfastly. However, no one could have predicted the circuitous path our research would take.

Can Evaluation Help Make Bureaucracy More Responsive – or is it Part of the Problem? by Burt Perrin

Hi, I’m Burt Perrin, and I’d like you to think about bureaucracy – its strengths, its weaknesses, and what this means for evaluation.

Bureaucracy is complex. It is essential to democracy – while at the same time presenting many challenges. Evaluation has the potential to help bureaucracies become more responsive and effective – but it also has the potential to exacerbate the situation.

R Without Statistics by David Keyes

I’m David Keyes, and I run R for the Rest of Us. Over the years, I’ve helped hundreds of people learn R through courses and trainings. For a long time, I felt like I wasn’t a “real” R user. Real R users, in my mind, used R for hardcore stats. I “only” used R for descriptive stats. I sometimes felt like I was using a souped-up sports car to drive 20 miles an hour to the grocery store. Eventually, I realized that this framing misses the point. R started out as a tool created by statisticians for other statisticians. But more than a quarter century after its creation, R is as much a tool to improve your workflow as it is a tool for statistics.
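To give a concrete, hypothetical flavor of what “R without statistics” can look like day to day, here is a minimal sketch of a routine workflow task: reading in a data export and producing a quick descriptive summary. The file and column names are illustrative assumptions, not examples from the original post.

```r
# A minimal sketch of R used for everyday workflow rather than modeling.
# The file and column names ("survey_responses.csv", site, rating) are
# hypothetical placeholders.
library(dplyr)

survey <- read.csv("survey_responses.csv")

survey %>%
  group_by(site) %>%                           # one summary row per site
  summarise(
    respondents = n(),                         # how many responses per site
    mean_rating = mean(rating, na.rm = TRUE)   # simple descriptive statistic
  )
```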

A Rapid Cycle Evaluation Approach: Implementing Micro Steps for Program Improvement by Elena Pinzon O’Quinn

Hi, my name is Elena Pinzon O’Quinn, and I am the National Learning and Evaluation Director at LIFT, an economic mobility nonprofit. Back in December 2021, I shared my tips for building a culture of data for decision-making in nonprofits, which covered simple and efficient ways to share data. But getting data into stakeholders’ hands is just one piece of the puzzle in continuous improvement and learning. With efficient client data management systems and real-time dashboards, program teams often have near-constant access to data. At LIFT, we have a strong track record of using data in strategic and long-term planning, but we have struggled with how to use data on a more regular basis to understand program performance and integrate timely, data-informed program improvements.

Making Complex Content Clear: AI’s Potential for Readability in Evaluation by Jeff Kosovich

Hello! I’m Jeff Kosovich, and I am a senior evaluator at the Center for Creative Leadership. One of the challenges of producing technical reports and surveys meant for people without your expertise is avoiding unnecessary complexity and jargon. I’m currently testing the effectiveness of tools like ChatGPT as a time-saving way to make surveys and reports more accessible.
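As one way such testing might be structured (a hypothetical sketch, not Kosovich’s actual method), you could compute a rough readability proxy before and after an AI-assisted rewrite and compare the two. The sample sentences below are placeholders.

```r
# Hypothetical sketch: compare a crude readability proxy (average words per
# sentence) for original text versus an AI-assisted rewrite. This is not the
# author's actual evaluation method, and the sample sentences are placeholders.
avg_sentence_length <- function(text) {
  sentences <- unlist(strsplit(text, "[.!?]+"))
  sentences <- trimws(sentences)
  sentences <- sentences[nchar(sentences) > 0]
  mean(sapply(strsplit(sentences, "\\s+"), length))  # words per sentence
}

original <- "Participants demonstrated statistically significant improvements in leadership self-efficacy following completion of the intervention."
rewrite  <- "After the program, participants felt more confident leading others."

avg_sentence_length(original)  # longer sentences generally read harder
avg_sentence_length(rewrite)
```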