
DRG TIG Week: From Evidence Review to Practicing with Tools: Insights on Evidence-Informed Dialogues from USIP’s Learning Agenda by David Connolly and Jill Baggerman

Hi, we are David Connolly (Director of Learning, Evaluation, and Research [LER]) and Jill Baggerman (Program Officer, LER) at the United States Institute of Peace (USIP), sharing fresh insights from rolling out USIP’s inaugural Learning Agenda. As part of USIP’s ongoing effort to move from evidence to practice, we outline the rationale behind the learning agenda, its cross-programmatic, dialogue-based approach to generating evidence, and key lessons central to both the peacebuilding and democracy, human rights, and governance (DRG) fields.

Breaking the Evidence Mold

The purpose of the learning agenda was to (re)build USIP’s core tools that are “essential for addressing violent conflict at the community, state, or interstate level.” Situated at the intersection of research, policy, training, and practice, the learning agenda sought to hone these practical and strategic peacebuilding tools for our own application and to disseminate them to other peacebuilders. Sitting at the heart of USIP’s theory of change, this pioneering initiative was designed to drive the Institute’s virtuous cycle of action and learning at the enterprise level.

We launched the whole-of-institute learning agenda in 2020 by commissioning 12 evidence reviews on strategic questions organized into two main buckets: diagnostic (e.g., conflict analysis and women, peace and security) and operational (e.g., dialogue and reconciliation).

We then pivoted to using the findings to adapt USIP’s operational and diagnostic tools. The first two we’re working on are a conflict analysis tool (for teams to systematically gather and apply evidence in designing projects and programs) and a new Track 1.5/2 dialogue tool.

Three Lessons from Evidence Reviews

First, our whole-of-institute learning agenda required senior leadership and programmatic (regional and thematic) teams to work in close collaboration.

Second, while our colleagues have extensive expertise, a crucial layer of rigor came from partnerships with outside experts, who undertook many of the studies, and with DevLab, which provided an independent review throughout. Creating the learning agenda also required practitioner engagement and/or primary data collection, perhaps because of the emergent and complex nature of the peacebuilding field; this may resonate with DRG research.

The third lesson is more technical and concerns the evidence reviews’ methodology. Systematic evidence reviews vary, but the breadth of the strategic questions and the decentralized approach to conducting the studies meant that more flexibility was needed than initially planned. Nevertheless, all the reviews adopted a transparent, replicable methodology for addressing the interlinkages between evidence and theory.

Evidence-informed Dialogue
[Photo: A project staffer explains the relevance of the dialogues to participants at the Women Preventing Violent Extremism community dialogues organized by USIP's partner, Sisters Without Borders, in Uganda.]

As we apply the evidence to practice in the learning agenda’s second phase, we’re already identifying insights. Our design of a dialogue tool, with a robust monitoring, evaluation, and learning (MEL) toolkit, is instructive: we’re increasingly aware that the tool needs to be intuitive to be useful for monitoring (and eventually assessing) the impacts of dialogue.

Based on the evidence review findings, the dialogue tool categorizes potential outcomes by the level at which we aim for dialogues to cause change (individual, group, and institutional/systemic) and by the dimension of impact (knowledge, relationships, and foundations for influencing actions outside the dialogue). This simple yet systematic framework allows programmatic teams to define and track dialogue outcomes more accurately, and it enables evaluators to assess effectiveness more precisely, as illustrated in the sketch below.
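To make the categorization concrete, here is a minimal, hypothetical sketch in Python of how outcomes might be tagged and tallied against the framework's two axes. Only the level and dimension labels come from the post; the Outcome class, the tally helper, and the example entries are illustrative inventions, not a representation of USIP's actual tool.

```python
# Hypothetical sketch of the levels-by-dimensions outcome framework.
# The enum labels mirror the categories named in the post; everything
# else is an illustrative assumption, not USIP's actual dialogue tool.
from collections import Counter
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    INDIVIDUAL = "individual"
    GROUP = "group"
    INSTITUTIONAL_SYSTEMIC = "institutional/systemic"


class Dimension(Enum):
    KNOWLEDGE = "knowledge"
    RELATIONSHIPS = "relationships"
    FOUNDATIONS_FOR_ACTION = "foundations for influencing actions"


@dataclass
class Outcome:
    """One observed or intended outcome of a dialogue process."""
    description: str
    level: Level
    dimension: Dimension


def tally(outcomes: list[Outcome]) -> Counter:
    """Count outcomes in each (level, dimension) cell, giving monitors
    a quick view of where change is, and is not, occurring."""
    return Counter((o.level, o.dimension) for o in outcomes)


if __name__ == "__main__":
    observed = [
        Outcome("Participants can name the other group's core grievances",
                Level.INDIVIDUAL, Dimension.KNOWLEDGE),
        Outcome("Cross-group contact continues between sessions",
                Level.GROUP, Dimension.RELATIONSHIPS),
        Outcome("Parties agree on a joint follow-up mechanism",
                Level.INSTITUTIONAL_SYSTEMIC, Dimension.FOUNDATIONS_FOR_ACTION),
    ]
    for (level, dim), count in tally(observed).items():
        print(f"{level.value} / {dim.value}: {count}")
```

Whatever form the real tool takes, the design choice the framework implies is the same: every tracked outcome sits in exactly one cell of the grid, so teams and evaluators can see at a glance which kinds of change a dialogue is, or is not, producing.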

Like the evidence review phase, creating the dialogue tool has been a collaborative experience in bridging the ever-present knowledge-to-practice gap. The combination of internal and external scholarly expertise and practitioner experience has been essential to the design.

Overall, creating the dialogue tool has been a fascinating learning experience in how to measure the “art” of dialogue work, which is often highly politically charged.


The American Evaluation Association is hosting Democracy, Human Rights & Governance TIG Week with our colleagues in the Democracy, Human Rights & Governance Topical Interest Group. All contributions this week to aea365 come from our DRG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association and/or any/all contributors to this site.
