Hello All! We are Karen Chen (Independent Consultant), Dylan Diggs (Contractor to the Bureau of Democracy, Human Rights, and Labor’s Office of Global Programs’ Applied Learning and Evaluation (DRL/GP/ALE) team in the U.S. Department of State), and Pragati Godbole (EnCompass LLC). Under a DRL contract with EnCompass LLC, we collaborated for the past three years on an implementation and outcome evaluation of DRL’s flagship democracy program – the PAIRS (Political Accountability, Inclusivity, and Resiliency Support) mechanism.
The PAIRS mechanism funds democracy programs, aligned with DRL’s Programming Framework, that support democratic actors, political competition, and civil society in better linking citizens to democratic institutions. The multiphased portfolio evaluation covered the initial years of the mechanism, with the final phase focused on assessing PAIRS projects’ outcomes and impact.
Long-term results of democracy programs can be difficult to anticipate and measure, as these programs often operate in complex political systems. Assessing them becomes harder still in fragile and restrictive contexts marked by heightened complexity, instability, influence from malign actors, and dynamics in which civic participation carries increased risks for individuals and organizations. With a view toward democratizing evaluation methods, the PAIRS mechanism evaluation team decided to use Ripple Effects Mapping (REM). REM incorporates elements of Appreciative Inquiry (AI), an asset-based approach to dialogue and engagement that is a cornerstone of EnCompass LLC’s practice, and of Most Significant Change (MSC) to collect and analyze project outcomes, their underlying influences, and project impacts.
After framing the purpose of the REM session for participants, the evaluation team facilitated appreciative interviews. Participants reflected on their experiences with PAIRS project activities and identified changes in attitudes, behaviors, knowledge, and actions, which were depicted as “ripples” stemming from project interventions (see Exhibit 1). Following data collection, the evaluation team used the MSC technique to analyze the stories of change collected in-country and identify the most significant ones.
Implementing REM, AI, and MSC presented challenges. For example, due to participant schedules, the evaluation team had to conduct the sessions in half the recommended time. However, as we revised the approach, we found the combination of REM, AI, and MSC particularly well suited for evaluating democracy programs: it embraces values that elevate diverse voices and captures nuanced democratic governance changes while accounting for contextual complexity.
Moreover, in combination with MSC, REM and AI adeptly captured the various changes emerging from PAIRS projects and identified processes and results pathways across multiple stakeholders and beneficiary types. This allowed the mapping of individual, organizational, network, and system pathways of change relevant to these programs.
Exhibit 1: REM example depicting individual, organizational, and systemic-level outcomes stemming from project activities
Lessons Learned
- Sometimes it is difficult for implementers and participants to distinguish between project activities and project results. Don’t be afraid to ask probing questions.
- REM sessions have the potential to democratize evaluations. Addressing power imbalances, such as the dynamics between project participants in evaluations, may require more than just participatory methods. Don’t be afraid to recognize limitations and strive to reshape those power dynamics.
Hot Tips
- Ideally, hold REM sessions of at least two hours with diverse stakeholders, allowing enough time to explain the methods and purpose while fostering nuanced discussions and a deeper understanding of the change stories.
- Share the results with the participants so that it isn’t an extractive exercise.
Rad Resources
Ripple Effects Mapping – Facilitator Guide
The American Evaluation Association is hosting Democracy, Human Rights and Governance TIG Week with our colleagues in the Democracy, Human Rights and Governance Topical Interest Group. The contributions all this week to AEA365 come from our DRG TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.