We are Melanie Kawano-Chiu, Program Director at the Alliance for Peacebuilding (AfP), and Andrew Blum, Director of Learning and Evaluation at the United States Institute of Peace (USIP). More than two years ago we teamed up to launch an initiative called the Peacebuilding Evaluation Project: A Forum for Donors and Implementers (PEP).
In December 2011, with support from USIP and the Carnegie Corporation of New York, USIP and AfP hosted a day-long Peacebuilding Evidence Summit. The closed event examined nine different evaluation approaches in order to identify their strengths and weaknesses when applied to peacebuilding programs, which operate in complex, chaotic, and sometimes dangerous environments.
Rad Resource: The discussions among donors, implementers, and evaluation experts at the Peacebuilding Evidence Summit were synthesized into a report, Proof of Concept: Learning from Nine Examples of Peacebuilding Evaluation. In addition to cross-cutting themes that emerged across the analyses of the approaches examined at the Summit, the report covers each evaluation approach's strengths, potential challenges and pitfalls, and applicable lessons.
Lesson Learned: A reflection on the use of a mixed-method approach, which included an RCT, showed that the RCT was most useful to external audiences, particularly donors. For program managers within the organization, the qualitative research was much more useful. This raised questions about the deployment of evaluation resources, since the bulk of the resources for the initiative went to the RCT.
Hot Tip: In developing evaluations for peacebuilding projects in the field, dangers arising from conflict and post-conflict contexts must be acknowledged. In some cases, methodological rigor must be sacrificed due to security risks or political sensitivities. This calls for creative strategies to maximize rigor within these constraints.
Lesson Learned: The tension between accountability to donors and organizational learning within implementers is at times stark. There was discussion, although no consensus, at the Summit on whether these two goals are simply irreconcilable. This underscored the continued need for dialogue with donors about what is realistic to expect from evaluations.
Hot Tip: As the peacebuilding field develops its evaluation practice, peacebuilders are increasingly sharing their evaluations and lessons learned in online settings such as the Learning Portal for Design, Monitoring, and Evaluation for Peacebuilding. The Learning Portal is a field-wide repository for evaluation reports and data, as well as best and emerging peacebuilding DM&E practices.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.