Hello! My name is Rhonda Schlangen and I’m an evaluation consultant specializing in advocacy and development.
By sharing struggles and strategies, evaluators and human rights organizations can help break down the conceptual, capacity and cultural barriers to using monitoring and evaluation (M&E) to support human rights work. In this spirit, three human rights organizations candidly profiled their efforts in a set of case studies recently published by the Center for Evaluation Innovation.
Lessons learned:
- Logic models may be from Mars: Evaluation can be perceived as working at cross-purposes to human rights efforts. The moral imperative of human rights work means that “results” may be unattainable. Planning for a specific result at a point in time risks driving work toward only what is achievable and countable. Learning-focused evaluation can be a useful entry point, emphasizing evaluative processes such as critical reflections and one-day ‘good enough’ evaluations.
- Rewrite perceptions of evaluation orthodoxy: There’s a sense in the human rights groups reviewed for this project that credible evaluation follows narrow and rigid conventions and must produce irrefutable proof of impact. Evaluators can help recalibrate these perceptions by focusing on a broader suite of approaches appropriate for complex change scenarios (such as outcome mapping or outcome harvesting).
- Methods are secondary: As important as, if not more critical than, the tools and methods used are the confidence and capacity of staff and managers in using them. Investing in training and support is important. Prioritizing self-directed, low-resource internal learning as an integrated part of program work also helps cultivate a culture of evaluation. (See this presentation on organizational learning for an overview, and stay tuned for an upcoming paper from the Center for Evaluation Innovation on the topic.)
Rad Resources: Evidence of change journals: Excel workbooks populated with outcome categories, these journals are shared platforms where human rights and other campaigners can log signs of progress and change. The tool facilitates real time tracking and analysis of developments related to a human rights issue and advocacy efforts.
Intense period debriefs: Fitting into the slipstream of advocacy and campaigns, these are a systematic and simple way to review what worked, and what didn’t, after particularly intense or critical advocacy moments. The tool responds to the inclination of advocates to keep moving forward but creates space for collective reflection.
People-centered change models: A Dimensions of Change model, such as this one developed by the International Secretariat of Amnesty International, can serve as a shared lens for work that spans different types of human rights and different levels—from global to community.
Get involved: Evaluators can contribute to the discussion with human rights defenders through online forums like the one facilitated by New Tactics in Human Rights.
The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.