Hello! I’m Carlisle Levine, an independent evaluator specializing in organizational and advocacy evaluation. I have led CARE USA’s advocacy evaluation and co-led Catholic Relief Services’ program evaluation.
As an internal and independent evaluator, I have learned important lessons about advocacy evaluation.
Lessons Learned:
- Influencing policy change is a lengthy, convoluted process involving many actors, actions and events. Isolating the influence of any one element on a policy change is difficult. Thus, most advocacy evaluators look for evidence of contribution, not attribution.
- Since policy change takes time, identifying meaningful measures of progress is important. Even in difficult political environments, advocates may be organizing and honing messages. Thus, incremental objectives should include measures related to advocacy capacity, as well as policy change.
- Because advocates are very busy and often skeptical about monitoring and evaluation, monitoring activities must have immediate relevance to their work and require minimal effort on their part.
- Advocates often operate with limited funding, so identifying inexpensive monitoring and evaluation methods is critical.
Hot Tips and Rad Resources:
- The Advocacy and Policy Change Composite Logic Model, developed by the Harvard Family Research Project, provides excellent guidance on meaningful, incremental measures of progress.
- One possible desired outcome is a change in policymaker support for an issue.
- The Policymaker Rating Tool, also developed by the Harvard Family Research Project and found in Unique Methods in Advocacy Evaluation by Julia Coffman and Ehren Reed, is a simple, useful tool for assessing a policymaker’s influence over a policy and his or her level of support for an issue.
- At CARE, we worked with the Aspen Institute’s Advocacy Planning and Evaluation Program to create a policymaker champion scorecard that built on the Policymaker Rating Tool and identified specific measures relevant to CARE’s advocacy. By April, the tool will be available as part of BOND’s Improve It Framework. You can also read about the tool here.
- Media coverage is often part of an advocacy effort, and media monitoring is useful for assessing how effective that media work has been. However, advocates may lack access to media monitoring tools. At CARE, our media team developed a media champions scorecard that relied on staff knowledge to assess outlets based on their support for CARE and its advocacy issues. This tool will also be available in April as part of BOND’s Improve It Framework.
- Many organizations use periodic reviews, after-action reviews, or Intense Period Debriefs to assess advocacy progress internally. During these sessions, advocates reflect on how an advocacy initiative is progressing. In the conversation, an evaluator can help ask the right monitoring questions, and a trusted outsider can ground-truth the advocates’ claims and offer a different perspective.
The American Evaluation Association is celebrating Advocacy and Policy Change (APC) TIG Week with our colleagues in the APC Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members.