Hello! My name is Anna Williams. I provide evaluation, facilitation, and learning services for social change organizations and BHAG initiatives. (BHAG, for those unfamiliar with this highly technical term, stands for Big Hairy Audacious Goal.)
I would like to encourage you to consider whether methods used to evaluate advocacy efforts are relevant to your work, particularly if you currently do not think that they are.
First, some context:
Five years ago, after years of conducting program evaluations for government agencies, I began evaluating a global effort created to provide specialized technical assistance to policy makers in a particular sector. Those providing the assistance were engineers, scientists, and other technical consultants; they did not consider themselves to be “advocates.” Yet the most viable methods and tools for evaluating their work, including mixed-method contribution analysis, outcome mapping, analysis of interim outcomes, and social network analysis, all came from – or were used for – evaluation of advocacy.
The same scenario arose when evaluating the work of an academically based institution working to inform the public and decision makers using objective, scientifically credible research. The organization would never call its work advocacy, but the applicable methods were those used to evaluate advocacy.
This story has repeated itself several times over.
Lessons Learned: The term “advocacy” continues to have a narrow interpretation associated with campaigning, lobbying, grassroots organizing, and public opinion. People often do not associate “advocacy” with other types of information provision or attempts to influence even though these too could fit under a broader interpretation of the word.
Methods for evaluating advocacy are more broadly applicable than many think. They apply to efforts with unpredictable or hard-to-measure outcomes, efforts whose outcomes depend on some kind of influence (including promoting the scale-up of direct services), and efforts occurring in complex, dynamic contexts where strategies must adapt to succeed.
Further, the methods used to evaluate advocacy are still viewed by some as less credible, even though other methods, including experimental and quasi-experimental designs, are often not suitable, feasible, or appropriate for advocacy efforts (broadly defined).
At the same time, the field of advocacy and policy change evaluation is still emerging. Those of us in the trenches are developing new tools and testing methodological boundaries; we can benefit from new ideas, capacity building, and further refinement of our methods.
For these reasons, I encourage an open mind about evaluation of advocacy and policy change.
The forthcoming posts sponsored by the Advocacy and Policy Change TIG include practical tips, tricks, and resources. We invite you to reflect on these posts, share thoughts about the relevance of methods used for evaluation of advocacy and policy change, and offer ideas on how this field can have broader resonance and reach.
The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.