I’m Tayo Fabusuyi, lead strategist at Numeritics, a Pittsburgh-based research and consulting practice.
While advocacy has existed since humans first gave voice to differing opinions, the evaluation of advocacy efforts is still very much in its infancy. One hallmark of a nascent field is the absence of consensus on nomenclature and standards that most stakeholders subscribe to. This is especially pronounced in advocacy evaluation, given the nature of advocacy efforts: they often rely on networks and coalitions, are emergent, pursue multiple stakeholder objectives that may be mutually exclusive, are unique and context-specific, and resist the attribution of cause and effect in the open systems that characterize them.
Lessons Learned: As a result, advocacy evaluators need to foster a community of practice to aid in exchanging knowledge and in creating a body of work that documents what works, why, and in what context. The learning process thrives best when we promote social interaction that facilitates the exchange of tacit knowledge, and when the body of evidence that comprises explicit knowledge is compiled across time, space, and context. Advocacy efforts are nearly always unique, and insights from one specific engagement may not transfer to the next.
This is why it is imperative to have a repository of experiences across different contexts. Such a compilation may also provide opportunities to convert tacit knowledge into explicit knowledge, affording the fungibility that allows insights and experiences gained from one advocacy evaluation effort to be transferred to a similar one.
Drawing from documented past experiences allows us to develop a conceptual framework within which advocacy evaluation studies can be analyzed and compared. A modest goal of this framework is a catalog of successes, failures, methodologies used, unintended outcomes, and contexts to guide future advocacy evaluations. This initiative can establish a basis on which we can articulate common ground on advocacy evaluation and gain insights on how best to proceed in the face of remaining uncertainty. Sharing can accelerate learning.
Hot Tip: If you are an American Evaluation Association member, join the Advocacy and Policy Change Topical Interest Group (TIG). You can do so by logging into the AEA website and selecting "Update Member Profile" from the Members Only menu. If you aren't yet an AEA member, I invite you to join.
Hot Tip: AEA members, take the next step and join the Advocacy and Policy Change discussion list (go to Members Only -> Groups/Forums Subscriptions) and contribute to vibrant conversations that can help build our community of practice.
We’re celebrating Advocacy and Policy Change week with our colleagues in the APC Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.