Hi! I am Leah Zallman, MD, MPH, physician advocate and advocacy evaluator at the Institute for Community Health. Clearly, I love advocacy! One of the things I love is that it is complicated, which makes evaluating advocacy – you guessed it – complicated!
Advocacy leaders try to pull as many people into a cause as possible. They see infinite potential advocates... and evaluators see a headache! The denominators we evaluators have come to love and respect do not exist – or worse, they exist but are unknown and ever-changing. This makes many of our most basic tools, such as proportions, impossible to compute.
Hot Tip: Forget the denominator! In our evaluations of advocacy, we have focused instead on how people flow up engagement ladders – such as Community Catalyst's pyramid of engagement. This lets us understand how many people have been inspired to join the process AND how many are inspired to become more deeply engaged, without relying on an unknown and ever-elusive denominator.
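The ladder idea can be sketched in a few lines of code. This is a minimal illustration with made-up names: the rung labels and the two snapshots are assumptions, not an actual ladder from our evaluations – the point is only that joiners and upward movement are simple counts, with no denominator in sight.

```python
# Hypothetical engagement ladder, ordered low -> high.
# Rung names and snapshot data are illustrative assumptions.
RUNGS = ["aware", "supporter", "active", "leader"]

# Who was on which rung at two points in time
t1 = {"Ana": "aware", "Ben": "supporter", "Cam": "active"}
t2 = {"Ana": "supporter", "Ben": "supporter", "Cam": "leader", "Dee": "aware"}

# People newly inspired to join the process
new_joiners = [p for p in t2 if p not in t1]

# People who climbed to a higher rung between snapshots
moved_up = [p for p in t2
            if p in t1 and RUNGS.index(t2[p]) > RUNGS.index(t1[p])]

print(len(new_joiners), "joined;", len(moved_up), "moved up the ladder")
```

Both numbers are raw counts of real, known people – nothing here requires knowing how many potential advocates exist.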
On top of that, even though advocacy leaders often want to get advocates more deeply engaged, there is no easy way to measure how engaged a person is. When we started this work, we reviewed the literature for engagement measures and found that existing measures were too complex to be feasible for the advocacy leaders we work with. Advocacy leaders do not have time to waste – and can't spend it measuring how deeply each person is engaged.
Hot Tip: Forget measuring how engaged people are! Advocacy leaders don't track this, so why ask them to? But advocacy leaders often do track which activities people participate in (how many people attended trainings or wrote letters, for instance). For one of our evaluations, we categorized activities by level of engagement: attending a community meeting indicated a low to medium level of engagement, whereas testifying before Congress indicated a much higher level. We were then able to measure changes in the number of people doing activities at each level of engagement, without asking busy advocacy leaders to measure anything they don't already capture.
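Here is a small sketch of that activity-to-level approach. The activity names, the level labels, and the sample log are all hypothetical stand-ins for whatever an advocacy organization already records; only the technique (map activities to levels, then count people per level) comes from the tip above.

```python
# Illustrative mapping from tracked activities to engagement levels.
# These categories are assumptions, not our actual coding scheme.
ACTIVITY_LEVELS = {
    "attended_community_meeting": "low-medium",
    "wrote_letter": "medium",
    "attended_training": "medium",
    "testified_before_congress": "high",
}

def count_by_level(activity_log):
    """Count how many people did at least one activity at each level.

    activity_log: dict mapping person -> list of activities they did.
    """
    counts = {}
    for person, activities in activity_log.items():
        # A person counts once per level, even with several activities there
        levels = {ACTIVITY_LEVELS[a] for a in activities if a in ACTIVITY_LEVELS}
        for level in levels:
            counts[level] = counts.get(level, 0) + 1
    return counts

# Hypothetical activity data for one quarter
log = {
    "Ana": ["attended_community_meeting", "wrote_letter"],
    "Ben": ["attended_training"],
    "Cam": ["testified_before_congress", "attended_community_meeting"],
}
print(count_by_level(log))
```

Running the same tally each quarter shows whether the counts at the deeper levels are growing – again using only data the organization already collects.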
Finally, advocacy responds to an ever-changing political environment, so advocates’ goals adjust over time. This is the magic of advocacy – finding windows of opportunity and mobilizing groups toward achievable goals. When environments change, goals change.
Hot Tip: Forget the numbers! Well, don't forget them completely – numbers are important! But don't rely on them too heavily. If you rely on a prespecified numerical measure, you'll miss the chance to capture the unexpected learning that happens when programs and their policy environments shift. In other words, build in plenty of qualitative data collection and continually update your questions so you can learn from the magic that happens when the policy environment suddenly changes.
Learn more about our approach from blogs by Carrie Fisher and Sofia Ladner. If you have hot tips for advocacy evaluation, we’d love to hear from you! As an advocate and advocacy evaluator myself, I know how important learning from advocacy is!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.