I’m Nicole Robinson, an evaluator of color living in Wisconsin. I will always remember where I was and what I was feeling on election night during the surprise win of Donald Trump. Furthest from my mind was evaluation even though I have been an evaluator for over ten years. His election changed the political context in America and abroad. Whereas before the political context was just a pendulum shifting from one side to the other or a backdrop for political events, now it’s something different.
Hot Tips: A few years earlier, in a small study, I asked 19 evaluators how they define, measure, document, and assess the political context in their advocacy evaluations. They shared observations that might be helpful for today's context:
- While study participants unanimously agreed that the political context mattered deeply and was essential to assessing an advocacy campaign's success and outcomes, the advocacy evaluation field itself had no common minimal standards of practice in this area. Even the term was used interchangeably with "political climate," "landscape," and "environment," phrases that evoke different meanings and interpretations.
- Few evaluators have the resources to accurately and cost-effectively account for the effects of multiple known actors: (1) change agents, such as partners, constituents, and allies; (2) target entities, such as decision-makers; and (3) oppositional forces, such as organizations or individuals actively opposing change. Without this information, evaluations may lack sufficient contextual data and precision about an organization's true impact, and may produce findings that are biased, unsupported, or incomplete.
- The political context (even within the same issue area) will be characterized differently depending on an organization's standpoint (e.g., grasstops versus grassroots). Identifying and framing wins and losses is a process shaped, in part, by the larger political context, which is also inherently racialized and gendered (e.g., advocates of color may frame the advocacy "wins" of white advocates as "losses").
- Study participants had mixed views on the reliability and utility of data related to oppositional forces, and specifically on the sources of these data. Several evaluators were skeptical about whether primary data would be trustworthy and whether communication with opposing organizations might lead to unintended consequences. For example, would the opposition begin targeting our client more? Would the evaluator be responsible for communicating to the opposition about what they are learning, just like any other stakeholder? Other evaluators in the study were very curious about the opposition, noting that they typically learned about its activities through the eyes and experiences of their client, and questioned whether this practice was enough.
Given this, what makes sense for your evaluation? Reflect on:
- What information is important to tell the reader in the methods section about your approach to documenting and assessing the political context?
- How did your client’s position in the larger ecosystem affect your methodology?
- How did the political context shape your findings and conclusions?
Here is a link to the full study on assessing political context. Also see Anne Bufardi et al.'s reflections on advocacy practices.
AEA365 is hosting the APC (Advocacy and Policy Change) TIG week.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.