APC TIG Week: Using Contribution Analysis to Explore Potential Causal Linkages in Advocacy and Policy Change by Robin Kane, Carlisle Levine, Carlyn Orians, and Claire Reinelt

Hello! We are Robin Kane of RK Evaluation and Strategies, Carlisle Levine of BLE Solutions, LLC, Carlyn Orians of ORS Impact, and Claire Reinelt, independent evaluation consultant. We offer evaluation, applied research and technology services to help organizations increase their effectiveness and contribute to better outcomes.

In our advocacy and policy change evaluation work, we have found contribution analysis useful for identifying possible causal linkages and for determining the strength and likelihood of those causal connections.

Contribution analysis begins with the evaluator working with advocates to develop a theory of change that describes how they believe a specific change came about. The evaluator then identifies and tests alternative explanations to that theory of change by reviewing documents and interviewing advocates’ allies, others trying to influence the policy change, and policymakers themselves. Finally, the evaluator writes a story outlining the advocates’ contribution to the change of interest, acknowledging the roles played by other actors and factors.

When trying to identify possible causal linkages in advocacy and policy change evaluation, why choose contribution analysis?

Hot Tips:

  • Contribution analysis is a good choice when the information need emphasizes a plausible, credible demonstration of contribution over proof or quantification of contribution.
  • Often in an advocacy process, multiple stakeholders are involved. Contribution analysis provides a method for distinguishing among contributions towards a policy change.
  • Contribution analysis allows for the acknowledgement of the contributions of different actors and factors to a policy change.
  • Through testing alternative explanations, contribution analysis offers a rigorous way to assess what difference a particular intervention made.

Cool Tricks:

  • Contribution analysis was developed as a performance management tool and works especially well when performance outcomes and benchmarks are clear. In advocacy evaluation, goals and strategies adapt and respond to the political environment. To address this challenge, we developed timelines of actions, including high-level policy meetings, communications and media efforts, research, and policy briefs and position papers. We mapped our timelines to strategic moments when there were incremental changes related to our policy of interest. We could then trace how an advocacy effort influenced and was influenced by a policy change process (a minimal sketch of one way to organize this mapping follows this list).
  • Interpreting the information received can be tricky, since different stakeholders have not only different perspectives on how change came about, but also different interests in how that change is portrayed. Being aware of stakeholders’ perspectives and interests is critical for accurately interpreting the data they provide.
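
To make the timeline idea above more concrete, here is a minimal sketch of one way such a mapping might be organized. The events, dates, and 90-day look-back window are purely hypothetical illustrations, not part of contribution analysis itself.

```python
from datetime import date, timedelta

# Hypothetical advocacy actions and strategic policy moments, each with a date.
advocacy_actions = [
    (date(2016, 2, 10), "Policy brief released"),
    (date(2016, 3, 1),  "One-on-one meetings with committee staff"),
    (date(2016, 5, 20), "Media campaign launched"),
    (date(2016, 9, 5),  "Sign-on letter delivered"),
]
policy_moments = [
    (date(2016, 4, 15), "Bill introduced in committee"),
    (date(2016, 10, 1), "Committee vote on amended bill"),
]

WINDOW = timedelta(days=90)  # illustrative look-back window, not a fixed rule

# For each strategic moment, list the advocacy actions that preceded it within
# the window -- a starting point for tracing possible influence, not proof of it.
for moment_date, moment in policy_moments:
    preceding = [
        action for action_date, action in advocacy_actions
        if moment_date - WINDOW <= action_date <= moment_date
    ]
    print(f"{moment_date}  {moment}")
    for action in preceding:
        print(f"  - {action}")
```

A mapping like this only makes the temporal relationships explicit; whether an action actually influenced a strategic moment still has to be probed through the document review and interviews described above.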

Rad Resources: Stay tuned for our brief on using contribution analysis in advocacy and policy change evaluation, available prior to AEA 2017 on our websites and at www.evaluationinnovation.org.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

6 thoughts on “APC TIG Week: Using Contribution Analysis to Explore Potential Causal Linkages in Advocacy and Policy Change by Robin Kane, Carlisle Levine, Carlyn Orians, and Claire Reinelt”

  1. Hi, Marc.

    I’ve certainly had that experience. I think it’s quite common.

    In cases like that, we rely on reviewing documents and conducting many more interviews with people familiar with the policy issue, the advocacy done related to it, and the change that came about. While these people are not inside the heads of the decision makers, by putting all of their perspectives together, we can hopefully get somewhere close to the truth.

    That said, we recently explored a case for which we did not have access to the policymaker, and we received conflicting responses from the sources we were able to interview. Absent access to the policymaker, we could not make a firm statement about how the intervention of interest contributed to the policy change, so we presented our analysis and noted that limitation. You’ll see this case in an evaluation we will publish sometime in July.

    In the meantime, Michael Q. Patton wrote an article some years ago on this topic that you might find useful. Here it is: http://journals.sfu.ca/jmde/index.php/jmde_1/article/view/159/181.

    Best,
    Carlisle

  2. My only question is: how do you go about interviewing decision or opinion makers about how your programme contributed to their arriving at a particular decision on a sensitive topic like LGBTI? Bearing in mind difficult contexts!

    1. Donnelly, thank you for your response. I can provide some thoughts, and then I would love to hear from others.

      When designing any evaluation, I always think about how the evaluation might affect those who participate in it, those intended to benefit from the intervention being evaluated, other stakeholders and the intervention itself. I believe that evaluation can potentially advance or undermine the intervention it is evaluating. At a minimum, I want to do no harm.

      In a recent advocacy and policy change evaluation I undertook with colleagues, we used contribution analysis and were very careful to ensure that the way we wrote up our findings would not negatively affect advocacy on that issue in the future. One of our choices was to not publicize our findings right away; we are publicizing them now, but only after careful review.

      In another evaluation of an issue that wasn’t very sensitive, it wasn’t in a policymaker’s interest to acknowledge that he had been influenced, in this case, by a children’s march, so he reported that he had no knowledge of the march. In that case, understanding why he offered that reply was important, as was triangulating data with other sources familiar with what had happened. Again, we were sensitive in how we wrote up our findings, so as not to jeopardize future advocacy.

      In your case, which is super sensitive, I might not interview the key decision or opinion makers (acknowledging the difficult position in which such interviews would put them), and instead rely on a document review and interviews with others familiar with the issue and how the change came about. I would need to interview a sufficient number of others representing a variety of perspectives to maximize the credibility of my findings. But I would only do so if interviewing those others would do no harm to future advocacy on the issue. I would definitely not publicize my findings, given the sensitivities. I would only use them internally to help guide future advocacy.

      That’s how I’m thinking about it right now. I would welcome hearing how you are approaching this. And I would also welcome hearing from others.

      Best wishes,
      Carlisle

  3. Re “The evaluator then identifies and tests alternative explanations to that theory of change…”

    Not so easy, at all…

    Here is the problem. Imagine a programme has 6 expected outputs that could be contributing to 6 different outcomes. There are 36 different possible single causal links between these events. But it is likely, in many settings, that it is _combinations_ of these that lead to more versus less achievement of each of these outcomes. There are 2^36 possible combinations of these causal links. Guess how long it would take the evaluation team to explore all of these, even at the most cursory level!

    If you really want to systematically and transparently search for alternative explanations in settings like this (and many programs will be way more complicated than this), the kind of in-depth inquiry described above will need to be _assisted_ by automated search algorithms (aka machine learning / predictive analytics algorithms). Please note I said assisted by, not replaced by.
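
    To put that search space in perspective, here is a quick back-of-the-envelope calculation using the numbers above (6 outputs, 6 outcomes); the one-second-per-combination figure is just an illustrative assumption.

    ```python
    outputs, outcomes = 6, 6

    # Each output could, in principle, be linked to each outcome.
    single_links = outputs * outcomes      # 36 possible single causal links

    # Any subset of those links could be the combination that actually matters.
    combinations = 2 ** single_links       # 2^36 possible combinations

    print(single_links)                    # 36
    print(f"{combinations:,}")             # 68,719,476,736

    # Even at one second per combination, exhaustive review is hopeless.
    seconds_per_year = 60 * 60 * 24 * 365
    print(f"{combinations / seconds_per_year:,.0f} years")  # roughly 2,179 years
    ```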

    1. Hi, Rick.

      Thanks for your comment.

      Could you share a specific example of what you are describing?

      In the interventions we’re describing, organization X is undertaking advocacy, and we’re exploring the causal linkages between that advocacy and outcome Y. The advocacy we’re talking about is a package made up of activities such as producing and publicizing briefs, having one-on-one meetings with policymakers, holding public events, organizing sign-on letters, etc. We’re not asking which of those activities had the greatest contribution, since advocacy is not a single activity. Rather, we are asking what, if any, contribution the package made.

      Our alternative explanations would include advocacy undertaken by other organizations, changes in the external environment that created new openings, or actions on the part of policymakers not influenced by others, etc.

      What you are describing sounds a bit more like a variable approach: does a cause b, as compared to something else causing b? Contribution analysis, by contrast, is more of a process approach, in which we first start with an intervention and create a story about where it led, and then start with the desired outcome and carefully work our way backwards to find everything that contributed to it. In our approach, we would come across the six outputs you describe, either as a package or as separate interventions, and, via our interviews and document review, we would get a sense of the degree to which each contributed to our outcome of interest.

      Sometime in July, we will publish an evaluation in which we used contribution analysis to explore causal linkages in some pretty complex advocacy and policy change processes. It will be great to receive your feedback on the contribution stories.

      In the meantime, let’s keep talking about this!

      Best,
      Carlisle

  4. We recently tried a similar approach to develop the story of an advocacy success. However, it was really challenging to get interviews with the decision makers involved in the policy decisions. Any ideas on how to increase access to this type of evidence?
