AEA365 | A Tip-a-Day by and for Evaluators


AEA365 Curator note: We generally feature posts by AEA staff and AEA365 Curators on Saturdays, and are now pleased to offer occasional Saturday blog posts from our esteemed AEA Board members!

Hi, I am Dominica McBride with Become: Center for Community Engagement and Social Change, and I serve on the AEA Board of Directors.

John F. Kennedy said, “It is from numberless diverse acts of courage and belief that human history is shaped. Each time a [person] stands up for an ideal, or acts to improve the lot of others, or strikes out against injustice, he sends forth a tiny ripple of hope, and crossing each other from a million different centers of energy and daring those ripples build a current which can sweep down the mightiest walls of oppression and resistance.”

In the midst of national leaders acting against our values as an organization, explicitly marginalizing many who find a professional home in AEA and harming communities that many of us serve, I believe we are called as professionals and human beings to make ripples.

In the face of a grim reality, I have hope, especially given what I know about us as evaluators. We are connected to various organizations that are connected to many people, from residents to leaders. We’re able to critically and empirically explore the intersection of our content area and the sociopolitical context and how we may use our position and expertise to move forward on a broader issue. We have a unique set of skills – to gather information, think critically, analyze, synthesize and communicate. We are able to partner with organizations and leaders in many ways to use our skillset towards action around an issue.

With this potential, there are various possibilities for a new or refined role for evaluators to make a necessary difference in this environment. For example, we could:

  • Advocate or mobilize our partners, clients and communities to move in a common direction
  • Build resilience in the systems and institutions that are being depleted of resources
  • Help communities construct new systems and programs that work for and, in many cases, could be run by them

Hot Tips:

Begin one-on-one meetings with your clients, partners, colleagues or fellow community members. Remember to reach out and listen to those not often included in evaluation, such as returning citizens from incarceration, single mothers struggling to get by, and disenfranchised youth. Listen for recurring themes about what matters to them and what may motivate them to act collectively.

After those meetings, convene groups around that common issue to develop a plan of action and ground that action in evidence.

 

Rad Resource:

To learn more about advocacy, mobilizing and organizing and for examples on successful collective action, read Jane McAlevey’s book No Shortcuts: Organizing for Power in the New Gilded Age.

*If you’re interested in exploring or working together around these possibilities, please reach out to me at dmcbride@becomecenter.org or 312-394-9274.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello! I am Liz Zadnik, Capacity Building Specialist at the New Jersey Coalition Against Sexual Assault. I’m also a new member of the aea365 curating team and first-time Saturday contributor!  Over the past five years I have been working within the anti-sexual violence movement at both the state and national levels to share my enthusiasm for evaluation and support innovative community-based programs doing tremendous social change work.

In that time, I have been honored to work with talented evaluators and social change agents in the sexual violence prevention movement. A large part of my work has been demystifying evaluation and data for community-based organizations and professionals with limited academic evaluation experience.

Rad Resources: Some of my resources have come from the field of domestic and sexual violence intervention and prevention, as well as this blog! I prefer resources that offer practical application guidance and are accessible to a variety of learning styles and comfort levels. A partnership between the Resource Sharing Project and National Sexual Violence Resource Center has resulted in a fabulous toolkit looking at assessing community needs and assets. I’m a big fan of the Community Tool Box and their Evaluating the Initiative Toolkit as it offers step-by-step guidance for community-based organizations. Very similar to this is The Ohio Domestic Violence Network’s Primary Prevention of Sexual and Intimate Partner Violence Empowerment Evaluation Toolkit, which incorporates the values of the anti-sexual violence movement into prevention evaluation efforts.

Lesson Learned: Be yourself! Don’t stifle your passion or enthusiasm for evaluation and data. I made the mistake early in my technical assistance and training career of trying to fit into a role or mold I created in my head. Activists of all interests are needed to bring about social change and community wellness. Once I let my passion for evaluation show – in publications, trainings, and technical assistance – I began to see marked changes in the professionals I was working with (and myself!). I have seen myself grow as an evaluator by leaps and bounds since I made this change – so don’t be afraid to let your love of spreadsheets, interview protocols, theories of change, or anything else show!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Lisa Hilt, a Monitoring, Evaluation, and Learning Advisor for Policy and Campaigns at Oxfam.

We strive for policy changes that will right the wrongs of poverty, hunger, and injustice. Much of our progress takes place in small steps forward, resulting from ongoing engagement with key stakeholders and multiple campaign spikes (high intensity, short-term advocacy moments focused on a particular issue).

Following these campaign spikes, teams ask:

  • Were the outcomes worth the resources we invested?
  • How can we be more effective and efficient?

We evaluators ask: How can we support teams to answer these questions with confidence when in-depth analyses are not possible or appropriate? We’ve found from our experience at Oxfam that conducting “simple” value for money analyses for campaign spikes is a useful alternative for the teams we support.

Here are a few tips and lessons based on our experience:

Hot Tips:

Plan ahead: Even simple analysis can be difficult (or impossible) to conduct without pre-planning. Decide in the planning phases of the campaign spike which indicators and investments will be tracked and how.

Break down investments by tactic: Having even a high-level breakdown of spending and staff time by key tactics (see example) enables more nuanced analysis of the connections between particular investments and the intended outcomes.
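If the team does track spend and staff time by tactic, the core arithmetic is simple. Below is a minimal Python sketch of that kind of breakdown; the tactic names, spend figures, staff days, and outcome counts are invented for illustration and are not Oxfam data.

```python
# Hypothetical sketch: summarizing campaign-spike investments by tactic.
# All tactic names and numbers are invented for illustration.

spike_investments = {
    # tactic: (direct spend in USD, staff days, outcomes the team attributes to it)
    "media outreach":      (12_000, 30, 4),
    "public mobilization": (18_000, 55, 6),
    "lobby meetings":      (5_000,  20, 3),
}

total_spend = sum(spend for spend, _, _ in spike_investments.values())

for tactic, (spend, staff_days, outcomes) in spike_investments.items():
    share = spend / total_spend
    cost_per_outcome = spend / outcomes if outcomes else float("inf")
    print(f"{tactic:20s}  ${spend:>7,}  ({share:4.0%} of spend)  "
          f"{staff_days:>3} staff days  ~${cost_per_outcome:,.0f} per outcome")
```

A breakdown like this is an input to the team conversation, not a verdict on any single tactic.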

Team analysis is key: In addition to using “hard” data as a source of evidence, draw on the insights of team members, who bring multiple perspectives and expertise in their fields, to assess how their interrelated efforts contributed to the results. Team debriefs are an effective way to do this.


Lessons Learned:

Present information visually: A visual presentation of investments and outcomes enhances the team’s ability to make sense of the information and generate actionable insights (see example). Indicate which tactics were intended to achieve specific objectives.
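As one hypothetical illustration of such a visual, the matplotlib sketch below charts spend per tactic and annotates each bar with the objective the tactic was intended to serve. The tactics, amounts, and objective labels are invented, not drawn from an actual campaign.

```python
# Hypothetical sketch: spend per tactic, annotated with the intended objective.
import matplotlib.pyplot as plt

tactics  = ["Media outreach", "Public mobilization", "Lobby meetings"]
spend    = [12_000, 18_000, 5_000]                               # USD per tactic
intended = ["Objective A", "Objectives A and B", "Objective B"]  # intended objective(s)

fig, ax = plt.subplots(figsize=(7, 3))
bars = ax.barh(tactics, spend, color="steelblue")
for bar, label in zip(bars, intended):
    ax.text(bar.get_width() + 300, bar.get_y() + bar.get_height() / 2,
            label, va="center", fontsize=9)
ax.set_xlabel("Spend (USD)")
ax.set_xlim(0, 24_000)  # leave room for the objective labels
ax.set_title("Campaign spike: investment by tactic and intended objective")
plt.tight_layout()
plt.show()
```

Keeping the spend and the intended objective in the same picture is what lets the team ask whether the investment mix matched the strategy.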

Don’t let perfection be the enemy of the good: Slightly imperfect analysis is better than no analysis at all, and often adequate for short-term campaign spikes. Match the levels of rigor and effort to the confidence level needed to enable the team to generate reliable insights.

Trust is important: Trust and communication are fundamental to honest conversations within the team. Be cognizant of team dynamics when designing team reviews, and focus the discussion on the outcomes and tactics, not individual performance.

Focus on the future: The strategic learning and forward-looking aspects of this type of exercise are arguably the most important. While looking back at the campaign spike, focus the conversation on what the team can learn from this experience to improve future efforts.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! I’m Carlisle Levine, an independent evaluator specializing in advocacy, peacebuilding and strategic evaluation. I led CARE USA’s advocacy evaluation and co-led Catholic Relief Services’ program evaluation.

Because of the many factors that influence policy change and the time it takes for change to come about, a big challenge in advocacy evaluation is drawing causal relationships between advocacy activities and policy outcomes. Contribution analysis is an approach that responds to this challenge.

John Mayne outlines a six-step process for undertaking contribution analysis (a simple sketch of how the resulting evidence might be organized follows the list):

  1. An advocacy team identifies the causal relationship it wants to explore: Did a particular set of advocacy activities contribute to a targeted policy change?
  2. An evaluator helps the team describe how they believe their advocacy intervention contributed to the desired policy change and identify the assumptions underlying their story, thus, articulating their theory of change.
  3. The evaluator gathers evidence related to this theory of change.
  4. The evaluator synthesizes the contribution story, noting its strengths and weaknesses.
  5. By gathering perspectives from allied organizations, others involved in the policy change process, and ideally, policy makers themselves, the evaluator tests the advocacy team’s theory of change.
  6. Using triangulation, the evaluator develops a more robust contribution story. With a wide enough range of perspectives collected, this analysis can provide a credible indication of an advocacy intervention’s contribution to a targeted policy change.
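As noted above, here is one possible way to keep the evidence organized while working through steps 3 to 6. This is bookkeeping around Mayne’s process, not part of the method itself; the claim, sources, strength labels, and assumptions are invented for illustration.

```python
# Hypothetical sketch: organizing evidence for a contribution story.
contribution_story = {
    "claim": "Coalition briefings contributed to the amendment of Policy X",
    "supporting_evidence": [
        {"source": "ally interview",       "strength": "strong"},
        {"source": "committee transcript", "strength": "moderate"},
    ],
    "alternative_explanations": [
        {"factor": "media coverage of a related scandal", "strength": "moderate"},
    ],
    "assumptions_tested": ["decision makers actually attended the briefings"],
}

def summarize(story):
    """Print a rough balance of supporting evidence and rival explanations."""
    n_support = len(story["supporting_evidence"])
    n_alt = len(story["alternative_explanations"])
    print(f"Claim: {story['claim']}")
    print(f"  {n_support} piece(s) of supporting evidence, "
          f"{n_alt} alternative explanation(s) still to weigh.")

summarize(contribution_story)
```

In practice, the strength labels and the list of alternative explanations are exactly what get revised as external perspectives (step 5) come in.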

Cool Tricks:

  • Timelines can help advocacy teams remember when activities happened and how they relate to each other.
  • Questions such as “And then what happened?” can help a team articulate how an activity contributed to short and medium-term results.
  • Questions such as “What else contributed to that change coming about?” can help a team identify other factors, beyond their activities, that also contributed to the targeted results.
  • When gathering external perspectives, interviewers may start by asking about the targeted policy change and how it came about. Later in the interview, once the interviewee has shared his/her change story, the evaluator can ask about the role of the organization or coalition being evaluated.

Lessons Learned:

  • External stakeholders are more likely to agree to an interview about an initially unnamed organization or coalition if they are familiar with the evaluator. This is especially true with policy makers.
  • Where external stakeholders do not know an evaluator, a well-connected person independent of the organization or coalition being evaluated can facilitate those introductions.
  • Stakeholders will offer distinct perspectives, based on their experience and interests. The more stakeholders one can include, the better.

Rad Resource: APC Week: Claire Hutchings and Kimberly Bowman on Advocacy Impact Evaluation, February 7, 2013.

Clipped from http://betterevaluation.org/plan/approach/contribution_analysis

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello evaluation world! We are Kat Athanasiades and Veena Pankaj from Innovation Network.

This might sound familiar: you are given hundreds of pages of grant documents to make sense of. You are left wondering, “Where do I start?”

Fig. 1: A typical expression of one of the authors in this situation.

We were recently tasked with guiding evaluation for a funder’s national advocacy campaign, and had to make sense of advocacy data contained in 110 grants. Where did we start?

Julia Coffman’s Framework for Public Policy Advocacy (the Framework; Fig. 2), a comprehensive “map” of strategies that might be used in an advocacy campaign, was the perfect tool to analyze the grant reports. It let us identify and compare advocacy strategies employed by grantees individually, as well as step back and look at strategies used across the campaign.

Rad Resource: You can learn more about the Framework in Julia Coffman’s Foundations and Public Policy Grantmaking.

Fig. 2: The Framework for Public Policy Advocacy plots advocacy strategies against possible audiences (X-axis) and different levels of engagement of those audiences (Y-axis).

So how did we actually use the Framework to help us with analysis?

1. We reviewed grant reports and determined which strategies were used by each grantee. We created a top sheet to record this information (Fig. 3).

Fig. 3: A sample top sheet for one grant, with relevant advocacy strategies identified.

2. We entered the data into Excel, where it would be easy to manipulate into a visual, reportable format.

3. We created a series of “bubble charts” (a chart option in Excel) to display the information (Figs. 4, 5).

Fig. 4: Each “bubble” above represents an advocacy strategy used by Organization X. Blue bubbles represent awareness-building strategies, red show will-building, and yellow denote action strategies.

Fig. 5: Across all the grants in this campaign, you can quickly see by the bubble size that certain strategies were prioritized: specifically, grantees used awareness-building strategies most often. These charts allowed the funder to quickly grasp the breadth and depth of the advocacy work in their campaign.

Hot Tip: If you’re designing data collection, the Framework provides a systematic way to sort grantees for further analysis based on the type of advocacy work they are engaged in.

Rad Resource: Want to learn how to make bubble charts? Check out Ann Emery’s blog to get help on constructing circle charts.
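If you work outside Excel, the same kind of chart can be scripted. Below is a hypothetical matplotlib sketch; the strategies, their positions on the two axes, and the grantee counts are all invented, and the three axis categories are simplified stand-ins for the Framework’s actual dimensions.

```python
# Hypothetical sketch: a bubble chart of advocacy strategies across a portfolio.
import matplotlib.pyplot as plt

# Each entry: (strategy, audience position, engagement level, number of grantees).
# x: 0 = public, 1 = influencers, 2 = decision makers
# y: 0 = awareness, 1 = will, 2 = action
strategies = [
    ("Public awareness campaigns", 0, 0, 42),
    ("Coalition building",         1, 1, 23),
    ("Lobbying",                   2, 2, 11),
]
colors = {0: "tab:blue", 1: "tab:red", 2: "tab:orange"}  # color by engagement level

fig, ax = plt.subplots(figsize=(6, 4))
for name, x, y, count in strategies:
    ax.scatter(x, y, s=count * 60, alpha=0.6, color=colors[y])
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(0, 14),
                ha="center", fontsize=8)
ax.set_xticks([0, 1, 2])
ax.set_xticklabels(["Public", "Influencers", "Decision makers"])
ax.set_yticks([0, 1, 2])
ax.set_yticklabels(["Awareness", "Will", "Action"])
ax.set_xlim(-0.5, 2.5)
ax.set_ylim(-0.5, 2.5)
ax.set_title("Strategies across the portfolio (bubble size = number of grantees)")
plt.tight_layout()
plt.show()
```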

We would love to hear how you use the Framework in your work! Let us know via email or in the comments below.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Jewlya Lynn, CEO at Spark Policy Institute, where we combine policy work with real-time evaluations of advocacy, field building, collective impact, and systems building to achieve sustainable, meaningful change.

While advocacy evaluation as a field has developed tools and resources that are practical and appropriate for advocacy, it has done little to figure out the messy issue of evaluating actual changes in public will.

Most advocacy evaluation tools are too focused on the advocates and champions to learn about the impact on the public. Polling is one approach, but if you’re on the ground mobilizing volunteers to change the way the public is thinking about an issue, public polls are too far removed from the immediate impact of your work. So what do you evaluate?

Cool Trick: When evaluating a campaign to build public will for access to healthcare, polling results provided us with context on the issue, but didn’t help us understand the impact on the general public. Evaluating the immediate outcome of a strategy (e.g., how forum participants responded to the event) had value, but also didn’t tell us enough about the overall impact of the work on public will.

We decided to try a new approach, designing a “stakeholder fieldwork” technique that was a hybrid of polling and more traditional interviews and surveys:

  • Similar to polling, the interviews took only 15 minutes, were conducted by phone, and were unscheduled and unexpected.
  • Unlike typical polling, the participants were identified by sampling the phone numbers of the actual audience members of the various grantee activities. Participants were called by researchers with community mobilizing experience and the questions were open-ended, exploring audience experiences with the activity they had been exposed to and how they engaged in other parts of the strategy. We asked for the names and contact information of people they talked to about their experience, allowing us to call the people who represented the “ripple effect.”

The outcome? We learned how more than 100 audience members benefited from multiple types of engagement, and we learned about the impact of the “ripple effect,” including the echo chamber that existed among audiences of the overall strategy.

Hot (Cheap) Tip: Polling companies use online software to manage high-volume outbound calling and to capture the data. Don’t have money to purchase this type of capacity? We adapted a typical online survey program into our very own polling software!
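To make the approach concrete, here is a minimal, hypothetical Python sketch of the bookkeeping a tool like that has to handle: drawing an unscheduled call sample from the audience contact lists and carrying forward referrals so the “ripple effect” can be followed in a second wave. The contact list, sample size, and record fields are invented; this is not the software we built.

```python
# Hypothetical sketch: sampling and tracking for one "stakeholder fieldwork" round.
import random

random.seed(7)  # reproducible sample for the example

# Seed the call list from activity sign-in sheets (invented names).
audience_contacts = [f"attendee_{i}" for i in range(1, 201)]
call_queue = random.sample(audience_contacts, 30)  # short, unscheduled phone interviews

completed, ripple_queue = [], []
for contact in call_queue:
    record = {
        "contact": contact,
        "engagements": [],   # open-ended notes on which activities they took part in
        "referred": [],      # people they say they talked to about the issue
    }
    # ...interviewer records open-ended responses here...
    completed.append(record)
    ripple_queue.extend(record["referred"])  # second-wave calls: the "ripple effect"

print(f"First wave: {len(completed)} interviews; "
      f"{len(ripple_queue)} ripple-effect contacts to follow up.")
```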

Rad Resource: The Building Public Will 5-Phase Communication Approach from The Metropolitan Group is a great resource to guide your evaluation design and give you language to help communicate your results.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! My name is Rhonda Schlangen and I’m an evaluation consultant specializing in advocacy and development.

By sharing struggles and strategies, evaluators and human rights organizations can help break down the conceptual, capacity and cultural barriers to using monitoring and evaluation (M&E) to support human rights work. In this spirit, three human rights organizations candidly profiled their efforts in a set of case studies recently published by the Center for Evaluation Innovation.

Lessons Learned:

  • Logic models may be from Mars: Evaluation can be perceived as at cross-purposes with human rights efforts. The moral imperative of human rights work means that “results” may be unattainable, and planning for a specific result at a point in time risks driving work toward the achievable and countable. Learning-focused evaluation can be a useful entry point, emphasizing evaluative processes like critical reflections and one-day ‘good enough’ evaluations.
  • Rewrite perceptions of evaluation orthodoxy: There’s a sense in the human rights groups reviewed for this project that credible evaluation follows narrow and rigid conventions and must produce irrefutable proof of impact. Evaluators can help recalibrate perceptions by focusing on a broader suite of approaches appropriate to complex change scenarios (such as outcome mapping or outcome harvesting).
  • Methods are secondary: Equally important as the tools and methods used, if not more so, is the confidence and capacity of staff and managers to use them. Investing in training and support is important. Prioritizing self-directed, low-resource internal learning as an integrated part of program work also helps cultivate a culture of evaluation. (See this presentation on organizational learning for an overview and stay tuned for an upcoming paper from the Center for Evaluation Innovation on the topic.)

Rad Resources: Evidence of change journals: Excel workbooks populated with outcome categories, these journals are shared platforms where human rights and other campaigners can log signs of progress and change. The tool facilitates real-time tracking and analysis of developments related to a human rights issue and advocacy efforts.
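A shared Excel workbook is the natural platform for such a journal, but the same idea can be kept in any shared file. As a purely illustrative sketch, the Python snippet below appends entries to a shared CSV; the field names, outcome category, and example entry are invented, not taken from an actual journal.

```python
# Hypothetical sketch: an "evidence of change" journal kept as a shared CSV file.
import csv
import os
from datetime import date

JOURNAL = "evidence_of_change.csv"
FIELDS = ["date", "outcome_category", "description", "source", "logged_by"]

def log_change(entry: dict, path: str = JOURNAL) -> None:
    """Append one observed sign of progress or change to the shared journal."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(entry)

log_change({
    "date": date.today().isoformat(),
    "outcome_category": "shifts in official discourse",
    "description": "Minister used the coalition's framing in a press briefing",
    "source": "press transcript",
    "logged_by": "campaign lead",
})
```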

Intense period debriefs: Fitting into the slipstream of advocacy and campaigns, these are a systematic and simple way to review what worked, and what didn’t, after particularly intense or critical advocacy moments. The tool responds to the inclination of advocates to keep moving forward but creates space for collective reflection.

People-centered change models: A Dimensions of Change model, such as this one developed by the International Secretariat of Amnesty International, can serve as a shared lens for work that spans different types of human rights and different levels—from global to community.

Get involved: Evaluators can contribute to the discussion with the human rights defenders through online forums like the one facilitated by New Tactics in Human Rights.


The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! My name is Anna Williams. I provide evaluation, facilitation, and learning services for social change organizations and BHAG initiatives. (BHAGs, for those unfamiliar with this highly technical term, are Big Hairy Audacious Goals.)

I would like to encourage you to consider whether methods used to evaluate advocacy efforts are relevant to your work, particularly if you currently do not think that they are.

First, some context:

Five years ago, after years of conducting program evaluations for government agencies, I began evaluating a global effort created to provide specialized technical assistance to policy makers in a particular sector.  Those providing the assistance were engineers, scientists, and other technical consultants; they did not consider themselves to be “advocates.”  Yet the most viable methods and tools for evaluating their work, including mixed-method contribution analysis, outcome mapping, analysis of interim outcomes, and social network analysis, all came from – or were used for – evaluation of advocacy.

The same scenario arose when evaluating the work of an academically based institution working to inform the public and decision makers using objective, scientifically credible research.   The organization would never call its work advocacy, but the applicable methods were those used to evaluate advocacy.

This story has repeated itself several times over.

Lessons Learned: The term “advocacy” continues to have a narrow interpretation associated with campaigning, lobbying, grassroots organizing, and public opinion.  People often do not associate “advocacy” with other types of information provision or attempts to influence even though these too could fit under a broader interpretation of the word. 

Methods for evaluating advocacy are more broadly applicable than many think.  They apply to efforts with unpredictable or hard to measure outcomes, efforts where outcomes depend on some kind of influence (including promoting the scale-up of direct services), or efforts occurring in complex dynamic contexts where strategies must adapt to be successful.

Further, the methods used to evaluate advocacy are still considered by some as less credible, even though other methods, including experimental or quasi-experimental methods, are not suitable, feasible, or appropriate for advocacy efforts (broadly defined).

At the same time, the field of advocacy and policy change evaluation is still emerging.   Those of us in the trenches are developing new tools and testing methodological boundaries; we can benefit from new ideas, building capacity, and refining methods further.

For these reasons, I encourage an open mind about evaluation of advocacy and policy change.

The forthcoming posts sponsored by the Advocacy and Policy Change TIG include practical tips, tricks, and resources. We invite you to reflect on these posts, share thoughts about the relevance of methods used for evaluation of advocacy and policy change, and offer ideas on how this field can have broader resonance and reach.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Colleen Duggan, Senior Evaluation Specialist, International Development Research Centre (Canada) and Kenneth Bush, Director of Research, International Conflict Research (Northern Ireland).  For the past three years, we have been collaborating on a joint exploratory research project called Evaluation in Extremis:  The Politics and Impact of Research in Violently Divided Societies, bringing together researchers, evaluators, advocates and evaluation commissioners from the global North and South. We looked at the most vexing challenges and promising avenues for improving evaluation practice in conflict-affected environments.

Challenges: Conflict Context Affects Evaluation – and vice versa. Evaluation actors working in settings affected by militarized or non-militarized violence suffer from the typical challenges confronting development evaluation. But conflict context shapes how, where and when evaluations can be undertaken – imposing methodological, political, logistical, and ethical challenges. Equally, evaluation (its conduct, findings, and utilization) may affect the conflict context – directly, indirectly, positively or negatively.


Lessons Learned:

Extreme conditions amplify the risks to evaluation actors.  Contextual volatility and political hyper-sensitivity must be explicitly integrated into the planning, design, conduct, dissemination, and utilization of evaluation.

  1. Some challenges can be anticipated and prepared for; others cannot. By recognizing the most likely dangers/opportunities at each stage in the evaluation process, we are better prepared to circumvent “avoidable risks or harm” and to prepare for unavoidable negative contingencies.
  2. Deal with politico-ethical dilemmas. Being able to recognize when ethical dilemmas (questions of good, bad, right and wrong) collide with political dilemmas (questions of power and control) is an important analytical skill for both evaluators and their clients. Speaking openly about how politics and ethics – and not only methodological and technical considerations – influence all facets of evaluation in these settings reinforces local social capital and improves evaluation transparency.
  3. The space for advocacy and policymaking can open or close quickly, requiring readiness to use findings posthaste. Evaluators need to be nimble, responsive, and innovative in their evaluation use strategies.

Rad Resources:

  • 2013 INCORE Summer School Course on Evaluation in Conflict Prone Settings, University of Ulster, Derry/Londonderry (Northern Ireland): a five-day skills-building course for early- to mid-level professionals facing evaluation challenges in conflict-prone settings or involved in commissioning, managing, or conducting evaluations in a programming or policy-making capacity.
  • Kenneth Bush and Colleen Duggan (2013). Evaluation in Extremis: The Politics and Impact of Research in Violently Divided Societies (Delhi: SAGE, forthcoming).

The American Evaluation Association is celebrating Advocacy and Policy Change (APC) TIG Week with our colleagues in the APC Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello – we’re Claire Hutchings and Kimberly Bowman, working with Oxfam Great Britain (GB) on Monitoring, Evaluation and Learning of Advocacy and Campaigns. We’re writing today to share with you Oxfam GB’s efforts to adopt a rigorous approach to advocacy impact evaluation and to ask you to help us strengthen our approach.

Rad Resources:

As part of Oxfam GB’s new Global Performance Framework, each year we randomly select and evaluate a sample of mature projects. Project evaluations that don’t lend themselves to statistical approaches, such as policy-change projects, are particularly challenging. Here, we have developed an evaluation protocol based on a qualitative research methodology known as process tracing. The protocol attempts to get at the question of effectiveness in two ways: by seeking evidence that can link the intervention in question to any observed outcome-level change, and by seeking evidence for alternative “causal stories” of change in order to understand the significance of any contributions the intervention made to the desired change(s). Recognizing the risks of oversimplification and/or distortion, we are also experimenting with the use of a simple (1-5) scale to summarize the findings.
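To make the two-sided search for evidence and the summary scale concrete, here is a heavily simplified, hypothetical Python sketch of the underlying bookkeeping: weighing evidence for the intervention’s causal story against rival explanations and collapsing the balance onto a 1-5 rating. The evidence items, weights, and scoring rule are invented and are not Oxfam GB’s actual protocol.

```python
# Hypothetical sketch: comparing causal stories and summarizing on a 1-5 scale.
# Evidence items and integer weights (1 = weak, 2 = strong) are invented.
causal_stories = {
    "campaign contribution": [
        ("minister cited the campaign's report in debate", 2),
        ("policy shift followed the lobby meetings",       1),
    ],
    "rival: donor pressure": [
        ("donor conditionality announced the same quarter", 2),
    ],
    "rival: media scandal": [
        ("scandal coverage preceded the vote", 1),
    ],
}

scores = {story: sum(w for _, w in items) for story, items in causal_stories.items()}
campaign = scores["campaign contribution"]
strongest_rival = max(v for k, v in scores.items() if k != "campaign contribution")

# Crude 1-5 summary: shift a neutral midpoint of 3 by the evidence margin.
rating = max(1, min(5, 3 + campaign - strongest_rival))

print("Evidence weights:", scores)
print("Summary rating (1-5):", rating)
```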

Lessons Learned (and continuing challenges!):

  • As a theory-based evaluation methodology, process tracing involves understanding the Theory of Change underpinning the project/campaign, but this is rarely explicit – and can take time to pull out.
  • It’s difficult (and important) to identify ‘the right’ interim outcomes to focus on. They shouldn’t be very close in time and type to the intervention; that could make the evaluation superfluous. Nor should the outcomes be so far down the theory of change that they can’t realistically occur or be linked causally to the intervention within the evaluation period.
  • In the absence of a “signature” – something that unequivocally supports one hypothesized cause – what constitutes credible evidence of the intervention’s contribution to policy change?  Can we overcome the charge of (positive) bias so often leveled at qualitative research?

And of course, all this coupled with the very practical implementation challenges!  The bottom line: like all credible impact evaluations, it takes time, resources, and expertise to do these well. We have to balance real resource and time constraints with our desire for quality and rigor.

As we near the end of our second year working with this protocol, we are looking to review, refine, and strengthen our approach to advocacy evaluation.  We would welcome your inputs! Please use the comments function below or blog about the issue to share your experience and insights, “top tips” or “rad resources.”  Or email us directly.

The American Evaluation Association is celebrating Advocacy and Policy Change (APC) TIG Week with our colleagues in the APC Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

