AEA365 | A Tip-a-Day by and for Evaluators


Hi, I’m Barbara Klugman. I offer strategy support and conduct evaluations with social justice funders and NGOs in South Africa and internationally. I practice utilization-focused evaluation, frequently using mixed methods including outcomes harvesting and social network analysis (SNA). My own history spans social activism, directing NGOs, and both working for and serving on the boards of foundations.

AEA’s conference theme is Speaking Truth to Power. Speaking truth to power is particularly challenging because of inequitable power relations between nonprofits and their funders, and even between boards and staff. Evaluators can play a useful intermediary role by providing both the evidence and the facilitation to open space for honest communication.

Hot Tip: I have found that the following six factors influence the effectiveness of my communication across power divides:

  1. Timing of the evaluation and a formative or developmental approach may enhance both grantee and funder interest in the outcomes.
  2. Making learning rather than compliance the evaluation objective creates an environment that welcomes insights to strengthen effectiveness and removes much of the fear and risk from evaluation.
  3. The evaluator needs substantial capacity for an evaluation practice that builds trust, undercuts anxiety, and establishes rules of engagement that allow those with the least power to engage with, influence and use findings.
  4. The production of high-quality evidence, while seemingly self-evident, will be more effective in speaking truth to power if all parties have agreed on the questions, the mix of methods and the evaluation rubrics.
  5. A commitment to, and comfort with, the role of evaluator as social justice advocate assumes that the evaluator can navigate when it is appropriate for her to speak and when to empower the evaluand to do so.
  6. Terms of reference that give the evaluator the independent right and resources to communicate findings to audiences beyond the intended users or those to whom they disseminate findings. While recognising the concomitant ethical responsibility to do no harm, the right and resources to publish findings are critical to an evaluator’s ability to speak truth to power and to ensuring that the resources that go into evaluation contribute to broader learning in the field.

Rad Resources: As an illustrative example, see the public communications from the evaluation team of the Ford Foundation’s $54m Strengthening Human Rights Worldwide global initiative. The ToR included funds for the team to publicize findings in Spanish and English, including a summary report, a series of blogs and videos, an article for the international human rights journal SUR and a reflection in Alliance magazine.

Blogs:

The Value of Diversity in Creating Systemic Change for Human Rights

Finding Equity – Shifting Power Structures in Human Rights

Addressing Systemic Inequality in Human Rights Funding

Videos:

The Human Rights System is Under Attack – Can it Survive Current Global Challenges?

The Changing Ecology of the Human Rights Movement

Funding an Effective Human Rights Movement

 Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I am Tosca Bruno-van Vijfeijken, and I direct the Transnational NGO Initiative at Syracuse University, USA. The Initiative has assisted a number of major international non-governmental organizations (NGOs) in reviewing their leadership and management practices related to large-scale organizational change. My colleagues – Steve Lux, Shreeya Neupane, and Ramesh Singh – and I recently completed an external assessment of Amnesty International’s Global Transition Program (GTP). The assessment objectives included, among others, the way in which GTP had affected Amnesty’s human rights advocacy outcomes. It also assessed the efficacy of Amnesty’s change leadership and management. One of the fundamental difficulties with this assessment was the limitation on time and resources. As a result, it was not possible to develop objective measures that directly represented either assessment objective. Instead, the assessment primarily triangulated the perceptions of staff at various levels and from different identity groups within the organization as an approximate measure of the effect of GTP on human rights advocacy goal achievement.

Lessons Learned:

  1. The change process was controversial within Amnesty and generated high emotions – both for and against. To protect the credibility of the assessment, we gathered multiple data sources and triangulated staff views through careful sampling for surveys, interviews and focus groups. A survey of external peers and partners added independent perspectives. Workshops to validate draft findings with audiences that had both legitimacy and diversity of views were critical as well.
  2. Evaluating human rights advocacy outcomes is complex. Process and proxy indicators were essential in our assessment.
  3. It is equally difficult to attribute human rights advocacy outcomes to Amnesty’s change process, due to the lack of comparative baseline information or counterfactuals.
  4. Amnesty is a complex, democratic, membership-based NGO. Given the controversy around the ‘direction of travel’ under GTP, Amnesty promised accountability towards its members by requesting this External Assessment barely four years after the change process had been announced. Statements about the extent of correlation between the GTP and human rights advocacy outcomes thus had to be all the more qualified.

With high-profile, high-emotion evaluations like this that are also largely dependent on staff perspectives, the number of ‘mentions’ and/or recurrent staff views was one obvious indicator. However, as evaluators we also need, in a defensible way, to judge the strength of points made or issues raised, and to include not just their frequency but also the gravity of their expression.

  5. Evaluators need to be acutely aware of where power is situated in organizations if they want to produce actionable, utilization-focused evaluations.

  6. In high-profile evaluations such as this, the ability both to understand senior leadership contexts, perspectives and world views and to speak truth to power is important.

Rad Resources: The frameworks by Bolman and Deal (Reframing Organizations: Artistry, Choice and Leadership, 2017) and William and Susan Bridges (Managing Transitions, 2017)  offer consistent value in evaluating organizational change processes in INGOs.

Continue the conversation with us!  Tosca tmbruno@maxwell.syr.edu

 

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to aea365 come from our AP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Johanna Morariu and Katie Fox of Innovation Network and Marti Frank of Efficiency for Everyone. Since 2015 our team has worked with the Center for Community Change (CCC) to develop case studies of economic justice campaigns in Seattle; Washington, DC; and Minneapolis/St. Paul.

In each case study, our goal was to lift up the factors that contributed to the success of the local advocacy campaigns to deepen learning for staff within the organization about what it takes to run effective campaigns. After completing two case studies that shared a number of success factors, we realized an additional benefit of the studies: to provide CCC with additional criteria for future campaign selection. Our case study approach was deeply qualitative and allowed success factors to emerge from the stories and perspectives of the campaign’s participants. Using in-depth interviews to elicit perspectives on how and why change happened, we constructed an understanding of campaign timelines and the factors that influenced their success.

Lessons Learned: This form of inquiry produced two categories of success factors:

(1) contextual factors that are specific to the history, culture, or geography of place; and

(2) controllable factors that may be replicable given sufficient funding, time, and willingness on the part of partners in the field.

These factors broaden the more traditional campaign selection criteria, particularly by emphasizing the importance of local context.

Traditional campaign selection criteria often focus on considerations like “winnability,” elite and voter interests, and having an existing base of public support. While important, these factors do not go deep enough in understanding the local context of a campaign and the unique dynamics and assets of a place that may impact success.

Take for example one of the contextual factors we identified: The localities’ decision makers and/or political processes are accessible to diverse viewpoints and populations. In each of the case studies, the local pathways of influence were relatively accessible to advocates and community members. If this factor is in the mix, a funder making a decision about which campaigns to support may ask different questions and may even come to a different decision. In addition to asking about a campaign’s existing level of support and the political alignment of the locality, the funder would also need to know how decisions are made and who has the ability to influence them.

Lesson Learned: Our analysis produced five other contextual factors that influenced success, including: high levels of public awareness and support for the campaign issue; a progressive population (the campaigns focused on economic justice issues); an existing network of leaders and organizations with long-standing relationships; the existence of anchor organizations and/or labor unions with deep roots in the local community; and the small relative size of the cities.

Hot Tip: The factors provided a useful distinction between assets that were in existence or not (contextual) and factors that, if not already present, could potentially be developed by a new campaign (controllable). The factors also highlight the need to attend to place-based characteristics to understand the success of campaigns.

 


 


Hello! We are Carlisle Levine with BLE Solutions in the United States and Toyin Akpan with Auricle Services in Nigeria. We served as evaluation partner to Champions for Change Nigeria, an initiative that builds Nigerian NGOs’ capacities to more effectively advocate for policies and programs that support women’s and children’s health. Through this experience, we learned important lessons about international partnerships and their value for advocacy evaluation.

Lesson Learned: Why is building international teams important for advocacy evaluation?

Much advocacy measurement relies on access, trust and accurately interpreting information provided.

  • Assessing advocacy capacity: Many advocacy capacity assessment processes rely on advocates’ self-reporting, often validated by organizational materials. For advocates to answer capacity assessment questions honestly, trust is required. That trust is more easily built with evaluators from the advocates’ context.
  • Assessing advocacy impact: Timelines and document reviews can identify correlations between advocates’ actions and progress toward policy change. However, reducing uncertainty about the contribution of an initiative to observed results often requires triangulating interview sources, including relevant policymakers. An evaluator from a specific policy context is more likely to gain access to policymakers and accurately interpret the responses they provide.

In advocacy evaluation, an evaluation teammate from a specific policy context ideally:

  • Understands the context;
  • Is culturally sensitive;
  • Has relationships that give her access to key stakeholders, such as policymakers;
  • Knows local languages;
  • Can build trust more quickly with evaluation participants;
  • Knows appropriate data collection approaches; and
  • Can correctly interpret data collected.

An evaluation teammate from outside a specific policy context ideally helps ensure that:

  • An evaluation is informed by other contexts;
  • Additional critical questions are raised; and
  • Additional alternative perspectives are considered.

Rad Resources: How did we find each other?

We did not know each other before this partnership. We found each other through networking, and then interviewed each other and checked each other’s past work.

There are a number of other resources we could have used to find each other.

Hot Tips: How did we make it work?


  • We communicated frequently to get to know each other. Building trust was critical to our partnership’s success.
  • We stayed in touch using Skype, phone, WhatsApp and email.
  • We were open to each other’s ideas and input.
  • We were sensitive to our cross-cultural communication.
  • We learned about our complementary evaluation skills: Carlisle wrote succinctly, while Toyin collected and analyzed data in the Nigerian context. Over time, our expectations of each other and the speed with which we worked improved.
  • We made our partnership a learning experience, seeking opportunities to strengthen our skills and to present our findings.

Building our international evaluation team took effort. As a result of our investment, we provided our client with more nuanced and accurate insights to inform initiative improvement, and we grew as evaluators.



Hi, I’m Barbara Klugman. I offer strategy support and conduct evaluations with social justice funders, NGOs, networks and leadership training institutions in South Africa and internationally. I practice utilization-focused evaluation, frequently using mixed methods including outcomes harvesting and Social Network Analysis (SNA).

Rad Resource: For advocacy evaluation, SNA can help identify:

  • how connected different types of advocacy organizations are to each other;
  • what roles they play in relation to each other such as information exchange, partnering for litigation, driving a campaign, or linking separate networks;
  • if and how their positioning changes over time in terms of relative influence in the network.

The method involves surveying all the groups relevant to the evaluation question, asking whether they have a particular kind of relationship with each of the other groups surveyed. To illustrate the usefulness of SNA, the map below shows an information network of the African Centre for Biodiversity, a South African NGO. In the map, each circle is an organization, sized by the number of organizations that indicated “we go to this organization for information” – answering one piece of the evaluation question, regarding the position and role of the evaluand in its field, nationally and regionally. Of the 55 groups advocating for food sovereignty in the region who responded, the evaluand is the main bridger between South African groups and others on the continent. It is also a primary information provider to the whole group, alongside a few international NGOs and a few African regional organizations.
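To make the mechanics concrete, here is a minimal sketch in Python with the networkx library of how answers of the form “we go to this organization for information” can be turned into a directed network. The organization names and responses are invented for illustration, and networkx is simply one accessible option alongside dedicated SNA tools such as Gephi, mentioned below.

```python
import networkx as nx

# Hypothetical survey responses: each pair means
# "the first organization goes to the second organization for information".
responses = [
    ("Org A", "Evaluand"),
    ("Org B", "Evaluand"),
    ("Org C", "Evaluand"),
    ("Org C", "Org D"),      # a regional organization
    ("Org D", "Evaluand"),
    ("Evaluand", "Org E"),   # the evaluand also seeks information elsewhere
]

G = nx.DiGraph()
G.add_edges_from(responses)

# In-degree = how many groups named this organization as an information
# source; in the map described above, this is what sizes each node.
in_degree = dict(G.in_degree())

# Betweenness centrality highlights organizations that sit on paths between
# otherwise weakly connected parts of the network - the bridging role.
betweenness = nx.betweenness_centrality(G)

for org in G.nodes():
    print(f"{org}: named by {in_degree[org]} groups, "
          f"betweenness = {betweenness[org]:.2f}")
```

In this sketch, in-degree corresponds to the node sizing described above, and a high betweenness score is one way of quantifying a bridging position like the evaluand’s.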

As another example, an SNA evaluating the Ford Foundation’s $54m Strengthening Human Rights Worldwide global initiative distinguished changes in importance and connectedness before the initiative and after four years, among those inside the initiative (blue), ‘matched’ groups with similar characteristics (orange), and five others in Ford’s portfolio (pink). It shows that the initiative’s grantees and notably those from the Global South (dark blue) have developed more advocacy relationships than have the matching groups (see larger size of nodes and more connections). However, the largest connector for advocacy remains Amnesty International – the big pink dot in the middle, demonstrating its continuing differential access to resources and influence relative to the other human rights groups.
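A similar sketch, again with invented names and ties rather than the actual Ford initiative data, shows how running the same survey at two points in time supports the kind of before-and-after comparison described here: each organization’s number of advocacy ties at baseline is compared with its ties four years later.

```python
import networkx as nx

# Invented edge lists for two survey waves; an edge means the two
# organizations reported an advocacy relationship with each other.
baseline = nx.Graph([("Org A", "Amnesty"), ("Org B", "Amnesty")])
year_four = nx.Graph([
    ("Org A", "Amnesty"), ("Org B", "Amnesty"),
    ("Org A", "Org B"), ("Org C", "Org A"),  # relationships formed since baseline
])

# Compare each organization's number of advocacy ties across the two waves.
for org in sorted(set(baseline) | set(year_four)):
    before = baseline.degree(org) if org in baseline else 0
    after = year_four.degree(org) if org in year_four else 0
    print(f"{org}: {before} -> {after} advocacy ties")
```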

 

Hot Tips:

  • Keep it simple: Because the survey asks about every organization, responding takes time, so ask only about the relationships that most closely answer the evaluation questions regarding the network. For example: “my organization has engaged with them in advocacy at a regional forum”; “my organization has taken cases with them.”
  • Work with a mentor: While SNA software like Gephi is open source, making sense of social network data requires statistical analysis capacity and SNA theory to extract meaning accurately.

Lesson Learned:

  • Consider whether or not to show the names of groups, as your tables or maps will surface who is ‘in’ and who is outside a network in ways that might have negative consequences for group dynamics or for individual groups, or expose groups’ negative perceptions of each other.

Rad Resources:

Wendy Church, Introduction to Social Network Analysis, 2018.

 


Hello, I’m Susanna Dilliplane, Deputy Director of the Aspen Institute’s Aspen Planning and Evaluation Program. Like many others, we wrestle with the challenge of evaluating complex and dynamic advocacy initiatives. Advocates often adjust their approach to achieving social or policy change in response to new information or changes in context, evolving their strategy as they build relationships and gather intel on what is working or not. How can evaluations be designed to “keep up” with a moving target?

Here are some lessons learned during our five-year effort to keep up with Women and Girls Lead Global (WGLG), a women’s empowerment campaign launched by Independent Television Service (ITVS).

How to evaluate a moving target?

Through a sprawling partnership model, ITVS aimed to develop, test, and refine community engagement strategies in five countries, with the expectation that strategies would evolve, responding to feedback, challenges, and opportunities. Although ITVS did not set out with Adaptive Management specifically in mind, the campaign incorporated characteristics typical of this framework, including a flexible, exploratory approach with sequential testing of engagement strategies and an emphasis on feedback loops and course-correction.

Women and Girls Lead Global Partnerships

 

Lessons Learned:

  • Integrate M&E into frequent feedback loops. Monthly reviews of data helped ITVS stay connected with partner activities on the ground. For example, we reviewed partner reports on community film screenings to help ITVS identify and apply insights into what was working well or less well in different contexts. Regular check-ins to discuss progress also helped ensure that a “dynamic” or “adaptive” approach did not devolve into proliferation of disparate activities with unclear connections to the campaign’s theory of change and objectives.
  • Be iterative. An iterative approach to data collection and reporting allowed ITVS to accumulate knowledge about how best to effect change. It also enabled us to adjust our methods and tools to keep data collection aligned with the evolving theory of change and campaign activities.
  • Tech tools have timing trade-offs. Mobile phone-based tools can be handy for adaptive campaigns. We experimented with ODK, CommCare, and Viamo. Data arrive more or less in “real time,” enabling continuous monitoring and timely analysis. But considerable time is needed upfront for piloting and user training.
  • Don’t let the evaluation tail wag the campaign dog. The desire for “rigorous” data on impact can run counter to an adaptive approach. As an example: baseline data we collected for a quasi-experiment informed significant adjustments in campaign strategy, rendering much of the baseline data irrelevant for assessing impact later on. We learned to let some data go when the campaign moved in new directions, and to more strategically apply a quasi-experiment only when we – and NGO partners – could approximate the level of control required by this design.

Establishing a shared vision among stakeholders (including funders) of what an adaptive campaign and evaluation look like can help avoid situations where the evaluation objectives supersede the campaign’s ability to efficiently and effectively adapt.

Rad Resources: Check out BetterEvaluation’s thoughtful discussion and list of resources on evaluation, learning, and adaptation.

 


 


Hello! We are Rhonda Schlangen and Jim Coe, evaluation consultants who specialize in advocacy and campaigns. We are happy to kick off this week of AEA365 with interesting posts from members of the Advocacy and Policy Change Evaluation TIG.

Over the last two decades there has been a seismic shift in thinking about evaluating advocacy. Evaluators have generated a plethora of resources and ideas that are helping introduce more structured and systematized advocacy planning, monitoring, evaluation, and learning.

Lessons Learned: As evaluators, we need to be continually evolving and we think the next big challenge is to navigate the tension between wanting clear answers and the uncertainties and messiness inherent in social and political change.

Following are just three of many sticky advocacy evaluation issues, how evaluators are addressing them, and ideas about where we go from here:

Essentially, these developments boil down to accommodating the unpredictability of change and the uncertainties of measurement, thinking probabilistically, and opening up room to explore doubt rather than looking for definitive answers—all to better fit with what we know about how change happens.

Hot Tip: Some questions evaluators can consider are:

  • How can we better design MEL that even more explicitly accommodates the unpredictability and uncertainty of advocacy?
  • What are effective ways to incorporate and convey that judgments reached may have a very strong basis or may be more speculative, as advocacy evaluation is seldom absolutely conclusive?
  • How can we maximize space for generating discussion among advocates and other users of evaluation about conclusions and their implications?

Hot Tip:  Get involved in advocacy. First hand experience, like participating in a campaign in your own community, can be a helpful reality check for evaluators. Ask yourself: How well do the approaches and tools I use as an evaluator apply to that real life situation?

 



Greetings, I am June Gothberg, Ph.D. from Western Michigan University, Chair of the Disabilities and Underrepresented Populations TIG and co-author of the Universal Design for Evaluation Checklist (4th ed.).   Historically, our TIG has been a ‘working’ TIG, working collaboratively with AEA and the field to build capacity for accessible and inclusive evaluation.  Several terms tend to describe our philosophy – inclusive, accessible, perceptible, voice, empowered, equitable, representative, to name a few.  As we end our week, I’d like to share major themes that have emerged over my three terms in TIG leadership.

Lessons Learned

  • Representation in evaluation should mirror representation in the program. Oftentimes, this is overlooked in evaluation reports. The example below, from a community housing evaluation, shows data that overrepresented some groups and underrepresented others.

 HUD Participant Data Comparison

  • Avoid using TDMs.
    • T = tokenism or giving participants a voice in evaluation efforts but little to no choice about the subject, style of communication, or any say in the organization.
    • D = decoration or asking participants to take part in evaluation efforts with little to no explanation of the reason for their involvement or its use.
    • M = manipulation, or pressuring participants into taking part in evaluation efforts. One example, presented in 2010, involved food stamp recipients who were required to answer surveys or become ineligible to continue receiving assistance. The surveys included identifying information.
  • Don’t assume you know the backgrounds, cultures, abilities, and experiences of your stakeholders and participants. If you plan for all, all will benefit.
    • Embed the principles of Universal Design whenever and wherever possible.
    • Utilize trauma-informed practice.
  • Increase authentic participation, voice, recommendations, and decision-making by engaging all types and levels of stakeholders in evaluation planning efforts. The IDEA Partnership depth of engagement framework for program planning and evaluation has been adopted in state government planning efforts across the United States.

 IDEA Partnership Leading by Convening Framework

  • Disaggregating data helps uncover and eliminate inequities. This example is data from Detroit Public Schools (DPS). DPS is in the news often and cited as having dismal outcomes. If we were to compare state data with DPS, does it really look dismal?

 2015-16 Graduation and Dropout Rates

 

Disaggregating by one level would uncover some inequities, but disaggregating by two levels shows areas that can and should be addressed.
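For readers who want to see the mechanics, here is a minimal sketch in Python with pandas, using invented numbers rather than the actual state or DPS figures, of what one-level and two-level disaggregation of graduation rates looks like.

```python
import pandas as pd

# Invented graduation-rate records for illustration only.
records = pd.DataFrame({
    "district":  ["State", "State", "DPS", "DPS"],
    "gender":    ["Female", "Male", "Female", "Male"],
    "grad_rate": [83.0, 76.0, 81.0, 74.0],
})

# One level of disaggregation: by district.
print(records.groupby("district")["grad_rate"].mean())

# Two levels: by district and gender, where gaps that can and should
# be addressed start to become visible.
print(records.groupby(["district", "gender"])["grad_rate"].mean())
```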

 

 

We hope you’ve enjoyed this week of aea365 hosted by the DUP TIG.  We’d love to have you join us at AEA 2017 and throughout the year.

The American Evaluation Association is hosting the Disabilities and Underrepresented Populations TIG (DUP) Week. The contributions all week are focused on engaging DUP in your evaluation efforts. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings from my nation’s capital – Ottawa, eh! My name is Marc Brown and I’m the Design, Monitoring & Evaluation (DME) manager for our government policy influence campaigns at World Vision Canada.  WVC has spent the past 15 years engaging government stakeholders directly in policy creation and implementation which impacts the well-being of the most vulnerable children around the world.

Three years ago, an internal evaluation position was created to help us plan and monitor progress on our policy influence campaigns. This is a summary of our key learnings from the past few years.

Lessons Learned:

  • Policy influence campaigns are a bit like an ancient, exploratory sea voyage – uncertain destination, shifting winds, unanticipated storms and a non-linear pathway. Policy change happens in a complex environment with rapidly changing decision-makers, shifting priorities and public opinions, uncertain time frames, forces beyond our control and an uncertain pathway to achieving the desired policy change. Campaigns are unlikely to be implemented as planned and unlikely to be replicable. Design, monitoring, and evaluation must therefore be done differently than with traditional development programming.
  • A developmental evaluation approach is internally focused with the purpose of providing rapid feedback for continual program adaptation in fluid contexts. We document our original objectives and plans and the implementation results in hopes of discovering how to adapt our ongoing campaigns – to take advantage of what’s working well or emerging opportunities or to do something different in response to obstacles encountered.

This graphic illustrates the DME framework we’ve developed – starting with a DE paradigm and using the Rad Resources mentioned below and learning from our own experience.

  • An evaluator:
    • facilitates problem analysis to identify root causes and create contextual understanding;
    • helps develop a theory of change, ensuring a logical strategy is developed to address the root causes;
    • documents the results of implementation; and
    • creates space for reflection to discuss evidence / results for program adaptation.
  • The overall framework is circular because the reflection on evidence collected during our implementation leads us to again examine our context and adapt our engagement strategy to guide future implementation.

Rad Resources:

  1. ODI, Rapid Outcome Mapping Approach – ROMA: we’ve used lots of these tools for issue diagnosis and design of an engagement strategy. Developing a theory of change is foundational; it helps evaluators identify the desired changes for specific stakeholders, create indicators and set targets.
  2. The Asia Foundation, Strategy Testing: An Innovative Approach to Monitoring Highly Flexible Aid Programs: This is a good comparison of traditional vs. flexible M&E and includes some great monitoring templates. Documenting the changes in a theory of change and the reasons for the changes demonstrates responsiveness. That’s the value of reflection on evidence that has been facilitated by the internal evaluator!
  3. Patton’s book, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, provides a valuable paradigm in creating an appropriate monitoring framework.



We are Annette Gardner and Claire Brindis, both at the Philip R. Lee Institute for Health Policy Studies at the University of California, San Francisco and authors of the recent book, Advocacy and Policy Change Evaluation: Theory and Practice.

There is a growing body of resources on linking theory to advocacy and policy change evaluation practice. However, APC evaluators are surfacing knowledge that can contribute to the scholarship on public policy and influence.  Based on our review of political science and public policy arenas, we would like to nudge the conversation to the next level, suggesting some topics where APC evaluators can ‘give back’ to the scholarship.

New Voices and Forms of Participation: APC evaluators have not shied away from identifying new voices or recognizing existing voices whose influence has gone unnoticed, such as ‘bellwethers.’ Moreover, advocates are leveraging new forms of communication, such as text messaging.  Evaluators are on the front lines and are learning about new advocacy strategies and tactics in real time.

Assessing Advocacy Effectiveness: Evaluators can provide information on advocacy tactics and their influence, such as findings from policymaker surveys that inquire about perceptions of specific advocacy tactics. Second, a perennial research question on influence is: Is it ‘Who you know’ or ‘What you know’? Or both? Given their vantage point, evaluators can characterize the roles and relationships of advocates and decision-makers who work together to craft and/or implement policy.

Other areas of inquiry include:

  • Taking the Policy Stage Model to the Next Level: Evaluators are documenting whether specific tactics wax and wane during the policy cycle. Given limited resources, is it better to engage in targeted advocacy during one stage of the policymaking process? Evaluators are focusing on a specific stage and can determine its importance relative to other stages.
  • Advancing Contextual Analysis: Evaluators are well positioned to characterize complicated policy arenas. Focusing on contextual factors using interviews and observations can advance understanding why specific advocacy tactics are/aren’t successful.
  • Measuring Civil Society and Civic Renewal: Evaluators that focus on grassroots, community-based advocacy campaigns have a front-row seat to the effectiveness and impacts of these initiatives and their potential for strengthening civil society.

APC evaluators are well positioned to contribute to the knowledge base of successful and not so successful forms of influence and their outcomes.  Publications such as the Journal of Policy Analysis and Management, Policy Studies Journal, and Public Policy and Administration are waiting to hear from you!

Rad Resources: ORS Impact’s 2016 paper, Beyond the Win: Pathways for Policy Implementation describes linking designs and theories of change to scholarship on policy change. For a refresher on the mechanics of public policy and politics, check out Michael Kraft and Scott Furlong’s Public Policy: Politics, Analysis, and Alternatives.

