AEA365 | A Tip-a-Day by and for Evaluators

TAG | policy evaluation

Happy Labor Day Week! I am Calista H. Smith, President of C H Smith & Associates, a project management consulting and evaluation firm in Ohio. C H Smith & Associates has completed multiple evaluation projects for the Ohio Department of Education and designed evaluations related to public policy for other clients. In this work, it has been important to understand policymakers and the legislative decision-making process.

Lessons Learned:

  • Legislative processes may influence your evaluation design and timeline. Publicly sponsored projects may have reporting deadlines written into legislation, or their funding streams may be subject to annual budget reviews. Projects sponsored by private philanthropy may also be influenced by the legislative cycle, since findings may help craft or change public policy.
  • Policymakers may get data and information from a variety of sources. It was common for a policymaker to have visited a program site or talked extensively with program champions. Program critics may also be vocal to policymakers. External criticism may be based on program perceptions (rooted in experiences or in ideology), or a sense of competition for resources. Your evaluation data will need to be clear and easily accessible to cut through what may be noise.
  • You may need various reports of the same analysis. For one evaluation, we produced a one-pager of highlights for quick reference by high-level administrators and officials, a six-page summary of lessons to insert in a public annual report, and a full technical report with a more detailed explanation of methodology and data for staffers and stakeholders.

Hot Tips (or Cool Tricks):

  • Spend time refining research questions around what legislative decision-makers want to know, or should know, about the project and related policies.
  • Regardless of the scope of your program evaluation, identify which policies and funding streams affect the program. This understanding clarifies who the stakeholders are and what their interests and constraints are.
  • In your evaluation design, consider legislative timelines. Think about what data you can reasonably collect, analyze, and report to give legislators insights in step with the legislative decision-making process.
  • Encourage your client to think, independently of your evaluation, about productive courses of action they might take if findings are less favorable than expected. Consider building extra review time into the analysis so the client can process the data, determine how to make lessons actionable, and anticipate questions policymakers may raise about the results or the evaluation approach.

Rad Resources: 

  • The National Conference of State Legislatures has a program evaluation society for its state policy staff members. It is helpful to see what materials policy staff may reference when they want to implement or review an evaluation.
  • You may map out stakeholder interests, including policymakers’ interests, in your evaluations using a “power/interest matrix”; a minimal sketch of that mapping follows below.
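
A power/interest matrix simply rates each stakeholder on how much power they hold over the program and how interested they are in it, then sorts them into four quadrants. The Python sketch below is a hypothetical illustration (the stakeholder names, ratings, and thresholds are invented, not drawn from this post) of one common way to do that sorting:

```python
# Hypothetical power/interest matrix sketch for stakeholder mapping.
# Stakeholder names and ratings are invented for illustration only.

def quadrant(power: int, interest: int, threshold: int = 3) -> str:
    """Classify a stakeholder rated 1-5 on power and interest."""
    high_power = power >= threshold
    high_interest = interest >= threshold
    if high_power and high_interest:
        return "Manage closely"   # e.g., legislative sponsors of the program
    if high_power:
        return "Keep satisfied"   # e.g., budget committee staff
    if high_interest:
        return "Keep informed"    # e.g., program champions and vocal critics
    return "Monitor"              # minimal, periodic updates

stakeholders = {
    "Committee chair": (5, 4),
    "Agency program staff": (3, 5),
    "Advocacy coalition": (2, 5),
    "General public": (1, 2),
}

for name, (power, interest) in stakeholders.items():
    print(f"{name}: {quadrant(power, interest)}")
```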

The American Evaluation Association is celebrating Labor Day Week in Evaluation: Honoring the WORK of evaluation. The contributions this week are tributes to the behind-the-scenes and often underappreciated work evaluators do. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. I’m Chris Stalker of Oxfam America with the second of a two-part series on policy advocacy evaluation. Yesterday, I explained that in almost 30 years of working in advocacy and campaigning I’ve never felt less confident about ‘how change happens’ than I do now, and that this is why we proposed an AEA conference session on this issue where foundations, southern CSOs, and INGOs can share some of their perspectives and ideas.

Lesson Learned:

There are, nevertheless, some good, practice-tested assumptions to hold onto. As evaluators of outcomes and impact – and despite the turbulent, disruptive operating environment – it remains our job to seek to understand people’s experiences of change and to consider what has, or hasn’t, happened, and why, and why not.

We know that this can be a complicated and complex reality, usually involving a range of perspectives and sometimes producing contested findings. Because of this, it is the evaluator’s responsibility to act with integrity, to reflect this reality accurately in the evaluation, and to give voice to all stakeholders and their views on what constitutes ‘facts’, including the views of groups that are seldom heard.

Advocacy evaluation has always been about building an aggregated and balanced sense of different perspectives, and it should be seen as the work of understanding tested experience – a craft requiring shrewd judgment and tacit knowledge – rather than as a quasi-scientific methodology.

Evaluators and campaign and advocacy practitioners can inhabit such different spaces that, at times, a mutual incomprehension exists. One concern we often hear is that the discourse around professional evaluation (monitoring, evaluation, and learning, or MEL) risks turning evaluation into an exclusionary notion that, at its worst, alienates practitioners.

One consequence of this is that it is even more important to close the space between evaluators and advocacy practitioners. The lived experience is that evaluation is too often seen as burdensome and of limited direct benefit, in part because it is disconnected from both stakeholders’ information needs and wider organisational and sectoral processes.

Rather than positioning evaluation as a parallel and separate activity, we now need to develop reflection, learning, and evaluation strategies that are accompanied, embedded, and participatory, and that generate knowledge that is relevant, timely, and used.

Experience tells us that learning best serves advocacy when its analysis and tools are in the hands of both advocates and evaluators working collaboratively.

In this turbulent, disruptive context, it is more important than ever that we continue to communicate a picture of what this looks like and persuade people of the added value we bring to practitioners. Perhaps we will then see more evaluations for learning and evidence-informed adjustments and, ultimately, a focus on assessing effectiveness, transformative change, and contribution to impact.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. I’m Chris Stalker of Oxfam America with the first of a two-part series on policy advocacy evaluation. I was recently nearing the end of an internal presentation of a contested policy advocacy and campaign review, when a senior Director reflected: “I disagree. These findings sound like alternative facts to me.”

This got me thinking: given the profound, disruptive public and political contextual changes we are experiencing (and forgive the self-indulgence here), are advocacy evaluators an endangered species in a post-truth world?

Lesson Learned:

First, let’s step back for a moment. As a discipline, policy advocacy evaluation has grown exponentially in the past twenty years. Arguably, attempts to understand this evolution have not kept pace with this relatively rapid growth.

My own experience of this began as a campaigner in Oxfam’s campaigns department in the mid-1990s, sitting in strategy meetings where we tested one another with questions like: “How do we know we’re making a difference? What tactics should we do more of, and which less? What are the meaningful signs of progress? And what about the impact question – the changes in people’s lives?”

In my experience, campaign and advocacy evaluation grew from the inside out.

Any organisation undertaking advocacy and campaigning must have an interest in understanding what has changed, its significance, and its own contribution to it, as well as the extent to which its ways of working, the activities undertaken, and the strategic approaches followed were optimal in advancing progressive policy and social change.

However, the changes we have sought over these past two decades have – generally – been of the incremental and technical policy kind, rather than striving for significant transformational socio-political change. Understandably, the evaluation ‘community’ has modelled its responses and interventions to reflect these types of changes, both incremental and technical.

In fact, the transformational socio-economic and political change is coming from the political hard right: from regressive populist nationalists rather than progressive internationalists. And it’s happening as a set of systemic shocks in ways that we, in the NGO sector, didn’t anticipate particularly well (for reasons I won’t go into here). As a consequence, rather than seeking new change, we are campaigning and organizing to protect, maintain, and defend policy and political gains, from changing policy to saving democracy, and discussing how to assess what success might look like.

What are the consequences of political turmoil and uncertainty for advocacy evaluation? How do we respond effectively to this scale of disruption and turbulence? To what extent are organizations that are already disrupted and unsettled likely to be open to hearing some of the more challenging, awkward evaluative questions that may need to be asked?

In almost 30 years of working in advocacy and campaigning, I’ve never felt less confident about ‘how change happens’ than I do now. So I’m testing some of this thinking, hoping others can build on it organically and iteratively. This is why we have proposed an AEA conference session on this issue where foundations, southern CSOs, and INGOs can share some of their perspectives and ideas.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

