AEA365 | A Tip-a-Day by and for Evaluators

TAG | Theory of Change

Hi there! I am Marah Moore, the founder and director of i2i Institute (Inquiry to Insight). We are based in the high desert mountains of Northern New Mexico, and we work on evaluations of complex systems locally, nationally, and internationally.

Since 2008 I have been the lead evaluator for the McKnight Foundation’s Collaborative Crop Research Program (CCRP), working in nine countries in Africa and three countries in the Andes. In 2014 the CCRP Leadership Team (LT), guided by the evaluation work, began an intentional process of identifying principles for the program. Up to that point we had developed a robust and dynamic theory of change (ToC) that guided program evaluation, learning, planning, and implementation. The ToC helped bring coherence to a complex and wide-ranging program. Because we wanted the ToC to remain a living document, growing and changing as the program grew and changed, we found we needed to identify a different sort of touchstone for the program—something that would anchor the conceptual and practical work of the program without inhibiting the emergence that is at the core of CCRP. That’s when we developed principles.

CCRP has eight overarching principles. The principles guide all decision-making and implementation for the program, and inform the development of conceptual frameworks and evaluation tools.

In addition to the principles at the program level, we have developed principles for various aspects of the program.

Lesson Learned: Programs based on principles expect evaluation to also be principles-based. Here are the draft principles we are using for the CCRP Integrated Monitoring & Evaluation Process.

  1. Make M&E utilization-focused and developmental.
  2. Ensure that M&E is informed by human systems dynamics and the adaptive cycle: What? So what? Now what?
  3. Design M&E to serve learning, adaptation, and accountability.
  4. Use multiple and mixed methods.
  5. Embed M&E so that it’s everyone’s responsibility.
  6. Align evaluation with the Theory of Change.
  7. Ensure that M&E is systematic and integrated across CCRP levels.
  8. Build M&E into project and program structures, and use data generated with projects and programs as the foundation for M&E.
  9. Aggregate and synthesize learning across projects and time to identify patterns and generate lessons.
  10. Communicate and process evaluation findings to support ongoing program development and meet accountability demands.
  11. Ensure that evaluation follows the evaluation profession’s Joint Committee Standards.

Hot Tip: The evaluation process can surface principles of an initiative, exposing underlying tensions and building coherence. The evaluation can go further and assess the “fidelity” of an initiative against the principles and explore the role of the principles in achieving outcomes. 

Rad Resources:

The American Evaluation Association is celebrating Principles-Focused Evaluation (PFE) week. All posts this week are contributed by practitioners of a PFE approach.


· · ·

My name is Steve Powell and I’m an evaluator based in Sarajevo, Bosnia. I’m very interested in theories of change and their role in evaluation – both explicit theories (perhaps a programme’s logframe) and implicit (what is really going on, or what people really think is going on). I think it’s important to be able to quickly sketch out, visualise, share and update theories of change. If only these four things were really easy, we might see a lot more debate about programme theory and a lot more helpful diagrams in evaluation.

I always found it a pain to do using PowerPoint and co., so in the end I made my own online tool called Theory Maker, at http://theorymaker.info. So yes, this post is self-promotion, kind of, but the tool is completely free and open-source, and you don’t even have to register. Mostly I’m interested in feedback from you, my illustrious AEA colleagues.

Rad Resource: http://theorymaker.info

You make theory-of-change diagrams just by typing the names of the elements in a structured way into a (resizeable) window, and you get a live diagram as output that reflects what you type. Some people can’t stand making a diagram this way; some people love it! I like it because you don’t have to fiddle about with dragging boxes and connectors around; Theory Maker automatically finds a good layout even as you add elements and make changes.
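
To give a flavour of how this kind of typed convention can work, here is a minimal sketch in Python (a hypothetical illustration, not Theory Maker’s actual syntax – see theorymaker.info for that). It treats each indented line as contributing to the nearest less-indented line above it, and emits Graphviz DOT text, which any DOT renderer will lay out automatically:

    # Hypothetical text-to-diagram convention (NOT Theory Maker's real
    # syntax): each line names an element; an indented line "contributes
    # to" the nearest less-indented line above it. Output is Graphviz DOT.
    def text_to_dot(text: str) -> str:
        edges = []  # (contributor, target) pairs
        stack = []  # (indent, name) chain of ancestors of the current line
        for line in text.splitlines():
            if not line.strip():
                continue
            indent = len(line) - len(line.lstrip())
            name = line.strip()
            # climb back up until the top of the stack is this line's parent
            while stack and stack[-1][0] >= indent:
                stack.pop()
            if stack:
                edges.append((name, stack[-1][1]))
            stack.append((indent, name))
        out = ["digraph ToC {", "  rankdir=LR;"]
        out += [f'  "{a}" -> "{b}";' for a, b in edges]
        out.append("}")
        return "\n".join(out)

    example = """\
    Better community outcomes
      Improved services
        Staff training
        Shared data systems
    """
    print(text_to_dot(example))

Feeding the printed DOT to Graphviz (the dot command, or any online viewer) yields a laid-out three-arrow diagram with no manual box-dragging – the same division of labour the tool describes: you type the structure, the software finds the layout.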

You can save the diagram as a graphic on your computer and paste it into any document. You can also save the diagram at Theory Maker and send other people a link – to the original diagram, and/or to a version they can edit.

There are lots of features including:

  • different ways to tell Theory Maker to put links between different variables
  • optional boxes to group the pieces of your diagram, for example to mark off different phases, regions or stakeholders
  • click to include/exclude various parts of your diagram or even include other diagrams
  • easy to add cross-links, which are difficult in traditional logframes, e.g. from one Output to more than one Outcome
  • add notes, conditional formatting and lots more


Hello, I am Carolyn Cohen, owner of Cohen Research & Evaluation, LLC, based in Seattle, Washington. I specialize in program evaluation and strategic learning related to innovations in the social change and education arenas. I have been infusing elements of Appreciative Inquiry into my work for many years. Appreciative Inquiry is an asset-based approach, developed by David Cooperrider in the 1980s for use in organizational development. It has more recently been applied in evaluation, following the release of Reframing Evaluation through Appreciative Inquiry by Hallie Preskill and Tessie Catsambas in 2006.

 Lessons Learned:

Appreciative Inquiry was originally conceived as a multi-stage process, often requiring a long-term time commitment. This comprehensive approach is called for in certain circumstances. However, in my practice I usually infuse discrete elements of Appreciative Inquiry on a smaller scale.  Following are two examples.

  • Launching a Theory of Change discussion. I preface Theory of Change conversations by leading clients through an abbreviated Appreciative Inquiry process.  This entails a combination of paired interviews and team meetings to:
    • identify peak work-related experiences
    • examine what contributed to those successes
    • categorize the resulting themes.

The experience primes participants to work as a team to study past experiences in a safe and positive environment. They are then able to craft strategies, outcomes and goals. These elements become the cornerstone of developing a Theory of Change or a strategic plan, as well as an evaluation plan.

  • Conducting a needs assessment. Appreciative interviews followed by group discussions are a perfect approach for facilitating organization-wide or community meetings as part of a needs assessment process. AI methods are based on respectful listening to each other’s stories, and are well-suited for situations where participants don’t know each other or have little in common.

Using the resources listed below, you will find many more applications for Appreciative Inquiry in your work.

Rad Resources:

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years.

· · ·

Hi, Veronica Olazabal, senior associate director of evaluation at The Rockefeller Foundation here with Karim Harji, director at Purpose Capital. As many of you already know, market-based approaches to poverty alleviation are gaining traction across the social sector. A range of innovative strategies are being used to finance these initiatives, including impact investing—an approach to deploy various types of capital to intentionally deliver social impact alongside financial return.

[Image: Mapping of the Impact Investing Industry (courtesy of E.T. Jackson and Associates)]

There is clearly a strong need to strengthen impact measurement and evaluation in this space. Many evaluators have had limited engagement to date, but it is an emerging topic of interest, as demonstrated by its prominence at Evaluation 2014, the AEA annual conference. Below, we outline a few ways the evaluation community can become more engaged with evaluating impact investing:

Lessons Learned:

  1. Break out your Theory of Change facilitation skills! Like more conventional social sector program designs, impact investors have theories of how they expect change to happen, including assumptions and goals. While they may use an “investment thesis/approach” instead of a “theory of change,” they often use these models to select, assess and monitor investments. Thus, it should not be surprising that theory of change has become a tool of emerging importance to impact investing. Rad Resource: Interrogating the theory of change.
  2. Use monitoring strategies to generate timely data: Monitoring can be an important strategy for tracking social performance, especially since investors already use financially focused tools in this way. When designed and implemented well, monitoring data can provide timely and relevant information that can be used to adapt operational plans for investee enterprises or funds. For example, in the access-to-finance sector, monitoring has been used effectively to validate that target beneficiaries and clients are actually being reached. Rad Resource: Portfolios of the Poor.
  3. Gain familiarity with emerging standards and approaches: A range of initiatives in this sector seek to assess social change for a range of uses and users. For example, Impact Reporting and Investment Standards (IRIS) seeks to build a standard vocabulary/taxonomy at the output level, while the Global Impact Investing Rating System (GIIRS) is a standards-based rating system that assesses impact funds and social enterprises.

As this area continues to evolve, more evaluation capacity will be needed at every level, and particularly around moving from lives touched (reach) to validating lives impacted (depth) in the field. Evaluators will be important not only to assess the intended and actual outcomes from individual transactions, but also to critically analyze how the field is contributing to market-based approaches to poverty alleviation.

Rad Resource: Assessing Impact Investing: Five Doorways for Evaluators, by Ted Jackson and Karim Harji.


· · ·

I am Lycia Lima, the executive coordinator of the newest CLEAR center, serving Brazil and Lusophone (Portuguese-speaking) Africa. We’re formally joining CLEAR later this year and are planning our inauguration in October 2015. I was also one of the organizers involved in the formation of the Brazilian M&E Network – Rede Brasileira de Monitoramento e Avaliação – which has become a very active association.

We’re based in Brazil, at the São Paulo School of Economics at Fundação Getulio Vargas, and work jointly with the school’s Center for Applied Microeconomics. Through CLEAR we’re looking forward to expanding into new areas and building bridges with the M&E communities in Brazil and elsewhere. In particular, we’ll be working to advance evaluation capacity development services and products in Portuguese for use in Lusophone countries, all to foster evidence-based policy making in these countries.

Historically, our team in Brazil has had a lot of experience in carrying out impact evaluations in all sectors. Though we specialize in impact evaluation, we have experience in and appreciate the broader range of M&E approaches, and think that an integrated approach will make our work better. In this post, I have put together a few tips about impact evaluation that you would not learn in conventional econometrics books. This is advice I’d give to impact evaluators.

Lessons Learned: Know the theory of change of your intervention well! If you don’t know the theory of change, you might not fully understand the causality channels and might leave important impact indicators out of the analysis. Get your hands dirty! Go to the field, talk to project managers, talk to beneficiaries, and make sure you fully understand the intervention you are trying to evaluate. Also, be careful with the quality of your data. Make sure you spend some resources on hiring and training qualified staff to supervise data collection. Good-quality data is crucial for your study.

Lessons Learned: Even if you are an empiricist and believe mostly in quantitative methods, do not underestimate the value of mixed methods. In particular, qualitative approaches will help you understand “why and how” things happened. Importantly, get to know M&E “foundational” literature from Patton, Scriven, Bamberger, and others.

Rad Resources: While M&E materials available in Portuguese are generally limited in number, there is a very useful impact evaluation book that I have co-authored with other Brazilian experts. The book may be obtained free at

http://www.fundacaoitausocial.org.br/_arquivosestaticos/FIS/pdf/livro_aval_econ.pdf

We look forward to contributing to the M&E literature base in Portuguese, so please check back with us on this.

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR.


· · ·

My name is Rachel Leventon, and I am a consultant at CNM Connect, a nonprofit capacity building organization. Most of my work focuses on training, coaching, and consulting to increase the internal program evaluation capacities of nonprofit organizations and collaboratives. I am a sociologist at heart so theory informs everything I do. As a result, theory of change is a common theme in my evaluation consulting practice.

I spend most of my time talking about measuring success with client-focused outcomes, and in every class I teach there is at least one student who doesn’t fit the mold. Often this is a representative from a collaborative or coalition asking: “Who are the clients served by my housing collaborative or my literacy coalition when our activities don’t directly touch clients? Our members don’t even provide the same services to the same kinds of clients!” These coalitions and collaboratives cannot always measure their success using traditional outcomes-based program evaluation methodology, but that doesn’t mean they cannot be evaluated.

Hot Tip: Use theory of change to identify how a collaborative or coalition functions and to define goals for evaluating its effectiveness. Recognize that the actual “clients” are the participating member organizations.

Hot Tip: Using theory of change in this way can also help participating organizations better understand how they can maximize their available resources and strengthen their role as collaborative members.

A theory of change for a collaborative might look something like this:

[Image: Rachel’s theory of change diagram for a collaborative]

Lessons Learned: Illustrated this way, it is clear that evaluation of the collaborative could focus on whether networking and information-sharing activities help participating organizations better serve their own clients. (The chain is sketched after the questions below.)

  • Is information-sharing and networking happening as planned within the context of the collaborative?
  • Are participating organizations building awareness, knowledge, and connections that they could use to improve their services?
  • Are participating organizations using new awareness, knowledge, and connections in a way that could improve their services?
  • Is participating organizations’ usage of new awareness, knowledge, and connections resulting in improvement in the services?
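
Since the original diagram image does not survive here, one plausible reconstruction of the chain these questions trace (my sketch, not necessarily Rachel’s actual diagram) is:

    Information-sharing & networking activities
      -> participating organizations build awareness, knowledge, and connections
        -> organizations use that awareness, knowledge, and connections
          -> organizations’ services to their own clients improve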

Hot Tip: Remember that the usefulness, usage, and benefit of information-sharing and networking taking place in the collaborative may take on different forms for each participating organization.

Rad Resource: TheoryofChange.org (www.theoryofchange.org) provides great resources on understanding and creating theories of change, and it also links to an awesome FREE resource – Theory of Change Online (TOCO), a diagramming tool (http://toco.actknowledge.org/) for creating your own theory of change diagrams without having to invest in pricey software.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members.

· · ·

I’m Sheila B. Robinson, aea365’s Lead Curator and sometimes Saturday contributor.

You’ve probably read about AEA’s LinkedIn page for fabulous free discussion about anything evaluation. Today, I want to highlight one particular discussion that has sparked a good deal of participation from a diverse group of evaluators.

Terminology has always been a sticking point for evaluators, as those from different sectors (e.g., health, education, non-profits, government) have developed their own preferences and, in many cases, definitions of terms.

This* discussion, started by a Project Manager asking how a logic model differs from a theory of change, received 51 responses – not the longest discussion – but nonetheless a rich and detailed investigation into these two terms. (*You will need a LinkedIn account to access the discussion.)

Evaluators chimed in on this from a variety of perspectives. Positions identified in the commenters’ profiles included:

  • Research Analyst
  • Prevention Specialist
  • Independent Consultant
  • Strategy and Planning Advisor
  • Community-based Impact, Assessment and Evaluation Consultant
  • Impact, Monitoring, Evaluation and Research Specialist
  • Senior Public Engagement Associate
  • Policy Analyst
  • And several owners and presidents of research, consulting, or evaluation firms.

The discussion featured individuals who offered their own definitions of the two terms, after which several became engaged in a discussion of how these tools are used or should be used in practice.

Lesson Learned: Most commenters consider logic models and theories of change related but distinct. Several indicate that theories of change are indeed embedded in logic models. (A minimal contrast is sketched after the two lists below.)

Here is how some commenters describe logic models:

  • help identify inputs, activities and outcomes
  • trace a flow of inputs through program activities to some sort of output or even on to outcomes, and are usually intended as handy guides for program implementers
  • visual model of how a program works
  • represent the basic resource and accountability bargain between the ‘funder’ and the ‘funded’

Here is how some commenters describe theories of change:

  • show how and why outcomes/activities cause change
  • an attempt to make explicit the “whys” behind relationships or expected outcome
  • explicit or implicit theory of how change occurs
  • how one designs a program as it breaks out how and why the change pathway will happen
  • work behind the scenes, and can be drawn from to assemble logic models
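
To make the contrast concrete, here is a minimal sketch of one hypothetical program seen both ways (my illustration, not drawn from the discussion):

    Logic model (what flows to what):
      Inputs: trained mentors, curriculum
      Activities: weekly mentoring sessions
      Outputs: 40 students complete the program
      Outcome: improved school attendance

    Theory of change (why a link should hold):
      mentoring -> attendance, because a trusted adult relationship
      builds a student’s sense of belonging, making it more likely
      they show up at school

The logic model traces the flow; the theory of change carries the “because.”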

Rad Resources: Several commenters offer resources for exploring these concepts:

I recommend these blog posts on the topic:

and this one on the topic of evaluation terminology:

And finally, I must recommend Kylie Hutchinson’s tools for untangling evaluation terminology:



· · ·

I am Elizabeth O’Neill, Program Evaluator for Oregon’s State Unit on Aging and President-Elect of the Oregon Program Evaluators Network. I arrived at this unlikely route to evaluation by starting as a nonprofit program manager. As I witnessed the amazing dedication to producing community-based work, I wanted to know that the effort was substantiated. By examining institutional beliefs that a program was “helping” intended recipients, I found my way as a program evaluator and performance auditor for state government. I wanted to share my thoughts on the seemingly oxymoronic angle I take to convince colleagues that we do not need evaluation, at least not for every part of service delivery.

In the last few years, I have found tremendous enthusiasm in the government sector for demonstrating progress towards protecting our most vulnerable citizens. As evaluation moves closer to program design, I now develop logic models as the grant is written rather than when the final report is due. Much of my work involves leading stakeholders in conversations to operationalize their hypotheses about theories of change. I draw extensively from a previous OPEN conference keynote presenter, Michael Quinn Patton, and his work on utilization-focused evaluation strategies to ensure evaluation serves its intended use by intended users. So you would think I would be thrilled to hear the oft-mentioned workgroup battle cry that “we need more metrics.” Instead, I have found that this idea produces more navel-gazing than meaningful action. I have noticed how metrics can be developed to quantify that work got done, rather than to measure the impact of our work.

Lesson Learned: The excitement about using metrics stems from wanting to substantiate our efforts and to feel accomplished in our day-to-day activities. While process outcomes can be useful to monitor, the emphasis has to remain on long-term client outcomes.

Lesson Learned: As metrics become common parlance, evaluators can help move performance measurement to performance management so the data can reveal strategies for continuous improvement. I really like OPEN’s founder Mike Hendricks’ work in this area.

Lesson Learned: As we experience this exciting cultural shift to relying more and more on evaluation results, we need to have cogent ways to separate program monitoring, quality assurance and program evaluation.  There are times when measuring the number of times a workgroup convened may be needed for specific grant requirements, but we can’t lose sight of why the workgroup was convened in the first place.

Rad Resource: Stewart Donaldson of Claremont Graduate University spoke at OPEN’s annual conference this year to a spectacular response. Program Theory-Driven Evaluation Science: Strategies and Applications by Dr. Donaldson is a great book for evaluating program impact.

The American Evaluation Association is celebrating Oregon Program Evaluators Network (OPEN) Affiliate Week. The contributions all this week to aea365 come from OPEN members.

· · ·


We are Silvia Salinas-Mulder, Bolivian anthropologist, feminist activist and independent consultant, and Fabiola Amariles, Colombian economist, founder and director of Learning for Impact. We have worked for several years as external evaluators for development programs in Latin America. The following ideas may help to operationalize the principles of gender- and human rights (HR)-responsive evaluation in complex, multicultural contexts.

Lesson learned: Terms of Reference (TOR) for an evaluation are not engraved in stone.

Tip: Reframe the often conventional evaluation questions and other aspects of the evaluation process to ensure that gender and HR issues surface and that evidence of change (or no change) in women’s lives is gathered. Take into account context-specific issues and gender dynamics, as well as relevant cultural patterns, such as the effects of migration on family roles and decision-making processes in some agricultural community settings.

Lesson learned: Some stakeholders are tired of being interviewed, while others – especially rural women – are eager to be heard.

Tip: Be creative; evaluation techniques are the means, not the end, and can thus continually be created, recreated and adapted to each situation and context. For example, use “conversatorios” (round-table discussions), as opposed to focus groups, to gather people with diverse backgrounds and perspectives to discuss a particular issue in the evaluation; participants usually appreciate these reflective spaces and feel motivated to speak “outside the box,” while evaluators gain a holistic overview of the topic. Drawings, role plays and other popular education techniques may also facilitate participation of marginalized groups, including illiterate women.

Lesson learned: Answers to your questions may not surface the key gender and HR issues needed to understand how change is occurring.

Tip: Awareness of specific cultural and gender communication patterns is crucial for an effective exchange. In any case, interviews should be treated as dialogues where people have the opportunity to express their priorities and points of view. Do not limit your interactions to a question-and-answer dynamic. Let people speak freely, and “listen actively” to discover what is essential. Respect and interpret the silences; do not insist on answers to your questions, but rather focus on trying to understand the underlying meaning of each reaction. This will allow an eventual reconstruction of how change is occurring (a Theory of Change) for the specific intervention and context, even if it was not explicitly stated in the project design. Also, as evaluators we tend to focus on verbal communication, ignoring the importance of tone and gestures. Make sure you are alert to these less explicit key messages.

Rad Resources:

