AEA365 | A Tip-a-Day by and for Evaluators

CAT | Collaborative, Participatory and Empowerment Evaluation

Hello! My name is Miki Tsukamoto and I am a Monitoring and Evaluation Coordinator at the International Federation of Red Cross and Red Crescent Societies (IFRC).

What if video could be used as the “spark” to increase the engagement and interest of communities in your programmes?

Recently, I had an opportunity to be part of a participatory video evaluation (PVE) team for the Global Framework for Climate Services programme, which aimed to deliver and apply “…salient, credible and actionable climate services towards improved health and food security in Malawi and Tanzania.” To ensure better use and acceptance of this PVE for future programming, IFRC piloted the Most Significant Change (MSC) technique[1], using the OECD/DAC criteria of relevance/appropriateness, effectiveness, coverage, sustainability and impact as themes for group discussions. Here are some of the lessons learned:

Rad Resources: PVE videos were made at the community level, the country level and the multi-regional level.

Country level PVEs:

(https://www.youtube.com/watch?v=fSXj0IllfvQ&index=3&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

(https://www.youtube.com/watch?v=mFWCOyIb9mU&index=4&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

Multi-country PVE:

(https://www.youtube.com/watch?v=HzbcIZbQYbs&index=2&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

A Red Cross Red Crescent Guide to Community Engagement and Accountability (CEA)

Guide to the “Most Significant Change” Technique by Rick Davies and Jess Dart

[1] http://www.mande.co.uk/docs/MSCGuide.pdf

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Emily Spence-Almaguer and I am an Associate Professor of Behavioral and Community Health at the University of North Texas Health Science Center. I spend most of my professional time serving as an independent evaluator for community initiatives and conducting assessment studies. I am a social worker by training and have found that the conversational skills used in Solution-Focused Therapy have great application in the realm of evaluation and community assessment.

Hot Tips: My favorite ways to use solution-focused dialogues are in:

  • Focus group and individual interviews because they help generate rich qualitative data and great ideas for continuous program improvements.
  • Evaluation planning meetings because they help stakeholders articulate a wide range of potential outcomes and describe how those outcomes might be observed (i.e., measured).
  • Meetings where stakeholders are being debriefed around disappointing evaluation results. The nature of solution-focused dialogues avoids finger-pointing and helps drive forward momentum.

Hot Tips:

  • It’s all about the questions! Solution-focused dialogues are driven by questions that promote deep reflection and critical thinking.
  • Context: Use questions that help situate people’s minds in a particular context, and include details that encourage individuals to imagine themselves in that moment. Here’s an example I use with consumers at a program trying to help lift individuals and families out of poverty:
    • “I want you to take a moment and imagine that you just learned that the Bass [local philanthropist] family recently donated $100,000 to the United Way for this project. They want you to help them figure out how best to spend the money. What is the first thing you would advise them to do? What would you advise them to do next?”
  • Expertise: I love the way that Gaiswinkler and Roessler referred to this as the “expertise of not-knowing.” In solution-focused dialogues, the wording of questions and the tone of delivery are carefully crafted to amplify the assumption that stakeholders have exceptional knowledge, skills, and capacities.

Rad Resource: For an introduction to solution-focused concepts, I like Coert Visser’s Doing What Works Blog.


Download from the AEA Public eLibrary to View the Poster in Full Size!

Rad Resource: I presented on Solution-Focused dialogues in evaluation at AEA’s Evaluation 2012 conference. You can download my poster and resources list from the AEA public eLibrary here.

Lessons Learned: A direct question, such as “What would you recommend to improve this program?” often fails to generate detailed or meaningful responses. In focus groups with program consumers, I find that this question is interpreted as “what is wrong with the program?” and may lead to comments in defense of the program staff members (see my 2012 AEA poster for an example of this from my data).

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I’m Jeff Sheldon and today I’m introducing the results of a study of 131 evaluation practitioners that I hope will inform the way you think about empowerment evaluation.

In brief, this study: 1) identified the extent of implementation fidelity to the three empowerment evaluation models; 2) described the extent to which the empowerment evaluation process principles were evident; 3) described the extent to which empowerment evaluation outcome principles resulted from the evaluations reported on; and 4) determined whether variation in empowerment and self-determination could be explained by the interaction between model fidelity and the percentage of steps implemented, the interaction between model fidelity and the percentage of steps implemented during different evaluation phases, the process principles in evidence, the outcome principles in evidence, and evaluator demographics.

Results indicated that evaluation practitioners implemented the three-step, 10-step, and five-tool empowerment evaluation models with fidelity. A majority reported the presence of both the six process principles (i.e., community ownership, inclusiveness, democratic participation, community knowledge, evidence-based strategies, and accountability) and the four outcome principles (i.e., improvement, capacity building, organizational learning, and social justice). Finally, the interaction between model fidelity and the percentage of activities implemented explained variation in evaluation capacity, as did the interactions between early- and mid-evaluation model fidelity and the percentage of activities implemented during those phases. The inclusiveness and community knowledge process principles each explained variation in evaluation knowledge. The inclusiveness process principle alone explained variation in evaluation capacity, individual empowerment, and evaluation competence. Although not tested against the null hypothesis, variation in evaluation knowledge, individual empowerment, evaluation competence, and evaluation autonomy was explained by where evaluation practitioners lived, specifically residence in an African country.
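For readers who want to picture the kind of analysis summarized above, the moderation findings can be illustrated with an ordinary regression that includes an interaction term. The sketch below is illustrative only; the file name, column names, and modeling choices are hypothetical placeholders, not the study’s actual variables or methods.

    # Minimal sketch: does the effect of model fidelity on evaluation capacity
    # depend on the percentage of steps implemented? (Hypothetical column names.)
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("empowerment_eval_practitioners.csv")  # hypothetical survey export

    # "fidelity * pct_steps" expands to both main effects plus their interaction.
    model = smf.ols("eval_capacity ~ fidelity * pct_steps", data=df).fit()
    print(model.summary())  # a significant fidelity:pct_steps term indicates moderation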

Hot Tips: If building evaluation capacity is important:

  • Implement a high percentage of activities to ensure model fidelity, especially during the mid-phase of an empowerment evaluation.
  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

If building evaluation knowledge is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.
  • Attend to the community knowledge process principle so everyone engaged in the evaluation can use their collective wisdom to develop evaluation tools and procedures, interpret data, and so on.

If building evaluation competence is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

If individual empowerment is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

Rad Resource:

Evaluation as Social Intervention: An Empirical Study of Empowerment Evaluation Practice and Principle Effects on Psychological Empowerment and Self-Determination Outcomes

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Building Evaluation Capacity. David and I have teamed up to apply empowerment evaluation concepts and principles to build evaluation capacity at Google and beyond. We are using rubrics to focus learning, and student ratings to identify areas that are strong or merit attention. We are using a 3-step approach to empowerment evaluation and an evaluation planning worksheet (building on my graduate school courses with Nick Smith) to help our colleagues assess their program’s performance.

The worksheet has 4 parts:

  • describe the program to be evaluated
  • define the evaluation context (purpose and audience)
  • plan the evaluation (questions, data sources, procedures)
  • create an evaluation management plan

With little or no evaluation background needed, teams dive into the worksheet to focus on their program’s purpose and goals before setting up metrics. Laying out the evaluation plan is often illuminating — leading to refined program logic, alternative (and more meaningful) programmatic plans, and more useful ideas about how to measure processes and outcomes.
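As a rough illustration of how a team might hold the worksheet’s four parts in one place, here is a minimal sketch; the keys and example values are illustrative placeholders, not the worksheet’s actual wording.

    # Minimal sketch of the four worksheet parts captured as a simple template (illustrative only).
    evaluation_plan = {
        "program_description": "After-school CS outreach workshops for middle-school girls",
        "context": {
            "purpose": "Improve the workshops and report reach to partners",
            "audience": ["program team", "partner nonprofits"],
        },
        "plan": {
            "questions": ["Do participants report more confidence in CS afterward?"],
            "data_sources": ["pre/post surveys", "facilitator observations"],
            "procedures": ["survey at first and last session", "debrief after each cohort"],
        },
        "management": "Roles, timeline, and budget for carrying out the evaluation",
    }

    for part, content in evaluation_plan.items():
        print(part, "->", content)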

Beyond Google. We are also sharing our work with nonprofits and higher education. Through the Computer Science Outreach Program Evaluation Network (CS OPEN), Google is supporting evaluation for 12 nonprofits through a partnership with the National Girls Collaborative Project.

David and I are also co-teaching at Pacifica Graduate Institute. David highlights the 3-step approach to empowerment evaluation, including: 1) mission; 2) taking stock; and 3) planning for the future. I follow up with our worksheet to answer questions such as:

What is the overall program purpose?

Who are the audiences for the evaluation?

How will the result be utilized and by whom?

Rubrics and Technology for Peer Review and Self-assessment. Students in our course are developing evaluation proposals that can help them conduct evaluations, solicit funding, and/or guide their doctoral dissertations. The class meets face-to-face, but includes a virtual classroom strategy that has worked well in the past. Students use rubrics to guide their self- and peer-feedback to refine and improve their work and understanding. This improves the proposals, guides instruction, and models our focus on empowerment and capacity building.

Computer screen snapshot of proposal posted online (using Doctopus) with our rubrics (in Goobrics) above to rate or evaluate the proposal.

Rad Resources: We are using our evaluation rubric with tools that require Chrome and free extensions:

This is a partial version of the rubrics used in the empowerment evaluation at Pacifica Graduate Institute.

Doctopus: A tool for teachers to manage, organize, and assess student projects in Google Drive.


Goobrics: This rubrics-based assessment tool works with Doctopus, allowing teachers to evaluate students’ work in Google Drive.


Goobrics for Students: Allows students to use a rubric to assess peers’ documents.


Google Forms: Enables students to self-assess their work and their peers’ work using an online survey.
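Once rubric ratings are collected (for example, exported from a Google Form as a spreadsheet), a few lines of analysis can surface which criteria are strong and which merit attention, in the spirit of the approach described above. The file name, column names, and threshold below are hypothetical placeholders.

    # Minimal sketch: average rubric ratings per criterion and flag the low ones.
    # (Hypothetical CSV export and column names; threshold assumes a 1-5 scale.)
    import pandas as pd

    ratings = pd.read_csv("rubric_ratings.csv")   # one row per submitted rating
    criteria = ["clarity_of_questions", "data_sources", "management_plan"]

    means = ratings[criteria].mean().sort_values()
    needs_attention = means[means < 3.0]

    print("Average score per criterion:")
    print(means)
    print("Criteria that merit attention:")
    print(needs_attention)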


Please contact us for additional information!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi, we’re Tosca Bruno-van Vijfeijken (Director of the Syracuse University Transnational NGO Initiative) and Gabrielle Watson (independent evaluator). We engaged a group of practitioners at the 2015 AEA conference to talk about organizational change in International Non-Governmental Organizations (INGOs), and explore a hunch that Developmental Evaluation could help organizations manage change.

Several large INGOs are undergoing significant organizational change. These are complex processes – they’re always disruptive and often painful. The risk of failure is high. Roughly half of all organizational change processes either implode or fizzle out. A common approach is not to build in learning systems at all, but rather to take an “announce, flounder, learn” approach.

Lesson Learned: Most INGOs support change processes in three main ways: (1) external “expert” reviews; (2) CEO-level exchanges with peer organizations; and (3) staff-level reviews. It is this last category – where change is actually implemented – that is least developed but where support is most needed. Successful organizational change hinges on deep culture and mindset change.

AEA Session participants highlighted key challenges:

  • Headquarters and country staff experience change very differently
  • Decisions are frequently revisited
  • Ineffective communication generates uncertainty and anxiety
  • Learning is not well supported at the country or implementation-team level
  • Country teams retain a passive mindset when they should be more assertive
  • Excessive focus on legal and administrative matters; not enough on culture and mindset

Can organizations do better? Might Developmental Evaluation offer useful approaches and tools?

Hot Tip: Developmental Evaluation seems tailor-made for large-scale organizational change processes. It is designed for innovative interventions in complex environments, when the optimum approach and end-state are not known or knowable. It involves stakeholder sense-making supported by tailored and evolving evaluative inquiry (often also participatory) to quickly test iterations, track progress, and guide adaptations. It’s designed to evolve along with the intervention itself.

Hot Tips: Session participants shared some good practices:

  • Action learning. Exchanges among implementers increased adaptive capacity and made the emotional experience of change easier.
  • Pilot initiatives. Time-bound, with frequent reviews and external support.
  • “Guerrilla” roll-out. Hand-picked early adopters sparked “viral” spread of new approaches.

Lesson Learned: Our review suggests Developmental Evaluation can address many of the challenges of organizational change, including shifting organizational culture. Iterative participatory learning facilitates adaptations that are appropriate and owned by staff. It adds value by building a learning culture – the ultimate driver of large-scale organizational change.

We are curious how many organizations are using Developmental Evaluation for their change processes, and what we can learn from this experience. Add your thoughts to the comments, or write to Tosca or Gabrielle if you have an experience to share.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi y’all, Daphne Brydon here. I am a clinical social worker and independent evaluator. In social work, we know that a positive relationship built between the therapist and client is more important than professional training in laying the foundation for change at an individual level. I believe positive engagement is key in effective evaluation as well, since evaluation is designed to facilitate change at the systems level. When we engage our clients in the development of an evaluation plan, we are setting the stage for change…and change can be hard.

The success of an evaluation plan and a client’s capacity to utilize information gained through the evaluation depends a great deal on the evaluator’s ability to meet the client where they are and really understand the client’s needs – as they report them. This work can be tough because our clients are diverse, their needs are not uniform, and they present with a wide range of readiness. So how do we, as evaluators, even begin to meet each member of a client system where they are? How do we roll with client resistance, their questions, and their needs? How do we empower clients to get curious about the work they do and get excited about the potential for learning how to do it better?

Hot Tip #1: Engage your clients according to their Stage of Change (see chart below).

I borrow this model, most notably used in substance abuse recovery, to frame engagement because, in all seriousness, it fits. Engagement is not a linear, one-size-fits-all, or step-by-step process. Effective evaluation practice demands we remain flexible amidst the dynamism and complexity our clients bring to the table. Understanding our clients’ readiness for change and tailoring our evaluation accordingly is essential to the development of an effective plan.

Stages of Change for Evaluation

Hot Tip #2: Don’t be a bossypants.

We are experts in evaluation but our clients are the experts in the work they do. Taking a non-expert stance requires a shift in our practice toward asking the “right questions.” Our own agenda, questions, and solutions need to be secondary to helping clients define their own questions, propose their own solutions, and build their capacity for change. Because in the end, our clients are the ones who have to do the hard work of change.

Hot Tip #3: Come to my session at AEA 2015.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Daphne? She’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.

Greetings! I’m Galen Ellis, President of Ellis Planning Associates Inc., which has long specialized in participatory planning and evaluation services. In online meeting spaces, we’ve learned to facilitate group participation that – in the right circumstances – can be even more meaningful than in person. But we had to adapt.

Although I knew deep inside that our clients would benefit from online options, I couldn’t yet imagine creating the magic of a well-designed group process in the virtual environment. Indeed, we stepped carefully through various minefields before reaching gold.

As one pioneer observes,

Just because you’re adept at facilitating face-to-face meetings, don’t assume your skills are easily transportable. The absence of visual cues and the inability to discern the relative level of engagement makes leading great virtual meetings infinitely more complex and challenging. Assume that much of what you know about leading great meetings is actually quite irrelevant, and look for ways to learn and practice needed skills (see Settle-Murphy below).

We can now engage groups online in facilitation best practices such as ToP methods and Appreciative Inquiry, and in group engagement processes such as logic model development, focus groups, consensus building, and other collaborative planning and evaluation methods (see our video demonstration).

Lessons Learned:

  • Everyone participates. Skillfully designed and executed virtual engagement methods can be more effective in engaging the full group than in-person ones. Some may actually prefer this mode: one client noted that a virtual meeting drew out participants who had been typically silent in face-to-face meetings.
  • Software platforms come with their own sets of strengths and weaknesses. The simpler ones often lack interactive tools, while those that allow interaction tend to be more costly and complex.
  • Tame the technical gremlins. Participants without suitable levels of internet speed, technological experience, or hardware—such as microphoned headsets—will require additional preparation. Meeting hosts need to know ahead of time what sorts of devices and internet access participants will be using. Participants should always be invited into the meeting space early for technical troubleshooting.
  • Don’t host it alone. One host can produce the meeting (manage layouts, video, etc.) while another facilitates.
  • Plan and script it. Virtual meetings require a far more detailed script than a simple agenda. Indicate who will do and say what, and when.
  • Practice, practice, practice. Run through successive drafts of the script with the producing team.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Ann Zukoski and I am a Senior Research Associate at Rainbow Research, Inc. in Minneapolis, Minnesota.

Founded as a nonprofit organization in 1974, Rainbow Research’s mission is to improve the effectiveness of socially-concerned organizations through capacity building, research, and evaluation. Rainbow Research is known for its focus on using participatory evaluation approaches.

Through my work, I am always looking for new tools and approaches to engage stakeholders throughout the evaluation process. Today, I am sharing two methods that I have found helpful.

Rad Resource:

Ripple Effect Mapping (REM) is an effective method for having a large group of stakeholders identify the intended and unintended impacts of projects. In REM, stakeholders use elements of Appreciative Inquiry, mind mapping, and qualitative data analysis to reflect upon and visually map the intended and unintended changes produced by a complex program or collaboration. It is a powerful technique to document impacts and engage stakeholders. Rainbow Research is currently collaborating with Scott Chazdon at the University of Minnesota to use this method to evaluate the impact of a community health program by conducting REM at two points in time — at the beginning and end of a three-year project. Want to learn more? See http://evaluation.umn.edu/wp-content/uploads/Ripple-Effect-Mapping-MESI13-spring-training-march-2013_KA-20130305.pdf

Hot Tip:

The Art of Hosting (AoH) is a set of facilitation tools that evaluators can use to engage stakeholders and create discussions that count. AoH offers methods for working with groups of any size to harness their collective wisdom and self-organizing capacity. It uses a set of conversational processes to invite people to step in and become fully engaged in the task at hand. This working practice can help groups make decisions, build their capacity, and find new ways to respond to opportunities, challenges, and change. For more information see http://www.artofhosting.org/what-is-aoh/

Have you used these tools? Let us all know your thoughts!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Ann Zukoski and I am a Senior Research Associate at Rainbow Research, Inc. in Minneapolis, Minnesota.

Founded as a nonprofit organization in 1974, Rainbow Research’s mission is to improve the effectiveness of socially-concerned organizations through capacity building, research, and evaluation. Projects range in scale from one-time program evaluations to multi-year, multi-site research studies and designs that explicitly include participatory approaches designed to lead to program improvement.

Through my work, I am always looking for creative ways to capture evaluation data. Here is one rad resource and a hot tip on a participatory tool to add to your tool box.

Rad Resource: Participatory evaluation approaches are used extensively by international development organizations. This web page is a great resource for exploring different rapid appraisal methods that can be adapted to the US context.

ELDIS – http://www.eldis.org/go/topics/resource-guides/participation/participatory-methodology#.UwwFaf1z8ds

ELDIS provides descriptions of and links to a variety of information sources on participatory evaluation approaches, including online documents, organizations’ websites, databases, library catalogues, bibliographies, email discussion lists, research project information, and map and newspaper collections. ELDIS is hosted by the Institute of Development Studies in Sussex, U.K.

Hot Tip: Evaluators are often asked to identify program impacts and measure key outcomes of community-based projects. Impact and outcome measures are often externally determined by the funder. Many times, however, collaborative projects lead to unanticipated outcomes that program participants see as highly valuable but that formal evaluation designs overlook. One participatory technique, Most Significant Change (MSC), offers an alternative approach to address this issue and can be used to surface promising practices.

Most Significant Change Technique (MSC) – MSC is a participatory qualitative data collection process that uses stories to identify the impact of a program. This approach involves a series of steps in which stakeholders search for significant program outcomes and deliberate on the value of those outcomes in a systematic and transparent manner. Stakeholders are asked to write stories of what they see as “significant change” and then dialogue with others to select the stories of greatest importance. The goal of the process is to make explicit what stakeholders (program staff, program beneficiaries, and others) value as significant change. The process allows participants to gain a clearer understanding of what is and is not being achieved. It can be used for program improvement and identifying promising practices, as well as to uncover key outcomes by helping evaluators identify areas of change that warrant additional description and measurement.
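For evaluators who like to keep the story collection and selection rounds organized, here is a minimal sketch of one way to record them. MSC itself is a facilitated dialogue process, not software; the structure, names, and example values below are illustrative placeholders only.

    # Minimal sketch: record MSC stories and each panel's selection, keeping the
    # rationale explicit so the deliberation stays transparent. (Illustrative only.)
    from dataclasses import dataclass, field

    @dataclass
    class ChangeStory:
        storyteller: str
        domain: str                  # e.g., a domain of change agreed by stakeholders
        text: str
        selected_by: list = field(default_factory=list)  # panels that chose this story
        reasons: list = field(default_factory=list)      # why it was judged most significant

    def record_selection(stories, panel, chosen_index, reason):
        """Record one panel's choice of the most significant story and its reason."""
        story = stories[chosen_index]
        story.selected_by.append(panel)
        story.reasons.append(reason)
        return story

    stories = [
        ChangeStory("participant group A", "livelihoods", "After joining the project..."),
        ChangeStory("community health committee", "health", "More families now..."),
    ]
    winner = record_selection(stories, "district review panel", 0,
                              "Participants attribute the change directly to the program")
    print(winner.storyteller, winner.reasons)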

Where to go for more information: http://www.mande.co.uk/docs/MSCGuide.pdf

Have you used this tool? Let us all know your thoughts!

The American Evaluation Association is celebrating Best of aea365 week. The contributions all this week are reposts of great aea365 blogs from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Liz Zadnik, Capacity Building Specialist at the New Jersey Coalition Against Sexual Assault. I’m also a new member of the aea365 curating team and first-time Saturday contributor!  Over the past five years I have been working within the anti-sexual violence movement at both the state and national levels to share my enthusiasm for evaluation and support innovative community-based programs doing tremendous social change work.

Over the past five years I have been honored to work with talented evaluators and social change agents in the sexual violence prevention movement. A large part of my work has been de-mystifying evaluation and data for community-based organizations and professionals with limited academic evaluation experience.

Rad Resources: Some of my resources have come from the field of domestic and sexual violence intervention and prevention, as well as this blog! I prefer resources that offer practical application guidance and are accessible to a variety of learning styles and comfort levels. A partnership between the Resource Sharing Project and National Sexual Violence Resource Center has resulted in a fabulous toolkit looking at assessing community needs and assets. I’m a big fan of the Community Tool Box and their Evaluating the Initiative Toolkit as it offers step-by-step guidance for community-based organizations. Very similar to this is The Ohio Domestic Violence Network’s Primary Prevention of Sexual and Intimate Partner Violence Empowerment Evaluation Toolkit, which incorporates the values of the anti-sexual violence movement into prevention evaluation efforts.

Lesson Learned: Be yourself! Don’t stifle your passion or enthusiasm for evaluation and data. I made the mistake early in my technical assistance and training career of trying to fit into a role or mold I created in my head. Activists of all interests are needed to bring about social change and community wellness. Once I let my passion for evaluation show – in publications, trainings, and technical assistance – I began to see marked changes in the professionals I was working with (and myself!). I have seen myself grow as an evaluator by leaps and bounds since I made this change – so don’t be afraid to let your love of spreadsheets, interview protocols, theories of change, or anything else show!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
