AEA365 | A Tip-a-Day by and for Evaluators

Collaborative, Participatory and Empowerment Evaluation

Collaborative evaluation principles have been used to bolster projects and gain representative stakeholder input. I’m Julianne Rush-Manchester of the Military and Veterans TIG. I’m an implementation science and evaluation professional working in the Department of Defense. I’ve learned some tips for facilitating stakeholder input in clinical settings that may be more hierarchical (rather than collaborative) in nature.  These tips could be applied in military and non-military settings.

Lessons Learned: 

  • Push for early involvement of stakeholders, with targeted discussions, to execute projects successfully (according to plan).  It is expected that adjustments to the implementation and evaluation plan will occur; however, these should be modest rather than substantive if stakeholders have provided input on timing, metrics, access to data, program dosage, recruitment challenges, and so forth.  This is particularly true in military settings, where bureaucratic structures dictate logistics and access.
  • Plan for unintended effects, along with intended ones, in new contexts for the program. A replicated program may look slightly different as it must accommodate for nuances of the organization (military member participants, contractors, mandatory vs. volunteer programs, program support from senior leadership). Expected outcomes may be variations of intended ones as the program adjusts to its host setting.

Rad Resources:

This article refers to the use of collaborative evaluation principles when systems change is anticipated as a result of implementation (Manchester et al., 2014). The paper may be helpful in strategizing for collaborative evaluations around evidence-based practices in clinical and non-clinical settings, military or otherwise.

The American Evaluation Association is celebrating MVE TIG Week with our colleagues in the Military and Veteran’s Issues Topical Interest Group. The contributions all this week to aea365 come from our MVE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Clara Pelfrey, Translational Research Evaluation TIG past chair and evaluator for the Clinical and Translational Science Collaborative at Case Western Reserve University. I’m joined by graphic recorder Johnine Byrne, owner of See Your Words, and Darcy Freedman, Associate Director of the Prevention Research Center for Healthy Neighborhoods (PRCHN). We’d like to extend our previous AEA365 post on graphic recording and show how it can be used to create a shared vision between researchers doing community engaged research and community members.

Graphic recording (GR) is a visual capturing of people’s ideas and expressions. The GR shown below was created at an annual retreat of the PRCHN’s community advisors. It visually captured the community’s ideas around the major areas of work done by the center, helping to identify priority areas for future work and opportunities for collaboration. The PRCHN used the GR to show what role its partners play, the questions they have, what the bottlenecks are and any risks or unintended consequences to attend to.

[Image: graphic recording from the PRCHN community advisors’ retreat]

Hot Tip:

Evaluation uses of graphic recording (GR) in community-based research/community engagement:

  • Provide qualitative analysis themes. GR acts as a visual focus group report, providing opportunities to interact with your study findings.
  • GR can show system complexity. A non-profit organization working on youth justice commissioned a systems model GR so that all the service providers for youth experiencing homelessness could: 1) see where they fit into the wider system; 2) identify gaps and redundancies; 3) identify feedback loops; 4) find reinforcements.
  • Focus group participants may be reluctant to speak up in a group. Seeing images on the GR encourages participants to speak.
  • GR allows everyone to share their ideas in real-time. This immediacy creates energy and fosters more discussion.
  • Get right to the heart of the matter. Concepts on the GR become objects and lose their attribution to a person, fostering conversation that is more open and honest. This is especially useful when discussing sensitive issues (e.g. racism).
  • Compare changes over time. In the community setting, GR allows for an evolving group of people to honor the engagement of prior groups and provides a benchmark for the future.
  • Hear all perspectives. The graphic recorder mirrors the ideas in the room, capturing the full range of opinions, including divergent or outsider perspectives.
  • GR helps the late arrivals catch up on what transpired at the meeting while helping everyone review.

Lessons Learned:

  • Get a good facilitator! An experienced facilitator manages room dynamics. The graphic recorder is the “silent partner.”
  • Schedule time to review and discuss the GR at the end. This helps uncover possible opportunities by asking: “What haven’t we talked about?”
  • Display last year’s GR for comparison and encourage everyone to compare and ask the question: “Have we made progress?”
  • GR requires a democratic belief in participatory approaches, empowering multiple perspectives and not just the leaders’ ideas.
  • PowerPoint slides and GR do not mix. GR best captures the dialog, not the slide content.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Jason Ravitz (Google) and David Fetterman (Fetterman & Associates, past president of AEA, and founder of empowerment evaluation) have been using empowerment evaluation in various educational settings, including a graduate school program and, with Kathy Haynie (Haynie Research and Evaluation) and Tom McKlin (The Findings Group), in our work with two computer science education evaluation learning communities.

Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination.  This approach aims to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement, and evaluate their own programs.

3-Step Approach.  One empowerment evaluation approach involves helping a group: 1) establish their mission; 2) take stock of their current status; and 3) plan for the future.  Additional tools include an evaluation dashboard to help communities monitor their own progress.

CS/STEM Learning Communities. The “Evaluation Wrecking Crew” includes over 60 CS education evaluators across the country. A second group (with some overlap) is the NSF-funded Computer Science Outcomes Networked Improvement Community (CSONIC).  

We have joined forces to: 1) build a CS/STEM repository of evaluation instruments and approaches; 2) build a common hub for the community, with the assistance of Oak Ridge Associated Universities; and 3) educate the CS community about the value and role of evaluation to improve the quality of CS and STEM education.  We meet biweekly using Zoom video-conferencing software.  

Kathy Haynie (Haynie Research and Evaluation) Remotely Facilitates Bi-monthly Meetings

Online Spreadsheet. Jason designed a 3-step online spreadsheet, using Google Sheets, to facilitate the empowerment evaluation process used in both the Evaluation Wrecking Crew and CSONIC workshops.

Mission. Our collaborative process allowed workshop members to remotely record their views about the mission or purpose of the group.  Later, comments were transformed into a mission statement (using Google Docs).

Taking Stock. A second sheet in the spreadsheet was devoted to “brainstorming” a list of the group’s most important activities. Members prioritized the list by “voting” for the most important activities to evaluate as a group.

A third sheet was populated with the list of the prioritized activities. The online workshop participants used a 1 (low) to 10 (high) scale to rate their performance on the “taking stock” sheet.  We then used videoconferencing to facilitate a dialogue about the results, referencing individual participants’ ratings as we went.
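As a rough illustration of what this step can look like in code, here is a minimal sketch (not the authors’ actual spreadsheet or formulas) for summarizing a “taking stock” sheet after it has been exported from Google Sheets as CSV; the file name and column layout are assumptions.

```python
# Hypothetical sketch: each CSV column is a prioritized activity, each row
# holds one participant's 1 (low) to 10 (high) ratings.
import pandas as pd

ratings = pd.read_csv("taking_stock_export.csv")  # assumed export file name

summary = pd.DataFrame({
    "mean": ratings.mean(),                    # average rating per activity
    "min": ratings.min(),                      # lowest individual rating
    "max": ratings.max(),                      # highest individual rating
    "spread": ratings.max() - ratings.min(),   # disagreement worth discussing
}).sort_values("mean")

# Activities with low means or wide spreads are natural starting points
# for the facilitated dialogue about the ratings.
print(summary)
```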

Planning for the Future.  We used a fourth sheet to help the group record plans for the future, specifying goals, strategies, and evidence.

Evaluation Dashboard.  A final sheet was devoted to the dashboard to help us monitor our own performance.  It included: goals, strategies, and evidence.

Computer Science Education Evaluators Conducting an Empowerment Evaluation Online

Rad Resources:

Free Template.  This spreadsheet is available (free) to use to facilitate your own empowerment evaluation exercise remotely: tinyurl.com/eeblank.

Other free tools we have used include Google Forms to help graduate students evaluate their own as well as their peers’ work.  We used these data to assess students’ performance and, in the process, make mid-course corrections to our instruction.  Finally, we used Google Evaluation Worksheets to help them refine their proposals: tinyurl.com/evalworksheet-google. Additional resources can be found here (https://tinyurl.com/empowermentevaluationresources).

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Jenny McCullough Cosgrove, Nicole Huggett, and Deven Wisner, the 2017-2018 conference planning committee for the AEA Arizona Local Affiliate, the Arizona Evaluation Network (AZENet). This year, we built on the momentum from the inspiring #Eval17 AEA conference to bring focused and meaningful attention to inclusion and equity in evaluative practice at our annual Arizona Evaluation conference.

Cool Tricks: Provide a Space for Evaluators to Practice

We know participatory evaluation can be a powerful tool in advancing equity by explicitly including underrepresented stakeholder voice. Given this, the conference planning committee has worked with our keynote speaker Dr. Mia Luluquisen, Deputy Director of Community Assessment Planning and Education at Alameda County Public Health Department, to build an evaluation event that incorporates an active experience in participatory evaluation. Specifically, an evaluation of the conference will be used as an introduction to this topic.

Hot Tips: Purposefully Build Inclusion and Safety into the Event

  • Choose an event location that will be accessible to all abilities.
  • Design event products and communications so they are as usable by as many participants as possible.
  • Define and use an inclusive and just vocabulary in promotion of the event and during the event.
  • Add activities that focus on experiencing deep empathy.
  • Establish ground rules for active listening; encourage all participants to engage and listen.
  • Support critical reasoning and safety in participants by asking for quiet reflection before sharing ideas.
  • Do not assume that marginalized people have the responsibility to educate evaluators on equity issues. Be mindful of asking underrepresented peoples to teach or explain their needs or experiences at your event; marginalized people are often burdened with the expectation to be the teachers in matters of justice and equity.

Rad Resources:

Intrigued and want to learn (or experience) more? Check out the Arizona Evaluation Network’s 2018 From Learning to Practice: Using Evaluation to Address Social Equity conference taking place this April in Tucson, AZ.

The Annie E. Casey Foundation provides seven steps to embed equity and inclusion in a program or organization in the Race Equity and Inclusion Action Guide.

Racial Equity Tools provide some wonderful resources for evaluators to learn more about the fundamentals of racial inequity, as well as useful tools and guides to support learning.

Learn more about disability inclusion strategies from the Centers for Disease Control and Prevention.

Reflect on your strategies for gender inclusion with this guide from the University of Pittsburgh.

The American Evaluation Association is celebrating Arizona Evaluation Network (AZENet) Affiliate Week. The contributions all this week to aea365 come from our AZENet members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! I am Steve Mumford, New Orleans resident and Southeast Evaluation Association (SEA) member. When you read this, fingers crossed, I will have just completed a PhD in Public Policy & Administration from George Washington University, concentrating in program evaluation. I am writing to share insights from my dissertation research, which focused on bringing stakeholders’ “ways of knowing” into participatory evaluation.

“Ways of knowing” or “personal epistemologies” are implicit preferences. These preferences guide us as we decide what information is credible. Many frameworks exist for understanding and identifying personal epistemologies. One I especially like comes from Women’s Ways of Knowing.

Lessons Learned: The authors identified three ways of knowing relevant to evaluation practice:

  • Separate knowing resembles common definitions of critical thinking. Separate, skeptical knowers play “devil’s advocate,” debating ideas in abstract, “objective” terms. Think lawyers and scientists.
  • Connected knowing is a less appreciated approach to critical thinking. Connected knowers play the “believing game,” resisting argument in favor of empathic understanding of why a person holds certain beliefs. Think social workers and therapists.
  • Constructed knowing is the self-aware application of either approach depending on context. Constructed knowers build rapport by exploring others’ rationales, but they do not shy away from critically evaluating their claims. Think evaluators!

Hot Tips: Evaluators can take steps to bring ways of knowing into their facilitation. In turn, they might better engage diverse stakeholders and produce more credible and actionable findings.

  • Assess. First, figure out the way of knowing preferred by your key stakeholders, like advisory group members. Administer the brief Attitudes Toward Thinking and Learning Survey (ATTLS), or guide a conversation in which stakeholders self-identify (a hypothetical scoring sketch follows this list). Be sure to assess your own way of knowing as well!
  • Assign. Throughout the evaluation, clarify what way of knowing you want to emphasize within the group. Anyone can practice constructed knowing! Early on, encourage connected knowing as the group builds trust and brainstorms questions, by establishing group “ground rules” that promote open-minded listening. Later, when the group is ready to debate results and recommendations, encourage separate knowing by assigning group members to play the role of devil’s advocate.
  • Reflect. Occasionally bring focus back to ways of knowing to help the group reflect on its process. For instance, call out a group member practicing separate knowing when a connected approach is preferred. Alternatively, ask connected knowers how it feels to play devil’s advocate. In this way, all group members can learn to engage in constructed knowing!
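To make the “Assess” step concrete, here is a hypothetical scoring sketch for an ATTLS-style instrument. The 1-5 agreement scale, the item numbers, and the item-to-subscale mapping below are placeholders, not the published ATTLS scoring key; swap in the real instrument before using it.

```python
# Illustrative only: score a short ways-of-knowing questionnaire into
# "connected" and "separate" subscale means, then report the leaning.
from statistics import mean

# Hypothetical item-to-subscale mapping (replace with the instrument's key).
CONNECTED_ITEMS = [1, 3, 5, 7, 9]   # empathy / "believing game" items
SEPARATE_ITEMS = [2, 4, 6, 8, 10]   # skeptical / "devil's advocate" items

def score_ways_of_knowing(responses: dict[int, int]) -> dict[str, float]:
    """Return mean agreement (assumed 1-5 scale) for each subscale."""
    return {
        "connected": mean(responses[i] for i in CONNECTED_ITEMS),
        "separate": mean(responses[i] for i in SEPARATE_ITEMS),
    }

# Example: one stakeholder's responses, keyed by item number.
stakeholder = {1: 5, 2: 2, 3: 4, 4: 3, 5: 5, 6: 2, 7: 4, 8: 3, 9: 5, 10: 2}
scores = score_ways_of_knowing(stakeholder)
leaning = max(scores, key=scores.get)
print(scores, "-> leans toward", leaning, "knowing")
```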

Build appreciation for ways of knowing into your participatory evaluation process, and you will tap the full potential of your stakeholder group.

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Professional Development Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! My name is Miki Tsukamoto and I am a Monitoring and Evaluation Coordinator at the International Federation of Red Cross and Red Crescent Societies (IFRC).

What if video could be used as the “spark” to increase the engagement and interest of communities in your programmes?

Recently, I had an opportunity to be part of a participatory video evaluation (PVE) team for the Global Framework for Climate Services programme, which aimed to deliver and apply “…salient, credible and actionable climate services towards improved health and food security in Malawi and Tanzania.” To ensure better use and acceptance of this PVE for future programming, IFRC piloted the Most Significant Change (MSC) technique[1], using the OECD/DAC criteria of relevance/appropriateness, effectiveness, coverage, sustainability and impact as themes for group discussions. Here are some of the lessons learnt:

Lessons learned:

Rad Resources: PVE videos were made at the community level, the country level and the multi-regional level.

Country level PVEs:

(https://www.youtube.com/watch?v=fSXj0IllfvQ&index=3&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

(https://www.youtube.com/watch?v=mFWCOyIb9mU&index=4&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

Multi-country PVE:

(https://www.youtube.com/watch?v=HzbcIZbQYbs&index=2&list=PLrI6tpZ6pQmQZsKuQl6n4ELEdiVqHGFw2)

A Red Cross Red Crescent Guide to Community Engagement and Accountability (CEA)

Guide to the “Most Significant Change” Technique by Rick Davies and Jess Dart

[1] http://www.mande.co.uk/docs/MSCGuide.pdf

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Emily Spence-Almaguer and I am an Associate Professor of Behavioral and Community Health at the University of North Texas Health Science Center. I spend most of my professional time serving as an independent evaluator for community initiatives and conducting assessment studies. I am a social worker by training and have found that the conversational skills used in Solution-Focused Therapy have great application in the realm of evaluation and community assessment.

Hot Tips: My favorite ways to use solution-focused dialogues are in:

  • Focus group and individual interviews because they help generate rich qualitative data and great ideas for continuous program improvements.
  • Evaluation planning meetings because they help stakeholders articulate a wide range of potential outcomes and describe how those outcomes might be observed (i.e., measured).
  • Meetings where stakeholders are being debriefed around disappointing evaluation results. The nature of solution-focused dialogues avoids finger-pointing and helps drive forward momentum.

Hot Tips:

  • It’s all about the questions!! Solution-focused dialogues are driven by questions that promote deep reflection and critical thinking.
  • Context: Use questions that help situate people’s minds in a particular context, and include details in your question that will encourage an individual to imagine him or herself in that moment. Here’s an example that I use with consumers at a program trying to help lift individuals and families out of poverty:
    • I want you to take a moment and imagine that you just learned that the Bass [local philanthropist] family recently donated $100,000 to the United Way for this project. They want you to help them figure out how to best spend the money. What is the first thing you would advise them to do? What would you advise them to do next?
  • Expertise: I love the way that Gaiswinker and Roessler referred to this as the “expertise of not-knowing”. In solution-focused dialogues, the wording of questions and the tone of delivery are carefully crafted to amplify the assumption that the stakeholders have exceptional knowledge, skills and capacities.

Rad Resource: For an introduction to solution focused concepts, I like Coert Visser’s Doing What Works Blog.


Download from the AEA Public eLibrary to View the Poster in Full Size!

Rad Resource: I presented on Solution-Focused dialogues in evaluation at AEA’s Evaluation 2012 conference. You can download my poster and resources list from the AEA public eLibrary here.

Lessons Learned: A direct question, such as “What would you recommend to improve this program?” often fails to generate detailed or meaningful responses. In focus groups with program consumers, I find that this question is interpreted as “what is wrong with the program?” and may lead to comments in defense of the program staff members (see my 2012 AEA poster for an example of this from my data).

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I’m Jeff Sheldon and today I’m introducing the results of a study of 131 evaluation practitioners that I hope will inform the way you think about empowerment evaluation.

In brief, this study: 1) identified the extent of implementation fidelity to the three empowerment evaluation models; 2) described the extent to which the empowerment evaluation process principles were evident; 3) described the extent to which empowerment evaluation outcome principles resulted from the evaluations reported on; and 4) determined whether variation in empowerment and self-determination could be explained by: the interaction between model fidelity and percentage of steps implemented, the interaction between model fidelity and percentage of steps implemented during different evaluation phases, the process principles in evidence, the outcome principles in evidence, and evaluator demographics.

Results indicated that evaluation practitioners implemented the three-step, 10-step, and five-tool empowerment evaluation models with fidelity. A majority reported the presence of both the six process principles (i.e., community ownership, inclusiveness, democratic participation, community knowledge, evidence-based strategies, and accountability) and the four outcome principles (i.e., improvement, capacity building, organizational learning, and social justice). Last, the interaction between model fidelity and percentage of activities implemented explained variation in evaluation capacity, as did the interactions between early-evaluation model fidelity and percentage of activities implemented and between mid-evaluation model fidelity and percentage of activities implemented. The inclusiveness and community knowledge process principles each explained variation in evaluation knowledge. The inclusiveness process principle alone explained variation in evaluation capacity, individual empowerment, and evaluation competence. Although not tested against the null hypothesis, variation in evaluation knowledge, individual empowerment, evaluation competence, and evaluation autonomy was explained by where evaluation practitioners lived, specifically in an African country.
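For readers who want to see what this kind of moderation test can look like in practice, here is a minimal, hypothetical sketch of an ordinary least squares model with a fidelity × implementation interaction. It is not the study’s data or code; the variable names and simulated values are illustrative only.

```python
# Illustrative sketch: does the fidelity x implementation interaction
# explain variation in evaluation capacity? (simulated data)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 131  # same number of practitioners as the study, but data are simulated
df = pd.DataFrame({
    "fidelity": rng.uniform(0, 1, n),       # fidelity to the chosen model
    "pct_steps": rng.uniform(0, 100, n),    # % of model activities implemented
})
df["eval_capacity"] = (
    0.5 * df["fidelity"] + 0.01 * df["pct_steps"]
    + 0.03 * df["fidelity"] * df["pct_steps"]   # the interaction of interest
    + rng.normal(0, 0.5, n)
)

# "fidelity * pct_steps" expands to both main effects plus their interaction.
model = smf.ols("eval_capacity ~ fidelity * pct_steps", data=df).fit()
print(model.summary().tables[1])
```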

Hot Tips: If building evaluation capacity is important:

  • Implement a high percentage of activities to ensure model fidelity, especially during the mid-phase of an empowerment evaluation.
  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

If building evaluation knowledge is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.
  • Attend to the community knowledge process principle so everyone who is engaged in the evaluation can use their collective wisdom to develop evaluation tools, evaluation procedures, interpret data, etc.

If building evaluation competence is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

If individual empowerment is important:

  • Attend to the inclusiveness process principle to ensure everyone who wants to engage in the evaluation is included.

Rad Resource:

Evaluation as Social Intervention: An Empirical Study of Empowerment Evaluation Practice and Principle Effects on Psychological Empowerment and Self-Determination Outcomes

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Building Evaluation Capacity. David and I have teamed up to apply empowerment evaluation concepts and principles to build evaluation capacity at Google and beyond. We are using rubrics to focus learning, and student ratings to identify areas that are strong or merit attention. We are using a 3-step approach to empowerment evaluation and an evaluation planning worksheet (building on my graduate school courses with Nick Smith) to help our colleagues assess their program’s performance.

The worksheet has 4 parts:

  • describe the program to be evaluated
  • define the evaluation context (purpose and audience)
  • plan the evaluation (questions, data sources, procedures)
  • create an evaluation management plan

With little or no evaluation background needed, teams dive into the worksheet to focus on their program’s purpose and goals before setting up metrics. Laying out the evaluation plan is often illuminating — leading to refined program logic, alternative (and more meaningful) programmatic plans, and more useful ideas about how to measure processes and outcomes.

Beyond Google. We are also sharing our work with nonprofits and higher education. Through the Computer Science Outreach Program Evaluation Network (CS OPEN) Google is supporting evaluation for 12 nonprofits through a partnership with the National Girls Collaborative Project.

David and I are also co-teaching at Pacifica Graduate Institute. David highlights the 3-step approach to empowerment evaluation, including: 1) mission; 2) taking stock; and 3) planning for the future. I follow up with our worksheet to answer questions such as:

What is the overall program purpose?

Who are the audiences for the evaluation?

How will the result be utilized and by whom?

Rubrics and Technology for Peer Review and Self-assessment. Students in our course are developing evaluation proposals that can help them conduct evaluations, solicit funding, and/or guide their doctoral dissertations. The class meets face-to-face, but includes a virtual classroom strategy that has worked well in the past. Students use rubrics to guide their self- and peer-feedback to refine and improve their work and understanding. This improves the proposals, guides instruction, and models our focus on empowerment and capacity building.

Computer screen snapshot of proposal posted online (using Doctopus) with our rubrics (in Goobrics) above to rate or evaluate the proposal.

Rad Resources: We are using our evaluation rubric with tools that require Chrome and free extensions:

This is a partial version of the rubrics used in the empowerment evaluation at Pacifica Graduate Institute.

Doctopus: A tool for teachers to manage, organize, and assess student projects in Google Drive.

Goobrics: This rubrics-based assessment tool works with Doctopus, allowing teachers to evaluate students’ work in Google Drive.

Goobrics for Students: Allows students to use a rubric to assess peers’ documents.

Google Forms: Enables students to self-assess their work and their peers’ work using an online survey.

Please contact us for additional information!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi, we’re Tosca Bruno-van Vijfeijken (Director of the Syracuse University Transnational NGO Initiative) and Gabrielle Watson (independent evaluator). We engaged a group of practitioners at the 2015 AEA conference to talk about organizational change in International Non-Governmental Organizations (INGOs), and explore a hunch that Developmental Evaluation could help organizations manage change.

Several large INGOs are undergoing significant organizational change. These are complex processes – they’re always disruptive and often painful. The risk of failure is high. Roughly half of all organizational change processes either implode or fizzle out. A common approach is not to build in learning systems at all, but rather to take an “announce, flounder, learn” approach.

Lesson Learned: Most INGOs support change processes in three main ways: (1) external “expert” reviews; (2) CEO-level exchanges with peer organizations; and (3) staff-level reviews. It is this last category – where change is actually implemented – that is least developed but where it’s most needed. Successful organizational change hinges on deep culture and mindset change.

AEA Session participants highlighted key challenges:

  • Headquarters and country staff experience change very differently
  • Frequent revisiting of decisions
  • Ineffective communication, which generates uncertainty and anxiety
  • Learning not well supported at the country or implementation-team level
  • Country teams retain a passive mindset when they should be more assertive
  • Excessive focus on legal and administrative matters; not enough on culture and mindset

Can organizations do better? Might Developmental Evaluation offer useful approaches and tools?

Hot Tip: Developmental Evaluation seems tailor-made for large-scale organizational change processes. It is designed for innovative interventions in complex environments where the optimal approach and end state are not known or knowable. It involves stakeholder sense-making supported by tailored and evolving evaluative inquiry (often also participatory) to quickly test iterations, track progress, and guide adaptations. It’s designed to evolve along with the intervention itself.

Hot Tips: Session participants share some good practices:

  • Action learning. Exchanges among implementers increased adaptive capacity and eased the emotional experience of change
  • Pilot initiatives. Time-bound, with frequent reviews and external support
  • “Guerrilla” roll-out. Hand-picked early adopters sparked “viral” spread of new approaches

Lesson Learned: Our review suggests Developmental Evaluation can address many of the challenges of organizational change, including shifting organizational culture. Iterative participatory learning facilitates adaptations that are appropriate and owned by staff. It adds value by building a learning culture – the ultimate driver of large-scale organizational change.

We are curious how many organizations are using Developmental Evaluation for their change processes, and what we can learn from this experience. Add your thoughts to the comments, or write to Tosca or Gabrielle if you have an experience to share.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

