AEA365 | A Tip-a-Day by and for Evaluators

Category: Collaborative, Participatory and Empowerment Evaluation

Hi, we’re Tosca Bruno-van Vijfeijken (Director of the Syracuse University Transnational NGO Initiative) and Gabrielle Watson (independent evaluator). We engaged a group of practitioners at the 2015 AEA conference to talk about organizational change in International Non-Governmental Organizations (INGOs), and explore a hunch that Developmental Evaluation could help organizations manage change.

Several large INGOs are undergoing significant organizational change. These are complex processes – they’re always disruptive and often painful. The risk of failure is high. Roughly half of all organizational change processes either implode or fizzle out. A common approach is not to build in learning systems at all, but rather to take an “announce, flounder, learn” approach.

Lesson Learned: Most INGOs support change processes in three main ways: (1) external “expert” reviews; (2) CEO-level exchanges with peer organizations; (3) staff-level reviews. It is this last category – where change is actually implemented – that is least developed but where it’s most needed. Successful organizational change hinges on deep culture and mindset change.

AEA Session participants highlighted key challenges:

  • Headquarters and country staff experience change very differently
  • Frequent revisiting of decisions
  • Ineffective communication that generates uncertainty and anxiety
  • Learning not well supported at country or implementation team level
  • Country teams retain a passive mindset when they should be more assertive
  • Excessive focus on legal and administrative matters; not enough on culture and mindset

Can organizations do better? Might Developmental Evaluation offer useful approaches and tools?

Hot Tip: Developmental Evaluation seems tailor-made for large-scale organizational change processes. It is designed for innovative interventions in complex environments when the optimum approach and end-state are not known or knowable. It involves stakeholder sense-making supported by tailored and evolving evaluative inquiry (often also participatory) to quickly test iterations, track progress, and guide adaptations. It’s designed to evolve along with the intervention itself.

Hot Tips: Session participants shared some good practices:

  • Action learning. Exchanges among implementers increased adaptive capacity and made the emotional experience of change easier
  • Pilot initiatives. Time-bound, with frequent reviews and external support
  • “Guerrilla” roll-out. Hand-picked early adopters sparked “viral” spread of new approaches

Lesson Learned: Our review suggests Developmental Evaluation can address many of the challenges of organizational change, including shifting organizational culture. Iterative participatory learning facilitates adaptations that are appropriate and owned by staff. It adds value by building a learning culture – the ultimate driver of large scale organizational change.

We are curious how many organizations are using Developmental Evaluation for their change processes, and what we can learn from this experience. Add your thoughts to the comments, or write to Tosca or Gabrielle if you have an experience to share.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi y’all, Daphne Brydon here. I am a clinical social worker and independent evaluator. In social work, we know that a positive relationship built between the therapist and client is more important than professional training in laying the foundation for change at an individual level. I believe positive engagement is key to effective evaluation as well, since evaluation is designed to facilitate change at the systems level. When we engage our clients in the development of an evaluation plan, we are setting the stage for change…and change can be hard.

The success of an evaluation plan and a client’s capacity to utilize information gained through the evaluation depends a great deal on the evaluator’s ability to meet the client where they are and really understand the client’s needs – as they report them. This work can be tough because our clients are diverse, their needs are not uniform, and they present with a wide range of readiness. So how do we, as evaluators, even begin to meet each member of a client system where they are? How do we roll with client resistance, their questions, and their needs? How do we empower clients to get curious about the work they do and get excited about the potential for learning how to do it better?

Hot Tip #1: Engage your clients according to their Stage of Change (see chart below).

I borrow this model, best known from substance abuse recovery, to frame this because, in all seriousness, it fits. Engagement is not a linear, one-size-fits-all, or step-by-step process. Effective evaluation practice demands we remain flexible amidst the dynamism and complexity our clients bring to the table. Understanding our clients’ readiness for change and tailoring our evaluation accordingly is essential to the development of an effective plan.

Stages of Change for Evaluation

Hot Tip #2: Don’t be a bossypants.

We are experts in evaluation but our clients are the experts in the work they do. Taking a non-expert stance requires a shift in our practice toward asking the “right questions.” Our own agenda, questions, and solutions need to be secondary to helping clients define their own questions, propose their own solutions, and build their capacity for change. Because in the end, our clients are the ones who have to do the hard work of change.

Hot Tip #3: Come to my session at AEA 2015.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Daphne? She’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.

Greetings! I’m Galen Ellis, President of Ellis Planning Associates Inc., which has long specialized in participatory planning and evaluation services. In online meeting spaces, we’ve learned to facilitate group participation that – in the right circumstances – can be even more meaningful than in person. But we had to adapt.

Although I knew deep inside that our clients would benefit from online options, I couldn’t yet imagine creating the magic of a well-designed group process in the virtual environment. Indeed, we stepped carefully through various minefields before reaching gold.

As one pioneer observes,

Just because you’re adept at facilitating face-to-face meetings, don’t assume your skills are easily transportable. The absence of visual cues and the inability to discern the relative level of engagement makes leading great virtual meetings infinitely more complex and challenging. Assume that much of what you know about leading great meetings is actually quite irrelevant, and look for ways to learn and practice needed skills (see Settle-Murphy below).

We can now engage groups online in facilitation best practices such as ToP methods and Appreciative Inquiry and group engagement processes such as logic model development, focus groups, consensus building, and other collaborative planning and evaluation methods (see our video demonstration).

Lessons Learned:

  • Everyone participates. Skillfully designed and executed virtual engagement methods can be more effective in engaging the full group than in-person ones. Some may actually prefer this mode: one client noted that a virtual meeting drew out participants who had been typically silent in face-to-face meetings.
  • Software platforms come with their own sets of strengths and weaknesses. The simpler ones often lack interactive tools, while those that allow interaction tend to be more costly and complex.
  • Tame the technical gremlins. Participants without suitable levels of internet speed, technological experience, or hardware—such as microphoned headsets—will require additional preparation. Meeting hosts need to know ahead of time what sorts of devices and internet access participants will be using. Participants should always be invited into the meeting space early for technical troubleshooting.
  • Don’t host it alone. One host can produce the meeting (manage layouts, video, etc.) while another facilitates.
  • Plan and script it. Virtual meetings require a far more detailed script than a simple agenda. Indicate who will do and say what, and when.
  • Practice, practice, practice. Run through successive drafts of the script with the producing team.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Ann Zukoski and I am a Senior Research Associate at Rainbow Research, Inc. in Minneapolis, Minnesota.

Founded as a nonprofit organization in 1974, Rainbow Research’s mission is to improve the effectiveness of socially-concerned organizations through capacity building, research, and evaluation. Rainbow Research is known for its focus on using participatory evaluation approaches.

Through my work, I am always looking for new tools and approaches to engage stakeholders throughout the evaluation process. Today, I am sharing two methods that I have found helpful.

Rad Resource:

Ripple Effect Mapping [REM] is an effective method for having a large group of stakeholders identify the intended and unintended impacts of projects. In REM, stakeholders use elements of Appreciative Inquiry, mind mapping, and qualitative data analysis to reflect upon and visually map the intended and unintended changes produced by a complex program or collaboration. It is a powerful technique to document impacts and engage stakeholders. Rainbow Research is currently collaborating with Scott Chazdon at the University of Minnesota to use this method to evaluate the impact of a community health program by conducting REM at two points in time – at the beginning and end of a three-year project. Want to learn more? See http://evaluation.umn.edu/wp-content/uploads/Ripple-Effect-Mapping-MESI13-spring-training-march-2013_KA-20130305.pdf

Hot Tip:

The Art of Hosting (AoH) is a set of facilitation tools evaluators can use to engage stakeholders and create discussions that count. AoH is a set of methods for working with groups to harness the collective wisdom and self-organizing capacity of groups of any size. The Art of Hosting uses a set of conversational processes to invite people to step in and become fully engaged in the task at hand. This working practice can help groups make decisions, build their capacity, and find new ways to respond to opportunities, challenges, and change. For more information see http://www.artofhosting.org/what-is-aoh/

Have you used these tools? Let us all know your thoughts!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Ann Zukoski and I am a Senior Research Associate at Rainbow Research, Inc. in Minneapolis, Minnesota.

Founded as a nonprofit organization in 1974, Rainbow Research’s mission is to improve the effectiveness of socially-concerned organizations through capacity building, research, and evaluation. Projects range in scale from one-time program evaluations to multi-year, multi-site research studies and designs that explicitly include participatory approaches designed to lead to program improvement.

Through my work, I am always looking for creative ways to capture evaluation data. Here is one rad resource and a hot tip on a participatory tool to add to your tool box.

Rad Resource: Participatory evaluation approaches are used extensively by international development organizations. This web page is a great resource for exploring different rapid appraisal methods that can be adapted to the US context.

ELDIS – http://www.eldis.org/go/topics/resource-guides/participation/participatory-methodology#.UwwFaf1z8ds

ELDIS provides descriptions and links to a variety of information sources on participatory evaluation approaches, including online documents, organizations’ websites, databases, library catalogues, bibliographies, email discussion lists, research project information, and map and newspaper collections. ELDIS is hosted by the Institute of Development Studies in Sussex, U.K.

Hot Tip: Evaluators are often asked to identify program impacts and measure key outcomes of community based projects. Impact and outcome measures are often externally determined by the funder. Many times, however, collaborative projects lead to unanticipated outcomes that are seen to be of great value by program participants but are overlooked by formal evaluation designs. One participatory technique, Most Significant Change (MSC), offers an alternative approach to address this issue and can be used to surface promising practices.

Most Significant Change Technique (MSC) – MSC is a participatory qualitative data collection process that uses stories to identify the impact of a program. This approach involves a series of steps in which stakeholders search for significant program outcomes and deliberate on the value of these outcomes in a systematic and transparent manner. Stakeholders are asked to write stories of what they see as “significant change” and then dialogue with others to select the stories of greatest importance. The goal of the process is to make explicit what stakeholders (program staff, program beneficiaries, and others) value as significant change. The process allows participants to gain a clearer understanding of what is and what is not being achieved. It can be used for program improvement and for identifying promising practices, as well as to uncover key outcomes by helping evaluators identify areas of change that warrant additional description and measurement.

Where to go for more information: http://www.mande.co.uk/docs/MSCGuide.pdf

Have you used this tool? Let us all know your thoughts!

The American Evaluation Association is celebrating Best of aea365 week. The contributions all this week are reposts of great aea365 blogs from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I am Liz Zadnik, Capacity Building Specialist at the New Jersey Coalition Against Sexual Assault. I’m also a new member of the aea365 curating team and first-time Saturday contributor!  Over the past five years I have been working within the anti-sexual violence movement at both the state and national levels to share my enthusiasm for evaluation and support innovative community-based programs doing tremendous social change work.

Over the past five years I have been honored to work with talented evaluators and social change agents in the sexual violence prevention movement. A large part of my work has been de-mystifying evaluation and data for community-based organizations and professionals with limited academic evaluation experience.

Rad Resources: Some of my resources have come from the field of domestic and sexual violence intervention and prevention, as well as this blog! I prefer resources that offer practical application guidance and are accessible to a variety of learning styles and comfort levels. A partnership between the Resource Sharing Project and National Sexual Violence Resource Center has resulted in a fabulous toolkit looking at assessing community needs and assets. I’m a big fan of the Community Tool Box and their Evaluating the Initiative Toolkit as it offers step-by-step guidance for community-based organizations. Very similar to this is The Ohio Domestic Violence Network’s Primary Prevention of Sexual and Intimate Partner Violence Empowerment Evaluation Toolkit, which incorporates the values of the anti-sexual violence movement into prevention evaluation efforts.

Lesson Learned: Be yourself! Don’t stifle your passion or enthusiasm for evaluation and data. I made the mistake early in my technical assistance and training career of trying to fit into a role or mold I created in my head. Activists of all interests are needed to bring about social change and community wellness. Once I let my passion for evaluation show – in publications, trainings, and technical assistance – I began to see marked changes in the professionals I was working with (and myself!). I have seen myself grow as an evaluator by leaps and bounds since I made this change – so don’t be afraid to let your love of spreadsheets, interview protocols, theories of change, or anything else show!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello aea365ers! I’m Susan Kistler, Executive Director Emeritus of the American Evaluation Association, professional trainer and editor, and all around gregarious gal. Email me at susan@thesmarterone.com if you wish to get in touch.

Rad Resource – Padlet: The last time I wrote about Padlet for aea365, exactly two years ago on September 12 of 2012, it was still called Wallwisher. One name change, two years, and a number of upgrades later, this web-based virtual bulletin board application is worth a fresh look.

Padlet is extremely easy to set up – it takes under 10 seconds and can be done with or without an account; however, I highly recommend that you sign up for a free account to manage multiple bulletin boards and manipulate contributions.

Padlet is even easier to use: just click on a bulletin board and add a note. You can add to your own boards, or to other boards for which you have a link. I’ve set up two boards to try.

Hot Tip – Brainstorming: Use Padlet to brainstorm ideas and get input from multiple sources, all anonymously. Anonymity is the key word here – the extreme ease of use (no sign-in!) is balanced by the fact that contributions only have names attached if the contributors wish to add them.

Hot Tip – Backchannel: Increasingly, facilitators are leveraging backchannels during courses and workshops as avenues for attendees to discuss and raise questions. Because Padlet is a platform/device independent application (PIA) accessed through the browser, and does not require a login to contribute, it can make an excellent backchannel tool.

The uses are almost endless – any time you might try sticky notes, Padlet may be a virtual alternative.

***IF YOU ARE READING THIS POST IN EMAIL, PLEASE CLICK BACK TO THE AEA365 WEBSITE TO TRY IT OUT!***

This board illustrates the linen background (there are 15+ backgrounds from which to choose) with contributions added wherever the contributor placed them (the owner may then move them). Just click to give it a try. Please.


This board illustrates the wood background with contributions organized as tiles (a new option).


The size is small when embedded on aea365; go here to see the same board in full-page view.

Hot Tip – Multimedia: Padlet can accommodate pictures, links, text, files, and video (when hosted elsewhere).

Hot Tip – Export: A major improvement to Padlet’s functionality has been the addition of the capacity to export the contributions to Excel for analysis, sharing, etc.
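To show what that export makes possible, here is a minimal sketch in Python (pandas) for loading an exported board and getting a quick summary. The filename and column name below are assumptions for illustration only – check the headers of your own export, since Padlet may label them differently.

# Minimal sketch: summarize a Padlet board exported to Excel.
# Assumptions (hypothetical): the export file is "board_export.xlsx"
# and note text lives in a column named "Body" -- verify against the
# actual headers in your own export before running.
import pandas as pd

df = pd.read_excel("board_export.xlsx")
print(f"Total contributions: {len(df)}")

# Quick word-frequency tally across note text, handy for a first read
# on brainstorming themes before deeper qualitative coding.
words = (
    df["Body"]
    .dropna()
    .str.lower()
    .str.findall(r"[a-z']+")   # split into simple word tokens
    .explode()
)
print(words.value_counts().head(15))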

Rad Resource – Training: I’ll be offering an eStudy online workshop in October on collaborative and participatory instrument development. We’ll leverage Padlet as an avenue for stakeholder input if you’d like to see it in action. Learn more here.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Jeff Sheldon from the School of Social Science, Policy, and Evaluation at Claremont Graduate University and today I’m introducing the Survey of Empowerment Evaluation Practice, Principles, and Outcomes (SEEPPO). I developed SEEPPO for my dissertation, but more important, as a tool that can be modified for use by researchers on evaluation and evaluation practitioners.

For practitioners, SEEPPO is an 82-item self-report survey (92 items for researchers) across seven sections (nine for researchers).

  • Section one items (“Your evaluation activities”) ask for a report on behaviors in terms of the specific empowerment evaluation steps implemented.
  • Section two (“Evaluation Participant Activities”) asks for observations on the evaluation-specific behaviors of those engaged in the evaluation as they relate to the empowerment evaluation steps implemented.
  • Section three (“Changes you observed in individual’s values”) asks for a report on changes in evaluation-related values by comparing the values observed at the beginning of the evaluation to those observed at the end of the evaluation.
  • Section four items (“Changes you observed in individual’s behaviors”) ask for a report on changes observed in evaluation-related behavior and whether the sub-constructs characterizing the psychological well-being outcomes of empowerment (i.e., knowledge, skills/capacities, self-efficacy) and self-determination (competence, autonomy, and relatedness) were present by comparing observed behaviors at the beginning of the evaluation to those at evaluation’s end.
  • Section five (“Changes you observed within the organization”) items ask for a report on the changes observed within the organization as a result of the evaluation by comparing various organizational capacities at the beginning of the evaluation to those observed at evaluation’s end.
  • Section six (“Inclusiveness”) asks about the extent to which everyone who wanted to fully engage in the evaluation was included.
  • Section seven (“Accountability”) items ask about who the evaluator was accountable to during the evaluation.
  • Lastly, the items in sections eight and nine, for researchers, ask about the evaluation model used and demographics.

This is a brief “snapshot” of SEEPPO. Item development was based on: 1) constructs found in the literature regarding the three known empowerment evaluation models and their respective implementation steps; 2) the ten principles (i.e., six process and four outcome) of empowerment evaluation; 3) the purported empowerment and self-determination outcomes for individuals and organizations engaged in the process of an empowerment evaluation; and 4) constructs found in the humanistic psychology literature on empowerment theory and self-determination theory.


Hot Tip: The results of SEEPPO can be used to determine whether you or your subjects are adhering with fidelity to the empowerment evaluation model being implemented, which principles of empowerment evaluation are in evidence, and the likelihood of empowerment and self-determination outcomes.
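As a purely hypothetical illustration (not SEEPPO’s actual items or scoring, which are not yet published), here is a small Python sketch of how one might tally the beginning-versus-end comparisons that sections like these ask for; the section names, item pairs, and 1–5 scale are all assumptions made up for the example.

# Hypothetical sketch: compute mean change per section for paired
# (beginning-of-evaluation, end-of-evaluation) ratings on a 1-5 scale.
# Section names, items, and the scale are illustrative assumptions,
# not SEEPPO's actual content or scoring rules.
from statistics import mean

responses = {
    "values_observed":       [(2, 4), (3, 5), (2, 3)],
    "behaviors_observed":    [(1, 3), (2, 4)],
    "organizational_change": [(3, 4), (2, 4), (3, 5)],
}

for section, pairs in responses.items():
    change = mean(end - start for start, end in pairs)
    print(f"{section}: mean change = {change:+.2f} points")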

Rad Resource: Coming soon! SEEPPO will soon be widely available.

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Rachel Becker-Klein and I am an evaluator and a Community Psychologist with almost a decade of experience evaluating programs. Since 2005, I have worked with PEER Associates, an evaluation firm that provides customized, utilization-focused program evaluation and educational research services for organizations nationwide.

Recently I have been using an interview and analysis methodology called Most Significant Change (MSC). MSC is a strategy that involves collecting and systematically analyzing significant changes that occur in programs and the lives of program participants. The methodology has been found to be useful in monitoring programmatic changes, as well as evaluating the impact of programs.


Lessons Learned: Many clients are interested in taking an active role in their evaluations, but may not be sure how to do so. MSC is a fairly intuitive approach to collecting and analyzing data that clients and participants can be trained to use. Having project staff interview their own constituents can help to create a high level of comfort for interviewees, allowing them to share more openly. Staff-conducted interviews also give staff a sense of empowerment in collecting data. The MSC approach also includes a participatory approach to analyzing the data. In this way, the methodology can be a capacity-building process in and of itself, supporting project staff to learn new and innovative monitoring and evaluation techniques that can be integrated into their own work once the external evaluators leave.

Cool Trick: In 2012, Oxfam Canada contracted with PEER Associates to conduct a case study of their partner organization in the Engendering Change (EC) program in Zimbabwe – Matabeleland AIDS Council (MAC). The EC program funds capacity-building of Oxfam Canada’s partner organizations. This program is built around a theory of change that suggests partners become more effective change agents for women’s rights when their organizational structures, policies, procedures, and programming are also more democratic and gender just.

The evaluation employed a case study approach, using MSC methodology to collect stories from MAC staff and their constituents. In this case study, PEER Associates trained MAC staff to conduct the MSC interviews, while the external evaluators documented the interviews with video and/or audio and facilitated discussions on the themes that emerged from those interviews.

Rad Resources:

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Abe Wandersman and I have been working since the last century to help programs achieve outcomes by building capacity for program personnel to use evaluation proactively.  The words “evaluation” and “accountability” scare many people involved in health and human services programs and in education.   They are afraid that evaluation of their program will prove embarrassing or worse and/or they may think the evaluation didn’t really evaluate their program.   Empowerment evaluation (EE) has been devoted to demystifying evaluation and putting the logic and tools of evaluation into the hands of practitioners so that they can proactively plan, implement, self-evaluate, continuously improve the quality of their work, and thereby increase the probability of achieving outcomes.

Lesson Learned: Accountability does not have to be relegated solely to “who is to blame” after a failure occurs, e.g., problems in the U.S. government’s initial rollout of the health insurance website (and Secretary of Health and Human Services Kathleen Sebelius’ resignation) and the Veterans Administration scandal (and Secretary Shinseki’s resignation). It actually makes sense to think that individuals and organizations should be proactive and strategic about their plans, implement the plans with quality, and evaluate whether or not the time and resources spent led to outcomes. It is logical to want to know why certain things are being done and others are not, what goals an organization is trying to achieve, that the activities are designed to achieve the goals, that a clear plan is put into place and carried out with quality, and that there be an evaluation to see if it worked. EE can provide funders, practitioners, evaluators, and other key stakeholders with a results-based approach to accountability that helps them succeed.

Hot Tip: I am very pleased to let you know that in September 2014, there will be a new EE book: Empowerment Evaluation: Knowledge and Tools for Self-Assessment, Evaluation Capacity Building, and Accountability (Sage, Second Edition), edited by Fetterman, Kaftarian, & Wandersman. Several chapters are authored by community psychologists: Langhout and Fernandez describe EE conducted by fourth and fifth graders; Imm et al. write about the SAMHSA service-to-science program that brings practice-based programs to reach evidence-based criteria; Haskell and Iachini describe empowerment evaluation in charter schools to reach educational impacts; Chinman et al. describe a decade of research on the Getting To Outcomes® accountability approach; Suarez-Balcazar, Taylor-Ritzler, & Morales-Curtin describe their work on building evaluation capacity in a community-based organization; and Lamont, Wright, Wandersman, & Hamm describe the use of practical implementation science in building quality implementation in a district school initiative integrating technology into education.

Rad Resources:

The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

