AEA365 | A Tip-a-Day by and for Evaluators

TAG | Appreciative Inquiry

Hi All! I am Kirsten Mulcahy, an evaluator at the economic-consulting firm Genesis Analytics, based in South Africa.

As evaluators, we are often called to insert ourselves seamlessly into different countries, cultures and organisations without contributing to bias. Yet we must still engage appropriately with prevailing perspectives in order to extract useful information. In two projects, our evaluation team used an Appreciative Inquiry (AI) technique to help overcome hindering organisational cultures in entities in Bosnia and Herzegovina (BiH) and South Africa (SA). In both organisations, the narrative of change was steeped in negativity – in BiH due to fatigue with monitoring and results measurement (MRM) systems, and in SA due to external influence and poor performance within the government organisation.

Lessons Learned:

  • AI is an action science that moves from theory into the creative, and from scientific rules into social constructions of shared meaning. Using this participatory, strength-based approach helped us challenge the existing organisational discourse and achieve improved buy-in and creative, actionable solutions for both projects.
  • The language used influences the extent of the response. We have found that deficit-based language elicits much shorter, closed responses, while positive framing yields lengthier, more insightful and balanced replies. In the SA AI session, actively seeking the positive actually yielded uninhibited input on challenges and failures.
  • AI is designed as a 4-D model (Discovery, Dream, Design and Destiny), but when using AI in an evaluation we found it more useful to focus energy on Discovery and Dream, with a lesser focus on Design and perhaps not unpacking Destiny until later (if at all).
  • The AI discussion findings should be used to develop the evaluation framework. For example, in BiH decision-making and learning emerged as two critical components to research. Exploring these components improved the relevance, focus and practicality of our recommendations, thus improving the likelihood of future utilisation.

Hot Tips:

  • Make your intention for the session clear: it shouldn’t be a secret that you are following a positive, appreciative approach.
  • The AI session should be held after the theory of change workshop: the organisation’s team is then already aligned in vision and can begin unpacking how to achieve their ‘best selves’.
  • Make the sessions as visual and interactive as possible: understand that introverts and extroverts engage differently in group situations, and incorporate a combination of pair-based and group activities.
  • This paper is part of the AEA Evaluation 2017 conference panel Learning to Action across International Evaluation: Culture and Community Perspectives, scheduled for 16:30 on 9 November 2017 under the International and Cross-Cultural Evaluation topical interest group (TIG).

Rad Resources:

  • For the philosophers, looking to understand the origins: here
  • For the pragmatists, looking to apply AI in evaluation: article, book and website
  • For the millennials, looking for a summary: here

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings! I’m Galen Ellis, President of Ellis Planning Associates Inc., which has long specialized in participatory planning and evaluation services. In online meeting spaces, we’ve learned to facilitate group participation that – in the right circumstances – can be even more meaningful than in person. But we had to adapt.

Although I knew deep inside that our clients would benefit from online options, I couldn’t yet imagine creating the magic of a well-designed group process in the virtual environment. Indeed, we stepped carefully through various minefields before reaching gold.

As one pioneer observes,

Just because you’re adept at facilitating face-to-face meetings, don’t assume your skills are easily transportable. The absence of visual cues and the inability to discern the relative level of engagement makes leading great virtual meetings infinitely more complex and challenging. Assume that much of what you know about leading great meetings is actually quite irrelevant, and look for ways to learn and practice needed skills (see Settle-Murphy below).

We can now engage groups online in facilitation best practices such as ToP methods and Appreciative Inquiry, and in group engagement processes such as logic model development, focus groups, consensus building, and other collaborative planning and evaluation methods (see our video demonstration).

Lessons Learned:

  • Everyone participates. Skillfully designed and executed virtual engagement methods can be more effective in engaging the full group than in-person ones. Some may actually prefer this mode: one client noted that a virtual meeting drew out participants who had been typically silent in face-to-face meetings.
  • Software platforms come with their own sets of strengths and weaknesses. The simpler ones often lack interactive tools, while the ones that allow interaction tend to be more costly and complex.
  • Tame the technical gremlins. Participants without suitable levels of internet speed, technological experience, or hardware—such as microphoned headsets—will require additional preparation. Meeting hosts need to know ahead of time what sorts of devices and internet access participants will be using. Participants should always be invited into the meeting space early for technical troubleshooting.
  • Don’t host it alone. One host can produce the meeting (manage layouts, video, etc.) while another facilitates.
  • Plan and script it. Virtual meetings require a far more detailed script than a simple agenda. Indicate who will do and say what, and when.
  • Practice, practice, practice. Run through successive drafts of the script with the producing team.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings from Maryland! I am Tessie Catsambas of EnCompass LLC, an evaluation, leadership, and organizational development organization. How easy it is to get lost in a client’s maze of politics, anxiety, stress, and complexities! And how easy it is to step on toes you did not even know were there! This post discusses strategies not only for managing competing priorities and sensitivities but, paradoxically, for staying independent by stepping constructively into the winds of controversy and using that controversy to do better evaluation.

Lesson Learned: Being “independent” does not mean you are without a point of view. “Independence” means transparency about your ethics, assumptions and professional boundaries, and a commitment to honesty.

Hot Tip – being “appreciative”: A client is typically made up of different points of view and agendas, and that is fine! Help everyone appreciate each perspective and understand its origins. Others’ interpretations of what is going on and what things mean will make you and the whole group smarter. As they talk, they are already benefiting from the evaluation process you have created.

Hot Tip – appropriate process: There are many tools for creating appropriate participation in evaluation. I like to use Appreciative Inquiry – described in detail in the book Hallie Preskill and I co-authored—but there are many others: success case method, empowerment evaluation tools, structured dialogue, and many creative exercises. (FAQs on the application of Appreciative Inquiry in evaluation in this PDF file.) Do not get cornered fighting other people’s fight—through good processes and tools, first get issues articulated, and then get out of the way, so your client(s) can talk things through and work out differences.

Hot Tip – stay open: You, the visiting evaluator, know very little. Before you rush to create categories and analyze, stay open, and use some of the Soft Systems tools, such as those described by Bob Williams on his webpage, to question assumptions. Open yourself up to different ways of seeing. Develop good and effective questions, because by asking them, you will enable others to perceive more expansively, and to generate more creative recommendations than you could alone.

Hot Tip – care: You can fake a lot of things, but you cannot fake caring, even if you use very sophisticated tools. People know when you care, and they engage with you and the evaluation at a deeper level, in a more trusting and productive way.

Hot Tip – be respectfully honest: It is hard to report on unpleasant findings, but if you do so respectfully, with data and context information, appreciating efforts made, and not blaming, you can provide a useful evaluation report that echoes the voices of diverse agendas and common ground, and helps to forge a constructive way forward.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Want to learn more from Tessie and colleagues? Attend their session “Whom Does an Evaluation Serve? Aligning Divergent Evaluation Needs and Values” at Evaluation 2011.  aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings from beautiful Boise! We are Rakesh Mohan and Bryon Welch from the Idaho legislature’s Office of Performance Evaluations.

Last February, the Idaho legislature asked us to evaluate the state’s new system for processing Medicaid claims. Legislators had received many constituent complaints that claims from Medicaid providers were being denied, delayed, or inaccurately processed. Legislators were beginning to question whether the new system would ever perform as intended.

The $106 million system went live in July 2010 and immediately began experiencing problems. At the time of our review, over 23,000 providers were enrolled in the system, which was processing about 150,000 claims each week.

Lessons Learned: Our review found that problems with processing provider claims were the result of unclear contract requirements, a lack of system readiness, and most importantly, the absence of adequate end user participation. Less than one percent of total providers were selected for a pilot test, but neither the state administrators nor the contractor knew how many claims were actually pilot tested. Further, only about 50 percent of the providers were enrolled when the system went live.

Hot Tip: If you are ever asked to evaluate the implementation of a large IT system that is experiencing problems, make sure you examine the end user involvement in the system’s design and implementation. Too often end user feedback is underappreciated, not used, or completely ignored.

Lessons Not Learned: Nearly ten years ago, Idaho attempted to implement a similar IT system to track student information for K-12 public schools. After about $24 million had been spent, the project was terminated due to undelivered promises and a lack of buy-in from end users. Unfortunately, lessons identified in our evaluation of the failed student information system were apparently not learned by the people responsible for this new Medicaid claims processing system.

Hot Tip: Because the success of an IT system depends on end user buy-in, ask the following questions when evaluating the implementation of large IT systems:

1. Are end users clearly identified?
2. Are end user needs identified and incorporated into system objectives?
3. Do vendors clearly specify how their solutions/products will address system objectives and end user needs?
4. Is there a clear method for two-way communication between system managers and end users with technical expertise?
5. Is there a clear method for regularly updating end users on changes and progress?

The American Evaluation Association is celebrating GOV TIG Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOV TIG members and you can learn more about their work via the Government TIG sessions at AEA’s annual conference. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Ariana Brooks and I am the Director of Evaluation, Research and Planning for HeartShare Human Services.

Lesson Learned: When I started as an internal evaluator, my supervisor, Stan Capela, stressed one main point: evaluation does not solve management problems. My initial reaction was that it made sense, and I remembered similar issues being discussed in graduate school. But I did not fully grasp the meaning until I was performing my job responsibilities and each report was producing similar results. At first I was naively shocked at the level of resistance from some managers. We were well versed in Patton’s Utilization-Focused approach, so we focused on providing meaningful reports; yet there was resistance even though we would repeatedly tell managers that the “numbers don’t lie”.

Lesson Learned: As a social psychologist, I reflected on various theories that helped explain their behavior. Of course, people interpret stimuli based on their own perspective. People are motivated to preserve a positive sense of self and are more resistant to counterattitudinal messages, especially if they are highly invested in the issue (e.g., their job). So it made sense that when an internal audit illustrates that a program’s deficiencies have more to do with supervision or program administration, it can be hard for management to swallow.

Although it is frustrating when management’s resistance to change can reduce the utility of evaluation work, it is fascinating to see how the theories I studied play out in an organization. Borrowing from evaluation and social psychology theories, here are some tips that helped me combat and understand resistance:

  • Hot tip: Think about the source of the message, or evaluation results. The source should be respected, seen as having expertise, trusted and viewed as an in-group member (someone also invested in the program or in a similar role).
  • Rad Resource: The appreciative inquiry approach to evaluating programs has been met with great success. Managers are more willing to be involved and use evaluation results when they carry a more positive tone. Focusing on management’s strengths to overcome program challenges has proved to be a more useful approach. A great resource online is: http://appreciativeinquiry.case.edu/
  • Hot tip: Avoid any language that seems targeted towards certain individuals, roles or positions. Make the responsibility of overcoming challenges a group effort, including the evaluator.
  • Hot tip: Take a sign of defensiveness as a positive. Often it signals that staff are truly invested in the program and their work. Directing this energy toward more productive ends can be a struggle, but it is rewarding in the long run.


The American Evaluation Association is celebrating GOV TIG Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOV TIG members and you can learn more about their work via the Government TIG sessions at AEA’s annual conference. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Sue Hunter, a librarian and the Planning & Evaluation Coordinator with the National Network of Libraries of Medicine (NN/LM) Middle Atlantic Region (MAR) (http://nnlm.gov/mar/) at New York University Langone Medical Center, and Cindy Olney, Evaluation Specialist with the Outreach Evaluation Resource Center, NN/LM. Funded through the National Library of Medicine, NN/LM is a nationwide program for a network of health sciences libraries and information centers (called “network members”) with the goal of advancing the progress of medicine and improving public health through equal access to health information. The MAR supports network members in Delaware, New Jersey, New York, and Pennsylvania.

We embarked on a project of focus groups using Appreciative Inquiry to obtain feedback from network members on the NN/LM MAR program, with the purpose of involving them in the MAR’s development of a 5-year contract proposal. The focus groups were conducted by staff who work in the NN/LM MAR program. Due to a short timeline, they were conducted online using Adobe Connect web conferencing software. The Appreciative Inquiry method was selected because its format allows network members to focus their discussion on what is valuable to them within the realm of the MAR programs and services.

Hot Tip: Appreciative Inquiry is a useful tool for generating affirmative discussions in a focus group. Participants were able to describe peak experiences they had with the MAR program and services, and to pose concrete suggestions based on those experiences for future development in the MAR. We got the exact type of information we needed for our proposal, without a lot of “off-topic” discussion, allowing us to analyze the findings quickly and put them to use. The questions, which generated affirmative discussion, allowed for a comfortable and honest exchange between network members and the staff.

Lesson Learned: The focus groups were conducted by the MAR staff. This allowed all staff to be included in the process, and staff members obtained immediate feedback about their program areas directly from network members. The interview guide was simple and straightforward, so that even staff with minimal evaluation experience could participate.

Rad Resource: Adobe Connect web conferencing software. We conducted focus groups online using Adobe Connect, which has a built-in audio recorder. Sound quality is good, and the playback and pause options made transcription fairly easy. Conducting the focus groups online was convenient for the facilitator and participants. Adobe Connect is not a free tool, but one can request a free trial to explore its many options. http://www.adobe.com/products/acrobatconnectpro/

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Sue and Cindy? They’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.

