AEA365 | A Tip-a-Day by and for Evaluators

Category: Qualitative Methods

Greetings! I’m Galen Ellis, President of Ellis Planning Associates Inc., which has long specialized in participatory planning and evaluation services. In online meeting spaces, we’ve learned to facilitate group participation that – in the right circumstances – can be even more meaningful than in person. But we had to adapt.

Although I knew deep inside that our clients would benefit from online options, I couldn't yet imagine creating the magic of a well-designed group process in the virtual environment. Indeed, we stepped carefully through several minefields before striking gold.

As one pioneer observes,

Just because you’re adept at facilitating face-to-face meetings, don’t assume your skills are easily transportable. The absence of visual cues and the inability to discern the relative level of engagement makes leading great virtual meetings infinitely more complex and challenging. Assume that much of what you know about leading great meetings is actually quite irrelevant, and look for ways to learn and practice needed skills (see Settle-Murphy below).

We can now engage groups online in facilitation best practices such as ToP methods and Appreciative Inquiry, and in group engagement processes such as logic model development, focus groups, consensus building, and other collaborative planning and evaluation methods (see our video demonstration).

Lessons Learned:

  • Everyone participates. Skillfully designed and executed virtual engagement methods can be more effective in engaging the full group than in-person ones. Some may actually prefer this mode: one client noted that a virtual meeting drew out participants who had been typically silent in face-to-face meetings.
  • Software platforms come with their own strengths and weaknesses. The simpler ones often lack interactive tools, while those that do allow interaction tend to be more costly and complex.
  • Tame the technical gremlins. Participants without suitable levels of internet speed, technological experience, or hardware—such as microphoned headsets—will require additional preparation. Meeting hosts need to know ahead of time what sorts of devices and internet access participants will be using. Participants should always be invited into the meeting space early for technical troubleshooting.
  • Don’t host it alone. One host can produce the meeting (manage layouts, video, etc.) while another facilitates.
  • Plan and script it. Virtual meetings require a far more detailed script than a simple agenda. Indicate who will do and say what, and when.
  • Practice, practice, practice. Run through successive drafts of the script with the producing team.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Myia Welsh, an independent consultant working with nonprofit and community organizations. Much of my work is done with organizations that provide services to survivors of human trafficking. What’s that, you ask? Trafficking is any enterprise where someone makes a profit from the exploitation of another by force, fraud or coercion. Just like the sale of drugs or weapons, the sale of humans occurs both in the U.S. and around the world. Find out more about human trafficking here.

Lesson Learned: Conducting evaluation with these organizations has required me to learn my way around engaging trauma survivors in evaluation – especially in focus groups. Focus groups with trauma survivors can be challenging if you don't know what to expect. They require slightly different planning and facilitation skills. I recommend the following preparations:

  • Understand what you’re dealing with. Do some reading on trauma, so that you know how to recognize dynamics in the room.
  • Review your protocol for trigger questions. Stick with what’s essential to the evaluation.
  • Consult knowledgeable stakeholders to help you anticipate potential harm, and brainstorm with them about how to avoid it.
  • Be prepared for an emotional response, and have a plan to handle it with respect and support. An abrupt or uncomfortable response from the facilitator could silence participants. So, check your reactions. Have tissues ready in case of tears and tactile toys/objects around to help manage anxiety.
  • Make safety a factor in your planning: Where will this group feel safe? Physical space and location should be taken into consideration. Will bringing additional note takers or co-facilitators into the situation enhance or threaten perceived safety?
  • Check your facilitation practices. In most focus groups, a zoned-out participant would be prompted to participate. With a group of trauma survivors, this might be a signal that the reflection brought on by the discussion is becoming overwhelming. Have a plan ready so that you can recognize it and continue without disruption. Consider a non-verbal cue that you set up at the beginning, such as a colored index card: a participant can set their card on the table as a signal that the discussion is getting tough. Make sure everyone knows that they can step away if they need to.
  • What’s your wrap-up plan? Have a strategy ready for ending in a positive way, soothing the emotions that may have emerged. Guide discussion to future hopes or recent accomplishments.

Lesson Learned: Even if it might be emotional or messy, service recipients are key stakeholders whose voices cannot be left out of an evaluation.


Hi, I’m Lisa Melchior, President of The Measurement Group LLC, a consulting firm focused on the evaluation of health and social services for at-risk and vulnerable populations. In response to Sheila B. Robinson’s recent post that reported what AEA 365 readers said they want to see in 2015, I’m writing about developing, sharing, and storing lessons learned from evaluation. Although this is written from the perspective of evaluation at the initiative level, it could also apply to lessons learned by an individual program.

The United Nations Environment Programme gives a useful definition of lessons learned as “knowledge or understanding gained from experience.” In a grant initiative, lessons learned might address ways to implement the projects supported through that initiative; strategies for overcoming implementation problems; best practices for conducting services (whether or not the projects employed all of them); strategies for involving key stakeholders to optimize the outcomes of the projects and their sustainability; and ideas for future directions. Statements of lessons learned are an important outcome of any grants initiative; the richness and complexity of those statements can be, in part, an indicator of the overall success of the initiative. Funders often utilize the lessons learned by their grantees to inform the development of future investments.

Hot Tips:

Developing lessons learned. If possible, work with the funder to collect examples of lessons learned using the funder’s progress reporting mechanism. When the evaluator has access to such reports, qualitative approaches can be used to catalog and identify themes among the lessons learned. Another benefit of integrating the documentation of lessons learned into ongoing programmatic reporting is that trends over the life of a project or initiative can emerge, since many initiatives request this type of information from grantees on a semi-annual or quarterly basis. Active collaboration between funder and evaluator is key to this approach.
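
When the evaluator has access to coded lessons learned in a spreadsheet or an export from a qualitative analysis package, even a rough tally of theme codes by reporting period can make those trends visible. Below is a minimal, purely illustrative Python sketch; the field names, reporting periods, and theme labels are hypothetical placeholders for whatever coding scheme a given evaluation actually uses.

    # Tally hand-coded "lessons learned" themes by reporting period to surface trends.
    # The records below are hypothetical; in practice they would come from a
    # spreadsheet or an export from a qualitative analysis package.
    from collections import Counter, defaultdict

    coded_lessons = [
        {"period": "2014-H1", "theme": "stakeholder engagement"},
        {"period": "2014-H1", "theme": "staffing"},
        {"period": "2014-H2", "theme": "stakeholder engagement"},
        {"period": "2014-H2", "theme": "sustainability"},
    ]

    # Count how often each theme appears within each reporting period.
    themes_by_period = defaultdict(Counter)
    for lesson in coded_lessons:
        themes_by_period[lesson["period"]][lesson["theme"]] += 1

    for period in sorted(themes_by_period):
        print(period, dict(themes_by_period[period]))

A tally like this is no substitute for reading the lessons themselves, but it flags which themes recur from one reporting cycle to the next.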

Sharing lessons learned. Don’t wait until the end of a project to share lessons learned! Stakeholders can benefit from lessons learned in early implementation. For example, my colleagues and I highlighted interim outcomes and lessons learned during the first three years of the Archstone Foundation’s five-year Elder Abuse and Neglect Initiative in an article in the Journal of Elder Abuse and Neglect.

In a more summative mode, toolkits are a useful vehicle for sharing lessons learned with those interested in possible replication of a particular program, model, or initiative. Social media and blogs are great for more informal sharing.

Storing lessons learned. Qualitative data tools such as NVivo are invaluable for organizing lessons learned.



Hi, I’m Nora F. Murphy, a developmental evaluator and co-founder of TerraLuna Collaborative. Qualitative Methods have been a critical component of every developmental evaluation I have been a part of. Over the years I’ve learned a few tricks about making qualitative methods work in a developmental evaluation context.

Hot Tip: Apply systems thinking. When using developmental evaluation to support systems change, it's important to apply systems thinking. When thinking about the evaluation's design and methods, I am always asking: Where are we drawing the boundaries in this system? Whose perspectives are we seeking to understand? What are the important inter-relationships to explain? And who benefits or is excluded by the methods that I choose? Qualitative methods can be time and resource intensive, and we can't understand everything about systems change. But it's important, from a methodological and ethical perspective, to be intentional about where we draw the boundaries, whose perspectives we include, and which inter-relationships we explore.

Hot Tip: Practice flexible budgeting. I typically budget for qualitative inquiry but create the space to negotiate the details of that inquiry. In one project I budgeted for qualitative inquiry that would commence six months after the contract was finalized. It was too early to know how the strategy would develop and what qualitative method would be best for learning about the developing strategy. In the end we applied systems thinking and conducted case studies that looked at the developing strategy in three ways: from the perspective of individual educators' transformation, from the perspective of educators participating in school change, and from the perspective of school leaders leading school change. It would have been impossible to predict that this was the right inquiry for the project at the time the budget was developed.

Hot Tip: Think in layers. The pace of developmental evaluations can be quick, and there is a need for timely data and for spotting patterns as they emerge. But often there is also a need for a deeper look at what is developing, using a method that takes more time. So I think in layers. With the case studies, for example, we structured the post-interview memos so they could be used with the program developers to spot emergent patterns, framing the memos around pattern-surfacing prompts such as: "I was surprised… A new concept for me was… This reinforced for me… I'm wondering…" The second layer was sharing individual case studies. The third layer was the cross-case analysis that surfaced deeper themes. Throughout, we engaged various groups of stakeholders in the meaning making and pattern spotting.

Rad Resources:

The American Evaluation Association is celebrating Developmental Evaluation Week. The contributions all this week to aea365 come from evaluators who do developmental evaluation.


Hello, I’m Eric Barela, another of the co-leaders of the Qualitative Methods TIG, and a co-editor with Leslie Goodyear, Jennifer Jewiss, and Janet Usinger of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

In my time as an evaluator, I have noticed that discussions of methodology with clients can take on several forms. Most often, clients are genuinely interested in knowing how I collected and analyzed my data and why I made the methodological choices I did. However, clients have occasionally tried to use what I like to call "methodological red herrings" to dispute less-than-positive findings. I once worked with a client who disagreed with my findings because they were not uniformly positive. She accused me of analyzing only the data that would show the negative aspects of her program. I was able to show the codebook I had developed and how I had built the thematic content of the report from my data analysis, which she was not expecting. I was able to defend my analytic process and get the bigwigs in the room to understand that, while there were some aspects of the program that could be improved, there were also many positive things happening. The happy ending is that the program continued to be funded, in part because of my client's efforts to discredit my methodological choices!

Lesson Learned: Include a detailed description of your qualitative inquiry process in evaluation reports. I include it as an appendix so it’s there for clients who really want to see it. It can take time to write a detailed account of your qualitative data collection and analysis processes, but it will be time well spent!

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation, and using detailed descriptions of qualitative inquiry choices and processes, can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation.


My name is Michael Quinn Patton. I train evaluators in qualitative evaluation methods and analysis. Qualitative interviews, open-ended survey questions, and social media entries can yield massive amounts of raw data. Course participants ask: "How can qualitative data be analyzed quickly, efficiently, and credibly to provide timely feedback to stakeholders? How do everyday program evaluators engaged in ongoing monitoring handle analyzing lots of qualitative responses?"

Hot Tip: Focus on priority evaluation questions. Don’t think of qualitative analysis as including every single response. Many responses aren’t relevant to priority evaluation questions. Like email you delete immediately, skip irrelevant responses.

Hot Tip: Group together participants' responses that answer the same evaluation question, even if the responses come from different items in the interview or survey. Evaluation isn't item-by-item analysis for the sake of analysis. It's analysis to provide answers to important evaluation questions. Analyze and report accordingly.

Hot Tip: Judge substantive significance. Qualitative analysis has no statistical significance test equivalent. You, the evaluation analyst, must determine what is substantively significant. That’s your job. Make judgments about merit, worth, and significance of qualitative responses. Own your judgments.

Hot Tip: Keep qualitative analysis first and foremost qualitative. Ironically, the adjectives "most," "many," "some," or "a few" can be more accurate than a precise number. It's common to have responses that could be included or omitted, thus changing the number. Don't add a quote to a category just to increase the number; add it because it fits. When I code 12 of 20 participants as saying something, I'm confident reporting that "many" said it. It could have been 10 or 14, depending on the coding, but it was definitely many.
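
To make the grouping tip and the quantifier tip above concrete, here is a minimal, purely illustrative Python sketch: it groups coded responses by the evaluation question they answer, regardless of which interview or survey item they came from, and then reports a hedged quantifier alongside the count. The sample data, question labels, and threshold cut-points are hypothetical, not a standard; the judgment about which label actually fits remains the analyst's.

    # Group coded responses by evaluation question and report qualitative quantifiers.
    from collections import defaultdict

    # Hypothetical coded responses; each is tagged with the evaluation question
    # it answers, whichever item it originally came from.
    coded_responses = [
        {"eval_question": "Q1: barriers to participation", "respondent": "R01"},
        {"eval_question": "Q1: barriers to participation", "respondent": "R02"},
        {"eval_question": "Q2: perceived benefits", "respondent": "R01"},
    ]

    TOTAL_RESPONDENTS = 20  # hypothetical number of people interviewed

    def quantifier(count, total):
        """Translate a count into a hedged label; cut-points here are illustrative only."""
        share = count / total
        if share >= 0.75:
            return "most"
        if share >= 0.5:
            return "many"
        if share >= 0.25:
            return "some"
        return "a few"

    # Collect the distinct respondents who addressed each evaluation question.
    respondents_per_question = defaultdict(set)
    for response in coded_responses:
        respondents_per_question[response["eval_question"]].add(response["respondent"])

    for question, respondents in respondents_per_question.items():
        n = len(respondents)
        print(question, "-", quantifier(n, TOTAL_RESPONDENTS), f"({n} of {TOTAL_RESPONDENTS})")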

Cool Trick: Watch for interocular findings, the comments, feedback, and recommendations that hit us between the eyes. The "how many said that" question can distract from prioritizing substantive significance. One particularly insightful response may prove more valuable than lots of general comments. If 2 of 15 participants said they were dropping out because of sexual harassment, that's "only" 13%. But any sexual harassment is unacceptable. The program has a problem.

Lesson Learned: Avoid laundry-list reporting. Substantive significance is not about how many bulleted items you report. It's about the quality, substantive significance, and utility of findings.

Lesson Learned: Practice analysis with colleagues. Like anything, you can up your game with practice and feedback, increasing speed, quality, and confidence.


Rad Resources:

  • Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.). (2014). Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass.
  • Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage Publications.

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation.


Hello from snowy Boston! I’m Leslie Goodyear, one of the co-leaders of the Qualitative Methods TIG, and a co-editor, with Jennifer Jewiss, Janet Usinger and Eric Barela, of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

When I was a new evaluator, I had a major "a-ha experience" while interviewing a group of women who participated in an HIV/AIDS training for parents. They were bilingual Spanish-English speakers, and I was definitely the least fluent in Spanish in the room. As they discussed ways in which HIV could be transmitted, one woman referred to a specific sexual activity in Spanish, and all the others laughed and laughed. But I didn't know for sure what they meant; I had an idea, but I wasn't sure. Of course, I laughed along with them, but wondered what to do: ask them to define the term (and break the momentum), or go with the flow and not be sure what they were talking about? Well, I decided I'd better ask. When I did, and the woman said what she meant, another woman said, "Oh, no! That's not what it means!" She went on to explain, and the next woman said she thought it meant something else. And on and on with each woman! It turns out that none of them agreed on the term, but they all thought they knew what it was.

Lesson Learned: Ask stupid questions! I was worried I would look stupid when I asked them to explain. In fact, we all learned something important, not only about the term itself but also about how easily we can think we all agree on something when, without clarification, we can't know for sure.

Lesson Learned: Putting aside ego and fear are critical to getting good information in qualitative evaluation. Often, stupid questions open up dialogue and understanding. Sometimes they just clarify what’s being discussed. Other times, even though you might already know the answer, they give participants an important opportunity to share their perspectives in greater depth.

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation, and asking stupid questions, can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation.


Hi, I’m Janet Usinger, another of the co-leaders of the Qualitative Methods TIG, and a co-editor with Leslie Goodyear, Jennifer Jewiss, and Eric Barela of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The process of interviewing participants in an evaluation shares a few characteristics with counseling sessions. Establishing rapport between the interviewer and interviewee is essential to gathering meaningful data. Evaluators generally enter the interview session with confidence that a constructive conversation can be launched quickly. There are times, however, when the evaluator finds him- or herself at odds with what the interviewee is saying. Sometimes the tension arises from a philosophical difference of opinion; other times, it is just that the two individuals do not particularly like each other. I have had several experiences interviewing adolescents (and adults) who simply pushed my buttons. Yet removing the individual from the study was inappropriate and counterproductive to the goals of the evaluation.

Hot Tip: Put on your interviewer hat. Your responsibility is to understand the situation from the interviewee’s perspective, not get caught up in your feelings about their statements.

Hot Tip: Be intensely curious about why the person holds the particular view. This can shift the focus in a constructive direction and deepen your understanding of the interviewee’s underlying experiences and perspectives of the issue at hand.

Hot Tip: Leave your ego at the door. Remember, it is their story, not yours.

Lesson Learned: Once I took my feelings out of the equation, interviews with people with whom I do not click have become some of the most meaningful interviews I’ve conducted. This is not necessarily easy, and I generally need to have a little private conversation with myself before the interview. However, once I do, I am able to dig deeper in trying to understand their perspectives, frustrations, and worldviews.

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation.

My name is Michael Quinn Patton and I am an independent evaluation consultant. Development of more-nuanced and targeted purposeful sampling strategies has increased the utility of qualitative evaluation methods over the last decade. In the end, whatever conclusions we draw and judgments we make depend on what we have sampled.

Hot Tip: Make your qualitative sampling strategic and purposeful — the criteria of qualitative excellence.

Hot Tip: Convenience sampling is neither purposeful nor strategic. Convenience sampling means interviewees are selected because they happen to be available, for example, whoever happens to be around a program during a site visit. While convenience and cost are real considerations, first priority goes to strategically designing the sample to get the most information of greatest utility from the limited number of cases selected.

Hot Tip: Language matters. Both terms, purposeful and purposive, describe qualitative sampling. My work involves collaborating with non-researchers who say they find the term purposive academic, off-putting, and unclear. So stay purposeful.

Hot Tip: Be strategically purposeful. Some label qualitative case selection "nonprobability sampling," making explicit the contrast with probability sampling. This defines qualitative sampling by what it is not (nonprobability) rather than by what it is (strategically purposeful).

Hot Tip: A purposefully selected rose is still a rose. Because the word "sampling" is associated in many people's minds with random probability sampling (generalizing from a sample to a population), some prefer to avoid the word sampling altogether in qualitative evaluations and simply refer to case selection. As always in evaluation, use terminology and nomenclature that is understandable and meaningful to primary intended users in their context.

Hot Tip: Watch for and resist denigration of purposeful sampling. One international agency stipulates that purposeful samples can only be used for learning, not for accountability or public reporting on evaluation of public sector operations. Only randomly chosen representative samples are considered credible. This narrow view of purposeful sampling limits the potential contributions of strategically selected purposeful samples.

Cool Trick: Learn purposeful sampling options. Forty options (Patton, 2015, pp. 266-272) mean there is a sampling strategy for every evaluation purpose.

Lesson Learned: Be strategic and purposeful in all aspects of evaluation design, especially qualitative case selection.

Rad Resources:

  • Patton, M. Q. (2014). Qualitative inquiry in utilization-focused evaluation. In L. Goodyear, J. Jewiss, J. Usinger, & E. Barela (Eds.), Qualitative inquiry in evaluation: From theory to practice (pp. 25-54). Jossey-Bass.
  • Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage Publications.
  • Patton, M. Q. (2014). Top 10 Developments in Qualitative Evaluation for the Last Decade.


The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation.

Hi again – Leslie Goodyear, Jennifer Jewiss, Janet Usinger, and Eric Barela, the co-leaders of the AEA Qualitative Methods TIG, back with another lesson we learned as we co-edited a book that explores how qualitative inquiry and evaluation fit together. Our last blog focused on the five elements of quality in qualitative evaluation. Underpinning these five elements is a deep understanding and consideration of context.

Lesson Learned: Context includes the setting, program history, and programmatic values and goals. It also includes the personalities of and relationships among the key stakeholders, along with the cultures in which they operate. In their chapter on competencies for qualitative evaluators, Stevahn and King describe this understanding as a sixth sense.

Lesson Learned: Understanding context was one of the driving forces in the early adoption of qualitative inquiry in evaluation. In their book chapter, Schwandt and Cash discuss how the need to explain outcomes – and therefore better understand program complexities and the experiences of participants – drove evaluators to employ qualitative inquiry in their evaluations.

Lesson Learned: Understanding context is not always highlighted in descriptions of high quality evaluations, perhaps because it is a basic assumption of effective evaluators who use qualitative inquiry in their practice.

Rad Resource: Further discussion about the importance of understanding context appears in several chapters of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass), an edited volume featuring many of our field's experts on qualitative evaluation.

 

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation.

