AEA365 | A Tip-a-Day by and for Evaluators

Category: Qualitative Methods

Hi, I’m Nora F. Murphy, a developmental evaluator and co-founder of TerraLuna Collaborative. Qualitative Methods have been a critical component of every developmental evaluation I have been a part of. Over the years I’ve learned a few tricks about making qualitative methods work in a developmental evaluation context.

Hot Tip: Apply systems thinking. When using developmental evaluation to support systems change, it's important to apply systems thinking. When thinking about the evaluation's design and methods, I am always asking: Where are we drawing the boundaries in this system? Whose perspectives are we seeking to understand? What are the important inter-relationships to explain? And who benefits or is excluded by the methods that I choose? Qualitative methods can be time and resource intensive, and we can't understand everything about systems change. But it's important, from a methodological and ethical perspective, to be intentional about where we draw the boundaries, whose perspectives we include, and which inter-relationships we explore.

Hot Tip: Practice flexible budgeting. I typically budget for qualitative inquiry but create the space to negotiate the details of that inquiry. In one project I budgeted for qualitative inquiry that would commence six months after the contract was finalized. It was too early to know how the strategy would develop and which qualitative method would be best for learning about the developing strategy. In the end we applied systems thinking and conducted case studies that looked at the developing strategy in three ways: from the perspective of individual educators' transformation, from the perspective of educators participating in school change, and from the perspective of school leaders leading school change. It would have been impossible to predict that this was the right inquiry for the project at the time the budget was developed.

Hot Tip: Think in layers. The pace of developmental evaluations can be quick, and there is a need for timely data and for spotting patterns as they emerge. But often there is also a need for a deeper look at what is developing, using a method that takes more time. So I think in layers. With the case studies, for example, we structured the post-interview memos so they could be used with the program developers to spot emergent patterns, framing the memos around pattern-surfacing prompts such as: “I was surprised… A new concept for me was… This reinforced for me… I’m wondering…” The second layer was sharing individual case studies. The third layer was the cross-case analysis that surfaced deeper themes. Throughout, we engaged various groups of stakeholders in the meaning making and pattern spotting.

Rad Resources:

The American Evaluation Association is celebrating Developmental Evaluation Week. The contributions all this week to aea365 come from evaluators who do developmental evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Eric Barela, another of the co-leaders of the Qualitative Methods TIG, and a co-editor with Leslie Goodyear, Jennifer Jewiss, and Janet Usinger of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

In my time as an evaluator, I have noticed that discussions of methodology with clients can take on several forms. Most often, clients are genuinely interested in knowing how I collected and analyzed my data and why I made the methodological choices I did. However, clients have occasionally tried to use what I like to call “methodological red herrings” to dispute less-than-positive findings. I once worked with a client who disagreed with my findings because they were not uniformly positive. She accused me of analyzing only the data that would show the negative aspects of her program. I was able to show the codebook I had developed and how I went about developing the thematic content of the report based on my data analysis, which she was not prepared for me to do. I was able to defend my analytic process and get the bigwigs in the room to understand that, while there were some aspects of the program that could be improved, there were also many positive things happening. The happy ending is that the program continued to be funded, in part because of my client’s efforts to discredit my methodological choices!

Lesson Learned: Include a detailed description of your qualitative inquiry process in evaluation reports. I include it as an appendix so it’s there for clients who really want to see it. It can take time to write a detailed account of your qualitative data collection and analysis processes, but it will be time well spent!

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation, and using detailed descriptions of qualitative inquiry choices and processes, can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Michael Quinn Patton. I train evaluators in qualitative evaluation methods and analysis. Qualitative interviews, open-ended survey questions, and social media entries can yield massive amounts of raw data. Course participants ask: “How can qualitative data be analyzed quickly, efficiently, and credibly to provide timely feedback to stakeholders? How do everyday program evaluators engaged in ongoing monitoring handle analyzing lots of qualitative responses?”

Hot Tip: Focus on priority evaluation questions. Don’t think of qualitative analysis as including every single response. Many responses aren’t relevant to priority evaluation questions. Like email you delete immediately, skip irrelevant responses.

Hot Tip: Group together participants’ responses that answer the same evaluation question, even if the responses come from different items in the interview or survey. Evaluation isn’t item-by-item analysis for the sake of analysis. It’s analysis to provide answers to important evaluation questions. Analyze and report accordingly.
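
For evaluators who wrangle responses in a spreadsheet or a bit of code, here is a minimal sketch of that regrouping idea. It is only an illustration, not something from Patton's post: the item IDs, evaluation questions, and field names are hypothetical. It also reflects the earlier tip by skipping responses that don't bear on a priority question.

```python
from collections import defaultdict

# Hypothetical mapping from instrument items to priority evaluation questions.
ITEM_TO_QUESTION = {
    "survey_q3": "EQ1: How did the program change teaching practice?",
    "interview_q7": "EQ1: How did the program change teaching practice?",
    "survey_q9": "EQ2: What supported or hindered implementation?",
}

def group_by_evaluation_question(responses):
    """responses: list of dicts like {"item": "survey_q3", "text": "..."}."""
    grouped = defaultdict(list)
    for response in responses:
        question = ITEM_TO_QUESTION.get(response["item"])
        if question is None:
            continue  # skip responses that don't bear on a priority question
        grouped[question].append(response["text"])
    return grouped

# Example: responses from different items end up under the same question.
answers = [
    {"item": "survey_q3", "text": "I plan lessons with colleagues now."},
    {"item": "interview_q7", "text": "Coaching changed how I give feedback."},
    {"item": "survey_q12", "text": "Parking was hard to find."},  # not relevant
]
print(group_by_evaluation_question(answers))
```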

Hot Tip: Judge substantive significance. Qualitative analysis has no statistical significance test equivalent. You, the evaluation analyst, must determine what is substantively significant. That’s your job. Make judgments about merit, worth, and significance of qualitative responses. Own your judgments.

Hot Tip: Keep qualitative analysis first and foremost qualitative. Ironically, the adjectives “most,” “many,” “some,” or “a few” can be more accurate than a precise number. It’s common to have responses that could be included or omitted, thus changing the number. Don’t add a quote to a category just to increase the number. Add it because it fits. When I code 12 of 20 saying something, I’m confident reporting that “many” said that. Could have been 10, or could have been 14, depending on the coding. But it definitely was many.
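
If you tally coded responses in a script, here is a toy sketch of translating counts into the kind of qualitative descriptors described above. The thresholds are my own illustrative assumptions, not Patton's rules; the analyst's judgment about fit and substantive significance still governs.

```python
def qualitative_descriptor(count, total):
    """Translate a coded count into a hedged qualitative descriptor.
    Thresholds below are illustrative assumptions only."""
    if total == 0 or count == 0:
        return "none"
    share = count / total
    if share >= 0.75:
        return "most"
    if share >= 0.5:
        return "many"
    if share >= 0.25:
        return "some"
    return "a few"

# 10, 12, or 14 of 20 coded responses all land on "many" -- the point is the
# pattern, not false precision about the exact count.
for count in (10, 12, 14):
    print(count, "of 20 ->", qualitative_descriptor(count, 20))
```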

Cool Trick: Watch for interocular findings — the comments, feedback, and recommendations that hit us between the eyes. The “how many said that” question can distract from prioritizing substantive significance. One particularly insightful response may prove more valuable than lots of general comments. If 2 of 15 participants said they were dropping out because of sexual harassment, that’s “only” 13%. But any sexual harassment is unacceptable. The program has a problem.

Lesson Learned: Avoid laundry-list reporting. What matters is not how many bulleted items you report; it’s the quality, substantive significance, and utility of the findings.

Lesson Learned: Practice analysis with colleagues. Like anything, you can up your game with practice and feedback, increasing speed, quality, and confidence.

Rad Resources:

  • Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.). (2014). Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass.
  • Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage Publications.

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello from snowy Boston! I’m Leslie Goodyear, one of the co-leaders of the Qualitative Methods TIG, and a co-editor, with Jennifer Jewiss, Janet Usinger and Eric Barela, of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

When I was a new evaluator, I had a major “a-ha experience” while interviewing a group of women who participated in an HIV/AIDS training for parents. They were bilingual Spanish-English speakers, and I was definitely the least fluent in Spanish in the room. As they discussed ways in which HIV could be transmitted, one woman referred to a specific sexual activity in Spanish, and all the others laughed and laughed. But I didn’t know for sure what they meant; I had an idea, but I wasn’t sure. Of course, I laughed along with them, but wondered what to do: Ask for them to define the term (and break the momentum)? Go with the flow and not be sure what they were talking about? Well, I decided I’d better ask. When I did, and the woman said what she meant, another woman said, “Oh, no! That’s not what it means!” She went on to explain, and the next woman said she thought it meant something else. And on and on with each woman! It turns out that none of them agreed on the term, but they all thought they knew what it was.

Lesson Learned: Ask stupid questions! I was worried I would look stupid when I asked them to explain. In fact, we all learned something important, both in discussing the term and in realizing that we can all think we agree on something, yet unless it’s clarified, we can’t know for sure.

Lesson Learned: Putting aside ego and fear is critical to getting good information in qualitative evaluation. Often, stupid questions open up dialogue and understanding. Sometimes they just clarify what’s being discussed. Other times, even though you might already know the answer, they give participants an important opportunity to share their perspectives in greater depth.

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation, and asking stupid questions, can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, I’m Janet Usinger, another of the co-leaders of the Qualitative Methods TIG, and a co-editor with Leslie Goodyear, Jennifer Jewiss, and Eric Barela of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The process of interviewing participants in an evaluation shares a few characteristics with counseling sessions. Establishing rapport between the interviewer and interviewee is essential to gathering meaningful data. Evaluators generally enter the interview session with confidence that a constructive conversation can be launched quickly. There are times, however, when the evaluator finds him or herself at odds with what the interviewee is saying. Sometimes the tension arises from a philosophical difference of opinion; other times, it is just that the two individuals do not particularly like each other. I have had several experiences interviewing adolescents (and adults) who simply pushed my buttons. Yet removing the individual from the study was inappropriate and counterproductive to the goals of the evaluation.

Hot Tip: Put on your interviewer hat. Your responsibility is to understand the situation from the interviewee’s perspective, not get caught up in your feelings about their statements.

Hot Tip: Be intensely curious about why the person holds the particular view. This can shift the focus in a constructive direction and deepen your understanding of the interviewee’s underlying experiences and perspectives of the issue at hand.

Hot Tip: Leave your ego at the door. Remember, it is their story, not yours.

Lesson Learned: Once I took my feelings out of the equation, interviews with people with whom I do not click have become some of the most meaningful interviews I’ve conducted. This is not necessarily easy, and I generally need to have a little private conversation with myself before the interview. However, once I do, I am able to dig deeper in trying to understand their perspectives, frustrations, and worldviews.

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Michael Quinn Patton and I am an independent evaluation consultant. Development of more-nuanced and targeted purposeful sampling strategies has increased the utility of qualitative evaluation methods over the last decade. In the end, whatever conclusions we draw and judgments we make depend on what we have sampled.

Hot Tip: Make your qualitative sampling strategic and purposeful — the criteria of qualitative excellence.

Hot Tip: Convenience sampling is neither purposeful nor strategic. Convenience sampling means interviewees are selected because they happen to be available, for example, whoever happens to be around a program during a site visit. While convenience and cost are real considerations, first priority goes to strategically designing the sample to get the most information of greatest utility from the limited number of cases selected.

Hot Tip: Language matters. Both terms, purposeful and purposive, describe qualitative sampling. My work involves collaborating with non-researchers who say they find the term purposive academic, off-putting, and unclear. So stay purposeful.

Hot Tip: Be strategically purposeful. Some label qualitative case selection “nonprobability sampling,” making explicit the contrast to probability sampling. This defines qualitative sampling by what it is not (nonprobability) rather than by what it is (strategically purposeful).

Hot Tip: A purposefully selected rose is still a rose. Because the word “sampling” is associated in many people’s minds with random probability sampling (generalizing from a sample to a population), some prefer to avoid the word sampling altogether in qualitative evaluations and simply refer to case selection. As always in evaluation, use terminology and nomenclature that primary intended users find understandable and meaningful in their context.

Hot Tip: Watch for and resist denigration of purposeful sampling. One international agency stipulates that purposeful samples can only be used for learning, not for accountability or public reporting on evaluation of public sector operations. Only randomly chosen representative samples are considered credible. This narrow view of purposeful sampling limits the potential contributions of strategically selected purposeful samples.

Cool Trick: Learn purposeful sampling options. Forty options (Patton, 2015, pp. 266-272) mean there is a sampling strategy for every evaluation purpose.
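
As a small illustration of what strategically purposeful case selection can look like in practice, here is a sketch of one widely cited option, maximum variation (heterogeneity) sampling: deliberately choosing cases that span an attribute of interest rather than whoever happens to be available. The site names, attribute, and selection rule are hypothetical, and real case selection would weigh far more than a single attribute.

```python
def maximum_variation_sample(cases, key, n):
    """Pick n cases spread across the range of `key` (e.g., enrollment)."""
    ordered = sorted(cases, key=lambda case: case[key])
    if n >= len(ordered):
        return ordered
    if n == 1:
        return [ordered[len(ordered) // 2]]  # a single mid-range case
    step = (len(ordered) - 1) / (n - 1)
    # evenly spaced picks from the smallest to the largest value of the attribute
    return [ordered[round(i * step)] for i in range(n)]

# Hypothetical school sites; a convenience sample might just take the first
# three, while this deliberately spans small, mid-sized, and large sites.
sites = [
    {"site": "A", "enrollment": 120},
    {"site": "B", "enrollment": 450},
    {"site": "C", "enrollment": 300},
    {"site": "D", "enrollment": 900},
    {"site": "E", "enrollment": 60},
]
print(maximum_variation_sample(sites, "enrollment", 3))
```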

Lesson Learned: Be strategic and purposeful in all aspects of evaluation design, especially qualitative case selection.

Rad Resources:

  • Patton, M. Q. (2014). Qualitative inquiry in utilization-focused evaluation. In Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.), Qualitative inquiry in evaluation: From theory to practice (pp. 25-54). Jossey-Bass.
  • Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage Publications.
  • Patton, M.Q. (2014) Top 10 Developments in Qualitative Evaluation for the Last Decade.

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi again – Leslie Goodyear, Jennifer Jewiss, Janet Usinger, and Eric Barela, the co-leaders of the AEA Qualitative Methods TIG, back with another lesson we learned as we co-edited a book that explores how qualitative inquiry and evaluation fit together. Our last blog focused on the five elements of quality in qualitative evaluation. Underpinning these five elements is a deep understanding and consideration of context.

Lesson Learned: Context includes the setting, program history, and programmatic values and goals. It also includes the personalities of and relationships among the key stakeholders, along with the cultures in which they operate. In their chapter on competencies for qualitative evaluators, Stevahn and King describe this understanding as a sixth sense.

Lesson Learned: Understanding context was one of the driving forces in the early adoption of qualitative inquiry in evaluation. In their book chapter, Schwandt and Cash discuss how the need to explain outcomes – and therefore better understand program complexities and the experiences of participants – drove evaluators to employ qualitative inquiry in their evaluations.

Lesson Learned: Understanding context is not always highlighted in descriptions of high quality evaluations, perhaps because it is a basic assumption of effective evaluators who use qualitative inquiry in their practice.

Rad Resource: Further discussion about the importance of understanding context appears in several chapters of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass), an edited volume featuring many of our field’s experts on qualitative evaluation.

 

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Ann Zukoski and I am a Senior Research Associate at Rainbow Research, Inc. in Minneapolis, Minnesota.

Founded as a nonprofit organization in 1974, Rainbow Research’s mission is to improve the effectiveness of socially concerned organizations through capacity building, research, and evaluation. Projects range in scale from one-time program evaluations to multi-year, multi-site research studies, with designs that explicitly include participatory approaches intended to lead to program improvement.

Through my work, I am always looking for creative ways to capture evaluation data. Here is one rad resource and a hot tip on a participatory tool to add to your tool box.

Rad Resource: Participatory evaluation approaches are used extensively by international development organizations. This web page is a great resource for exploring different rapid appraisal methods that can be adapted to the US context.

ELDIS – http://www.eldis.org/go/topics/resource-guides/participation/participatory-methodology#.UwwFaf1z8ds

ELDIS provides descriptions and links to a variety of information sources on participatory evaluation approaches, including online documents, organizations’ websites, databases, library catalogues, bibliographies, email discussion lists, research project information, and map and newspaper collections. ELDIS is hosted by the Institute of Development Studies in Sussex, U.K.

Hot Tip: Evaluators are often asked to identify program impacts and measure key outcomes of community based projects. Impact and outcome measures are often externally determined by the funder. Many times, however, collaborative projects lead to unanticipated outcomes that are seen to be of great value by program participants but are overlooked by formal evaluation designs. One participatory technique, Most Significant Change (MSC), offers an alternative approach to address this issue and can be used to surface promising practices.

Most Significant Change Technique (MSC) – MSC is a participatory qualitative data collection process that uses stories to identify the impact of a program. This approach involves a series of steps in which stakeholders search for significant program outcomes and deliberate on the value of these outcomes in a systematic and transparent manner. Stakeholders are asked to write stories of what they see as “significant change” and then dialogue with others to select the stories of greatest importance. The goal of the process is to make explicit what stakeholders (program staff, program beneficiaries, and others) value as significant change. The process allows participants to gain a clearer understanding of what is and is not being achieved. It can be used for program improvement and for identifying promising practices, as well as to uncover key outcomes by helping evaluators identify areas of change that warrant additional description and measurement.

Where to go for more information: http://www.mande.co.uk/docs/MSCGuide.pdf

Have you used this tool? Let us all know your thoughts!

The American Evaluation Association is celebrating Best of aea365 week. The contributions all this week are reposts of great aea365 blogs from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hey there. My name is Kath McNiff and I’m an online community manager at QSR International (the makers of NVivo).

Lessons Learned: We’re heading into a brand new year of evaluation (actually, THE year of evaluation). Oh, the joys of a clean slate! A chance to right the wrongs, sharpen the tools, clear the decks and morph into the best 2015 version of ourselves.


Here are some resolutions to consider:

  • Do an Inbox detox. Is your cluttered inbox making you feel overwhelmed and out of control? Vital details can easily slip through the cracks as you flounder around in the digital debris. Get yourself a copy of How to be a Productivity Ninja and follow Graham Allcott’s simple steps for getting your inbox to zero.
  • Decide on your Digital Strategy. Have you been following the same old tweeters for the past two years? And what about those cobwebs on your LinkedIn profile? It’s time to make sure your digital footprint is polished and professional. If you’re looking at social media as a rich reserve of qualitative data, you need to make decisions about platforms, data collection and ethics.
  • Get the right tools. Seriously consider using the free version of Evernote. While you’re out in the field you can use your phone or tablet to record interviews, take field notes, snap photos and clip relevant content from the web. Then, back at your desk you can synch notebooks and have easy access to all your material (and then bring it into NVivo for analysis).
  • Develop good NVivo habits. Bring your research design documents into NVivo and refer to them regularly as you analyze your data. Start a project journal in NVivo and write, write, write – remembering to link to the data that supports your emerging insights. Then, when a client demands to know how you reached your conclusions – you can turn to your journal (complete with charts, word clouds and models). Check the QSR website for details about pricing.

Well, that should get us to February at least. Start the year feeling in control of your virtual world so you can spend more time celebrating in the physical one!

Hot Tip: Take a fresh look at the tools you use every day. Are you missing out on some really useful feature because you always follow the same well-worn path? In the case of NVivo, spend some time watching the YouTube videos or reading the help – and explore features you haven’t used before (framework matrices, anyone?).

Rad Resources: If you want to get your to-do list under control, try these free apps: Todoist or Trello.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I’m Norma Martinez-Rubin, a public health practitioner, program evaluator, and occasional trainer. Work projects that integrate opportunities to learn about the people for whom they are designed excite me. Hence, I find qualitative inquiry quite fitting. Focus groups and semi-structured interviews have been the primary data-collection methods on evaluation studies I’ve led, guided, or contributed to. They’ve yielded candid remarks and surprising insights about mainstream topics. At times, too, once-guarded opinions have become less so when moderated discussions foster honest expression intended for project improvement.

How does one go about identifying the best course of inquiry? What methodological decisions must be made to maximize use of limited time and funding to produce maximum results? How, in a culturally appropriate way, does one acknowledge study participants’ involvement? Is there a necessity to balance the evaluator role differently when taking a consultative approach as a new, external evaluator? 

Lesson Learned: Much planning, discussion, drafting protocols, redesigning, and renegotiating occurs before implementing an evaluation study. Whether working solo or collaboratively with study sponsors and other colleagues, one has to establish and foster working relationships to carry out an evaluation that yields useful findings.

Lesson Learned: Transitioning from one professional approach to another, from internal to external evaluator or vice versa, requires taking stock of one’s professional strengths and using them as levers. (Quashing those strengths serves no one.)

Lesson Learned: Evaluation requires a knack for relationship building. Introverts’ inquiring minds and predilection for reflection are advantageous attributes in qualitative inquiry. They serve well when moderating discussion groups and provide the focus required for data analysis.

Hot Tip: Quickly building rapport serves to trigger rich discussions. Never mind the misplaced argument for maintaining a sense of objective neutrality. Cold, calculated exchange is simply that: cold, calculated exchange, not genuine communication or inquiry.

Rad Resource: The editors of Qualitative Inquiry in Evaluation (Jossey-Bass, 2014) compiled authors’ research and experiences that illuminate the theory, purpose, and application of qualitative inquiry. The editors write of their own discoveries in the process of producing the book. They also invite the reader to examine what constitutes quality in qualitative inquiry.

Rad Resource: The chapter titled “Balancing Insider-Outsider Roles as a New, External Evaluator” in Qualitative Inquiry in Evaluation: From Theory to Practice (Jossey-Bass, 2014) illustrates how personal curiosity, professional training, and personal experiences can function as levers when designing and implementing protocols for focus groups and semi-structured interviews.


The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

