AEA365 | A Tip-a-Day by and for Evaluators

Category: Qualitative Methods

My name is Dr. Moya Alfonso, MSPH, and I’m an Associate Professor at Georgia Southern University and University Sector Representative and Board Member for the Southeast Evaluation Association (SEA).

So you want to be an evaluator, but you’re unfamiliar with how to moderate focus group discussions – a key qualitative approach used in formative, process, and summative evaluations. Plus, there are few, if any, focus-group-specific courses in your program of study. Do not lose hope. All it takes is some creative thinking.

Focus group discussions are a qualitative research method in which a focused set of questions is asked of six to ten participants. The key word in this definition is focused – discussions revolve around a specific topic.

Lesson Learned: Focus groups are appropriate when you are interested in group dynamics, participant language, stories and experiences, and a breadth of information. Focus groups are wonderful; however, they are designed for a very specific purpose and have limitations that should be considered (e.g., difficulty with recruitment, brief stories or snippets of information, etc.).

Hot Tips: These resources will help you learn about focus groups and how to moderate discussion:

  1. Find a mentor: Most of my training and expertise in focus group research was gained through hands-on experience. I worked with experienced qualitative researchers who let me co-facilitate, and later conduct, focus groups and train others. Many evaluators are open to mentoring those starting out in the field, and technology can make remote mentoring relationships possible. Try searching university expertise databases or the American Evaluation Association’s evaluator database for potential mentors.
  2. Read everything you can about focus group research: One of the best resources on focus group research is Krueger’s Focus Group Toolkit. Although a new copy of the toolkit may stretch your budget, used copies are available; you can also start with Krueger’s free resource on focus group research. The toolkit takes you through everything from recruitment, participatory approaches, and question development to data analysis and report writing. It’s a worthy investment.
  3. Look for other virtual resources: A terrific resource for focus group research is the Community Toolbox, which provides access to numerous focus group resources.
  4. Attend (many) conferences: Reconsider spending your student loan check on a vacation and head to a conference instead! You can do both; for example, the University of South Florida’s annual Social Marketing Conference is held at a lovely beach resort and has historically offered a course in focus group research.

Conducting focus group research takes practice, practice, and more practice. Good luck on becoming a well-trained focus group moderator!

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Lynne Franco: Vice President for Technical Assistance and Evaluation at EnCompass LLC, and Jonathan Jones: Senior Monitoring and Evaluation Technical Advisor with CAMRIS International. Jonathan is also co-chair of AEA’s International and Cross-Cultural TIG.

Focus groups are an important tool in the data collection toolbox, allowing the evaluator to explore people’s thinking on a particular topic in some depth. The very interaction among participants during a focus group can generate rich discussion as they respond, positively and negatively, to each other’s ideas. During our evaluation careers, we have conducted numerous focus groups all over the world. We have learned that ‘supercharging’ focus groups with creative adult facilitation techniques can generate especially rich and meaningful data in groups of anywhere from 5 to 50 people.

Hot Tip: Ensure that participants can use more than their ears to retain what others are saying. Use a large sticky wall and index cards (or flip chart paper and big Post-its). Have participants write ideas on cards and then present them to the group. This is a great way to get all participants’ ideas up in front of the group – enabling group reflection and processing in real time.

Hot Tip: Help introverts to participate. Asking participants to provide their input through writing gives introverts (and everyone) time to put their thoughts together before speaking about them.

Hot Tip: Give participants an environment that enhances creativity. Make the room colorful! Research shows that color encourages creative thinking. We often scatter pipe cleaners on the table. It is amazing what participants create during the focus group! We also use scented markers — this always generates many laughs while creating a relaxing and creative atmosphere.

Rad Resource: We have found Brain Writing, a variation on brainstorming, to be an excellent focus group facilitation technique. It enables simultaneous group thinking and processing that is also focused and individualistic – and can appeal to both the introvert and the extrovert.

Rad Resource: Check out the forthcoming AEA New Directions for Evaluation issue on Evaluation and Facilitation.

Rad Resource: Join our session at Eval 2015.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Lynne and Jonathan? They’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.

My name is Sebastian. Before pursuing my PhD at UCLA, I served as a senior evaluation consultant at Ramboll Management – a Copenhagen-based consulting firm. My current interests revolve around research syntheses and causal modeling techniques.

A common practice in evaluation is to examine the existing body of evidence on the type of intervention to be evaluated. The most well-established approach is perhaps the generic literature review, often provided as a scene-setting segment in evaluation reports. The purpose of today’s tip is to push for a more interpretive approach when coding findings from existing evaluations.

The approach – called causation coding – is grounded in qualitative data analysis. In the words of Saldaña (2013), causation coding is appropriate for discerning “motives (by or toward something or someone), belief systems, worldviews, processes, recent histories, interrelationships, and the complexity of influences and affects on human actions and phenomena” (p. 165).

In its practical application, causation coding aims to map out causal chains (CODE1 > CODE2 > CODE3) corresponding to a delivery mechanism, a mediator linking the delivery mechanism to the outcome, and the outcome itself (ibid). These causal triplets are often available in evaluation reports, as authors explain how and why the evaluated intervention generated change.

In a recent review of M4P (Making Markets Work for the Poor) market development programs, I employed causation coding to capture causally relevant information in 13 existing evaluations and to develop hypotheses about how and why these programs generate positive outcomes. The latter informed the evaluation of a similar market development program.

Lessons Learned:

(1) It is important to pay careful attention to the often-conflated distinction between empirically supported and hypothetically predicted causal chains. The latter express how the author(s) intended the program to work. In many evaluation studies, eagerness to predict the success of the intervention leads to such hypothetical scenarios being included in results sections. Attention should be focused on the empirically supported causal chains.

(2) Causal chains are rarely summarized in a neat three-part sequence from cause(s) to mechanism(s) to outcome(s). Causation coding therefore involves a high degree of sensitivity to words such as "because", "in effect", "therefore", and "since" that might indicate an underlying causal logic (ibid).
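
If you keep your excerpts as plain text outside dedicated QDA software, a minimal sketch of this flagging-and-coding step might look like the following. The excerpt, connector list, and codes below are invented for illustration (they are not drawn from the M4P review), and no script is a substitute for careful analytic reading:

```python
import re

# Connector words that may signal an underlying causal logic.
CONNECTORS = ["because", "in effect", "therefore", "since"]

def flag_causal_language(text, connectors=CONNECTORS):
    """Return the causal connectors found in a text segment."""
    return [c for c in connectors
            if re.search(r"\b" + re.escape(c) + r"\b", text, re.IGNORECASE)]

# Hypothetical excerpt from an evaluation report (invented for illustration).
excerpt = ("Because the programme subsidised improved seed, local agro-dealers "
           "expanded their networks; in effect, smallholder yields increased.")

# A causal triplet recorded as CODE1 > CODE2 > CODE3
# (delivery mechanism > mediator > outcome); codes are invented for this example.
triplet = ("seed subsidy", "agro-dealer network expansion", "increased smallholder yields")

found = flag_causal_language(excerpt)
if found:
    print("Connectors found:", ", ".join(found))
    print("Candidate chain:", " > ".join(triplet))
```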

Rad Resource: The Coding Manual for Qualitative Researchers (2nd ed.) by Saldaña.

We’re celebrating 2-for-1 Week here at aea365. With tremendous interest in the blog lately, we’ve had many authors eager to share their evaluation wisdom, so for one special week, readers will be treated to two blog posts per day! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings! I’m Galen Ellis, President of Ellis Planning Associates Inc., which has long specialized in participatory planning and evaluation services. In online meeting spaces, we’ve learned to facilitate group participation that – in the right circumstances – can be even more meaningful than in person. But we had to adapt.

Although I knew deep inside that our clients would benefit from online options, I couldn’t yet imagine creating the magic of a well-designed group process in the virtual environment. Indeed, we stepped carefully through various minefields before reaching gold.

As one pioneer observes,

Just because you’re adept at facilitating face-to-face meetings, don’t assume your skills are easily transportable. The absence of visual cues and the inability to discern the relative level of engagement makes leading great virtual meetings infinitely more complex and challenging. Assume that much of what you know about leading great meetings is actually quite irrelevant, and look for ways to learn and practice needed skills (see Settle-Murphy below).

We can now engage groups online in facilitation best practices such as ToP methods and Appreciative Inquiry, and in group engagement processes such as logic model development, focus groups, consensus building, and other collaborative planning and evaluation methods (see our video demonstration).

Lessons Learned:

  • Everyone participates. Skillfully designed and executed virtual engagement methods can be more effective in engaging the full group than in-person ones. Some may actually prefer this mode: one client noted that a virtual meeting drew out participants who had been typically silent in face-to-face meetings.
  • Software platforms come with their own sets of strengths and weaknesses. The simpler ones often lack interactive tools, while those that do allow interaction tend to be more costly and complex.
  • Tame the technical gremlins. Participants without suitable levels of internet speed, technological experience, or hardware—such as microphoned headsets—will require additional preparation. Meeting hosts need to know ahead of time what sorts of devices and internet access participants will be using. Participants should always be invited into the meeting space early for technical troubleshooting.
  • Don’t host it alone. One host can produce the meeting (manage layouts, video, etc.) while another facilitates.
  • Plan and script it. Virtual meetings require a far more detailed script than a simple agenda. Indicate who will do and say what, and when.
  • Practice, practice, practice. Run through successive drafts of the script with the producing team.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi! I’m Myia Welsh, an independent consultant working with nonprofit and community organizations. Much of my work is done with organizations that provide services to survivors of human trafficking. What’s that, you ask? Trafficking is any enterprise where someone makes a profit from the exploitation of another by force, fraud or coercion. Just like the sale of drugs or weapons, the sale of humans occurs both in the U.S. and around the world. Find out more about human trafficking here.

Lesson Learned: Conducting evaluation with these organizations has required me to learn my way around engaging trauma survivors in evaluation – especially in focus groups. Focus groups with trauma survivors can be challenging if you don’t know what to expect, and they require slightly different planning and facilitation skills. I recommend the following preparations:

  • Understand what you’re dealing with. Do some reading on trauma, so that you know how to recognize dynamics in the room.
  • Review your protocol for trigger questions. Stick with what’s essential to the evaluation.
  • Consult knowledgeable stakeholders to help you anticipate potential harm, and brainstorm about how to avoid it.
  • Be prepared for an emotional response, and have a plan to handle it with respect and support. An abrupt or uncomfortable response from the facilitator could silence participants. So, check your reactions. Have tissues ready in case of tears and tactile toys/objects around to help manage anxiety.
  • Make safety a factor in your planning: Where will this group feel safe? Physical space and location should be taken into consideration. Will bringing additional note takers or co-facilitators into the situation enhance or threaten perceived safety?
  • Check your facilitation practices. In most focus groups, a zoned-out participant would be prompted to participate. With a group of trauma survivors, this might be a signal that the reflection brought on by the discussion is getting overwhelming. Have a plan ready so that you can recognize it and continue on without disruption. Consider a non-verbal cue that you can set up in the beginning, a colored index card for instance. A participant can set their card on the table as a signal that this is getting tough. Make sure everyone knows that they can step away if they need to.
  • What’s your wrap-up plan? Have a strategy ready for ending in a positive way, soothing the emotions that may have emerged. Guide discussion to future hopes or recent accomplishments.

Lesson Learned: Even if it might be emotional or messy, service recipients are key stakeholders whose voices cannot be left out of an evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Lisa Melchior, President of The Measurement Group LLC, a consulting firm focused on the evaluation of health and social services for at-risk and vulnerable populations. In response to Sheila B. Robinson’s recent post that reported what AEA 365 readers said they want to see in 2015, I’m writing about developing, sharing, and storing lessons learned from evaluation. Although this is written from the perspective of evaluation at the initiative level, it could also apply to lessons learned by an individual program.

The United Nations Environment Programme gives a useful definition of lessons learned as “knowledge or understanding gained from experience.” In a grant initiative, lessons learned might address ways to implement the projects supported through that initiative; strategies for overcoming implementation problems; best practices for conducting services (whether or not the projects employed all of them); strategies for involving key stakeholders to optimize the outcomes of the projects and their sustainability; and ideas for future directions. Statements of lessons learned are an important outcome of any grants initiative; the richness and complexity of those statements can be, in part, an indicator of the overall success of the initiative. Funders often utilize the lessons learned by their grantees to inform the development of future investments.

Hot Tips:

Developing lessons learned. If possible, work with the funder to collect examples of lessons learned using the funder’s progress reporting mechanism. When the evaluator has access to such reports, qualitative approaches can be used to catalog and identify themes among the lessons learned. Another benefit of integrating the documentation of lessons learned into ongoing programmatic reporting is that trends over the life of a project or initiative can emerge, since many initiatives request this type of information from grantees on a semi-annual or quarterly basis. Active collaboration between funder and evaluator is key to this approach.
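
As a rough illustration of what that cataloging can look like before (or alongside) dedicated qualitative software, here is a minimal sketch assuming the lessons arrive as structured records of reporting period, coded theme, and statement. The records and theme labels below are invented for this example:

```python
from collections import defaultdict

# Invented lessons-learned records: (reporting period, coded theme, statement).
lessons = [
    ("2014-Q1", "stakeholder engagement", "Early outreach to county agencies sped up referrals."),
    ("2014-Q1", "staffing", "Cross-training staff reduced service gaps during turnover."),
    ("2014-Q3", "stakeholder engagement", "Quarterly partner briefings kept referral pipelines active."),
]

# Catalog statements by theme, keeping the reporting period so trends
# over the life of the initiative remain visible.
catalog = defaultdict(list)
for period, theme, statement in lessons:
    catalog[theme].append((period, statement))

for theme, entries in sorted(catalog.items()):
    print(f"{theme} ({len(entries)} lessons)")
    for period, statement in sorted(entries):
        print(f"  [{period}] {statement}")
```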

Sharing lessons learned. Don’t wait until the end of a project to share lessons learned! Stakeholders can benefit from lessons learned in early implementation. For example, my colleagues and I highlighted interim outcomes and lessons learned during the first three years of the Archstone Foundation’s five-year Elder Abuse and Neglect Initiative in an article in the Journal of Elder Abuse and Neglect.

In a more summative mode, toolkits are a useful vehicle for sharing lessons learned with those interested in possible replication of a particular program, model, or initiative. Social media and blogs are great for more informal sharing.

Storing lessons learned. Qualitative data tools such as NVivo are invaluable to organizing lessons learned.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Nora F. Murphy, a developmental evaluator and co-founder of TerraLuna Collaborative. Qualitative Methods have been a critical component of every developmental evaluation I have been a part of. Over the years I’ve learned a few tricks about making qualitative methods work in a developmental evaluation context.

Hot Tip: Apply systems thinking. When using developmental evaluation to support systems change, it’s important to apply systems thinking. When thinking about the evaluation’s design and methods, I am always asking: Where are we drawing the boundaries in this system? Whose perspectives are we seeking to understand? What are the important inter-relationships to explain? And who benefits from, or is excluded by, the methods I choose? Qualitative methods can be time- and resource-intensive, and we can’t understand everything about systems change. But it’s important, from a methodological and ethical perspective, to be intentional about where we draw the boundaries, whose perspectives we include, and which inter-relationships we explore.

Hot Tip: Practice flexible budgeting. I typically budget for qualitative inquiry but create the space to negotiate the details of that inquiry. In one project I budgeted for qualitative inquiry that would commence six months after the contract was finalized. At that point it was too early to know how the strategy would develop and what qualitative method would be best for learning about it. In the end we applied systems thinking and conducted case studies that looked at the developing strategy in three ways: from the perspective of individual educators’ transformation, from the perspective of educators participating in school change, and from the perspective of school leaders leading school change. It would have been impossible to predict that this was the right inquiry for the project at the time the budget was developed.

Hot Tip: Think in layers. The pace of developmental evaluations can be quick, and there is a need for timely data and for spotting patterns as they emerge. But often there is also a need for a deeper look at what is developing, using a method that takes more time. So I think in layers. With the case studies, for example, we structured the post-interview memos so they could be used with program developers to spot emergent patterns, framing the memos around pattern-surfacing prompts such as: “I was surprised… A new concept for me was… This reinforced for me… I’m wondering…” The second layer was sharing individual case studies. The third layer was the cross-case analysis that surfaced deeper themes. Throughout, we engaged various groups of stakeholders in the meaning making and pattern spotting.

Rad Resources:

The American Evaluation Association is celebrating Developmental Evaluation Week. The contributions all this week to aea365 come from evaluators who do developmental evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, I’m Eric Barela, another of the co-leaders of the Qualitative Methods TIG, and a co-editor with Leslie Goodyear, Jennifer Jewiss, and Janet Usinger of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

In my time as an evaluator, I have noticed that discussions of methodology with clients can take on several forms. Most often, clients are genuinely interested in knowing how I collected and analyzed my data and why I made the methodological choices I did. However, clients have occasionally tried to use what I like to call “methodological red herrings” to dispute less-than-positive findings. I once worked with a client who disagreed with my findings because they were not uniformly positive. She accused me of analyzing only the data that would show the negative aspects of her program. I was able to show the codebook I had developed and how I went about developing the thematic content of the report based on my data analysis, which she was not prepared for me to do. I was able to defend my analytic process and get the bigwigs in the room to understand that, while there were some aspects of the program that could be improved, there were also many positive things happening. The happy ending is that the program continued to be funded, in part because of my client’s efforts to discredit my methodological choices!

Lesson Learned: Include a detailed description of your qualitative inquiry process in evaluation reports. I include it as an appendix so it’s there for clients who really want to see it. It can take time to write a detailed account of your qualitative data collection and analysis processes, but it will be time well spent!

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation, and using detailed descriptions of qualitative inquiry choices and processes, can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Michael Quinn Patton. I train evaluators in qualitative evaluation methods and analysis. Qualitative interviews, open-ended survey questions, and social media entries can yield massive amounts of raw data. Course participants ask: “How can qualitative data be analyzed quickly, efficiently, and credibly to provide timely feedback to stakeholders? How do everyday program evaluators engaged in ongoing monitoring handle analyzing lots of qualitative responses?”

Hot Tip: Focus on priority evaluation questions. Don’t think of qualitative analysis as including every single response. Many responses aren’t relevant to priority evaluation questions. Like email you delete immediately, skip irrelevant responses.

Hot Tip: Group together participants’ responses that answer the same evaluation question, even if the responses come from different items in the interview or survey. Evaluation isn’t item-by-item analysis for the sake of analysis. It’s analysis to provide answers to important evaluation questions. Analyze and report accordingly.

Hot Tip: Judge substantive significance. Qualitative analysis has no statistical significance test equivalent. You, the evaluation analyst, must determine what is substantively significant. That’s your job. Make judgments about merit, worth, and significance of qualitative responses. Own your judgments.

Hot Tip: Keep qualitative analysis first and foremost qualitative. Ironically, the adjectives “most,” “many,” “some,” or “a few” can be more accurate than a precise number. It’s common to have responses that could be included or omitted, thus changing the number. Don’t add a quote to a category just to increase the number. Add it because it fits. When I code 12 of 20 saying something, I’m confident reporting that “many” said that. Could have been 10, or could have been 14, depending on the coding. But it definitely was many.
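
If it helps to anchor those adjectives, one approach is to sketch rough cutoffs in advance and then let judgment override them. The cutoffs in the sketch below are assumptions for illustration, not Patton’s rules:

```python
def quantifier(count, total):
    """Map a coded count to a qualitative quantifier.

    The cutoffs are illustrative assumptions; the point is that "many"
    communicates the finding more honestly than a falsely precise number
    when reasonable coding decisions could shift the count.
    """
    share = count / total
    if share >= 0.75:
        return "most"
    if share >= 0.5:
        return "many"
    if share >= 0.25:
        return "some"
    return "a few"

# 12 of 20 coded responses -> "many" (10 or 14 of 20 would report the same).
print(quantifier(12, 20))
```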

Cool Trick: Watch for interocular findings – the comments, feedback, and recommendations that hit us between the eyes. The “how many said that” question can distract from prioritizing substantive significance. One particularly insightful response may prove more valuable than lots of general comments. If 2 of 15 participants said they were dropping out because of sexual harassment, that’s “only” 13%. But any sexual harassment is unacceptable. The program has a problem.

Lesson Learned: Avoid laundry-list reporting. Substantive significance is not about how many bulleted items you report. It’s about the quality, substantive significance, and utility of findings.

Lesson Learned: Practice analysis with colleagues. Like anything, you can up your game with practice and feedback, increasing speed, quality, and confidence.

Qual research & eval 9780470447673.pdf

 

 

 

 

 

Rad Resources:

  • Goodyear, L., Jewiss, J., Usinger, J., & Barela, E. (Eds.). (2014). Qualitative inquiry in evaluation: From theory to practice. Jossey-Bass.
  • Patton, M. Q. (2015). Qualitative research and evaluation methods (4th ed.). Sage Publications.

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello from snowy Boston! I’m Leslie Goodyear, one of the co-leaders of the Qualitative Methods TIG, and a co-editor, with Jennifer Jewiss, Janet Usinger and Eric Barela, of a new book about qualitative evaluation called Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

When I was a new evaluator, I had a major “a-ha experience” while interviewing a group of women who participated in an HIV/AIDS training for parents. They were bilingual Spanish-English speakers, and I was definitely the least fluent in Spanish in the room. As they discussed ways in which HIV could be transmitted, one woman referred to a specific sexual activity in Spanish, and all the others laughed and laughed. But I didn’t know for sure what they meant; I had an idea, but I wasn’t sure. Of course, I laughed along with them, but wondered what to do: Ask them to define the term (and break the momentum)? Or go with the flow and not be sure what they were talking about? Well, I decided I’d better ask. When I did, and the woman said what she meant, another woman said, “Oh, no! That’s not what it means!” She went on to explain, and the next woman said she thought it meant something else. And on and on with each woman! It turns out that none of them agreed on the term, but they all thought they knew what it was.

Lesson Learned: Ask stupid questions! I was worried I would look stupid when I asked them to explain. In fact, we all learned something important, both in discussing the term and in recognizing that we can think we all agree on something when, without clarification, we can’t know for sure.

Lesson Learned: Putting aside ego and fear is critical to getting good information in qualitative evaluation. Often, stupid questions open up dialogue and understanding. Sometimes they just clarify what’s being discussed. Other times, even though you might already know the answer, they give participants an important opportunity to share their perspectives in greater depth.

Rad Resource: More stories about being in the trenches of qualitative inquiry in evaluation, and asking stupid questions, can be found in the final chapter of our new book, Qualitative Inquiry in Evaluation: From Theory to Practice (2014, Jossey-Bass).

The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
