AEA365 | A Tip-a-Day by and for Evaluators

Category: Qualitative Methods

Greetings! We are Laura Sefton from the University of Massachusetts Medical School’s Center for Health Policy and Research and Linda Cabral from Care Transformation Collaborative-Rhode Island. When choosing qualitative interviews as the data collection method for your evaluation project, developing an appropriate interview guide is key to gathering the information you need. The interviews should gather data that informs your evaluation aims while avoiding superfluous information. From our experience developing interview guides over the last 10 years, we offer the following insights:

Hot Tips:

Wording is key.  Questions should be straightforward and gather insights from your respondents. Your goal should be to develop questions that are non-judgmental and facilitate conversation. Word your questions in ways that elicit more than yes/no responses. Avoid questions that ask “why,” as they may put your respondent on the defensive. Adjust your wording according to the intended respondent; what works for a program CEO may not work for a client of the same program.

Begin with a warm-up and end with closure.  The first question should be one that your respondent can answer easily (e.g., “Tell me about your job responsibilities.”). This initial rapport-building can put you and the respondent at ease with one another and make the rest of the interview flow more smoothly. To provide closure to the interview, we often ask respondents for any final thoughts they want to share with us. This provides them with an opportunity to give us information we may not have asked about but that they felt was important to share.

Probe for more detail.  Probes, or prompts, are handy when you are not getting the information you had hoped for or you want to be sure to get as complete information as possible on certain questions. A list of probes for key questions can help you elicit more detailed and elaborate responses (e.g., “Can you tell me more about that?” “What makes you feel that way?”).

Consider how much time you have.  Once you have your set of key questions, revisit them to see if you can pare them down to fewer questions. We found that we can generally get through approximately ten in-depth questions and any necessary probes in a one-hour interview. Be prepared to ask only your key questions: your actual interview time may be less than planned, or some questions may take longer to get through.

Lessons Learned:

It’s ok to revise the interview guide after starting data collection.  After completing your first few interviews, you may find that certain questions didn’t give you the information you wanted, were difficult for your respondents to understand or answer, or didn’t flow well. Build in time to debrief with your data collection team (and your client, if appropriate) on your early interviews and make adjustments to the guide as necessary.

Rad Resource: As with many topics related to qualitative research, Michael Quinn Patton’s Qualitative Research & Evaluation Methods serves as a useful resource for developing interview guides.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, we are Greg Lestikow, CEO, and Fatima Frank, Project Manager, of evalû, a small consulting firm that focuses exclusively on rigorous evaluations of social and economic development initiatives. We champion impact evaluation that maintains academic rigor but is driven entirely by our clients’ need to improve strategic and operational effectiveness and increase profitability.

In a recent project, we were tasked with designing a qualitative instrument to complement quantitative data around the sensitive topic of gender-based violence.

Rad Resource: We approached this challenge by designing a focus group discussion (FGD) protocol informed by an article on the “Participatory Ranking Method” (PRM), in which participants rank potential indicators from most to least important. PRM acknowledges project beneficiaries as experts and recognizes the local community as capable of identifying and measuring its progress towards positive change. As such, PRM incorporates local perspectives in the construction of research instruments. By using PRM, we were able to select indicators that are meaningful to the project’s local beneficiaries (in our case, adolescent girls affected by violence) and reflective of the concepts they find useful when tracking their own progress. PRM is an ideal evaluation methodology for measuring awareness of sensitive topics and tracking outcomes over time, particularly for projects that may not see any kind of impact in the short or medium term.
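To make the ranking step concrete, here is a minimal sketch (not part of the original post) of how ranked cards from several PRM focus group discussions might be aggregated into one overall ordering. The rankings are hypothetical and simply reuse the practice labels from the Hot Tips below.

```python
# Minimal sketch: aggregating Participatory Ranking Method (PRM) results
# across several focus group discussions (FGDs). The rankings below are
# hypothetical, not data from the evaluation described in this post.

from collections import defaultdict

# Each FGD's list is ordered from MOST to LEAST acceptable practice.
fgd_rankings = [
    ["Shout", "Insult", "Threaten", "Push", "Hit", "Beat", "Kill"],
    ["Insult", "Shout", "Threaten", "Hit", "Push", "Beat", "Kill"],
    ["Shout", "Threaten", "Insult", "Push", "Beat", "Hit", "Kill"],
]

def mean_ranks(rankings):
    """Average rank position of each practice (1 = most acceptable)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ranking in rankings:
        for position, practice in enumerate(ranking, start=1):
            totals[practice] += position
            counts[practice] += 1
    return {practice: totals[practice] / counts[practice] for practice in totals}

if __name__ == "__main__":
    for practice, rank in sorted(mean_ranks(fgd_rankings).items(), key=lambda kv: kv[1]):
        print(f"{practice:10s} mean rank = {rank:.2f}")
```

Mean rank is only one option; a Borda count or a simple tally of “most acceptable” placements would serve the same purpose of turning group rankings into a comparable summary.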

Hot Tips:

  • Start with a participatory activity to gauge local perspectives and to understand which social practices are considered more or less acceptable in the community. In our case, we asked participants what gender-based violence meant to them.
  • To facilitate ranking, show a series of cards labeled with different kinds of social practices (in our case: Shout, Insult, Threaten, Push, Hit, Beat, Kill) and have participants order them from the most to the least acceptable, asking them to explain their decisions. Alternatively, participants can free-list social practices that are common in their communities and then rank-order them.
  • Include an open-ended discussion to understand which social practices are acceptable in different relational and social contexts.

Lessons Learned:

  • Make sure the moderator and note-taker are gender-appropriate.
  • If you want to obtain a broad range of perspectives but anticipate potential problems with mixing certain community members in the same FGD, create a few FGD groups and separate participants.
  • Ask local evaluation or project teams about any other cultural practices to consider before an FGD. For example, in Sierra Leone we started each FGD with a prayer, as this is a standard practice when people meet.

Please share your stories on challenges, solutions, and experiences in dealing with sensitive topics by leaving a comment here or contacting us.

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Marti Frank, a researcher and evaluator based in Portland, Oregon. Over the last three years I’ve worked in the energy efficiency and social justice worlds, and it’s given me the opportunity to see how much these fields have to teach one another.

For evaluators working with environmental programs – and energy efficiency in particular – I’ve learned two lessons that will help us do a better job documenting the impacts of environmental programs.

Lessons Learned:

1) A program designed to address an environmental goal – for example, reduce energy use or clean up pollution – will almost always have other, more far-reaching impacts. As evaluators, we need to be open to these in order to capture the full range of the program’s benefits.

Example: A weatherization workshop run by Portland non-profit Community Energy Project (where I am on the Board) teaches people how to make simple, inexpensive changes to their homes to reduce drafts and air leaks. While the program’s goal is to reduce energy use, participants report many other benefits: more disposable income, reduced need for public assistance, less worry about paying bills, and more time to spend with family.

2) Not all people will be equally impacted by an environmental program, or even impacted in the same way. Further, there may be systematic differences in how, and how much, people are impacted.

Example #1: Energy efficiency programs assign a single value to energy savings, even though the same quantity of savings will mean very different things to different households, depending in large part on their energy burden (the percentage of their income they spend on energy).
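To make the energy-burden idea concrete, here is a minimal sketch with hypothetical household figures (not data from any actual program) showing how identical dollar savings represent very different shares of income:

```python
# Minimal sketch of the energy-burden point: the same dollar savings is a
# much bigger deal for a household with a high energy burden. All figures
# below are hypothetical, for illustration only.

def energy_burden(annual_energy_cost, annual_income):
    """Energy burden = share of income spent on energy."""
    return annual_energy_cost / annual_income

households = {
    "lower-income household": {"income": 25_000, "energy_cost": 2_000},
    "higher-income household": {"income": 100_000, "energy_cost": 2_400},
}
savings = 300  # identical annual savings from an efficiency program

for name, h in households.items():
    before = energy_burden(h["energy_cost"], h["income"])
    after = energy_burden(h["energy_cost"] - savings, h["income"])
    print(f"{name}: burden {before:.1%} -> {after:.1%} "
          f"(savings = {savings / h['income']:.1%} of income)")
```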

Example #2: A California energy efficiency program provided rebates on efficient household appliances, like refrigerators. Although the rebates were available to everyone, the households who redeemed them (and thus benefited from the program) were disproportionately wealthy and college-educated, relative to all Californians.

Rad Resources:

I’ve found three evaluation approaches to be helpful in identifying unintended impacts of environmental programs.

Outcome harvesting. This evaluation practice encourages us to look for all program outcomes, not just those that were intended. Ricardo Wilson-Grau, who developed it, hosts this site with materials to get you started.

Intersectionality. This conceptual approach originated in feminist theory and reminds us to think about how differing clusters of demographic characteristics influence how we experience the world and perceive benefits of social programs.

Open-ended qualitative interviews. It’s hard to imagine unearthing unexpected outcomes using closed-ended questions. I always enjoy what I learn from asking open-ended questions, giving people plenty of time to respond, and even staying quiet a little too long. And, I’ve yet to find an interviewee who doesn’t come up with another interesting point when asked, “Anything else?”

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Good day, I’m Bernadette Wright, program evaluator with Meaningful Evidence, LLC. Conducting interviews as part of a program evaluation is a great way to better understand the specific situation from stakeholders’ perspectives. Online, interactive maps are a useful technique for presenting findings from that qualitative data and informing action by organization leaders who are working to improve and sustain their programs.

Rad Resource: KUMU is free to use to create public maps. A paid plan is required to create private projects (visible only to you and your team).

Here are the basic steps for using KUMU to integrate and visualize findings from stakeholder conversations.

1) Identify concepts and causal relationships from interviews.

Using the transcripts, you focus on the causal relationships. In the example below, we see “housing services helps people to move from homelessness to housing” (underlined).

2) Diagram concepts and causal relationships, to form a map.

Next, diagram the causal relationships you identified in step one. Each specific thing that is important becomes a “bubble” on the map. We might also call them “concepts,” “elements,” “nodes,” or “variables.”

Lessons Learned:

  • Make each concept (bubble) a noun.
  • Keep names of bubbles short.
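For evaluators coding many interviews, here is a minimal sketch of how the concepts and causal relationships from steps 1 and 2 might be assembled into an importable map file rather than entered by hand. The coded relationships are hypothetical, and the JSON layout is an assumption loosely modeled on KUMU’s blueprint import format – check KUMU’s import documentation for the exact structure it expects.

```python
# Minimal sketch: turning causal relationships coded from interview
# transcripts (step 1) into a map file for import (step 2).
# The coded relationships are hypothetical, and the JSON layout is an
# assumption -- verify the expected format in KUMU's import docs.

import json

# (cause, effect, number of interviews mentioning the link)
coded_relationships = [
    ("Housing services", "People transitioned out of homelessness", 5),
    ("Case management", "Housing services", 3),
    ("Stable income", "People transitioned out of homelessness", 2),
]

# Every distinct concept becomes a "bubble" (element) on the map.
elements = sorted({name for cause, effect, _ in coded_relationships
                   for name in (cause, effect)})

blueprint = {
    "elements": [{"label": label} for label in elements],
    "connections": [
        {"from": cause, "to": effect, "mentions": count}
        for cause, effect, count in coded_relationships
    ],
}

with open("interview_map.json", "w") as f:
    json.dump(blueprint, f, indent=2)
```

Counts of how often each concept or link appears across interviews also give you the raw numbers behind the size decorations described in step 4.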

 

3) Add details in the descriptions for each bubble and arrow.

When you open your map in KUMU, you can click any bubble or arrow to see the item’s “description” on the left (see picture below). Edit the description to add details such as example quotes.

4) Apply “Decorations” to highlight key information.

You can add “decorations” to bubbles (elements) and arrows (connections) using the editor to the right of your map. For the example map below, bigger bubbles show concepts that people mentioned in more interviews.

Also, green bubbles show project goals, such as the goal “People transitioned out of homelessness.”

Cool Tricks:

  • Create “Views” to focus on what’s most relevant to each stakeholder group. To make a large map manageable, create and save different “views” to focus on sections of the map, such as views by population served, views by organization, or views by sub-topic.
  • Create “Presentations” to walk users through your map. Use KUMU’s presentation feature to create a presentation to share key insights from your map with broad audiences.

Rad Resources:

  • KUMU docs. While KUMU takes time and practice to master, KUMU’s online doc pages contain a wealth of information to get you started.
  • Example maps. Scroll down the KUMU Community Page for links to the 20 most visited projects to get inspiration for formatting your map.
  • KUMU videos. Gene Bellinger has created a series of videos about using KUMU, available on YouTube here.

Organizations we work with have found these map presentations helpful for understanding the situation and planning collaborative action. We hope they are useful for your evaluation projects!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Dr. Moya Alfonso, MSPH, and I’m an Associate Professor at Georgia Southern University and University Sector Representative and Board Member for the Southeast Evaluation Association (SEA).

So you want to be an evaluator, but you’re unfamiliar with how to moderate focus group discussions – a key qualitative approach used in formative, process, and summative evaluations. Plus, there are limited to no focus group-specific courses in your program of study. Do not lose hope. All it takes is some creative thinking.

Focus group discussions are a qualitative research method in which a focused set of questions is asked of six to ten participants. The keyword in this definition is focused – discussions revolve around a specific topic.

Lesson Learned: Focus groups are appropriate when you are interested in group dynamics, participant language, stories and experiences, and a breadth of information. Focus groups are wonderful; however, they are designed for a very specific purpose and have limitations that should be considered (e.g., difficulty with recruitment, brief stories or snippets of information).

Hot Tips: These resources will help you learn about focus groups and how to moderate discussion:

  1. Find a mentor: Most of my training and expertise in focus group research was gained through hands-on experience. I worked with experienced qualitative researchers who enabled me to co-facilitate, then conduct focus groups on my own, and later train others. Many evaluators are open to mentoring those starting out in the field. Technology can facilitate your mentor search by providing opportunities for remote relationships. Try searching university expertise databases or the American Evaluation Association’s evaluator database for potential mentors.
  2. Read everything you can about focus group research: One key resource is Krueger’s Focus Group Toolkit. Although a new copy may stretch your budget, used copies are available. Start with Krueger’s free resource on focus group research. The toolkit takes you through everything from recruitment, participatory approaches, and question development to data analysis and report writing. It’s a worthy investment.
  3. Look for other virtual resources: A terrific resource for focus group research is the Community Toolbox, which provides access to numerous focus group resources.
  4. Attend (many) conferences: Reconsider spending your student loan check on a vacation and head to a conference instead! You can do both; for example, the University of South Florida’s annual Social Marketing Conference is held at a lovely beach resort and has historically offered a course in focus group research.

Conducting focus group research takes practice, practice, and more practice. Good luck on becoming a well-trained focus group moderator!

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Lynne Franco, Vice President for Technical Assistance and Evaluation at EnCompass LLC, and Jonathan Jones, Senior Monitoring and Evaluation Technical Advisor with CAMRIS International. Jonathan is also co-chair of AEA’s International and Cross-Cultural TIG.

Focus groups are an important tool in the data collection toolbox, allowing the evaluator to explore people’s thinking on a particular topic in some depth. The very interaction among participants during a focus group can generate rich discussion as they respond, positively and negatively, to each other’s ideas. During our evaluation careers, we have conducted numerous focus groups all over the world. We have learned that ‘supercharging’ focus groups with creative adult facilitation techniques can generate especially rich and meaningful data in groups of anywhere from 5 to 50 people.

Hot Tip: Ensure that participants can use more than their ears to retain what others are saying. Use a large sticky wall and index cards (or flip chart paper and big Post-its). Have participants write ideas on cards and then present them to the group. This is a great way to have all participants’ ideas up in front of the group – enabling group reflection and processing in real time.


Hot Tip: Help introverts to participate. Asking participants to provide their input through writing gives introverts (and everyone) time to put their thoughts together before speaking about them.

Hot Tip: Give participants an environment that enhances creativity. Make the room colorful! Research shows that color encourages creative thinking. We often scatter pipe cleaners on the table. It is amazing what participants create during the focus group! We also use scented markers — this always generates many laughs while creating a relaxing and creative atmosphere.


Rad Resource: We have found Brain Writing, a variation on brainstorming, to be an excellent focus group facilitation technique. It enables simultaneous group thinking and processing that is also focused and individualistic – and can appeal to both the introvert and the extrovert.

Rad Resource: Check out the forthcoming AEA New Directions for Evaluation issue, Evaluation and Facilitation.

Rad Resource: Join our session at Eval 2015.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Lynne and Jonathan? They’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.

My name is Sebastian. Before pursuing my PhD at UCLA, I served as a senior evaluation consultant at Ramboll Management – a Copenhagen-based consulting firm. My current interests revolve around research syntheses and causal modeling techniques.

A common practice in evaluation is to examine the existing body of evidence on the type of intervention to be evaluated. The most well-established approach is perhaps the generic literature review, often provided as a setting-the-scene segment in evaluation reports. The purpose of today’s tip is to push for a more interpretive approach when coding findings from existing evaluations.

The approach – called causation coding – is grounded in qualitative data analysis. In the words of Saldaña (2013), causation coding is “appropriate for discerning motives (by or toward something or someone), belief systems, worldviews, processes, recent histories, interrelationships, and the complexity of influences and affects on human actions and phenomena” (p. 165).

In its practical application, causation coding aims to map out causal chains (CODE1 > CODE2 > CODE3), corresponding to a delivery mechanism, an outcome, and a mediator linking the delivery mechanism and outcome (ibid). These types of causal triplets are often made available in evaluation reports, as authors explain how and why the evaluated intervention generated change.
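As a minimal sketch of what causation coding can produce, the snippet below stores each coded chain as a delivery mechanism > mediator > outcome triplet and tallies how often identical chains recur across reports. The coded chains are invented examples, not findings from the review described next.

```python
# Minimal sketch: representing causation codes as three-part chains
# (delivery mechanism > mediator > outcome) and counting recurring chains
# across evaluation reports. The chains below are hypothetical.

from collections import Counter
from typing import NamedTuple

class CausalChain(NamedTuple):
    delivery_mechanism: str
    mediator: str
    outcome: str

coded_chains = [
    CausalChain("matching grants", "increased supplier investment", "higher farm-gate prices"),
    CausalChain("matching grants", "increased supplier investment", "higher farm-gate prices"),
    CausalChain("training vouchers", "improved agronomic practices", "higher yields"),
]

for chain, n in Counter(coded_chains).most_common():
    print(f"{n}x  {chain.delivery_mechanism} > {chain.mediator} > {chain.outcome}")
```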

In a recent review of M4P market development programs, I employed causation coding to capture causally relevant information in 13 existing evaluations and to develop hypotheses about how and why these programs generate positive outcomes. These hypotheses then informed the evaluation of a similar market development program.

Lessons Learned:

(1) Pay careful attention to the sometimes conflated distinction between empirically supported and hypothetically predicted causal chains. The latter express how the author(s) intended the program to work. In many evaluation studies, eagerness to predict the success of the intervention contributes to the inclusion of these hypothetical scenarios in results sections. Attention should go to the empirically supported causal chains.

(2) Causal chains are rarely summarized in a three-part sequence from cause(s) to mechanism(s) to outcome(s). As such, causation coding often involves a high degree of sensitivity to words such as “because”, “in effect”, “therefore” and “since” that might indicate an underlying causal logic (ibid).
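A simple keyword pass can also surface sentences worth a closer read before manual coding. Here is a minimal sketch assuming a hypothetical connective list and report excerpt:

```python
# Minimal sketch: flagging sentences that contain causal connectives as a
# first pass before manual causation coding. The connective list and the
# excerpt are illustrative assumptions, not material from the review above.

import re

CAUSAL_CONNECTIVES = ["because", "in effect", "therefore", "since",
                      "as a result", "led to"]

def flag_causal_sentences(text):
    """Return sentences that contain at least one causal connective."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, CAUSAL_CONNECTIVES)) + r")\b",
                         re.IGNORECASE)
    return [s for s in sentences if pattern.search(s)]

report_excerpt = (
    "Sales increased across both districts. "
    "Because input suppliers expanded their networks, farmers gained access to improved seed. "
    "The training component was delivered on schedule."
)

for sentence in flag_causal_sentences(report_excerpt):
    print("REVIEW:", sentence)
```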

Rad Resource: The Coding Manual for Qualitative Researchers (second edition) by Saldaña.

We’re celebrating 2-for-1 Week here at aea365. With tremendous interest in the blog lately, we’ve had many authors eager to share their evaluation wisdom, so for one special week, readers will be treated to two blog posts per day! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings! I’m Galen Ellis, President of Ellis Planning Associates Inc., which has long specialized in participatory planning and evaluation services. In online meeting spaces, we’ve learned to facilitate group participation that – in the right circumstances – can be even more meaningful than in person. But we had to adapt.

Although I knew deep inside that our clients would benefit from online options, I couldn’t yet imagine creating the magic of a well-designed group process in the virtual environment. Indeed, we stepped carefully through various minefields before reaching gold.

As one pioneer observes,

Just because you’re adept at facilitating face-to-face meetings, don’t assume your skills are easily transportable. The absence of visual cues and the inability to discern the relative level of engagement makes leading great virtual meetings infinitely more complex and challenging. Assume that much of what you know about leading great meetings is actually quite irrelevant, and look for ways to learn and practice needed skills (see Settle-Murphy below).

We can now engage groups online in facilitation best practices such as ToP methods and Appreciative Inquiry, and in group engagement processes such as logic model development, focus groups, consensus building, and other collaborative planning and evaluation methods (see our video demonstration).

Lessons Learned:

  • Everyone participates. Skillfully designed and executed virtual engagement methods can be more effective in engaging the full group than in-person ones. Some may actually prefer this mode: one client noted that a virtual meeting drew out participants who had been typically silent in face-to-face meetings.
  • Software platforms come with their own sets of strengths and weaknesses. The simpler ones often lack interactive tools, while those that allow interaction tend to be more costly and complex.
  • Tame the technical gremlins. Participants without suitable levels of internet speed, technological experience, or hardware—such as microphoned headsets—will require additional preparation. Meeting hosts need to know ahead of time what sorts of devices and internet access participants will be using. Participants should always be invited into the meeting space early for technical troubleshooting.
  • Don’t host it alone. One host can produce the meeting (manage layouts, video, etc.) while another facilitates.
  • Plan and script it. Virtual meetings require a far more detailed script than a simple agenda. Indicate who will do and say what, and when.
  • Practice, practice, practice. Run through successive drafts of the script with the producing team.

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Myia Welsh, an independent consultant working with nonprofit and community organizations. Much of my work is done with organizations that provide services to survivors of human trafficking. What’s that, you ask? Trafficking is any enterprise where someone makes a profit from the exploitation of another by force, fraud or coercion. Just like the sale of drugs or weapons, the sale of humans occurs both in the U.S. and around the world. Find out more about human trafficking here.

Lesson Learned: Conducting evaluation with these organizations has required me to learn my way around engaging trauma survivors in evaluation – especially in focus groups. Focus groups with trauma survivors can be challenging if you don’t know what to expect. They require slightly different planning and facilitation skills. I recommend the following preparations:

  • Understand what you’re dealing with. Do some reading on trauma, so that you know how to recognize dynamics in the room.
  • Review your protocol for trigger questions. Stick with what’s essential to the evaluation.
  • Consult knowledgeable stakeholders to help you anticipate potential harm, and brainstorm with them about how to avoid it.
  • Be prepared for an emotional response, and have a plan to handle it with respect and support. An abrupt or uncomfortable response from the facilitator could silence participants. So, check your reactions. Have tissues ready in case of tears and tactile toys/objects around to help manage anxiety.
  • Make safety a factor in your planning: Where will this group feel safe? Physical space and location should be taken into consideration. Will bringing additional note takers or co-facilitators into the situation enhance or threaten perceived safety?
  • Check your facilitation practices. In most focus groups, a zoned-out participant would be prompted to participate. With a group of trauma survivors, this might be a signal that the reflection brought on by the discussion is getting overwhelming. Have a plan ready so that you can recognize it and continue on without disruption. Consider a non-verbal cue that you can set up in the beginning, a colored index card for instance. A participant can set their card on the table as a signal that this is getting tough. Make sure everyone knows that they can step away if they need to.
  • What’s your wrap-up plan? Have a strategy ready for ending in a positive way, soothing the emotions that may have emerged. Guide discussion to future hopes or recent accomplishments.

Lesson Learned: Even if it might be emotional or messy, service recipients are key stakeholders whose voices cannot be left out of an evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Lisa Melchior, President of The Measurement Group LLC, a consulting firm focused on the evaluation of health and social services for at-risk and vulnerable populations. In response to Sheila B. Robinson’s recent post that reported what AEA 365 readers said they want to see in 2015, I’m writing about developing, sharing, and storing lessons learned from evaluation. Although this is written from the perspective of evaluation at the initiative level, it could also apply to lessons learned by an individual program.

The United Nations Environment Programme gives a useful definition of lessons learned as “knowledge or understanding gained from experience.” In a grant initiative, lessons learned might address ways to implement the projects supported through that initiative; strategies for overcoming implementation problems; best practices for conducting services (whether or not the projects employed all of them); strategies for involving key stakeholders to optimize the outcomes of the projects and their sustainability; and ideas for future directions. Statements of lessons learned are an important outcome of any grants initiative; the richness and complexity of those statements can be, in part, an indicator of the overall success of the initiative. Funders often utilize the lessons learned by their grantees to inform the development of future investments.

Hot Tips:

Developing lessons learned. If possible, work with the funder to collect examples of lessons learned using the funder’s progress reporting mechanism. When the evaluator has access to such reports, qualitative approaches can be used to catalog and identify themes among the lessons learned. Another benefit of integrating the documentation of lessons learned into ongoing programmatic reporting is that trends over the life of a project or initiative can emerge, since many initiatives request this type of information from grantees on a semi-annual or quarterly basis. Active collaboration between funder and evaluator is key to this approach.

Sharing lessons learned. Don’t wait until the end of a project to share lessons learned! Stakeholders can benefit from lessons learned in early implementation. For example, my colleagues and I highlighted interim outcomes and lessons learned during the first three years of the Archstone Foundation’s five-year Elder Abuse and Neglect Initiative in an article in the Journal of Elder Abuse and Neglect.

In a more summative mode, toolkits are a useful vehicle for sharing lessons learned with those interested in possible replication of a particular program, model, or initiative. Social media and blogs are great for more informal sharing.

Storing lessons learned. Qualitative data tools such as NVivo are invaluable for organizing lessons learned.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

