AEA365 | A Tip-a-Day by and for Evaluators


Hello! We are Maureen Hawes from the University of Minnesota’s Systems Improvement Group, Arlene Russell, independent consultant, and Jason Altman from the TerraLuna Collaborative. We are writing to share our experience with fuzzy set Qualitative Comparative Analysis (QCA).

You may have faced questions similar to the ones we grappled with as evaluators using quantitative analysis as part of a mixed methods approach. We wondered:

  1. Is there a method better at addressing nuance and complexity than more traditional approaches?
  2. Can quantitative efforts uncover the causes of future effects for developmental and formative work, or can they only demonstrate impacts, that is, the effects of past causes?
  3. Does regressing cases to means misalign with our values and our efforts to elevate the voices of those who are often not heard?
  4. Should we be removing outlier cases before analysis? Note: see Bob Williams’ argument that we should approach “outlying data with the possibility of it being there for a reason” rather than assuming it appears by chance.

In supporting our partner, we set out from the beginning knowing that each of our cases (school buildings) was a complex system. Two major considerations were particular sticking points for us:

  1. Equifinality: We expected that there would be more than one pathway to implementation.
  2. Conjuncturality: We expected that variables would exert their influence in combination rather than in isolation.

 

Hot Tip: Our solution was QCA, a method based on set theory and logic rather than statistics. QCA is a case-oriented method that allows systematic, scientific comparison of any number of cases as configurations of attributes and set membership. We loved that QCA helped answer the question “What works best, why, and under what circumstances?” using replicable empirical analysis.

QCA comes in two varieties: crisp-set QCA, in which conditions are judged to be simply present or absent, and the more recent fuzzy set QCA (fsQCA). fsQCA allows for sets in which elements are not limited to being members or non-members but can hold different degrees of membership.

Lessons Learned: Our fsQCA analysis of a medium-sized sample of 21 buildings (in 6 districts) uncovered a message our partners could act on. Among other findings, the analysis identified a pathway to positive program outcomes that relied on ALL 3 of the following factors being in place:

  1. Project engagement
  2. Leadership/ infrastructure
  3. Data collection/ use
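To make the set-theoretic logic behind such a three-condition pathway concrete, here is a minimal sketch in Python of how fsQCA assesses whether a conjunction of conditions is sufficient for an outcome. The membership scores, building names, and condition labels below are invented for illustration and are not our data; in practice you would use dedicated tools such as Ragin’s fs/QCA software or the R QCA package.

```python
# Minimal sketch of fuzzy-set consistency for a conjunctural pathway.
# All membership scores below are illustrative, not the authors' data.

# Fuzzy membership scores (0 = fully out, 1 = fully in) for five hypothetical buildings.
cases = {
    "Building A": {"engagement": 0.9, "leadership": 0.8, "data_use": 0.7, "outcome": 0.8},
    "Building B": {"engagement": 0.6, "leadership": 0.9, "data_use": 0.8, "outcome": 0.7},
    "Building C": {"engagement": 0.3, "leadership": 0.4, "data_use": 0.9, "outcome": 0.2},
    "Building D": {"engagement": 0.8, "leadership": 0.7, "data_use": 0.6, "outcome": 0.9},
    "Building E": {"engagement": 0.2, "leadership": 0.3, "data_use": 0.1, "outcome": 0.3},
}

def conjunction(case, conditions):
    """Membership in the combined condition = minimum across conditions (fuzzy AND)."""
    return min(case[c] for c in conditions)

def consistency(cases, conditions, outcome="outcome"):
    """Sufficiency consistency: sum of min(X, Y) divided by sum of X."""
    num = sum(min(conjunction(c, conditions), c[outcome]) for c in cases.values())
    den = sum(conjunction(c, conditions) for c in cases.values())
    return num / den

pathway = ["engagement", "leadership", "data_use"]
print(f"Consistency of {' AND '.join(pathway)} -> outcome: "
      f"{consistency(cases, pathway):.2f}")
```

A consistency score close to 1 means that buildings with high membership in the combined condition also tend to have high membership in the outcome set, which is the fuzzy-set analogue of “whenever all three factors are in place, the outcome follows.”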

Worth considering: The number of QCA applications has grown during the past few years, though the method is still used relatively rarely in evaluation. Since Charles Ragin introduced it in 1987, QCA has been modified, extended, and improved, making it increasingly applicable to evaluation settings.

Rad Resources:

  1. We have a longer read (complete with references) available.
  2. Charles Ragin’s website houses information that he finds pertinent to the technique, along with tools he has developed for completing the analysis.
  3. Compass hosts a bibliographical database where users can sort through previous applications of fsQCA.

 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Clara Pelfrey, Translational Research Evaluation TIG past chair and evaluator for the Clinical and Translational Science Collaborative at Case Western Reserve University. I’m joined by graphic recorder Johnine Byrne, owner of See Your Words, and Darcy Freedman, Associate Director of the Prevention Research Center for Healthy Neighborhoods (PRCHN). We’d like to extend our previous AEA365 post on graphic recording and show how it can be used to create a shared vision between researchers doing community-engaged research and community members.

Graphic recording (GR) is the visual capture of people’s ideas and expressions. The GR shown below was created at an annual retreat of the PRCHN’s community advisors. It visually captured the community’s ideas around the major areas of work done by the center, helping to identify priority areas for future work and opportunities for collaboration. The PRCHN used the GR to show what role its partners play, the questions they have, what the bottlenecks are, and any risks or unintended consequences to attend to.

[Image: graphic recording created at the PRCHN community advisors’ retreat]

Hot Tip:

Evaluation uses of graphic recording in community-based research and community engagement:

  • Provide qualitative analysis themes. GR acts as a visual focus group report, providing opportunities to interact with your study findings.
  • GR can show system complexity. A non-profit organization working on youth justice commissioned a systems model GR so that all the service providers for youth experiencing homelessness could: 1) see where they fit into the wider system; 2) identify gaps and redundancies; 3) identify feedback loops; 4) find reinforcements.
  • Focus group participants may be reluctant to speak up in a group. Seeing images on the GR encourages participants to speak.
  • GR allows everyone to share their ideas in real-time. This immediacy creates energy and fosters more discussion.
  • Get right to the heart of the matter. Concepts on the GR become objects and lose their attribution to a person, fostering conversation that is more open and honest. This is especially useful when discussing sensitive issues (e.g. racism).
  • Compare changes over time. In the community setting, GR allows for an evolving group of people to honor the engagement of prior groups and provides a benchmark for the future.
  • Hear all perspectives. The graphic recorder mirrors the ideas in the room, capturing the full range of opinions, including divergent or outsider perspectives.
  • GR helps the late arrivals catch up on what transpired at the meeting while helping everyone review.

Lessons Learned:

  • Get a good facilitator! An experienced facilitator manages room dynamics. The graphic recorder is the “silent partner.”
  • Schedule time to review and discuss the GR at the end. This helps uncover possible opportunities by asking: “What haven’t we talked about?”
  • Display last year’s GR for comparison and encourage everyone to compare and ask the question: “Have we made progress?”
  • GR requires a democratic belief in participatory approaches, empowering multiple perspectives and not just the leaders’ ideas.
  • PowerPoint slides and GR do not mix. GR best captures the dialog, not the slide content.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



Hi, we are Dr. Anupama Shekar, Director of Evaluation and Dr. Matt Pierson, Program Officer, Alumni Network at Teaching Trust, an education leadership nonprofit in Dallas, Texas. Teaching Trust offers high-quality training and support to future school leaders, school leadership teams, and teacher leaders to ensure children in low-income schools across North Texas have access to an excellent education.


The goal of our small qualitative project was to better understand how our programming has had an impact on participants and, specifically, how our Teaching Trust Network’s programming can be strengthened to benefit more kids in the Dallas-Fort Worth area and beyond. We presented this as an AEA Coffee Break webinar on the “power of collaborative coding in qualitative analyses” and want to share some key learnings that could help drive your work with colleagues around qualitative data and analyses.

Lessons Learned:

Be an explorer. Conducting rigorous qualitative analyses is hard, but remember that in some ways you are simply an explorer trying to unpack the meaning behind each word, sentence, and paragraph. It is critical to ask “why” at every stage of qualitative work: while gathering data, transcribing, writing memos, coding, re-coding, identifying patterns and themes, and drawing conclusions. We learned that the struggle with ideas is essential, as it gets you closer to making meaning from the complexity and to identifying areas for further exploration.

We also learned that conducting qualitative work is a journey. The value in qualitative work is that it lets you see the story beneath those numbers and continue to ask “why” to help drive decisions.

Embrace collaboration. The qualitative journey can sometimes be an isolating experience, but we learned that the heart of this work is in the collaboration. In our project, three of us took notes during data gathering, coded, wrote memos, and made meaning in an ongoing way. The collaboration not only added validity to our findings but also helped us wrestle with ideas at a deeper level throughout the analysis process.

Qualitative coding approaches. In our project, Anu Shekar used QSR’s NVivo to code while Matt Pierson and our Teaching Trust Alumni Network’s Program Coordinator Haley Pittman used pen and paper to code. NVivo allows researchers and evaluators to connect and collapse codes across several pieces of qualitative data and see patterns through models at a much faster pace than pen and paper. Michael Quinn Patton’s book on Qualitative Research & Evaluation Methods has some practical tips on how to approach coding manually.

However, the value of also using pen and paper to code, write memos, draw visualizations, and triangulate NVivo-generated findings is powerful and adds a layer of rigor. Using multiple approaches to code is especially valuable if some colleagues do not know how to use the NVivo software but would like to engage in qualitative analyses. NVivo is a paid product, and you can learn more about it here: http://www.qsrinternational.com/nvivo/nvivo-products. Several published books are also available that you can use to learn about qualitative data analysis with NVivo.
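If you combine software-based and manual coding, a lightweight way to triangulate the two is to tally code frequencies per coder and compare the tallies in a team debrief. The sketch below (in Python) uses invented codes and a made-up record structure purely for illustration; real input might come from an NVivo export and transcribed pen-and-paper coding sheets.

```python
# Hypothetical sketch for comparing code frequencies across coders.
# The excerpt records below are invented for illustration.
from collections import Counter

coded_excerpts = [
    {"coder": "NVivo",  "code": "leadership support"},
    {"coder": "NVivo",  "code": "peer network"},
    {"coder": "NVivo",  "code": "leadership support"},
    {"coder": "manual", "code": "leadership support"},
    {"coder": "manual", "code": "time constraints"},
]

# Tally codes per coder so the team can discuss where the two approaches diverge.
by_coder = {}
for record in coded_excerpts:
    by_coder.setdefault(record["coder"], Counter())[record["code"]] += 1

for coder, counts in by_coder.items():
    print(coder)
    for code, n in counts.most_common():
        print(f"  {code}: {n}")
```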

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings! We are Laura Sefton from the University of Massachusetts Medical School’s Center for Health Policy and Research and Linda Cabral from Care Transformation Collaborative-Rhode Island. When choosing qualitative interviews as the data collection method for your evaluation project, developing an appropriate interview guide is key to gathering the information you need. The interviews should aim to collect data that informs your evaluation aims and avoids collecting superfluous information. From our experience in developing interview guides over the last 10 years, we have the following insights to offer:

Hot Tips:

Wording is key.  Questions should be straightforward and gather insights from your respondents. Your goal should be to develop questions that are non-judgmental and facilitate conversation. Word your questions in ways that elicit more than yes/no responses. Avoid questions that ask “why,” as they may put your respondent on the defensive. Adjust your wording according to the intended respondent; what works for a program CEO may not work for a client of the same program.

Begin with a warm-up and end with closure.  The first question should be one that your respondent can answer easily (e.g., “Tell me about your job responsibilities.”). This initial rapport-building can put you and the respondent at ease with one another and make the rest of the interview flow more smoothly. To provide closure to the interview, we often ask respondents for any final thoughts they want to share with us. This provides them with an opportunity to give us information we may not have asked about but that they felt was important to share.

Probe for more detail.  Probes, or prompts, are handy when you are not getting the information you had hoped for or you want to be sure to get as complete information as possible on certain questions. A list of probes for key questions can help you elicit more detailed and elaborate responses (e.g., “Can you tell me more about that?” “What makes you feel that way?”).

Consider how much time you have.  Once you have your set of key questions, revisit them to see if you can pare them down into fewer questions. We found that we can generally get through approximately ten in-depth questions and any necessary probes in a one-hour interview. Be prepared to ask only your key questions; your actual interview time may be less than planned, or some questions may take longer to get through.

Lessons Learned:

It’s ok to revise the interview guide after starting data collection.  After completing your first few interviews, you may find that certain questions didn’t give you the information you wanted, were difficult for your respondents to understand or answer, or didn’t flow well. Build in time to debrief with your data collection team (and your client, if appropriate) on your early interviews and make adjustments to the guide as necessary.

Rad Resource: As with many topics related to qualitative research, Michael Quinn Patton’s Qualitative Research & Evaluation Methods serves as a useful resource for developing interview guides.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, we are Greg Lestikow, CEO and Fatima Frank, Project Manager of evalû, a small consulting firm that focuses exclusively on rigorous evaluations of social and economic development initiatives.  We champion impact evaluation that maintains academic rigor but is based entirely on our clients’ need to improve strategic and operational effectiveness and increase profitability.

In a recent project, we were tasked with designing a qualitative instrument to complement quantitative data around the sensitive topic of gender-based violence.

Rad Resource: We approached this challenge by designing a focus group discussion (FGD) protocol informed by an article on the “Participatory Ranking Method” (PRM), in which participants rank potential indicators from most to least important. PRM acknowledges project beneficiaries as experts and recognizes the local community as capable of identifying and measuring their progress towards positive change. As such, PRM incorporates local perspectives in the construction of research instruments. By using PRM, we were able to select indicators that are meaningful to the project’s local beneficiaries (in our case adolescent girls affected by violence) and reflective of the concepts they find useful when tracking their own progress. PRM is an ideal evaluation methodology for measuring awareness of sensitive topics and tracking outcomes over time, particularly for projects that may not see any kind of impact in the short or medium term.

Hot Tips:

  • Start with a participatory activity to gauge local perspectives and to understand which social practices are considered more or less acceptable in the community. In our case, we asked participants what gender-based violence meant to them.
  • To facilitate ranking, show a series of cards labeled with different kinds of social practices (in our case: Shout, Insult, Threaten, Push, Hit, Beat, Kill) and have participants order them from the most to the least acceptable, asking them to explain their decisions. Alternatively, participants can free-list social practices that are common in their communities and then rank-order them. A simple way to summarize the resulting rankings across groups is sketched after this list.
  • Include an open-ended discussion to understand which social practices are acceptable in different relational and social contexts.
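Here is the sketch mentioned above: a minimal way to summarize PRM card orderings across groups using a mean rank per practice. The orderings shown are invented for illustration, and PRM itself does not prescribe a particular summary statistic, so treat this only as one convenient option.

```python
# Hypothetical sketch: summarizing Participatory Ranking Method card orderings.
# Each list orders practices from most (first) to least (last) acceptable,
# as judged by one focus group; the data here are invented for illustration.
rankings = [
    ["Shout", "Insult", "Threaten", "Push", "Hit", "Beat", "Kill"],
    ["Insult", "Shout", "Threaten", "Hit", "Push", "Beat", "Kill"],
    ["Shout", "Threaten", "Insult", "Push", "Beat", "Hit", "Kill"],
]

# Mean rank per practice (1 = most acceptable); lower means judged more acceptable.
practices = rankings[0]
mean_rank = {
    p: sum(order.index(p) + 1 for order in rankings) / len(rankings)
    for p in practices
}

for practice, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{practice}: mean rank {rank:.1f}")
```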

Lessons Learned:

  • Make sure the moderator and note-taker are gender-appropriate for the group.
  • If you want to obtain a broad range of perspectives but anticipate potential problems with mixing certain community members in the same FGD, create a few FGD groups and separate participants.
  • Ask local evaluation or project teams about any other cultural practices to consider before an FGD. For example, in Sierra Leone we started each FGD with a prayer, as this is a standard practice when people meet.

Please share your stories on challenges, solutions, and experiences in dealing with sensitive topics by leaving a comment here or contacting us.

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I’m Marti Frank, a researcher and evaluator based in Portland, Oregon. Over the last three years I’ve worked in the energy efficiency and social justice worlds, and it’s given me the opportunity to see how much these fields have to teach one another.

For evaluators working with environmental programs – and energy efficiency in particular – I’ve learned two lessons that will help us do a better job documenting the impacts of environmental programs.

Lessons Learned:

1) A program designed to address an environmental goal – for example, reducing energy use or cleaning up pollution – will almost always have other, more far-reaching impacts. As evaluators, we need to be open to these in order to capture the full range of the program’s benefits.

Example: A weatherization workshop run by Portland non-profit Community Energy Project (where I am on the Board) teaches people how to make simple, inexpensive changes to their homes to reduce drafts and air leaks. While the program’s goal is to reduce energy use, participants report many other benefits: more disposable income, reduced need for public assistance, feeling less worried about paying bills, and having more time to spend with family.

2) Not all people will be equally impacted by an environmental program, or even impacted in the same way. Further, there may be systematic differences in how, and how much, people are impacted.

Example #1: Energy efficiency programs assign a single value for energy savings, even though the same quantity of savings will mean very different things to different households, depending in large part on their energy burden  (or the percent of their income they spend on energy).

Example #2: A California energy efficiency program provided rebates on efficient household appliances, like refrigerators. Although the rebates were available to everyone, the households who redeemed them (and thus benefited from the program) were disproportionately wealthy and college-educated, relative to all Californians.

Rad Resources:

I’ve found three evaluation approaches to be helpful in identifying unintended impacts of environmental programs.

Outcome harvesting. This evaluation practice encourages us to look for all program outcomes, not just those that were intended. Ricardo Wilson-Grau, who developed it, hosts this site with materials to get you started.

Intersectionality. This conceptual approach originated in feminist theory and reminds us to think about how differing clusters of demographic characteristics influence how we experience the world and perceive benefits of social programs.

Open-ended qualitative interviews. It’s hard to imagine unearthing unexpected outcomes using closed-ended questions. I always enjoy what I learn from asking open-ended questions, giving people plenty of time to respond, and even staying quiet a little too long. And, I’ve yet to find an interviewee who doesn’t come up with another interesting point when asked, “Anything else?”

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Good day, I’m Bernadette Wright, program evaluator with Meaningful Evidence, LLC. Conducting interviews as part of a program evaluation is a great way to better understand the specific situation from stakeholders’ perspectives. Online, interactive maps are a useful technique for presenting findings from that qualitative data to inform action for organization leaders who are working to improve and sustain their programs.

Rad Resource: KUMU is free to use to create public maps. A paid plan is required to create private projects (visible only to you and your team).

Here are the basic steps for using KUMU to integrate and visualize findings from stakeholder conversations.

1) Identify concepts and causal relationships from interviews.

Using the transcripts, you focus on the causal relationships. In our example, we see that “housing services helps people to move from homelessness to housing.”

2) Diagram concepts and causal relationships, to form a map.

Next, diagram the causal relationships you identified in step one. Each specific thing that is important becomes a “bubble” on the map. We might also call them “concepts,” “elements,” “nodes,” or “variables.”

Lessons Learned:

  • Make each concept (bubble) a noun.
  • Keep names of bubbles short.
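If you find it easier to draft the map data outside the editor first, one approach is to turn your coded causal relationships into simple element and connection tables and then bring them into KUMU via its import feature. The sketch below is a hypothetical Python example: the concepts, counts, and file names are invented, and you should check the KUMU docs for the exact column names its importer expects.

```python
# Hypothetical sketch: turning coded causal relationships into element and
# connection tables that can be adapted for import into a mapping tool like KUMU.
# Concepts, counts, and file names are invented for illustration.
import csv

# Each tuple: (cause concept, effect concept, number of interviews mentioning it)
causal_links = [
    ("Housing services", "Stable housing", 7),
    ("Stable housing", "People transitioned out of homelessness", 5),
    ("Case management", "Housing services", 4),
]

elements = sorted({concept for link in causal_links for concept in link[:2]})

# Count mentions per concept so the map can later size bubbles by frequency.
mentions = {e: 0 for e in elements}
for cause, effect, n in causal_links:
    mentions[cause] += n
    mentions[effect] += n

with open("elements.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Label", "Mentions"])
    for e in elements:
        writer.writerow([e, mentions[e]])

with open("connections.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["From", "To", "Interviews"])
    for cause, effect, n in causal_links:
        writer.writerow([cause, effect, n])
```

Keeping a count of how many interviews mention each concept also gives you a ready-made field to drive the size decoration described in step 4 below.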

 

3) Add details in the descriptions for each bubble and arrow.

When you open your map in KUMU, you can click any bubble or arrow to see the item’s “description” on the left. Edit the description to add details such as example quotes.

4) Apply “Decorations” to highlight key information.

You can add “decorations” to bubbles (elements) and arrows (connections) using the editor to the right of your map. In our example map, bigger bubbles show concepts that people mentioned in more interviews.

Also, green bubbles show project goals, such as the goal “People transitioned out of homelessness.”

Cool Tricks:

  • Create “Views” to focus on what’s most relevant to each stakeholder group. To make a large map manageable, create and save different “views” to focus on sections of the map, such as views by population served, views by organization, or views by sub-topic.
  • Create “Presentations” to walk users through your map. Use KUMU’s presentation feature to create a presentation to share key insights from your map with broad audiences.

Rad Resources:

  • KUMU docs. While KUMU takes time and practice to master, KUMU’s online doc pages contain a wealth of information to get you started.
  • Example maps. Scroll down the KUMU Community Page for links to the 20 most visited projects to get inspiration for formatting your map.
  • KUMU videos. Gene Bellinger has created a series of videos about using KUMU, available on YouTube.

Organizations we work with have found these map presentations helpful for understanding the situation and planning collaborative action. We hope they are useful for your evaluation projects!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Dr. Moya Alfonso, MSPH, and I’m an Associate Professor at Georgia Southern University and University Sector Representative and Board Member for the Southeast Evaluation Association (SEA).

So you want to be an evaluator, but you’re unfamiliar with how to moderate focus group discussions – a key qualitative approach used in formative, process, and summative evaluations. Plus, there are limited to no focus-group-specific courses in your program of study. Do not lose hope. All it takes is some creative thinking.

Focus group discussions are a qualitative research method that involves a focused set of questions that are asked of six to 10 focus group participants. The keyword in this definition is focused – discussions revolve around a specific topic.

Lesson Learned: Focus groups are done when you are interested in group dynamics, participant language, stories and experiences, and a breadth of information. Focus groups are wonderful; however, they are designed for a very specific purpose and have limitations that should be considered (e.g., difficulty with recruitment, brief stories or snippets of information, etc.).

Hot Tips: These resources will help you learn about focus groups and how to moderate discussion:

  1. Find a mentor: Most of my training and expertise in focus group research was gained through hands-on experience. I worked with experienced qualitative researchers who enabled me to co-facilitate, then conduct focus groups, and later train others. Many evaluators are open to mentoring those starting out in the field. Technology can facilitate your mentor search by providing opportunities for remote relationships. Try searching university expertise databases or the American Evaluation Association’s evaluator database for potential mentors.
  2. Read everything you can about focus group research: One key resource is Krueger’s Focus Group Toolkit. Although a new copy of the toolkit may stretch your budget, used copies are available. Start with Krueger’s free resource on focus group research. The toolkit takes you through everything from recruitment, participatory approaches, and question development to data analysis and report writing. It’s a worthy investment.
  3. Look for other virtual resources: A terrific resource for focus group research is the Community Toolbox, which provides access to numerous focus group resources.
  4. Attend (many) conferences: Reconsider spending your student loan check on a vacation and head to a conference! You can do both; for example, the annual University of South Florida’s Social Marketing Conference is held at a lovely beach resort. This conference historically provides a course in focus group research.

Conducting focus group research takes practice, practice, and more practice. Good luck on becoming a well-trained focus group moderator!

The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Lynne Franco, Vice President for Technical Assistance and Evaluation at EnCompass LLC, and Jonathan Jones, Senior Monitoring and Evaluation Technical Advisor with CAMRIS International. Jonathan is also co-chair of AEA’s International and Cross-Cultural TIG.

Focus groups are an important tool in the data collection toolbox, allowing the evaluator to explore people’s thinking on a particular topic in some depth. The very interaction among participants during a focus group can generate rich discussion as they respond, positively and negatively, to each other’s ideas. During our evaluation careers, we have conducted numerous focus groups all over the world. We have learned that ‘supercharging’ focus groups with creative adult facilitation techniques can generate especially rich and meaningful data in group settings of anywhere from 5 to 50 people.

Hot Tip: Ensure that participants can use more than their ears to retain what others are saying. Use a large sticky wall and index cards (or flip chart paper and big post its). Have participants write ideas on cards and then present them to the group. This is a great way to have all participants’ ideas up in front of the group – enabling group reflection and processing in real time.


Hot Tip: Help introverts to participate. Asking participants to provide their input through writing gives introverts (and everyone) time to put their thoughts together before speaking about them.

Hot Tip: Give participants an environment that enhances creativity. Make the room colorful! Research shows that color encourages creative thinking. We often scatter pipe cleaners on the table. It is amazing what participants create during the focus group! We also use scented markers — this always generates many laughs while creating a relaxing and creative atmosphere.


Rad Resource: We have found Brain Writing, a variation on brainstorming, to be an excellent focus group facilitation technique. It enables simultaneous group thinking and processing that is also focused and individualistic – and can appeal to both the introvert and the extrovert.

Rad Resource: Check out the forthcoming AEA New Directions for Evaluation issue on Evaluation and Facilitation.

Rad Resource: Join our session at Eval 2015.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Lynne and Jonathan? They’ll be presenting as part of the Evaluation 2015 Conference Program, November 9-14 in Chicago, Illinois.

My name is Sebastian. Before pursuing my PhD at UCLA, I served as a senior evaluation consultant at Ramboll Management – a Copenhagen-based consulting firm. My current interests revolve around research syntheses and causal modeling techniques.

A common practice in evaluation is to examine the existing body of evidence on the type of intervention to be evaluated. The most well-established approach is perhaps the generic literature review, often provided as a setting-the-scene segment in evaluation reports. The purpose of today’s tip is to push for a more interpretive approach when coding findings from existing evaluations.

The approach – called causation coding – is grounded in qualitative data analysis. In the words of Saldaña (2013), causation coding is appropriate for discerning “motives (by or toward something or someone), belief systems, worldviews, processes, recent histories, interrelationships, and the complexity of influences and affects on human actions and phenomena” (p. 165).

In its practical application, causation coding aims to map out causal chains (CODE1 > CODE2 > CODE3), corresponding to a delivery mechanism, an outcome, and a mediator linking the delivery mechanism and outcome (ibid). These types of causal triplets are often made available in evaluation reports, as authors explain how and why the evaluated intervention generated change.

In a recent review of M4P market development programs, I employed causation coding to capture causally relevant information in 13 existing evaluations and to develop hypotheses about how and why these programs generate positive outcomes. The latter informed the evaluation of a similar market development program.

Lessons Learned:

(1) It is important to pay careful attention to the at-times-conflated distinction between empirically supported and hypothetically predicted causal chains. The latter express how the author(s) intended the program to work. In many evaluation studies, the eagerness to predict the success of the intervention contributes to the inclusion of these hypothetical scenarios in results sections. Attention should be paid to the empirically supported causal chains.

(2) Causal chains are rarely summarized in a three-part sequence from cause(s) to mechanism(s) to outcome(s). As such, causation coding often involves a high degree of sensitivity to words such as “because”, “in effect”, “therefore” and “since” that might indicate an underlying causal logic (ibid).
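One way to operationalize that sensitivity, before any interpretive work, is a first-pass scan that flags sentences containing causal connectives as candidates for causation coding. The Python snippet below is a rough sketch of such a filter; the example sentences and connective list are my own illustrations, and the actual coding judgment of course remains qualitative.

```python
# Rough sketch: flag sentences containing causal connectives as candidates for
# causation coding. The example sentences are invented; real input would be
# passages from the evaluation reports under review.
import re

CONNECTIVES = ["because", "in effect", "therefore", "since", "as a result", "led to"]
pattern = re.compile(r"\b(" + "|".join(re.escape(c) for c in CONNECTIVES) + r")\b",
                     re.IGNORECASE)

report_text = (
    "Farmers adopted the new seed variety because input prices fell. "
    "The program was well attended. "
    "As a result, yields improved and household income rose."
)

# Naive sentence split; a proper pipeline would use a tokenizer.
for sentence in re.split(r"(?<=[.!?])\s+", report_text):
    match = pattern.search(sentence)
    if match:
        print(f"[candidate via '{match.group(1)}'] {sentence}")
```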

Rad Resource: The Coding Manual for Qualitative Researchers (second edition) by Saldaña.

We’re celebrating 2-for-1 Week here at aea365. With tremendous interest in the blog lately, we’ve had many authors eager to share their evaluation wisdom, so for one special week, readers will be treated to two blog posts per day! Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 
