AEA365 | A Tip-a-Day by and for Evaluators

I'm Erika Steele, a Health Professions Education Evaluation Research Fellow at the Veterans Affairs (VA) National Center for Patient Safety (NCPS). For the past few years I have been collaborating with Research Associates at the Center for Program Design and Evaluation at the Dartmouth Institute for Health Policy and Clinical Practice (CDPE) to evaluate the VA's Chief Resident in Quality and Safety (CRQS) program. The CRQS is a one-year post-residency experience to develop skills in leadership and in teaching quality improvement (QI) and patient safety (PS). Since 2014, Research Associates at CDPE have conducted annual evaluations of the CRQS program. In 2015, we began evaluating the CRQS curriculum by developing a reliable tool to assess QI/PS projects led by Chief Residents (CRs).

One of the joys and frustrations of being an education evaluator is designing an assessment tool, testing it, and discovering that your clients apply the tool inconsistently. This post focuses on lessons learned about norming, or calibrating, a rubric for rater consistency while pilot testing the Quality Improvement Project Evaluation Rubric (QIPER) with faculty at NCPS.

Hot Tips:

  1. Develop understanding of the goals of the assessment tool
    Sometimes raters have a hard time separating grading individual learners from assessing how well the program's curriculum prepares them. To help faculty at NCPS view the QIPER as a tool for program evaluation, we pointed out patterns in CRs' scores. Once faculty started to see patterns in scores themselves, the conversations moved away from individual performance on the QIPER and back to evaluating how well the curriculum prepares CRs to lead a QI/PS project.

Once raters understood the goal of using the QIPER, instances of leniency, strictness, and first-impression errors were reduced and rater agreement improved.

  2. Create an environment of respect
    All raters need the opportunity to share their ideas with others for score negotiation and consensus building to occur.  We used the Round Robin Sharing (RRS) technique to allow faculty to discuss their expectations, rationale for scoring, and ways to make reaching consensus easier.  We used the graphic organizer in Figure 1 to guide discussions.

RRS helped faculty develop common goals related to program expectations for leading QI/PS projects, which led to increased rater agreement when scoring projects.

Figure 1: Round Robin Sharing Conversation Guidance

  3. Build strong consensus
    Clear instructions are important for ensuring that raters apply assessment tools consistently. Using the ideas generated during RRS, we engaged the faculty in building a document to operationalize the items on the QIPER and offer guidance in applying the rating scale. The guidance document served as a reference for faculty when rating presentations.

Having a reminder of the agreed upon standards helped raters to apply the QIPER more consistently when scoring presentations.
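For teams that want to see how much a norming session improves consistency, a quick before-and-after check of rater agreement can make the change concrete. The sketch below is a minimal illustration only; the 1-4 scale, the two raters, and all of the scores are hypothetical, not the QIPER pilot data.

```python
# Minimal sketch: quantifying rater agreement before and after a norming
# session. All scores are hypothetical (a 1-4 scale, two raters, eight
# projects); this is not the QIPER pilot data.

def exact_agreement(rater_a, rater_b):
    """Share of projects on which the two raters gave identical scores."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def adjacent_agreement(rater_a, rater_b, tolerance=1):
    """Share of projects scored within `tolerance` points of each other."""
    return sum(abs(a - b) <= tolerance for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Hypothetical scores before norming (raters interpreting items differently)
before_a = [2, 3, 1, 4, 2, 3, 2, 1]
before_b = [4, 2, 3, 4, 1, 2, 4, 3]

# Hypothetical scores after norming against the guidance document
after_a = [3, 3, 2, 4, 2, 3, 2, 2]
after_b = [3, 2, 2, 4, 2, 3, 3, 2]

print(f"Exact agreement:    {exact_agreement(before_a, before_b):.2f} -> "
      f"{exact_agreement(after_a, after_b):.2f}")
print(f"Adjacent agreement: {adjacent_agreement(before_a, before_b):.2f} -> "
      f"{adjacent_agreement(after_a, after_b):.2f}")
```

Tracking even simple agreement figures like these across norming sessions gives the group a shared, low-stakes way to see whether the guidance document is working.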

Rad Resources:

  1. Strategies and Tools for Group Processing.
  2. Trace J, Meier V, Janssen G. “I can see that”: Developing shared rubric category interpretations through score negotiation. Assessing Writing. 2016;30:32-43.
  3. Quick Guide to Norming on Student Work for Program Level Assessment.

The American Evaluation Association is celebrating MVE TIG Week with our colleagues in the Military and Veteran's Issues Topical Interest Group. All of this week's contributions to aea365 come from our MVE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, I am Stephen Axelrad, the chair of the Military and Veteran Evaluation (MVE) Topical Interest Group. One of the reasons I wanted to start this TIG was to help evaluators navigate the complex web of stakeholders in the military community. Many evaluators have little to no experience with the military and its formal structure. However, the military has a long history of valuing systematic evidence to inform decisions about policies and programs. Many uniformed leaders turn to civilian sources to understand innovative, evidence-based methods for addressing national security issues as well as social and organizational problems that affect the military (suicide, sexual harassment, sexual assault, financial literacy, domestic violence, opioid abuse).

Hot Tips:

Civilian evaluators are not expected to know everything about the military to make effective connections. Evaluators just need to apply the same culturally responsive methods they apply to other sub-cultures to military stakeholders. Here are some tips that can set culturally responsive evaluators up for success.

  • The military is not monolithic: the popular press often refers to the military as the "Pentagon" and makes it seem as if there is only one military perspective; the reality is far more varied. The military community is composed of communities that vary by Service branch (Army, Navy, Air Force, Marine Corps, Coast Guard), component (Active Duty, Reserve, National Guard), rank (commissioned officer, non-commissioned officer, enlisted), career field, and other factors.
  • Not all members of the uniformed military are soldiers: another common mistake in the popular press is to refer to all uniformed military members as soldiers, but that term applies only to the Army; the other terms are sailors (Navy), airmen (Air Force), Marines (Marine Corps), and guardsmen (National Guard, Coast Guard). These terms are central to members' identities, so getting the term right will help you build rapport with the uniformed military.
  • Military installations are like mini-cities: the installation commander is like the mayor and there are usually one or two commands that act like the major employer; installations attract workforces with specific skill sets and interests that give each installation a unique culture
  • Leaders are change agents: one of the few consistent qualities across the military system is the value placed on leadership; leadership is frequently defined through rank and other formal authority; however, the military sees leaders at all ranks and leverages peer leaders to create positive social change

Rad Resources:

The following websites were developed to help civilian professionals understand military structure.

Lesson Learned: The best opportunity for evaluators to support data-driven decision making comes within the first 90 days of a senior military leader taking command. During this period, leaders are in a learning mode, want data relevant to the command, and want to understand ways of improving their commands.



Collaborative evaluation principles have been used to bolster projects and gain representative stakeholder input. I’m Julianne Rush-Manchester of the Military and Veterans TIG. I’m an implementation science and evaluation professional working in the Department of Defense. I’ve learned some tips for facilitating stakeholder input in clinical settings that may be more hierarchical (rather than collaborative) in nature.  These tips could be applied in military and non-military settings.

Lessons Learned: 

  • Push for early involvement of stakeholders, with targeted discussions, to execute projects successfully (according to plan).  It is expected that adjustments to the implementation and evaluation plan will occur; however, these should be modest rather than substantive if stakeholders have provided input on timing, metrics, access to data, program dosage, recruitment challenges, and so forth.  This is particularly true in military settings, where bureaucratic structures dictate logistics and access.
  • Plan for unintended effects, along with intended ones, in new contexts for the program. A replicated program may look slightly different as it must accommodate for nuances of the organization (military member participants, contractors, mandatory vs. volunteer programs, program support from senior leadership). Expected outcomes may be variations of intended ones as the program adjusts to its host setting.

Rad Resources:

This article refers to the use of collaborative evaluation principles when systems change is anticipated as a result of implementation (Manchester et al., 2014). The paper may be helpful in strategizing for collaborative evaluations around evidence-based practices in clinical and non-clinical settings, military or otherwise.



I am Annette L. Gardner, a faculty member at the University of California, San Francisco. Developing information-rich case studies can be one of the most rewarding evaluation methods. Not only do they speak to stakeholders on a deep level, but, as described below, they can create a legacy that endures and has the potential to reach a broad base of stakeholders.

In 2012, the Veterans Administration Office of Academic Affiliations launched the Centers of Excellence in Primary Care Education (CoEPCE) to seek and implement improvement strategies for interprofessional, patient-centered clinical education and methods to prepare health professions team leaders. A mixed-methods study was conducted to assess implementation, trainee outcomes, and new approaches to team-based interprofessional care.

I worked closely with the CoEPCE Coordinating Center and the five Centers to develop The Centers of Excellence in Primary Care Education Compendium of Five Case Studies: Lessons for Interprofessional Teamwork in Education and Clinical Learning Environments 2011-2016. The cases describe the contextual and developmental issues behind five unique examples of integrated interprofessional curricula supporting the clinical education workplace. Peer-reviewed by the National Center for Interprofessional Practice and Education, the compendium provides tools and resources to help prepare professionals for interprofessional collaborative practice. These cases include:

  • Boise VA Medical Center and the CoEPCE's "Interprofessional Case Conferences for High Risk/High Need Patients: The PACT ICU Model"
  • Louis Stokes Cleveland VA Medical Center and the CoEPCE’s “Dyad Model”
  • San Francisco VA Health Care System and the CoEPCE’s “Huddling for Higher Performing Teams”
  • VA Puget Sound Health Care System Seattle Division CoEPCE "Panel Management Model"
  • Connecticut VA Health Care System West Haven Campus CoEPCE “Initiative to Minimize Pharmaceutical Risk in Older Veterans (IMPROVE) Polypharmacy Model”

Hot Tips:

So what makes these cases different from other case studies? For starters, they were developed in an environment that values experimental designs and has the sample sizes to support them, so sensitivity to stakeholder perceptions of 'evidence' was critical. A contributing factor to the positive reception of these cases may have been the sharing of the case study initiatives across sites and with VA leadership prior to the development of the compendium; their preparation represents a partnership effort with high Center involvement. Second, there was a strong desire to support adoption in other training settings. The VA took dissemination very seriously and launched an aggressive campaign to distribute the compendium through multiple platforms, including the VA website, the Government Printing Office, LinkedIn, and the Institute for Healthcare Improvement Playbook. Third, VA staff are monitoring the uptake and use of the cases, a rare occurrence in evaluation design, and are soliciting input on impact using an online questionnaire.

Lessons Learned:

Partnerships and a creative approach to dissemination have the potential to keep evaluation findings from being consigned to the ‘dustbin of history’ and facilitate learning beyond the immediate program stakeholders.

Rad Resource:

VA CoEPCE Case Studies Quality Improvement Questionnaire


 


AEA365 Curator note: Back in January, AEA365 readers asked to read about how evaluators deliver negative findings to clients and stakeholders. This week, we feature five articles with four evaluator perspectives on this topic.

Hello! I’m Kylie Hutchinson, independent evaluation consultant and trainer with Community Solutions Planning & Evaluation and author of A Short Primer on Innovative Evaluation Reporting.

The following is Part 2 of practical tips for delivering negative evaluation findings to stakeholders, gleaned from my own experience. (Note that these ideas won’t work in all circumstances; their use depends on the context of a specific evaluation.)

Hot Tips:

  • Use constructive feedback versus criticism. Criticism comes from a place of judgement and is focused on the past, e.g., “The program didn’t meet its target.” (Never mind that evaluation is admittedly about making judgements, we can still be sensitive when presenting bad news.) People can’t change the past, and it doesn’t motivate anyone to move forward. Constructive feedback, on the other hand, is future-focused and comes from a place of caring and respect. Statements such as, “Let’s talk about ways to better meet the program’s target,” are more empowering and position the evaluator as working alongside staff.
  • Alternate between the active and passive voice. Consider using the second person and active voice for positive results, e.g., “You met the program targets,” and if necessary, the passive voice for negative ones, e.g., “The targets were not met.” This may help to soften any blows.
  • Give them a decent sandwich. The sandwich technique is a well-known method for giving feedback – slip a negative finding between two positives. However, ensure the second positive is as substantial as the first and not a lame compliment at the end, otherwise people will still leave discouraged.
  • Be prepared to be wrong. I have regularly had to go back and review my conclusions and recommendations in light of new information provided by stakeholders. Is there additional information about the program or the context in which it operates that might affect the results? This is where additional stakeholder interpretation and an interactive data party come in very useful.
  • Be sensitive. Sometimes I get so caught up in the data analysis and findings that I forget that real people have put a lot of blood, sweat, and tears into their program to get where they are. It's relatively easy to evaluate a program, but a lot harder to work in the non-profit trenches day in and day out for little pay. The incredible daily commitment that non-profit staff demonstrate is humbling given the challenging complexity of most social change interventions. Whenever I mess up presenting negative findings, it's because I've forgotten that even minor negative news can come across as discouraging for hard-working staff.

 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Kylie Hutchinson, independent evaluation consultant and trainer with Community Solutions Planning & Evaluation and author of A Short Primer on Innovative Evaluation Reporting.

Delivering negative evaluation findings is possibly one of the hardest things an evaluator has to do. Accomplishing it effectively is a bit like Goldilocks’ porridge. Too harsh and direct and you’ll make people defensive. Too indirect and they may not take action. Just right and they’ll use the results moving forward. I definitely admit to totally bungling this task at times. Here are a few practical tips for presenting bad news gleaned from my own experience. (Note that these ideas won’t work in all circumstances; their use depends on the context of a specific evaluation.)

Hot Tips:

  • Build trust. People can’t learn when they feel nervous or threatened, and we want them to fully absorb and understand what we’re saying. Effective stakeholder engagement, at the beginning and throughout an evaluation, is critical for building trust and developing stakeholder ownership of the final results, both good and bad.
  • Prepare them early. Prepare stakeholders for the possibility of negative results by engaging them in an informal discussion early on in the evaluation, e.g., “How well do you think the program is doing?” or “What would you do if the results were not as you expected?”
  • Drop clues. During the data analysis phase, consider giving the organization small warning signs, such as, “It’s still early days, but we’re seeing lower than expected scores. Can you think of why this might be so?”
  • Be clear. Be prepared to explain in detail how the negative findings were derived and ensure the lines of evidence are crystal clear. The more lines of evidence you are able to demonstrate, the more easily a bitter pill might go down.
  • Let others do the talking. When the news is particularly bad, I usually include a greater number of quotes from the qualitative data so people can hear it straight from the horse’s mouth and not me.
  • Consider participatory data analysis. Rather than the evaluator being the bearer of bad news, let people “discover” the bad news themselves by inviting them to a data party and asking for their assistance with interpretation.
  • Don’t send an email bomb. As Chari Smith says in her post, never, ever email a final report without going through it with people first. In some instances, program managers may appreciate a heads up and the opportunity to meet privately to digest the news and plan their response prior to meeting with staff. Nobody wants a nasty surprise or to be put on the spot during a public presentation.
  • Give people time to digest. In other instances, you might wish to give the results to everyone ahead of time so they can fully process the evidence before meeting with you. Then you’re not faced with a barrage of defensive questions from people who haven’t had time to read the full report and understand how you reached your conclusions.

Stay tuned for more tips in Part 2!


Chari Smith



A client called after reading the evaluation report and said, “The data are wrong.” My curiosity rose, as well as my blood pressure, and I wanted to understand more.

Hi, I'm Chari Smith with Evaluation into Action. This phone call was a turning point in my understanding of how to shift clients toward seeing program evaluation as a learning opportunity. The rest of the conversation went like this:

Me: “Can you elaborate?”

Client: “Participants say we aren’t communicating with them about the activities. But we are. We cannot send this to our funder.”

My thoughts: They are in fear of losing funding. Emotions are driving their fear. How do I shift them from a state of fear to a state of learning?

Hot Tips:

Validate, Educate, Collaborate

©2018. Chari Smith. Evaluation into Action. All Rights Reserved.

  • Validate their concerns: “I understand this is alarming to you. We will discuss how to use the data and share with the funder.”
  • Educate: "The data aren't wrong; this is what participants said. It means your communication methods with them need to change. Let's discuss what that could look like. Instead of emails, how about an initial phone call to all nine organizations? Or setting up a Google group so they can discuss as well?"
  • Collaborate: We worked together to create a one-page improvement plan, highlighting the finding and providing a brief description of how communications would change and then be measured. This was sent along with the full evaluation report to the funder.

Results:

  • The funder was happy to see the transparency.
  • The new communication methods worked; participants reported in a later survey that they felt well-informed and appreciated the change.
  • The client was relieved (me too!) and leveraged the experience to secure additional funds by highlighting how they used data to improve their program.

Lessons Learned: Never, ever email a report. Always go through it with the client in person first, and then email it after the meeting.

Rad Resources: I am passionate about this topic; it prompted a white paper: Building a Culture of Evaluation. Please let me know about other resources on this topic. Thanks!


Bernadette Wright



Greetings! I’m Bernadette Wright, founder of Meaningful Evidence. We help nonprofits to leverage research to tackle complex issues. Being in the public speaking organization Toastmasters taught me a few useful tips for presenting negative (and positive) evaluation findings. You can even use these techniques for giving feedback to friends, family members, and co-workers!

Collect useful data.

To give a useful evaluation, you need to start by collecting useful data.

In Toastmasters, evaluators plan ahead by reading the speaker’s speech manual to understand the purpose of the speech. Is it to persuade, to entertain, to inform? You can also talk with the speaker to find out their personal goals for their speech.

Similarly, planning a useful evaluation first requires learning about the program. Read the program materials, review the literature, and ask stakeholders how they plan to use evaluation results. That lets you shape your evaluation strategies to fit the purpose.

Start and end with something positive.

In Toastmasters, no matter how much work a speech might need, you always want to start and end your evaluation with something positive and specific (the “sandwich” technique). That lets the speaker know what to keep or do more of. It also gives them encouragement to try again.

For example, you might start with, “I loved the expression in your voice—I felt the emotion!” You might close with, “By making that change, I feel your speech will be highly entertaining. I look forward to your next speech!”

In delivering evaluation results, I always like to start and end with something that went well. It could be the progress made in carrying out planned activities, the strategies that were most beneficial, or the positive effects that were found. That lets program directors know what to keep or expand. It also gives them encouragement to incorporate your evaluation findings to increase their program’s success.

Don’t be all positive.

In Toastmasters, even the most polished speakers are always looking to get better. If a speaker hears nothing but praise, they might wonder whether going to meetings is worth the time. Evaluators are challenged to find at least one small idea for improvement in every speech. It may be as minor as changing a word here or adding a longer pause there.

In evaluation, when a manager wants to maximize their program’s potential, they might feel they’re not getting their money’s worth if an evaluation is nothing but praise. So, always include ideas on how to do even better.

Rad Resource:

You can download Toastmasters International’s guide on “Effective Evaluation” in the Resource Library on their website.

Rad Resource:

If you are interested in learning more about Toastmasters, you can find a club near you to visit.




Hello AEA 365 readers! I’m Glenn Landers, the Director of Health Systems at the Georgia Health Policy Center (Andrew Young School of Policy Studies, Georgia State University). A large portion of our work is evaluation, and we’ve been fortunate enough to work in every state and many of the territories. No one likes being the bearer of bad news, but sometimes it can’t be helped.

Recently, I was engaged in a developmental evaluation of a collective impact initiative that was intended to last ten years with ample funding. Five months in, we realized the initiative was in trouble. One year in, the project was basically over. Several techniques helped incorporate the bad news into the process as learning.

Hot Tip:

Evaluation Advisory Groups! We always try to have an advisory group made up of those whose work is being evaluated and those who will use the products of the evaluation. This way, we can test what we are learning with a small group for feedback before sharing with a wider audience.

Hot Tip:

Feedback loops! We also set up several feedback loops with the funder, the facilitator, and the work’s steering committee. This way, we shared information in small packets and gained the benefit of group sense making so that everyone understood why things weren’t working as planned.

Hot Tip:

Evaluation as Learning! We were fortunate to have a project sponsor who was interested in learning from what was not working just as much as what was working. Knowing this upfront helped us to be more comfortable in being candid.

Lesson Learned:

There's no substitute for being present with the people who are doing the work. Relationships and trust develop over time. The more present you are with them, the better positioned they will be to hear the results, whether good or bad.

What’s worked for you in delivering bad news?



Hello! We are Maureen Hawes from the University of Minnesota’s Systems Improvement Group, Arlene Russell, independent consultant, and Jason Altman from the TerraLuna Collaborative. We are writing to share our experience with fuzzy set Qualitative Comparative Analysis (QCA).

You may have faced questions similar to the ones we grappled with as evaluators using quantitative analysis as part of a mixed-methods approach. We wondered:

  1. Is there a method more adept at addressing nuance and complexity than more traditional methods?
  2. Can quantitative efforts uncover the causes of future effects for developmental and formative work, or only prove impacts, that is, the effects of past causes?
  3. Does regressing cases to means misalign with our values and efforts to elevate the voices of those who are often not heard?
  4. Should we be removing outlier cases before analysis? Note: see Bob Williams' argument that we should approach "outlying data with the possibility of it being there for a reason" rather than by chance.

In supporting our partner, we knew from the beginning that each of our cases (school buildings) was a complex system. Two major considerations were particular sticking points for us:

  1. Equifinality: We expected that there would be more than one pathway to implementation.
  2. Conjuncturality: We expected that variables would exert their influence in combination rather than in isolation.

 

Hot Tip: Our solution was QCA, which is based on set theory and logic rather than statistics. QCA is a case-oriented method allowing systematic and scientific comparison of any number of cases as configurations of attributes and set membership. We loved that QCA helped answer the question "What works best, why, and under what circumstances?" using replicable empirical analysis.

QCA comes in either the crisp-set variety (conditions judged to be present or absent) or, more recently, fuzzy-set QCA (fsQCA). fsQCA allows for sets in which elements are not limited to being members or non-members, but can instead hold different degrees of membership.
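To make the crisp/fuzzy distinction concrete, here is a tiny sketch. The buildings, the condition name, and the membership scores are invented for illustration and are not drawn from our study.

```python
# Illustrative only: crisp vs. fuzzy set membership for a single condition
# (say, "strong leadership/infrastructure"). All values are invented.

# Crisp-set QCA: a building is either in the set (1) or out of it (0)
crisp = {"Building A": 1, "Building B": 0, "Building C": 1}

# Fuzzy-set QCA: each building holds a degree of membership between 0 and 1,
# with 0.5 as the crossover point of maximum ambiguity
fuzzy = {"Building A": 0.9,   # almost fully in
         "Building B": 0.2,   # mostly out
         "Building C": 0.6}   # more in than out
```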

Lessons Learned: Our fsQCA of a medium-sized sample of 21 buildings (in 6 districts) uncovered a message our partners could act on. Among other findings, the analysis identified a pathway to positive program outcomes that relied on ALL three of the following factors being in place (see the sketch after this list):

  1. Project engagement
  2. Leadership/ infrastructure
  3. Data collection/ use
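The sketch below shows, in minimal form, how fsQCA evaluates such a conjunctural pathway: membership in the combined condition is the minimum of the three condition scores (fuzzy AND), and the standard consistency-of-sufficiency measure, sum(min(X, Y)) / sum(X), asks how reliably that combination sits within the outcome. The membership scores are invented for illustration; this is not our 21-building data.

```python
# Illustrative sketch of a conjunctural fsQCA pathway. Membership scores
# are invented; this is not the project's data.

cases = {
    # building: (engagement, leadership/infrastructure, data use, outcome)
    "Bldg 1": (0.8, 0.9, 0.7, 0.8),
    "Bldg 2": (0.6, 0.4, 0.9, 0.3),
    "Bldg 3": (0.9, 0.8, 0.8, 0.9),
    "Bldg 4": (0.3, 0.7, 0.6, 0.4),
}

def pathway(engagement, leadership, data_use):
    """Membership in the combined condition: fuzzy AND = minimum score."""
    return min(engagement, leadership, data_use)

# Consistency of sufficiency: how reliably membership in the pathway is a
# subset of membership in the outcome, i.e. sum(min(X, Y)) / sum(X).
numer = sum(min(pathway(e, l, d), y) for e, l, d, y in cases.values())
denom = sum(pathway(e, l, d) for e, l, d, y in cases.values())
print(f"Consistency of the three-condition pathway: {numer / denom:.2f}")
```

In practice, dedicated fsQCA software also reports coverage and explores the full truth table of condition combinations; the point here is only the set-theoretic logic behind a pathway that requires all three factors together.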

Worth considering: The number of QCA applications has increased during the past few years, though there are still relatively few. Since it was introduced by Charles Ragin in 1987, QCA has been modified, extended, and improved, contributing to its better applicability to evaluation settings.

Rad Resources:

  1. We have a longer read (complete with references) available.
  2. Charles Ragin's website houses information that he finds pertinent to the technique, as well as tools that he has developed to complete analyses.
  3. Compass hosts a bibliographical database where users can sort through previous applications of fsQCA.

 

