AEA365 | A Tip-a-Day by and for Evaluators

TAG | qualitative

Hi folks! I'm Jill Scheibler, a community psychologist and Senior Research Analyst at Carson Research Consulting, a women-led firm whose mission is to help clients thrive by using data to measure impact, communicate, and fundraise. We're passionate about storytelling with data to make a difference.

At CRC I’m the “word nerd”, implementing our qualitative projects. Like many evaluators, I’ve had to translate academically-honed skills to the often faster-paced world of evaluation. A recent project for a county health department’s substance abuse initiative provides an example of how I tailor qualitative methods to meet clients’ needs.

Hot Tips

Allot ample time for clarifying goals. As with all good research, methods choices flow from the question at hand. In this case, our client wanted to understand the impact of substance abuse on their county and to identify new resources that could be tapped. Like many clients, they lacked research savvy and thought they required services that exceeded their budget and available time. We gradually learned they had access to lots of quantitative data, along with support from the state to help interpret it. What they were missing was community stakeholder feedback. So, we provided a qualitative needs assessment component.

Build in more meetings than you think you'll need, and bring checklists. Be prepared to leave meetings thinking you have all needed answers and to learn afterwards that you've been (well-meaningly) misinformed! (Quantitative sidebar example: after building a data dashboard for another client in Excel 2013, based on their word, we learned they had Excel 2007. A costly reminder to always ask more questions!)

Choose tool(s) carefully to maximize usefulness. I generally opt for interviews where probes can offset “one-shot” data collection situations. Here, I instead designed a qualitative survey, using mostly open-ended questions, for efficient gathering of perspectives. The client collected surveys themselves, disseminating hard copies and a SurveyMonkey.com link, and accessed a targeted sample from within a community coalition.

Familiar guidelines for interview and survey design apply to qualitative surveys, but I advise keeping questions tightly focused and surveys as short as possible to mitigate the higher skip rates these surveys tend to see.

Cool Trick

You may think your reporting options are limited compared to quantitative results. Not so! Instead of writing text-heavy reports that eat up valuable time, and folks are disinclined to read (#TLDR), consider telling “data stories” using bullet points and visualizations. This client received a two-pager for internal, local stakeholder, and state use. I’ll also provide an in-depth explanation of results and action steps in a webinar.

Rad resources

Jansen’s “The Logic of Qualitative Survey Research and its Position in the Field of Social Research Methods.”

Great tips on qualitative surveys from Nielsen Norman.

Awesome tips from CRC colleagues for larger community surveys.

Achievable qual visualization ideas from Ann Emery.

Some tools for qual analysis and visualization from Tech for Change.

I genuinely enjoy working creatively with clients, because it makes evident how well suited qualitative methods are to linking research to action. I'd love to hear how others do this work, so please get in touch!


The American Evaluation Association is celebrating Community Psychology TIG Week with our colleagues in the CP AEA Topical Interest Group. The contributions all this week to aea365 come from our CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I'm Patrick Koeppl, cultural anthropologist, mixed methods scientist, Halloween enthusiast, and Managing Director at Deloitte Consulting LLP. Throughout my career, I have found mixed methods are often the leading way to conduct broad evaluation of complex systems and situations. Qualitative approaches like in-depth interviews, focus groups, participant observation, policy reviews, and many others have a place in developing understanding. Determining the validity and reliability of qualitative data collected via mixed methods presents both challenges and opportunities for authentic understanding of complex systems and phenomena.

Lesson Learned: The science of numbers, statistics, randomized samples and double-blind studies may indeed be described as “hard,” but qualitative approaches are not “soft.” Rather, they are “difficult.”

Practitioners of the “soft sciences” often face criticisms that their endeavors are not scientific. Nay-sayers may claim that qualitative research is somehow illegitimate—and too often anthropologists, sociologists and others hide in the dark, brooding corners of the application of their craft, frustrated that their methods, approaches and findings may not be taken seriously by the “real scientists” who frame the discussion. Qualitative evaluators fall into this trap at their own peril—there is nothing inherently unscientific about qualitative methods and the findings and inferences drawn from qualitative data.

Hot Tip: It is the practitioner, the scientist, who should bring rigor and science to qualitative methods. Set up your approach with rigor by asking yourself:

  • Are the evaluation questions clear?
  • Is the evaluation design congruent with the evaluation questions?
  • How well do findings show meaningful parallelism across data sources?
  • Did coding checks show agreement across interviewers and coders? (A brief agreement check is sketched just after this list.)
  • Do the conclusions ring true, make sense, and seem convincing to the reader?
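
If you want to quantify that agreement question, here is a minimal, hedged sketch in Python (our illustration, not part of the original post): it compares two coders' code assignments using raw percent agreement and Cohen's kappa, with invented codes and excerpts.

# Hypothetical sketch: two coders each assigned one code per excerpt.
# Codes and data are invented for illustration.
from sklearn.metrics import cohen_kappa_score

coder_a = ["barrier", "barrier", "resource", "stigma", "resource", "stigma"]
coder_b = ["barrier", "resource", "resource", "stigma", "resource", "barrier"]

# Raw percent agreement is simple but inflated by chance agreement.
percent_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Cohen's kappa corrects for the agreement expected by chance alone.
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Percent agreement: {percent_agreement:.2f}")
print(f"Cohen's kappa: {kappa:.2f}")

A low kappa is a signal to revisit codebook definitions or coder training before drawing conclusions.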

Lesson Learned: Qualitative data are the source of well-grounded, richly descriptive insights and explanations of complex events and occurrences in local contexts. They often lead to serendipitous findings and launch new theoretical integrations. When reached properly, findings from qualitative data have a quality of authenticity and undeniability (what Stephen Colbert calls “truthiness”).

Hot Tip: Establish scientific rigor to determine reliability and validity in the following ways:

  • Use computer-assisted data analysis tools such as ATLAS.ti or NVivo
  • Develop a codebook and data collection protocols to improve consistency and dependability
  • Engage in triangulation with complementary methods and data sources to draw converging conclusions

Finally, putting qualitative data results into the context of a story and narrative to convey a concrete, vivid, and meaningful result is convincing and compelling to evaluators, policy makers, and practitioners. Together, such questions and tools support the scientific use of qualitative data collection and analysis in the quest for “useful” evaluation.

The American Evaluation Association is celebrating Deloitte Consulting LLP's Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is part of a series remembering and honoring evaluation pioneers leading up to Memorial Day in the USA (May 30).

My name is Jennifer Greene, a former AEA President and former co-editor of New Directions for Evaluation. Egon Guba, as both a scholar and a person, left an enduring transformative legacy to the field of evaluation. As a scholar, Egon Guba was a brilliant thinker. At a revolutionary moment in the evolution of both social science inquiry and evaluation theory and practice (the 1960s and 1970s), he was among the leaders of the charge for paradigm expansion. Most often in partnership with his wife, Yvonna Lincoln, he toiled tirelessly to champion a constructivist, qualitative approach to understanding human phenomena. As a person, Egon Guba was a gracious and generous man. He offered critically important mentorship to many aspiring evaluators, including me, and was a pivotal influence on my own development as a scholar.

Pioneering and enduring contributions:

Photo: Egon Guba

Egon Guba wrote multiple articles and books, gave innumerable talks, and became one of constructivist, qualitative evaluation's most eloquent and persuasive spokespersons. His evaluation legacy runs deep and wide: his championship of constructivist thinking and qualitative methodologies contributed to a transformative methodological expansion of the evaluation field. Following Bob Stake's initial advancement of a responsive (versus preordinate) approach to evaluation and a case study (versus experimentalist) methodology for evaluation, Egon took up the reins of activist advocacy for these ideas.

Specifically, with Yvonna Lincoln, Egon developed “fourth-generation evaluation” which foregrounded constructivist philosophy and qualitative methodology, but also advanced a bold, politically engaged role for evaluation. Moving beyond the first three generations of evaluation’s role—measurement, description, and judgment—in fourth-generation evaluation, the evaluator surfaces the relevant perspectives, interests, and value claims of diverse stakeholders and helps them negotiate their diverse standpoints toward greater agreement about priorities. Guba and Lincoln explicitly situated evaluation as a values-committed, emancipatory practice, in contrast to most prior evaluation theories, which either claimed value neutrality or value pluralism. This explicit advocacy for particular values in evaluation was a highly significant contribution to the continuing evolution of evaluation theory and practice—perhaps even more significant than Egon’s championship of a constructivist worldview and qualitative methodology.

Resources:

Greene, J.C. (2008). Memories of a novice, learning from a master. American Journal of Evaluation, 29(3):322-324.

Guba, E.G. (1987). What have we learned about naturalistic evaluation? American Journal of Evaluation, 8(1): 23-43.

Guba, E.G. & Lincoln, Y.S. (1989). Fourth generation evaluation. Sage.

Lincoln, Y.S. & Guba, E.G. (1985). Naturalistic inquiry. Sage.

Patton, M.Q., Schwandt, T.A., Stake, R., & Stufflebeam, D. (2008). Tributes to Egon Guba. American Journal of Evaluation, 29(3): 328-329.

The American Evaluation Association is celebrating Memorial Week in Evaluation: Remembering and Honoring Evaluation’s Pioneers. The contributions this week are remembrances of evaluation pioneers who made enduring contributions to our field. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi folks! I’m JT Taylor, Director of Research and Evaluation at Learning for Action (LFA), and I’m here with my colleague Emily Drake, Senior Consultant and Director of Portfolio Alignment. LFA is a San Francisco-based firm that enhances the impact and sustainability of social sector organizations through evaluation, research, strategy development, and capacity-building services. Emily and I can’t wait to share an easy and reliable approach to facilitating participatory, collaborative qualitative analysis processes at this year’s AEA conference.

Lessons Learned: Effective facilitation is essential for leading participatory and collaborative evaluation processes: (1) it helps us to surface and integrate a multitude of perspectives on whether, how, and to what extent a program is working for its intended beneficiaries; (2) it is necessary for building and maintaining trust among stakeholders: trust that they are being heard, that their perspectives are weighted equally among others, and that their participation in the evaluation process is authentic and not tokenized; and (3) it is important for producing the buy-in of stakeholders and relevance of results that ensure evaluation findings will inform real action.

Engaging a variety of stakeholders, including program beneficiaries, in the analysis and interpretation of data in a way that authentically includes their perspective and contributions is important—and takes a set of facilitative skills and tools that go beyond evaluators’ typical training in technical analysis. In our work implementing collaborative evaluations, we have found that the same facilitation techniques that produce great meetings and brainstorming sessions can also be used to elicit great insights and findings from a participatory qualitative analysis process.

Hot Tip: Use participatory analysis techniques when you want to synthesize qualitative data from multiple perspectives and/or data collectors—whether those data collectors are part of your internal team, evaluation partners, or members of the community your work involves.

  • Do the work of “meaning-making” together, so that everyone is in the room to clarify observations and themes, articulate important nuances, and offer interpretation.
  • Use a 1-2 hour working meeting with all data collectors to summarize themes and pull out key insights together. Have each participant write observations from their own data collection, each on a large sticky note. Then group all observations by theme on the wall, having participants clarify or re-organize as needed.
  • Save reporting time later by asking participants to annotate their sticky note observations with references to specific interviews, transcript page numbers, and even quotes from their data collection to make it easy to integrate examples and quotes into your report.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from JT and Emily? They'll be presenting as part of the Evaluation 2014 Conference Program, October 15-18 in Denver, Colorado.


We're Tara Gregory, Director of Research and Evaluation, and Bailey Blair, Youth Leadership in Kansas Program Associate, at Wichita State University's Center for Community Support and Research (CCSR). CCSR was awarded a grant last year from the Kansas Department for Aging and Disability Services to provide technical assistance and research services to leadership groups for youth with a mental illness diagnosis and their parents. Our first task was to better understand these groups, so we conducted focus groups with each site, asking questions about the nature of the groups and the roles of the youth. Their qualitative responses and our observations indicated that while they highly value their groups, the adults tend to be in charge and the youth perform tasks like choosing food, picking up trash, etc.

So our question was: How could we honor what the members love about their groups but also move them toward best practices for positive youth development/leadership (e.g., Eccles and Gootman, 2002)?  Our approach was to gently present “what is” in their own words alongside “what could be” as a way to respect the members’ voices but also offer ideas for enhanced experiences.

Hot Tip:

  • Instead of just giving the leadership groups a written report, we displayed a Wordle™ graphic, which contained all of their qualitative responses, during a gathering of all groups. This was an engaging method that showed them the results “in their own words” without inserting our thoughts. (A scripted way to produce a similar graphic is sketched just after this list.)
  • We then displayed a model of meaningful youth participation and asked them to compare it to the  Wordle™ visuals.  This spurred a very energetic and insightful discussion among youth and adults.
  • Next, we gave them an opportunity to incorporate their ideas into a visual representation of their vision or aspirations for the groups as a whole.
  • Finally, we’re following up with technical assistance and written guidelines on options to further incorporate true youth leadership in their groups.
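
For teams that would rather script a Wordle-style graphic than paste responses into the Wordle™ site, a minimal sketch using the open-source Python wordcloud package might look like the following; this is our illustration with invented responses, not the tool Tara and Bailey used.

# Sketch only: build a Wordle-style image from open-ended responses
# with the "wordcloud" package (pip install wordcloud). Responses are invented.
from wordcloud import WordCloud

responses = [
    "choosing the food for meetings",
    "picking up trash after events",
    "setting up chairs and tables",
    "choosing snacks and drinks",
]

text = " ".join(responses)
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
cloud.to_file("youth_responses_wordcloud.png")  # display this image at the gathering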

Lessons Learned:

  • By presenting the qualitative data juxtaposed with the ideal model, the groups had an “A-ha!” moment that was totally theirs. They were not left feeling like they had done something wrong and were even able to laugh at the discrepancies. This appeared to be a moment of genuine empowerment.
  • Highlighting the discrepancy between “what is” and “what could be” wasn’t enough. It was essential to make sure they had concrete ideas about how to move toward their self-determined vision.

Figure 1. Wordle graphic for responses to: “What are youth in charge of in this group?”


Figure 2. Ladder of youth voice


The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org


I am Lisa R. Holliday, an Evaluation Associate at The Evaluation Group in Columbia, SC.  I was on an evaluation team that recently completed a needs assessment in a large rural school district.  On one survey, there were 3,277 student responses to two open-ended questions.

Initially, I planned to take a sample, rather than analyze all responses.  However, as I skimmed through the replies, I noticed there was a lot of repetition. Also, responses tended to be short with little elaboration or additional context.  This made creating a codebook easy, but made me wonder if there was a way to automate coding for certain responses, given the high amount of repetition.

Hot Tip: Microsoft Access can help in situations like this.

Access is relational database management software from Microsoft that works on Windows systems. It is included as part of Office Professional or can be purchased separately. You can try it for free with a 30-day trial of Office 365 Home Premium.

Access lets you store, manipulate, and report data efficiently. Unlike Excel, Access can run queries that search for words that are “like” your target, which allows you to account for some variations in spelling.   Using the method described below, I was able to automatically code 70% of responses.

Cool Tricks:

Step 1: Create a new database in Access.  If you saved your data in Excel, you’ll need to import it into Access.  Right click on “Table 1,” select “Import” then “Excel.”  Select your data.


Step 2: From the ribbon, select “Create” then “Query Design.” Make sure the name of your table is highlighted in the “Show Table” box.  Select “Add,” then close the window.

Step 3: Right click in the Field box and select “Zoom.”


Step 4: In the “Zoom” field, enter the following query:

Results column name: IIf([Table name]![Column you are analyzing] Like '*search term*', Code from codebook, 0)

For example if I wanted to find all responses that mentioned “excel” or “word” in Column 19 of my data, my query would look like this:

Results6: IIf([5-29 Data]![19] Like '*excel*' Or [5-29 Data]![19] Like '*word*', 6, 0)


This tells Access to look at Column 19 in the table named “5-29 Data,” and identify words like “excel” and “word.”  The results appear in a new column named “Results6,” and matched responses will be coded as “6.”

Repeat steps 3-4 for each item in your codebook.

Step 5: Once you have entered all items from your codebook, select “Run” from the ribbon under the “Design” tab.


Step 6: To export your results to Excel, right click on the name of the query you ran, select “Export,” then “Excel.”
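
If you prefer scripting to Access, a rough pandas equivalent of the Like-based auto-coding is sketched below. This is an illustration only: the file name, column name, codes, and search terms mirror the example above but are assumptions, not Lisa's actual data or workflow.

# Rough pandas equivalent of the Access IIf/Like approach (illustrative only).
import pandas as pd

df = pd.read_excel("5-29 Data.xlsx")   # one row per respondent
responses = df["19"].astype(str)       # assumed column holding the open-ended text

codebook = {
    6: ["excel", "word"],              # code 6: mentions of specific software
    7: ["internet", "wifi"],           # code 7: another illustrative entry
}

for code, terms in codebook.items():
    # Case-insensitive substring match, analogous to Like '*term*'
    pattern = "|".join(terms)
    matched = responses.str.contains(pattern, case=False, regex=True)
    df[f"Results{code}"] = matched.astype(int) * code   # code if matched, 0 otherwise

df.to_excel("coded_responses.xlsx", index=False)

As in the Access version, responses that match no codebook terms stay at 0 and can be hand-coded afterward.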

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! I am Bob Kahle and I have operated Kahle Research Solutions for nearly 20 years. I am a program evaluator and specialist in employing qualitative methods. I have written and trained extensively on managing difficult behaviors in groups (see Dominators, Cynics, and Wallflowers: Practical Strategies to Moderate Meaningful Focus Groups), but more recently have developed ways to think about and choose among new qualitative methods.

Not long ago, qualitative researchers had few options: focus groups, individual interviews, or ethnography. Because ethnographic approaches are generally long term and in-depth, short timelines or tight budgets usually necessitate the group or individual interview.

Today, there are many more options available.

Cool Tricks:

1.) Computer-Aided Telephone Focus Groups. Using special software, all participants can hear each other via traditional phone lines and see images or video on a shared screen. This allows groups who have common characteristics but are geographically distant to interact. Key features include:

  • Moderator uses software to “see” who is talking
  • Can use chat, polling or electronic white-board
  • Can be done with or without webcams

2.) Bulletin Board Focus Groups. These are asynchronous discussion forums typically lasting 3-7 days. The moderator pre-posts questions and can probe individuals or the entire group. Participants can upload photos or short videos to illustrate their points and usually are required to login 1-2 times per day.

3.) Mobile Qualitative: Any data you collect via computer, you can now gather via mobile devices, smart phones, and tablets. The mobility inherent in these devices allows for capturing data (text, digital audio, images, or video) as respondents experience a place or event, rather than relying on recall as many traditional methods do.

Hot Tips: Assess whether Digital Qualitative methods are right for you.

  1. Does your target audience have Internet access and do they use smart phones? If yes, consider digital approaches.
  2. Is there a need to protect the confidentiality of the client’s stimulus, or any other aspect of the evaluation? If no, consider digital qualitative approaches.
  3. Is providing participants a sensory experience essential? If no, assess new qualitative techniques for application.

Rad Resources: Free training is available. Since much of this methodological innovation is going on in the market research space, check out these vendors for information, training, and access to tools. Most tools are easily portable to the evaluation context.

http://www.2020research.com/

http://www.civi.com/marketingresearch/

http://www.itracks.com/

New Qualitative Research Methods & Tools and the companion NewQualitative.org website are produced by GreenBook with the support of the Qualitative Research Consultants Association (QRCA). Check out the site for companies that offer software solutions and an active blog discussing the application of these new methods.


Want to learn more from Bob? Register for his workshop Qualitative Research Design in the Age of Choice at Evaluation 2013 in Washington, DC. 

This is one in an occasional series of posts by people who will be presenting Professional Development workshops at Evaluation 2013 in Washington, DC. Click here for a complete listing of Professional Development workshops offered at Evaluation 2013. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings. We are Linda Cabral and Laura Sefton from the University of Massachusetts Medical School, Center for Health Policy and Research. We are part of a multi-disciplinary team evaluating the Massachusetts Patient Centered Medical Home Initiative (MA-PCMHI), a state-wide, multi-site demonstration project engaging 46 primary care practices in organizational transformation to adopt the PCMH primary care model. Taking a mixed methods approach, this evaluation utilizes 1) multiple surveys targeted at different stakeholders (e.g., staff, patients), 2) analysis of cost and utilization claims, 3) practice site visits, and 4) interviews with Medical Home Facilitators (MHFs).

We wanted to connect data from TransforMED's Medical Home Implementation Quotient (MHIQ) survey with our MHF interview data to better understand the practices' MA-PCMHI experience. MHFs provide a range of technical assistance to aid their assigned practices in their transformation process, making them a great source of information about their practices' transformation. In an effort to triangulate our evaluation findings, we presented the MHIQ results to the MHFs as part of a traditional semi-structured interview. Presenting site-specific survey data to MHFs served the following purposes:

  • It allowed MHFs to share their reflections on why their practices scored the way they did on various domains;
  • It prompted MHFs to point out major differences between their assigned sites;
  • It focused the MHFs on providing practice-specific information instead of generalities across all the sites to which they were assigned; and
  • MHFs provided insight into some of the strengths and limitations of the survey instrument.

Lessons Learned

  • Sharing survey data and having respondents reflect on it during the course of an interview proved to be a very helpful strategy for connecting data. Specifically, we received more detailed responses from interviewees by asking “Why do you think Practice ABC scored a 5 on the care coordination module?” vs. “What can you tell me about how Practice ABC is implementing care coordination?” MHFs would make the case for or against why a practice scored the way it did on a particular domain.
  • Involving the MHFs as “experts” on their assigned sites increased the MHFs’ investment in the evaluation process and their willingness to participate in future evaluation activities.

Hot Tip

  • We held these MHF interviews prior to doing practice site visits. The practice-specific information that MHFs shared with us deepened our familiarity with the sites prior to conducting site visits.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, we are Linda Cabral and Laura Sefton from the Center for Health Policy and Research at UMass Medical School. We often collect qualitative data from interviews and focus groups. One challenge we frequently face is how to quickly and efficiently transcribe audio data. We have experimented with using voice recognition software (VRS), and we'd like to share our approach.

You will need headphones, a microphone (stand-alone or attached to a headset), and a computer with audio playback software and VRS installed on it. We use Dragon NaturallySpeaking Premium Version 11.5; however, other VRS options are available. Audio playback software will allow you to control the playback speed, so you can slow it down, pause, fast forward, and rewind as needed.

Open the audio file in the playback software and open a new document in the VRS. While listening to the audio via the headphones, repeat what you hear into the microphone. During this step, you can format the document to indicate who is speaking and to add punctuation. Because VRS works best when trained to understand a single voice, a designated team member should repeat all spoken content, regardless of how many voices are in the audio file.

This process will generate a document in the VRS that can be saved to your computer as a Word file. As a final review, read through the Word file while listening to the audio file and make needed corrections. This could be done by another member of the project team as a double check of the document’s accuracy.

Hot Tips:

  • Spend time training the VRS to recognize your voice. A few practice sessions with the software may be needed where you can read dummy data into the software in order for it to learn your voice. This will improve the transcription quality, minimizing the time spent editing.
  • Train the VRS to recognize project-specific acronyms or terminology prior to starting transcription.

Lessons Learned:

  • Often, financial resources for evaluation projects are limited. In an effort to keep the transcription process in-house, our administrative staff transcribed the audio files. By using the VRS, with someone from our project team familiar with the data as the designated recorder, we have found savings in time and gains in efficiency.
  • No transcription yet has captured 100% of the content accurately the first time. Therefore, build in time to listen to the recording and make manual edits.

Rad Resources:

These resources may be helpful as you explore whether VRS is right for you.

  • VRS product reviews by ConsumerSearch: “In reviews, it's generally Dragon vs. Dragon”

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we are Christine Johnson and Terri Anderson, members of the Massachusetts Patient Centered Medical Home Initiative (MA-PCMHI). MA-PCMHI is Massachusetts' state-wide, multi-site PCMH demonstration project engaging 46 primary care practices in organizational transformation to adopt the PCMH primary care model. Our roles as Transformation and Quality Improvement Director (Christine) and Qualitative Evaluation Study Team Lead (Terri) require us to understand the 46 practices' progress towards PCMH model adoption in distinct yet complementary ways. Our colleagues sometimes assume that we must remain distant to conduct our best possible work. Their concern is that our close working relationship will somehow contaminate the initiative or weaken the evaluation's credibility. However, we find that maintaining our connection is vital for the success of both the initiative and the evaluation. We'd like to share the following:

Lessons Learned:

  • Transformation and Quality Improvement (Transformation/QI) and evaluation both seek to understand how the practices best adopt the PCMH model and to describe the practices’ progress.  To promote our mutual interest, we regularly attend each other’s team meetings. Doing so increases the opportunity to share our perspectives on the MA-PCMHI. To date the evaluators have advised some formative project adjustments while the MA-PCMHI intervention team has increased the evaluators’ understanding of the survey and performance data submitted from the practices. Currently, the project team and the evaluators collectively are establishing criteria to select six practices for in-depth site visits.
  • Transformation/QI and evaluation often use the same data sources but in different ways. Specifically, the practices use patient record data in their Plan-Do-Study-Act (PDSA) cycles and then submit the same data for the evaluation's clinical impact measures. The practices initially resisted this dual data use. However, through our Transformation/QI-evaluator connection we increased the practices' understanding of how their use of data in the PDSAs improved their clinical performance, which in turn improved the evaluation's ability to report a clinical quality impact. Presently, performance data reporting for clinical impact measures and the practices' use of PDSAs have both increased.

Hot Tip: Develop a handout describing the similarities and differences between research, evaluation and quality improvement.  Having this information readily available has helped us to address concerns about bias in the evaluation.

Rad Resources:

Plan-Do-Study-Act Worksheet from the Institute for Healthcare Improvement: http://www.ihi.org/knowledge/Pages/Tools/PlanDoStudyActWorksheet.aspx

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 

