AEA365 | A Tip-a-Day by and for Evaluators

Hi, we are Ann Lawthers, Sai Cherala, and Judy Steinberg, UMMS PCMHI Evaluation Team members from the University of Massachusetts Medical School’s Center for Health Policy and Research. Today’s blog title sounds obvious, doesn’t it? Your definition of success influences your findings. Today we talk about stakeholder perspectives on success and how evaluator decisions about what is “success” can change the results of your evaluation.

As part of the Massachusetts Patient-Centered Medical Home Initiative (PCMHI), the 45 participating practices submitted clinical data (numerators and denominators only) through a web portal. Measures included HEDIS® look-alikes such as diabetes outcomes and asthma care, as well as measures developed for this initiative, e.g., high risk members with a care plan. Policy makers were interested in whether the PCMH initiative resulted in improved clinical performance, although they also wanted to know “Who are the high- or low-performing practices on the clinical measures after 18 months in the initiative?” The latter question could be about either change or attainment. Practices were more interested in how their activities affected their clinical performance.

To address both perspectives we chose to measure clinical performance in terms of both change and attainment. We then used data from our patient survey, our staff survey, and the Medical Home Implementation Quotient (MHIQ) to find factors associated with both change and attainment.

Lesson Learned: Who are the high performers? “It depends.” High performance defined by high absolute levels of performance disproportionately rewarded practices that began the project with excellent performance. High performance defined by magnitude of change slighted practices that began at the top, as these practices had less room to change. The result? The top five performers defined by each metric were different.

Hot Tip:

  • Do you want to reward transformation? Choose metrics that measure change over the life of your project.
  • Do you want to reward performance? Choose metrics that assess attainment of a benchmark.
  • Keep in mind that each metric will yield a different list of high performers, as the brief sketch below illustrates.
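
To make the distinction concrete, here is a minimal sketch in Python using made-up numbers (not the PCMHI data) of how ranking the same practices by attainment versus by change produces different lists of top performers.

```python
# Hypothetical clinical performance rates (percent) for five practices --
# illustrative numbers only, not PCMHI results.
practices = {
    # practice: (baseline rate, rate after 18 months)
    "Practice A": (82, 85),
    "Practice B": (55, 74),
    "Practice C": (90, 91),
    "Practice D": (60, 72),
    "Practice E": (70, 71),
}

# Rank by attainment (final rate) and by change (final minus baseline).
by_attainment = sorted(practices, key=lambda p: practices[p][1], reverse=True)
by_change = sorted(practices, key=lambda p: practices[p][1] - practices[p][0], reverse=True)

print("Top 3 by attainment:", by_attainment[:3])  # favors practices that started high
print("Top 3 by change:    ", by_change[:3])      # favors practices with room to improve
```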

Lesson Learned: The practices wanted to know: “What can we do to make ourselves high performers?” Our mixed methods approach found that leadership and comfort with Health Information Technology predicted attainment, but only low baseline performance predicted change.

Hot Tip: A mixed methods approach provides a rich backdrop for interpreting your findings and supplies the level of detail that some stakeholders want or need.

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. The contributions all this week to aea365 come from members who work with the Massachusetts Patient-Centered Medical Home Initiative (PCMHI). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Ann Lawthers, Principal Investigator for the evaluation of the Massachusetts Patient-Centered Medical Home Initiative (PCMHI), and faculty in the Department of Family Medicine and Community Health at the University of Massachusetts Medical School (UMMS). This week, the UMMS PCMHI Evaluation Team will be presenting a series of Hot Tips, Lessons Learned and Rad Resources for evaluating large, complex and multi-stakeholder projects. We cover issues that surfaced during the design, data collection and analysis phases.

Hot Tip: When beginning to think about the design of a large, complex project, consider using a mixed methods approach to maximize the breadth and depth of evaluation perspectives. The Massachusetts PCMHI’s principal stakeholders – state government officials, insurers, and the practices themselves – have invested time and financial resources in helping primary care practices adopt the core competencies of a medical home. Each stakeholder group came into the project with different goals and agendas.

We selected a mixed methods evaluation approach to answer three deceptively simple questions:

  1. To what extent and how do practices transform to become medical homes?
  2. To what extent and in what ways do patients become active partners in their health care?
  3. What is the initiative’s impact on service use, clinical quality, and patient and provider outcomes?

Lesson Learned: Our mixed methods approach allowed us to tap into the perspectives and interests of multiple stakeholder groups. The primary care practices participating in the PCMHI demonstration were keenly interested in the “how” of transformation (Question 1) while state policy makers wanted to know “if” practices transformed (also Question 1). We addressed the “how” principally through qualitative interviews with practice staff and the TransforMED Medical Home Implementation Quotient (MHIQ) questionnaire, completed by practice staff.

Participating practices also cared a great deal about the initiative’s effect on patients. Did patients perceive a change and become more actively involved in their health care (Question 2)? We used patient surveys to address this question.

Finally, all stakeholder groups were interested in the impact question (Question 3). Claims data, clinical data reported by practices, staff surveys and patient surveys all provided different views of how the PCMHI affected service use, clinical quality and other outcomes.

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. The contributions all this week to aea365 come from members who work with the Massachusetts Patient-Centered Medical Home Initiative (PCMHI). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings. We are Linda Cabral and Laura Sefton from the University of Massachusetts Medical School, Center for Health Policy and Research. We are part of a multi-disciplinary team evaluating the Massachusetts Patient-Centered Medical Home Initiative (MA-PCMHI), a state-wide, multi-site demonstration project engaging 46 primary care practices in organizational transformation to adopt the PCMH primary care model. This evaluation takes a mixed methods approach, utilizing 1) multiple surveys targeted at different stakeholders (e.g., staff, patients), 2) analysis of cost and utilization claims, 3) practice site visits, and 4) interviews with Medical Home Facilitators (MHFs).

We wanted to connect data from TransforMED’s Medical Home Implementation Quotient (MHIQ) survey with our MHF interview data to better understand the practices’ MA-PCMHI experience. MHFs provide a range of technical assistance to aid their assigned practices in their transformation process, making them a great source of information about their practices’ transformation. In an effort to triangulate our evaluation findings, we presented the MHIQ results to the MHFs as part of a traditional semi-structured interview. Presenting site-specific survey data to MHFs served the following purposes:

  • It allowed MHFs to share their reflections on why their practices scored the way they did on various domains;
  • It prompted MHFs to point out major differences between their assigned sites;
  • It focused MHFs on providing practice-specific information instead of generalities across all the sites to which they were assigned; and
  • It elicited MHFs’ insights into some of the strengths and limitations of the survey instrument.

Lessons Learned

  • Sharing survey data and having respondents reflect on it during the course of an interview proved to be a very helpful strategy for connecting data sources. Specifically, we received more detailed responses from interviewees by asking “Why do you think Practice ABC scored a 5 on the care coordination module?” rather than “What can you tell me about how Practice ABC is implementing care coordination?” MHFs would make the case for or against why a practice scored the way it did on a particular domain.
  • Involving the MHFs as “experts” on their assigned sites increased the MHFs’ investment in the evaluation process and their willingness to participate in future evaluation activities.

Hot Tip

  • We held these MHF interviews prior to doing practice site visits. The practice-specific information that MHFs shared with us deepened our familiarity with the sites prior to conducting site visits.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Alexandra Hill and Diane Hirshberg, and we are part of the Center for Alaska Education Policy Research at the University of Alaska Anchorage.  The evaluation part of our work ranges from tiny projects – just a few hours spent helping someone design their own internal evaluation – to rigorous and formal evaluations of large projects.

In Alaska, we often face the challenge of conducting evaluations with very small numbers of participants in small, remote communities. Even in Anchorage, our largest city, there are only 300,000 residents. We also work with very diverse populations, both in our urban and rural communities. Much of our evaluation work is on federal grants, which need to both meet federal requirements for rigor and power, and be culturally responsive across many settings.

Lesson Learned: Using mixed-methods approaches allows us to both 1) create a more culturally responsive evaluation; and 2) provide useful evaluation information despite small “sample” sizes. Quantitative analyses often have less statistical power in our small samples than in larger studies, but we don’t simply want to accept lower levels of statistical significance, or report ‘no effect’ when low statistical power is unavoidable.

Rather, we start with a logic model to ensure we’ve fully explored pathways through which the intervention being evaluated might work, and those through which it might not work as well.  This allows us to structure our qualitative data collection to explore and examine the evidence for both sets of pathways.  Then we can triangulate with quantitative results to provide our clients with a better sense of how their interventions are working.
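
To illustrate the power constraint noted above, here is a minimal sketch, assuming Python with the statsmodels package (our choice for illustration, not a tool named in this post), of how small samples limit the effects a quantitative comparison can reliably detect.

```python
# A minimal power calculation (assumes the statsmodels package is installed)
# showing why small samples push us toward triangulating with qualitative evidence.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Smallest standardized effect (Cohen's d) detectable with 80% power and
# alpha = 0.05 when each group has only 15 participants.
detectable_d = analysis.solve_power(nobs1=15, alpha=0.05, power=0.80, ratio=1.0)
print(f"Detectable effect with n = 15 per group: d = {detectable_d:.2f}")

# Power actually achieved for a 'medium' effect (d = 0.5) at that sample size.
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=15, alpha=0.05, ratio=1.0)
print(f"Power to detect d = 0.5 with n = 15 per group: {achieved_power:.2f}")
```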

At the same time, the qualitative side of our evaluation lets us build in measures that are responsive to local cultures, include and respect local expertise, and (when we’re lucky) build bridges between western academic analyses and indigenous knowledge. Most important, it allows us to employ different and more appropriate ways of gathering and sharing information across indigenous and other diverse communities.

Rad Resource: For those of you at universities or other large institutions that can purchase access to it, we recommend SAGE Research Methods. This online resource provides access to full-text versions of most SAGE research publications, including handbooks of research, encyclopedias, dictionaries, journals, and ALL the Little Green Books and Little Blue Books.

Rad Resource: Another Sage-sponsored resource is Methodspace, an online network for researchers. Sign-up is free, and Methodspace posts selected journal articles, book chapters and other resources, as well as hosting online discussions and blogs about different research methods.

Rad Resource: For developing logic models, we recommend the W.K. Kellogg Foundation Logic Model Development Guide.

The American Evaluation Association is celebrating Alaska Evaluation Network (AKEN) Affiliate Week. The contributions all this week to aea365 come from AKEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Timothy Guetterman, a doctoral student, and Delwyn Harnisch, a professor, in the College of Education and Human Sciences at the University of Nebraska-Lincoln.

Mixed methods approaches can be useful in assessing needs and readiness to learn among professional workshop participants. Combining qualitative and quantitative methods can enhance triangulation and the completeness of findings. We recently used mixed methods while evaluating a weeklong workshop delivered to medical educators in Kazakhstan and experienced how mixing can aid evaluation activities.

The international collaboration between teams in the U.S. and Kazakhstan presented challenges that we mitigated through technologies such as email, Skype, and Dropbox. Surveys administered before, during, and after the workshop through an online tool, Qualtrics, were important to guide implementation, continually assess learning, and understand the participants’ perspectives.

Hot Tips:

  • Guiding Implementation. Mixed methods within the needs and readiness assessment served a formative purpose, helping us tailor the workshop to specific participant needs.  Mixed methods analyses yielded rich details about what participants wanted and needed that would be difficult to anticipate with a quantitative instrument.  Online surveys presented a way to connect with participants early.  Beyond quantitative scales, we asked questions (e.g., “What do you hope to learn?”).  Because data were immediately available, findings guided the workshop implementation.
  • Continually Assess Learning. Throughout the workshop, brief (about one minute) surveys at the end of each day helped us gauge where participants were in their understanding and supported the development of a community of learners. The daily survey solicited brief qualitative responses to items (e.g., “Summarize in a few words the most important point from today”; “What point is still confusing?”). The questions provided valuable information but took only minutes to complete.
  • Understand the Participants’ Perspectives. In the summative evaluation of the workshop, mixed methods allowed us to obtain participant ratings and gain understanding of what participants learned through open-ended qualitative questions.

Lessons Learned:

  • With the use of these tools, we were able to model in this workshop a process for developing a deep and practical understanding of assessment for learning. As the program’s leaders share what they learned at their own sites, we are beginning to see site-based teacher learning communities take shape. Each of these sites is using two or three techniques in their own classrooms and then meeting with colleagues monthly to discuss their experiences and to see what other teachers are doing.
  • The result of this effort is that these teacher learning communities now develop a shared language enabling them to talk to one another about what they are doing.
  • In short, the use of mixed methods allows the team to focus on where the learners are now, where they want to go, and how we can help them get there.

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Mika Yamashita, a program chair of the Mixed Methods Evaluation Topical Interest Group (TIG).  The Mixed Methods Evaluation TIG was founded in 2010 to be a space for members to “examine the use of mixed methods evaluation through reflective analysis of philosophy, theory and methodology that is developing in the field of mixed methods” (Petition submitted to AEA in 2010). Evaluation 2012 will be our third year to sponsor sessions.

Mixed Methods Evaluation TIG members who presented at past conferences contributed this week’s posts. A majority of presentations focused on findings from mixed methods evaluations, analysis of data collection and analysis methods, and strategies used in evaluation teams, so posts for this week will cover these topics. On Monday, Tayo Fabusuyi and Tori Hill will highlight the framework used for the evaluation of a minority leadership program. On Tuesday, Leanne Kallemeyn and her colleagues at Loyola University will share lessons learned from and tips for conducting integrated analysis. On Wednesday, Kristy Moster and Jan Matulis will walk us through how their evaluation team members worked to analyze data from multiple sources. On Thursday, Hongling Sun will share lessons learned from conducting a mixed methods evaluation. Finally, on Friday, Terri Anderson will share her evaluation team’s experience using the National Institutes of Health’s guide, Best Practices for Mixed Methods Research in the Health Sciences, to understand an unexpected evaluation result.

Rad Resources: Listed are resources I found helpful for learning about Mixed Methods Evaluation.

The American Evaluation Association is celebrating Mixed Methods Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Melanie Kawano-Chiu, Program Director at the Alliance for Peacebuilding (AfP), and Andrew Blum, Director of Learning and Evaluation at the United States Institute of Peace (USIP). More than two years ago, we teamed up to launch an initiative called the Peacebuilding Evaluation Project: A Forum for Donors and Implementers (PEP).

In December 2011, with support from USIP and the Carnegie Corporation of New York, USIP and AfP hosted a day-long Peacebuilding Evidence Summit. The closed event examined nine different evaluation approaches in order to identify their strengths and weaknesses when applied to peacebuilding programs, which operate in complex, chaotic, and sometimes dangerous environments.

Rad Resource: The discussions among donors, implementers, and evaluation experts at the Peacebuilding Evidence Summit were synthesized into a report, Proof of Concept: Learning from Nine Examples of Peacebuilding Evaluation. In addition to themes that emerged across the analyses of the approaches examined at the Summit, the report covers each evaluation approach’s strengths, potential challenges and pitfalls, and applicable lessons.

Lesson Learned: A reflection on the use of a mixed methods approach, which included an RCT, showed that the RCT was most useful to external audiences, particularly donors. For program managers within the organization, the qualitative research was much more useful. This raised questions regarding the deployment of evaluation resources, since the bulk of the resources for the initiative went to the RCT.

Hot Tip: In developing evaluations for peacebuilding projects in the field, dangers resulting from conflict and post-conflict contexts must be acknowledged. In some cases, methodological rigor must be sacrificed due to security risks or political sensitivities. This calls for creative strategies to maximize rigor within these constraints.

Lesson Learned: The tension between accountability to donors and organizational learning within implementers is at times stark. There was discussion, although no consensus, at the Summit on whether these two goals are simply irreconcilable. This sparked discussion regarding the continued need for dialogue with donors on what is realistic to expect from evaluations.

Hot Tip: As the peacebuilding field evolves its evaluation practice, peacebuilders are increasingly sharing their evaluations and lessons learned in online settings such as the Learning Portal for Design, Monitoring, and Evaluation for Peacebuilding. The Learning Portal is a field-wide repository for evaluation reports and data, as well as best and emerging peacebuilding DM&E practices.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Deborah Mattila, Research and Evaluation Director at the Improve Group. Over the last several years I have had the pleasure of evaluating many school-based initiatives that explicitly and implicitly address development and application of 21st Century Skills, a set of learning and innovation skills built around the 4 Cs of Critical Thinking, Communication, Collaboration, and Creativity.

Rad Resources:

  • The Partnership for 21st Century Skills (P21, the national organization advocating for 21st Century readiness) has an interactive road map for 21st Century skills-related information, resources and tools. Here you can check out details behind all the skills areas, and their connections with standards-based learning, instructional practice and learning environments.
  • Intel® Teach Elements are free professional development courses that can help educators, program staff, and evaluators understand different aspects of a 21st Century Classroom (focused on digital learning), develop and use authentic assessments, and examine what student-led data-focused critical thinking looks like.

Hot Tips:

  • A key tenet of 21st Century classrooms is authentic assessment of student learning and achievement. Authentic assessments, which are developed to closely match the expected learning goals and desired skills, can be a great source of documentation for your evaluation; they can give you a full picture of what learning skills students are developing.
  • Many of the skills emphasized in 21st Century learning, such as critical thinking or creativity, feel to children like a natural part of who they are, not unique, stand-alone skills. Mixed methods in evaluation (surveys, classroom observations, review of authentic assessments) give a broader view of how 21st Century Skills manifest in each student.
  • 21st Century Skills are not limited to either elementary or secondary grade levels, or to any one subject area. This is important because you can look for student, teacher and learning environment outcomes related to the 21st Century skills, even when they are not an explicit goal of your program or initiative.

Lessons Learned:

  • Teaching staff may see 21st Century Skills as one more demand on their already burdened teaching practice. It may also feel like the pedagogical “flavor of the month” if their administration frequently responds to new content or teaching strategies. In particular, as the Common Core State Standards push to the forefront of K-12 education, 21st Century Skills may be left behind. I have found that framing 21st Century Skills as how kids learn, rather than what kids learn, helps focus the conversation on what we will and will not measure.

I love connecting and sharing ideas with other evaluators – connect with me on Twitter to have a conversation about this or other #eval topics!

Hot Tip: Take a minute and thank a teacher this week!

The American Evaluation Association is celebrating Educational Evaluation Week with our colleagues in the PreK-12 Educational Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our EdEval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Randahl Kirkendall. I am a public health manager turned evaluator. Platometrics is the name of my consulting business, which for the past three years has been focused on program research, planning, and evaluation. I am also a part-time evaluator for the Science Education Resource Center at Carleton College, which provides faculty professional development programs using a combination of workshops and web-based resources.

Four years ago while overseeing the development of two websites I learned how to use Google Analytics to track and measure website use. My first contract to evaluate website content was two years ago. Since then, I have learned much about evaluating program websites, but still consider myself to be on a steep learning curve in this area. Here is a little bit of what I have learned.

Lesson Learned: Using multiple and mixed evaluation methods that include both quantitative and qualitative metrics is the best way to fully understand the processes by which a website is being used as well as the outcomes that result. Web analytics can reveal much about how users navigate a website, which is something users have difficulty recalling. Surveys and interviews can capture the motivations behind users’ website use, the impacts and outcomes of using a website, and descriptive information about the users themselves. Combining the two helps provide a more complete picture, one that may also include the interplay between the website and other aspects of a program, such as a workshop or printed material.
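
As a rough sketch of what that combination can look like in practice, the example below joins a page-level analytics export with survey ratings of the same pages; the file names and column names are hypothetical assumptions, not taken from any actual project.

```python
# Hypothetical example: combine a web-analytics export with survey data about
# the same pages. File names and columns are illustrative assumptions.
import pandas as pd

# One row per page with usage metrics, e.g., exported from an analytics tool.
analytics = pd.read_csv("analytics_export.csv")   # columns: page, pageviews, avg_time_on_page

# Survey responses in which users rated the pages they used.
survey = pd.read_csv("survey_responses.csv")      # columns: page, usefulness_rating, open_comment

# Quantitative view: which pages get the most use?
usage = analytics.sort_values("pageviews", ascending=False)

# Mixed view: attach the average usefulness rating to each page's usage metrics.
mean_ratings = survey.groupby("page", as_index=False)["usefulness_rating"].mean()
combined = usage.merge(mean_ratings, on="page", how="left")

print(combined.head())  # high-traffic pages alongside how useful users say they are
```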

Rad Resource: Occam’s Razor by Avinash Kaushik (www.kaushik.net/avinash). This website is built around a blog by an expert in web analytics who presents information in an easy-to-understand and good-humored way. You might also want to check out his book, Web Analytics 2.0.

Hot Tip: I am currently developing a short Guide to Evaluating Program Websites, which I will post on www.platometrics.com later this month. If you would be interested in reviewing a draft or would like to be notified when it is posted, send me a note at rk@platometrics.com.

This is a relatively new and rapidly evolving area of evaluation, so if you know of any other good resources or ideas, please share them.

This contribution is from the aea365 Tip-a-Day alerts, by and for evaluators, from the American Evaluation Association. If you’d like to learn more from Randahl, consider attending his session at the AEA Annual Conference this November in San Antonio. Search the conference program to find Randahl’s session or any of over 600 to be presented.
