AEA365 | A Tip-a-Day by and for Evaluators

Category: Mixed Methods Evaluation

Hi, we are Ann Lawthers, Sai Cherala, and Judy Steinberg, UMMS PCMHI Evaluation Team members from the University of Massachusetts Medical School’s Center for Health Policy and Research. Today’s blog title sounds obvious, doesn’t it? Your definition of success influences your findings. Today we talk about stakeholder perspectives on success and how an evaluator’s decisions about what counts as “success” can change the results of an evaluation.

As part of the Massachusetts Patient-Centered Medical Home Initiative (PCMHI), the 45 participating practices submitted clinical data (numerators and denominators only) through a web portal. Measures included HEDIS® look-alikes such as diabetes outcomes and asthma care, as well as measures developed for this initiative, e.g., high-risk members with a care plan. Policy makers were interested in whether the PCMH initiative resulted in improved clinical performance, but they also wanted to know, “Who are the high- or low-performing practices on the clinical measures after 18 months in the initiative?” The latter question could be about either change or attainment. Practices were more interested in how their activities affected their clinical performance.

To address both perspectives we chose to measure clinical performance in terms of both change and attainment. We then used data from our patient survey, our staff survey, and the Medical Home Implementation Quotient (MHIQ) to find factors associated with both change and attainment.

Lesson Learned: Who are the high performers? “It depends.” High performance defined by high absolute levels of performance disproportionately rewarded practices that began the project with excellent performance. High performance defined by magnitude of change slighted practices that began at the top, as these practices had less room to change. The result? The top five performers defined by each metric were different.
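
To make the distinction concrete, here is a minimal sketch in Python. The practice names and scores are hypothetical, not results from the initiative; the point is only that ranking by attainment and ranking by change reward different practices.

```python
# Hypothetical illustration: the same practices ranked two ways.
# Scores are invented and are not data from the MA-PCMHI evaluation.
practices = {
    # name: (baseline %, 18-month %)
    "Practice A": (88.0, 90.0),   # started high, little room to improve
    "Practice B": (55.0, 78.0),   # started low, improved a lot
    "Practice C": (72.0, 80.0),
    "Practice D": (40.0, 62.0),
    "Practice E": (81.0, 84.0),
}

# "Attainment": rank by the absolute level of performance at 18 months.
by_attainment = sorted(practices, key=lambda p: practices[p][1], reverse=True)

# "Change": rank by improvement over the life of the project.
by_change = sorted(practices, key=lambda p: practices[p][1] - practices[p][0], reverse=True)

print("Top three by attainment:", by_attainment[:3])   # ['Practice A', 'Practice E', 'Practice C']
print("Top three by change:    ", by_change[:3])       # ['Practice B', 'Practice D', 'Practice C']
```

In this toy example, Practice A tops the attainment list but finishes last on change, while Practice B tops the change list but misses the attainment top three, which is exactly the pattern the Hot Tips below are getting at.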

Hot Tip:

  • Do you want to reward transformation? Choose metrics that measure change over the life of your project.
  • Do you want to reward performance? Choose metrics that assess attainment of a benchmark.
  • Recognize that each metric will produce a different list of high performers.

Lesson Learned: The practices wanted to know: “What can we do to make ourselves high performers?” Our mixed methods approach found that leadership and comfort with Health Information Technology predicted attainment, but that only low baseline performance predicted change.
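
For readers who want to see what “finding factors associated with attainment and change” might look like mechanically, here is a hedged sketch. The post does not say how the team modeled these associations, so this is just one plausible approach: a separate linear regression for each outcome, run on invented data. Names such as leadership and hit_comfort are placeholders for survey-derived scores, not actual evaluation variables.

```python
# Illustrative only: invented data standing in for the staff survey, patient
# survey, and MHIQ scores; not the UMMS team's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 45  # number of participating practices

df = pd.DataFrame({
    "leadership": rng.normal(0, 1, n),    # placeholder leadership score
    "hit_comfort": rng.normal(0, 1, n),   # placeholder comfort-with-Health-IT score
    "baseline": rng.uniform(40, 90, n),   # baseline clinical performance (%)
})
# Invented outcomes that loosely mimic the lesson reported above.
df["attainment"] = 70 + 5 * df["leadership"] + 4 * df["hit_comfort"] + rng.normal(0, 5, n)
df["change"] = 30 - 0.3 * df["baseline"] + rng.normal(0, 3, n)

X = sm.add_constant(df[["leadership", "hit_comfort", "baseline"]])
for outcome in ["attainment", "change"]:
    fit = sm.OLS(df[outcome], X).fit()
    print(f"--- Predictors of {outcome} ---")
    print(fit.params.round(2))
```

On real data you would of course look at confidence intervals and model fit, not just the point estimates.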

Hot Tip: A mixed methods approach provides a rich backdrop for interpreting your findings and supplies the detail that some stakeholders need or want.

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. All posts this week come from members who work with the PCMHI.

Hi, I’m Ann Lawthers, Principal Investigator for the evaluation of the Massachusetts Patient-Centered Medical Home Initiative (PCMHI), and faculty in the Department of Family Medicine and Community Health at the University of Massachusetts Medical School (UMMS). This week, the UMMS PCMHI Evaluation Team will be presenting a series of Hot Tips, Lessons Learned and Rad Resources for evaluating large, complex and multi-stakeholder projects. We cover issues that surfaced during the design, data collection and analysis phases.

Hot Tip: When beginning to think about the design of a large, complex project, consider using a mixed methods approach to maximize the breadth and depth of evaluation perspectives. The Massachusetts PCMHI’s principal stakeholders – state government officials, insurers, and the practices themselves – have invested time and financial resources in helping primary care practices adopt the core competencies of a medical home. Each stakeholder group came into the project with different goals and agendas.

We selected a mixed methods evaluation approach to answer three deceptively simple questions:

  1. To what extent and how do practices transform to become medical homes?
  2. To what extent and in what ways do patients become active partners in their health care?
  3. What is the initiative’s impact on service use, clinical quality, and patient and provider outcomes?

Lesson Learned: Our mixed methods approach allowed us to tap into the perspectives and interests of multiple stakeholder groups. The primary care practices participating in the PCMHI demonstration were keenly interested in the “how” of transformation (Question 1) while state policy makers wanted to know “if” practices transformed (also Question 1). We addressed the “how” principally through qualitative interviews with practice staff and the TransforMED Medical Home Implementation Quotient (MHIQ) questionnaire, completed by practice staff.

Participating practices also cared a great deal about the initiative’s effect on patients. Did patients perceive a change and become more actively involved in their health care (Question 2)? We used patient surveys to address this question.

Finally, all stakeholder groups were interested in the impact question (Question 3). Claims data, clinical data reported by practices, staff surveys and patient surveys all provided different views of how the PCMHI affected service use, clinical quality and other outcomes.


Hello! We are Stacy Johnson and Cami Connell from the Improve Group. At Evaluation 2013, we had the opportunity to present on our experiences using a unique mixed methods approach to collecting data.

Your data collection strategy has the potential to seriously impact your evaluation. You might ask yourself questions like: How do we make sure we are getting the whole story? What if one method isn’t appropriate for gathering all the information we need from a single source? How do we engage people in data collection in a way that helps them understand and want to use the findings? One way to address these questions is to treat each stage of data collection as a layered process, directly connecting quantitative and qualitative methods so they complement each other and build a more in-depth and accurate story.

How is this different from how we traditionally think about data collection? We still access the same key sources to answer our evaluation questions, but the design includes a feedback loop to allow the evaluator to immediately integrate any initial findings into the data collection process as they emerge. This often means intentionally including additional interviews or focus groups after an initial stage of data collection to present data back to stakeholders and ask for feedback and relevant background about emerging themes.

Lesson Learned: Provide an orientation to data. Not everyone looks at data every day! Walking stakeholders through data increases the chances that they will want to use it to inform decisions.

Hot Tip: Create easy-to-interpret graphics to make data more accessible.

Lesson Learned: Make it a mutually beneficial process. In addition to gathering important information for the evaluation, it is equally important to make sure people feel like they are heard and that sharing their experiences can positively impact their work.

Hot Tip: Facilitate discussion about how data applies in day-to-day work.

Hot Tip: Encourage problem solving and planning for how data can inform changes or improvements.

Lesson Learned: Understand the stakes and relationships. Depending on the nature of relationships and the potential consequences of the evaluation, there is a risk of people painting an overly positive or overly negative picture. In addition, when presenting data from one source to another, careful attention should be paid to masking the identity of the original source, especially when there are easily identifiable groups or existing adversarial relationships.

Hot Tip: Include people with different perspectives and roles in the data collection process to uncover any underlying dynamics.

Hot Tip: Be aware of any adversarial or contentious relationships that may exist; depending on those relationships, this approach is not always appropriate.

Hot Tip: Mask the original source of data as appropriate.


Greetings. We are Linda Cabral and Laura Sefton from the University of Massachusetts Medical School, Center for Health Policy and Research. We are part of a multi-disciplinary team evaluating the Massachusetts Patient Centered Medical Home Initiative (MA-PCMHI), a state-wide, multi-site demonstration project engaging 46 primary care practices in organizational transformation to adopt the PCMH primary care model. This evaluation takes a mixed methods approach, utilizing 1) multiple surveys targeted at different stakeholders (e.g., staff, patients), 2) analysis of cost and utilization claims, 3) practice site visits, and 4) interviews with Medical Home Facilitators (MHFs).

We wanted to connect data from TransforMED’s Medical Home Implementation Quotient (MHIQ) survey with our MHF interview data to better understand the practices’ MA-PCMHI experience. MHFs provide a range of technical assistance to aid their assigned practices in the transformation process, making them a great source of information about their practices’ transformation. In an effort to triangulate our evaluation findings, we presented the MHIQ results to the MHFs as part of a traditional semi-structured interview. Presenting site-specific survey data to MHFs served the following purposes:

  • It allowed MHFs to share their reflections on why their practices scored the way they did on various domains;
  • It prompted MHFs to point out major differences between their assigned sites;
  • It focused MHFs on providing practice-specific information instead of generalities across all the sites to which they were assigned; and
  • It gave MHFs an opening to provide insight into some of the strengths and limitations of the survey instrument.

Lessons Learned

  • Sharing survey data and having respondents reflect on it during the course of an interview proved to be a very helpful strategy for connecting data sources. Specifically, we received more detailed responses from interviewees by asking “Why do you think Practice ABC scored a 5 on the care coordination module?” rather than “What can you tell me about how Practice ABC is implementing care coordination?” MHFs would argue for or against the score a practice received on a particular domain.
  • Involving the MHFs as “experts” on their assigned sites increased the MHFs’ investment in the evaluation process and their willingness to participate in future evaluation activities.

Hot Tip

  • We held these MHF interviews prior to doing practice site visits. The practice-specific information that MHFs shared with us deepened our familiarity with the sites prior to conducting site visits.


We are Alexandra Hill and Diane Hirshberg, and we are part of the Center for Alaska Education Policy Research at the University of Alaska Anchorage.  The evaluation part of our work ranges from tiny projects – just a few hours spent helping someone design their own internal evaluation – to rigorous and formal evaluations of large projects.

In Alaska, we often face the challenge of conducting evaluations with very small numbers of participants in small, remote communities. Even in Anchorage, our largest city, there are only 300,000 residents. We also work with very diverse populations, both in our urban and rural communities. Much of our evaluation work is on federal grants, which need to both meet federal requirements for rigor and power, and be culturally responsive across many settings.

Lesson Learned: Using mixed-methods approaches allows us to both 1) create a more culturally responsive evaluation; and 2) provide useful evaluation information despite small “sample” sizes. Quantitative analyses often have less statistical power in our small samples than in larger studies, but we don’t simply want to accept lower levels of statistical significance, or report ‘no effect’ when low statistical power is unavoidable.

Rather, we start with a logic model to ensure we’ve fully explored pathways through which the intervention being evaluated might work, and those through which it might not work as well.  This allows us to structure our qualitative data collection to explore and examine the evidence for both sets of pathways.  Then we can triangulate with quantitative results to provide our clients with a better sense of how their interventions are working.

At the same time, the qualitative side of our evaluation lets us build in measures that are responsive to local cultures, include and respect local expertise, and (when we’re lucky) build bridges between western academic analyses and indigenous knowledge. Most important, it allows us to employ different and more appropriate ways of gathering and sharing information across indigenous and other diverse communities.
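
To put a rough number on the small-sample power problem mentioned above, here is a hypothetical calculation (generic numbers, not from any particular Alaska evaluation), assuming a simple two-group comparison and the statsmodels power utilities:

```python
# Hypothetical power calculation illustrating the small-sample problem.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Medium effect (Cohen's d = 0.5), 15 participants per group, two-sided alpha = 0.05:
small_n_power = analysis.solve_power(effect_size=0.5, nobs1=15, alpha=0.05)
print(f"Power with 15 per group: {small_n_power:.2f}")   # roughly 0.26, well below the conventional 0.80

# The same effect would need about 64 participants per group to reach 80% power:
needed_n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"Sample size per group for 80% power: {needed_n:.0f}")  # roughly 64
```

Triangulating with qualitative evidence, as described above, is one way to say something useful even when the quantitative side is this underpowered.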

Rad Resource: For those of you at universities or other large institutions that can purchase access to it, we recommend SAGE Research Methods. This online resource provides access to full-text versions of most SAGE research publications, including handbooks of research, encyclopedias, dictionaries, journals, and ALL the Little Green Books and Little Blue Books.

Rad Resource: Another SAGE-sponsored resource is Methodspace (http://www.methodspace.com/), an online network for researchers. Sign-up is free, and Methodspace posts selected journal articles, book chapters, and other resources, as well as hosting online discussions and blogs about different research methods.

Rad Resource: For developing logic models, we recommend the W.K. Kellogg Foundation Logic Model Development Guide.


The American Evaluation Association is celebrating Alaska Evaluation Network (AKEN) Affiliate Week. All posts this week come from AKEN members.

I’m Cheryl Poth, an assistant professor at the Centre for Applied Studies in Measurement and Evaluation in the Department of Educational Psychology, Faculty of Education, at the University of Alberta in Edmonton, Canada. My research focuses on how developmental evaluators build evaluation capacity within their organizations. My use of mixed methods is pragmatically driven; that is, I use it when the research/evaluation question(s) require the integration of both qualitative themes and quantitative measures to generate a more comprehensive understanding. Most recently, my work within research teams has provided the impetus for research and writing about the role of a mixed methods practitioner within such teams.

Lessons Learned:

  • Develop and teach courses. In 2010, I developed (and offered) a doctoral mixed methods research (MMR) course in response to demand from graduate students for opportunities to gain MMR skills. The course was oversubscribed, and at the end of the term we formed a mixed methods reading group, which continues to provide support as students work their way through the research process. I am fortunate to be able to offer this course again this winter, and already it is full!
  • Offer workshops. To help build MMR capacity, I have offered workshops in a variety of locations, most recently at the 9th Annual Qualitative Research Summer Intensive held in Chapel Hill, NC in late summer and at the 13th Thinking Qualitatively Workshop Series offered by the International Institute for Qualitative Methodology in Edmonton, AB in early summer. These workshops remind me that, for many researchers, graduate training required only an advanced research methods course that was either qualitatively or quantitatively focused. They also remind me of the need to build a community of MM researchers, a community that can exist locally or, through technology, globally. It has been a pleasure to watch new and experienced researchers begin to learn about MMR designs and integration procedures.
  • Join a community. I have begun to find my community of MM researchers through a group currently working on forming the International Association of Mixed Methods, at the International Mixed Methods conference, and among the mixed methods researchers on Methodspace.


Hi, I’m Katherine Hay. I’ve spent the last 15 years in India working on development, research, and evaluation.

Lessons Learned:

  • A mantra I use all the time is:  ‘there is no gender neutral policy, program, or evaluation.’  If I hear one of these things described as ‘gender neutral’ I start to probe.  Usually when an intervention is called ‘gender neutral,’ it is actually gender blind.
  • South Asia, my home and the place I work, has the worst gender inequities in the world.
  • Evaluation can reinforce or reflect social inequities – or it can challenge them. I want to play a part in challenging them. To do that, evaluation has to help us figure out what shows promise in shifting inequities and what does not.  This is what draws me to feminist evaluation.
  • Mainstream development, and by extension mainstream evaluation, grapples with mainstream questions.   This has resulted in designs, approaches, and tools which are not particularly well suited to understanding inequities.   Feminist analysis brings inequity to the foreground.

Hot Tips:

  • I’m often asked, ‘But how do you do feminist evaluation?’  There are no shortcuts.  The answer is, ‘by applying feminist principles at different stages in an evaluation.’   For example:
  1. At the start of the evaluation, feminist analysis can be used to ask, ‘Whose questions are these?’ and ‘Whose questions are being excluded?’
  2. A rigorous feminist evaluation uses the mix of methods that matches the questions.  But some designs factor out the perspectives of marginalized groups.  Feminist evaluation designs include them.
  3. At the judgment stage, feminist evaluations recognize that there are different and often competing definitions of success in development interventions. Feminist analysis brings these differences to the surface for debate.
  4. At the use stage, feminist analysis brings recognition that particular pathways may be strategic, blocked, or risky. A feminist approach also brings responsibility to take responsible action on findings.
  • Get Involved. Peer support has been invaluable to my evaluation practice.  I’m part of a group in South Asia trying to strengthen our work through feminist analysis.  We share our designs, instruments, processes and challenges.  We are critical but supportive.  Being part of this group reminds me why evaluation matters. Try to find a group of peers to challenge and inspire you.  If you want to share resources or get in touch, we have a Feminist Evaluation website.

The American Evaluation Association is celebrating the Mixed Methods Evaluation and Feminist Issues TIGs (FIE/MME) Week. All posts this week come from FIE/MME members.

My name is Elizabeth (AKA: Bessa) Whitmore. Now a retired Professor from the School of Social Work, Carleton University, Ottawa, Canada, I have been a member of the Feminist TIG since its inception. The following entry draws on a chapter I am writing entitled “Researcher/evaluator roles and social justice” in a forthcoming Handbook on Feminist Evaluation (edited by Denise Seigart, Sharon Brisolara and Sumitra SenGupta).

Hot Tips:

  • A feminist evaluator plays a range of roles, including facilitator, educator, collaborator, technical expert/methodologist, and activist/advocate. Not everyone can do everything equally well, so self-knowledge and confidence in one’s strengths (and limitations) are essential. The personal characteristics, experience, and preferences of the evaluator will dictate which role(s) she/he plays best. It is critical to recognize that the role the evaluator plays, and how she/he plays it, is intimately tied to her/his own worldview, history, and biography. There is no objectivity; we need to be aware that we are deeply grounded in our own location and life experience.
  • Good “people skills” are essential when engaging stakeholders in the process. These include active listening, cultural sensitivity, non-verbal communication, motivating participants, coordinating relationships, encouraging interactions, supporting others’ ideas, and an ability to reflect critically on one’s own reactions and behavior.
  • Having fun: We should not dismiss the importance of fun in this work. “If I can’t dance, I don’t want to be part of your revolution” said Emma Goldman back in the 1930s. Long hours without some laughter tend to burn people out, or they just drop out.

Cool Tricks: Here are some questions one might ask when planning and implementing a feminist evaluation:

  • In what ways are women (men, bisexual and transgendered people, etc.) treated differently within the program, and how do their experiences and outcomes differ? In what ways do class, race, and gender combine to expand or contract possibilities for participants?
  • Are both women and men being consulted about objectives and activities? Which women, and which men? Has the potential for community resistance to women’s empowerment activities or organizational resistance to female managers been assessed?
  • Did the project have any unexpected (positive or negative) social and gender equity outcomes?

Lessons Learned:

  • A feminist lens enhances validity in all evaluation approaches. For example, an experimental design pays attention to the sample distribution among men and women and considers gender-related factors in the questions asked and in the data analysis. A utilization-focused evaluation attends to the gender (and other) distribution in decision-making. Social justice approaches (such as empowerment, participatory, collaborative, and transformative evaluation) consider the equality and quality of gender participation.
  • Get involved: A good place to discuss these and other issues is the Feminist TIG.


Greetings, I am Denise Seigart, Associate Dean for Nursing at Stevenson University. Like a great novel, a great feminist evaluation creates the conditions for learning and change, particularly for the benefit of women. In 2008-2009 I implemented a feminist evaluation to study school-based health care in the United States (U.S.), Canada, and Australia. In the process of implementing a qualitative study of school-based health care, I utilized a feminist lens and feminist methods, including reflexivity, interviews focused on active listening and the experiences of the interviewees, collaborative examination of the data with interested stakeholders and other feminists and non-feminists, and diverse dissemination of the results for the purpose of promoting dialog, health care reform, and social justice for children. It was my intent to create conditions for a critical feminist exploration of school health care for children across the three countries, to share this information, and ultimately, to promote community learning, action, and change.

Lessons Learned:

  • Feminist evaluation is like other evaluation. It is concerned with measuring the effectiveness of programs, judging merit or worth, and examining data to promote change. The difference between feminist approaches and other evaluation models generally lies in the increased attention paid to gender issues, the needs of women, and the promotion of transformative change.
  • Feminist evaluation is interested in promoting social justice for women, but includes other oppressed groups as well. Attention is paid not only to gender but to race, class, sexual orientation, and abilities. In my study, I interviewed 73 school nurses, parents and administrators in Canada, Australia, and the U.S. regarding the presence and quality of school health care in their countries. I paid particular attention to emerging themes that indicated problems with racism, sexism, and classism, and asked additional questions as these emerged. For example, it was apparent that certain groups had more difficulty accessing health care in schools (aborigines, children with special needs) and that depending on the school district, services could vary widely. Teachers (largely women) were often asked to act as health care providers to save school districts money, and nurse practitioners (largely women) experienced difficulty gaining access and approval to provide care in schools.
  • To implement a feminist evaluation, think carefully about the questions you want to ask, the methods you want to use, and the setting. Some facilities may bar access to evaluators who declare themselves feminist, so choose your language carefully. Be sure to involve other feminists and non-feminists, so that when planning your design or analyzing data you can check for misinterpretations and ask, “What would a feminist see?”


Hello! I am Alessandra Galiè, a PhD candidate at Wageningen University in the Netherlands. From 2006 to 2011 I collaborated with a Participatory Plant Breeding programme coordinated at the International Centre for Agricultural Research in the Dry Areas (ICARDA) to assess the programme’s impact on the empowerment of the newly involved women farmers in Syria. The findings helped us understand how empowerment can take place as a process, and were useful in making the programme’s strategies more gender-sensitive. I chose to work with a small number (Small-N) of respondents (12 women) and a mixture of qualitative methods to provide an in-depth understanding of changes in empowerment as perceived by the women themselves and their community.

Lessons Learned

  • Small-N research is valuable. Small-N in-depth research is often criticised for its limited external validity. However, it was an extremely valuable methodology for exploring a relatively new field of research, with the aim of understanding complex social processes, formulating new questions, and identifying new issues for further exploration.
  • Systematic evaluation should include empowerment. Empowerment is an often-cited impact of development projects but is rarely the focus of systematic evaluation. Assessing changes in empowerment required an approach that was specific to the context and intervention under analysis and relevant to the respondents and their specific circumstances. This revealed the different positionalities of women in the empowerment process and the inappropriateness of blueprint solutions to the ‘empowerment of women’.
  • Measure gender-based implications. An analysis of the impact of a breeding programme on the empowerment of women showed that ‘technical interventions’ have gender-based implications both for the effectiveness of the technology and for equity concerns in development.

