AEA365 | A Tip-a-Day by and for Evaluators

CAT | Mixed Methods Evaluation

Hello, I am Donna Podems, founder and director of OtherWISE: Research and Evaluation, a small monitoring and evaluation firm in Cape Town, South Africa. We work with a range of international and local donors who fund a wide variety of technical interventions in areas such as environment, education, health, community development, and human rights.

We encourage evaluation use by choosing and mixing evaluation approaches that yield credible and useful findings. Feminist evaluation is one of the approaches I often draw upon, which often surprises many of my colleagues.

Feminist evaluation can be useful, even for non-feminist evaluators.

Hot Tip:

  • You do not need to be a feminist to use feminist evaluation. It is important to understand that not all feminist evaluators (or evaluation theorists) agree with me. Over 18 years of conducting evaluation in more than 25 countries, I have had the privilege of working with many talented evaluators, most of whom were not feminists. In more than 15 different evaluations in Africa and Asia, my team members agreed to incorporate various elements of a feminist approach that resulted in useful evaluation processes and findings.

Lessons Learned: Three lessons I have learned about addressing the question I hear most often: “How do you apply feminist evaluation if you are not a feminist?”

  • Be knowledgeable about what feminist evaluation is, and is not. Many people I work with have a strong reaction to feminist evaluation and yet few can explain what the approach entails. Demonstrate how elements of the approach could enable a credible and useful evaluation.
  • Remove the label. Having two words that often elicit strong reactions together in one phrase is a challenge. Remove the label and explain the approach.
  • Adapt as needed. In my experience, feminist evaluation often provides a useful complement to other evaluation approaches.

Rad Resource:

The American Evaluation Association is celebrating the Mixed Methods Evaluation and Feminist Issues TIGs (FIE/MME) Week. The contributions all week come from FIE/MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Donna Mertens and Mika Yamashita, Chair and Program Chair of the Mixed Methods Evaluation TIG. This week, we offer five posts written by Feminist Issues in Evaluation TIG members. Why is the Mixed Methods TIG co-hosting this week with the Feminist Issues in Evaluation TIG? Because this week’s posts touch upon issues associated with theoretical and paradigmatic choices and their implications for evaluation design and methods. “Mixed methods” may give the impression that it is all about techniques for using quantitative and qualitative methods in one evaluation study. That is one area of discussion, but the Mixed Methods Evaluation TIG holds that mixing can occur at the level of inquiry purpose, philosophical assumptions, methodological design, and/or specific data-gathering techniques. So the TIG sees our discussion as including the relationship between paradigmatic and theoretical lenses and methods. The authors of this week’s posts will walk us through how the feminist lens informed inquiry purposes, choices of evaluation design, and methods.

 

Highlighted for FIE/MME week:

  • Authors will explicitly talk about their worldviews, such as:
    • what they believe in (they believe in social justice),
    • the issues they are concerned about (gender issues and marginalized populations),
    • and how their worldviews influenced the evaluation questions they asked and their choice of methods.

Lesson Learned: Your evaluation lens is important. The feminist lens helps evaluators see conflicting views of what the problem is. With this understanding, evaluators consciously decide which evaluation questions to ask and whose questions they are. The feminist lens also helps evaluators see the diversity within a disadvantaged population.

Hot Tips:

  • Be reflective. You will also notice that the evaluators are reflective about how they and their evaluations may be perceived by others. They share lessons learned from establishing relationships with evaluation participants, evaluation commissioners, and audiences.
  • Match your analysis to your evaluation design. The evaluators decided on data collection and analysis methods by considering the evaluation questions, the purpose of the evaluation, and the settings in which data collection took place. How to include the perspectives of marginalized populations is an important consideration when deciding on methods.

Rad Resources:

The American Evaluation Association is celebrating the Mixed Methods Evaluation and Feminist Issues TIGs (FIE/MME) Week. The contributions all week come from FIE/MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We’re Allan Porowski from ICF International and Heather Clawson from Communities In Schools (CIS). We completed a five-year, comprehensive, mixed-method evaluation of CIS, which featured several study components – including three student-level randomized controlled trials; a school-level quasi-experimental study; eight case studies; a natural variation study to identify what factors distinguished the most successful CIS sites from others; and a benchmarking study to identify what lessons CIS could draw from other youth-serving organizations. We learned a lot over the years, and wanted to share a few big takeaways with you about conducting evaluations of interventions for at-risk youth.

Lessons Learned:

  • Sometimes, you have to catch falling knives: We found that the students coming into CIS were targeted for services because they were on the strongest downward trajectories on a number of factors (e.g., academics, behavior, family issues, attendance). There’s an old adage in stock market trading that you should “never catch a falling knife” – but that’s what CIS and other dropout prevention programs do every day. This has implications for how you evaluate the relationship between dosage and outcomes. A negative relationship between dosage and outcomes doesn’t necessarily indicate that services aren’t working – it can actually be an indication that services are going where they are needed the most (a simple simulation of this pattern follows this list).
  • Look for the “Nike Swoosh”: The general pattern of outcomes among CIS students looked like Nike’s “swoosh” logo: There was an initial downward slide followed by a longer, more protracted period of improvement. Reversing that initial downward slide takes time, and this pattern is worth investigating if you’re evaluating programs for at-risk youth.
  • As the prescient rock band Guns n’ Roses put it, “All we need is just a little patience”: Needless to say, it takes a long time to turn a child’s life around. So many evaluations of at-risk students don’t have a long enough time horizon to show improvements, which may in part explain why we see such low effect sizes in dropout prevention research relative to other fields of study.
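To make the “falling knife” point concrete, here is a minimal, hypothetical simulation in Python (not drawn from the CIS data; every number below is invented) of how targeting services to the students in steepest decline can produce a negative raw correlation between dosage and outcomes even when each hour of service genuinely helps:

import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Latent "risk" drives both the downward trajectory and how much service a student receives.
risk = rng.normal(size=n)
dosage = np.clip(2.0 * risk + rng.normal(scale=0.5, size=n), 0, None)  # hours of service

# Outcomes fall with risk but improve with each hour of service (true effect = +0.3).
outcome = 50 - 4.0 * risk + 0.3 * dosage + rng.normal(scale=2.0, size=n)

print("Raw dosage-outcome correlation:", np.corrcoef(dosage, outcome)[0, 1])

# Adjusting for the underlying risk recovers the positive service effect.
X = np.column_stack([np.ones(n), dosage, risk])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("Dosage coefficient after adjusting for risk:", beta[1])

In this toy setup the raw dosage-outcome correlation comes out negative, yet adjusting for the underlying risk recovers the true positive service effect; that is the logic behind not reading a negative dosage-outcome relationship as proof that services fail.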

Rad Resources:

  • Executive Summary of Communities In Schools’ Five-Year National Evaluation
    • Communities In Schools has great ideas and resources for dealing with at-risk youth. CIS surrounds students with a community of support, empowering them to stay in school and achieve in life. Through a school-based coordinator, CIS connects students and their families to critical community resources, tailored to local needs. Working in nearly 2,700 schools, in the most challenged communities in 25 states and the District of Columbia, CIS serves nearly 1.26 million young people and their families every year.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We’re Allan Porowski from ICF International and Heather Clawson from Communities In Schools (CIS). We completed a five-year, comprehensive, mixed-method evaluation of CIS, which featured  several study components – including three student-level randomized controlled trials; a school-level quasi-experimental study; eight case studies; a natural variation study to identify what factors distinguished the most successful CIS sites from others; and a benchmarking study to identify what lessons CIS could draw from other youth-serving organizations.  We learned a lot about mixed-method evaluations over the course of this study, and wanted to share a few of those lessons with you.

Lessons Learned:

  • Complex research questions require complex methods. Disconnects exist between research and practice because the fundamental research question in an impact evaluation (i.e., does the intervention work?) provides little practical utility for practitioners in their daily work. CIS leadership not only wanted to know whether CIS worked, but also how it worked, why it worked, and in what situations it worked, so they could engage in evidence-informed decision making. These more nuanced research questions required a mixed methods approach. Moreover, CIS field staff already believed in what they were doing – they wanted to know how to be more effective. Mixed methods approaches are therefore a key prerequisite to capturing the nuance and the process evaluation findings that practitioners want.
  • Practitioners are an ideal source of information for determining how much “evaluation capital” you have. CIS serves nearly 1.3 million youth in 25 states, which raises the likelihood that different affiliates employ different language, processes, and even philosophies about best practice. In working with such a widespread network of affiliates, we saw the need to convene an “Implementation Task Force” of practitioners to help us set parameters around the evaluation. This group met monthly and proved incredibly helpful in (a) identifying language commonly used by CIS sites nationwide to include in our surveys, (b) reviewing surveys and ensuring that they were capturing what was “really happening” in CIS schools, and (c) identifying how much “evaluation capital” we had at our disposal (e.g., how long surveys could take before they posed too much burden).
  • The most important message you can convey: “We’re not doing this evaluation to you; we’re doing this evaluation with you.” Although it was incumbent upon us as evaluators to be dispassionate observers, that did not preclude us from engaging the field. Evaluation – and especially mixed-methods evaluation – requires the development of relationships to acquire data, provide assistance, build evaluation capacity, and message findings. As evaluators, we share the desire of practitioners to learn what works. By including practitioners in our Implementation Task Force and our Network Evaluation Advisory Committee, we were able to ensure that we were learning together and that we were working toward a common goal: to make the evaluation’s results useful for CIS staff working directly with students.

Resources:

  • Executive Summary of CIS’s Five-Year National Evaluation
  • Communities In Schools surrounds students with a community of support, empowering them to stay in school and achieve in life. Through a school-based coordinator, CIS connects students and their families to critical community resources, tailored to local needs. Working in nearly 2,700 schools, in the most challenged communities in 25 states and the District of Columbia, Communities In Schools serves nearly 1.26 million young people and their families every year.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! I’m Terri Anderson, Director for Evaluation at the University of Massachusetts Medical School’s (UMMS) Center for Health Policy and Research. I want to share our evaluation team’s experience using the National Institutes of Health’s guide, Best Practices for Mixed Methods Research in the Health Sciences, to understand an unexpected evaluation result.

When combining survey data with in-depth interviews, national guidelines can help. Our UMMS evaluation team, with expertise in both quantitative and qualitative methods, is studying the Massachusetts Patient Centered Medical Home (PCMH) Initiative. In this project, 46 primary care practices with varying amounts of PCMH experience will transform over a 3-year period and achieve National Committee for Quality Assurance (NCQA) PCMH recognition. Three members from each practice completed a quantitative survey as the baseline assessment of medical home competency.

The assessment results surprised us.  A group of practices with two years of PCMH experience scored lower than the novice groups when we expected just the opposite.  So, we looked to our qualitative results, comparing code summary reports to the quantitative results.  The NIH mixed methods guide terms this approach to integrating multiple forms of data, ‘merging’.

The guide describes ‘connecting’ as well.  To connect, we included the quantitative analyses in the semi-structured guides used for subsequent qualitative data collection. With these results we understood the novice groups’ advantage.  Integrating data further reinforced the importance of teamwork in evaluation work.
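As a small illustration of “merging” (the group names, scores, and themes below are entirely hypothetical, not the UMMS results), placing the quantitative scores and the qualitative code summaries side by side for each practice group might look like this in Python:

import pandas as pd

# Hypothetical baseline competency scores by practice group.
survey = pd.DataFrame({
    "practice_group": ["novice", "one_year_pcmh", "two_year_pcmh"],
    "mean_competency_score": [62.1, 58.4, 55.9],
})

# Hypothetical summaries of the dominant qualitative code for each group.
qual_summaries = pd.DataFrame({
    "practice_group": ["novice", "one_year_pcmh", "two_year_pcmh"],
    "dominant_theme": [
        "optimism about new workflows",
        "competing demands during transition",
        "awareness of how far full transformation still is",
    ],
})

# "Merging": one table that juxtaposes the two strands of evidence.
merged = survey.merge(qual_summaries, on="practice_group")
print(merged.to_string(index=False))

“Connecting” would then carry the surprising rows of such a table into the next round of semi-structured interview guides.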

Lessons Learned:

  • Form an interdisciplinary team.  We established a ‘mixed methods subgroup’ in which quantitative and qualitative team members work jointly rather than in parallel.  In a team the focus shifts from ‘this approach versus that approach’ to ‘what approach works best’. Regular meeting times allow the members to learn to work together.  Our team originally formed to investigate a single puzzling result but has expanded its work to merge quantitative and qualitative staff satisfaction data.
  • Connect your data.  We plan to continue using quantitative results in semi-structured interview guides to collect qualitative data.  The qualitative results provided an in-depth understanding of the quantitative assessment and the opportunity for interviewees to comment on their practices’ transformation.

Rad Resources:

  • Best Practices for Mixed Methods Research in the Health Sciences. The National Institutes of Health Office of Behavioral and Social Sciences Research commissioned this recently released guide in 2010. Easily accessible online, it contains seven sections of advice for conducting a mixed methods project and lists of key references and resources.
  • Mixed Methods Topical Interest Group. Through the AEA website, we can communicate directly with experts in the growing mixed methods field whose work is referenced in the NIH guide.

The American Evaluation Association is celebrating Mixed Methods Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, my name is Hongling Sun. I am a PhD student at the University of Illinois at Urbana-Champaign. I feel honored to have this opportunity to share my mixed methods experience with you, with a focus on the lessons and tips I have learned through constructing a mixed methods design for a needs assessment study. That design was initially constructed to fulfill a requirement for Dr. Greene’s mixed methods class and has since been developed into my dissertation study. Below are a few lessons and tips I have learned through this experience.

Hot Tips:

  • Be clear. You cannot claim you will do a mixed methods study before you are clear about what you want to do in your evaluation. A tip here is to remember that methods are always subordinate to substantive studies. Therefore, decide your evaluation purposes and questions first, then decide whether mixed methods is appropriate for your study, and if so, decide what specific mixed methods purposes and design fit your study.
  • Use a high-quality rationale. You cannot justify your use of mixed methods in “general” terms by only quoting the mixed methods literature. A high-quality rationale integrates the substance of your specific evaluation purposes and questions with the mixed methods literature.
  • Consider your reasons. It is important to explicitly justify your use of mixed methods in your evaluation. Evaluators who adopt a mixed methods design are encouraged to carefully consider their reasons for using mixed methods (their mixed methods purposes) and their design dimensions (e.g., weight of methods, timing), along with what methods will be used and why, and where the “mixing” will take place in the evaluation.
  • Keep your mixed methods design flexible. Given that mixed methods practice is often more complicated than mixed methods theory, evaluators who design a mixed methods study are encouraged to consider what an alternative design might be. For example, a mixed methods study with a development purpose could shift to a complementarity purpose: in practice you may find you have no time to analyze the data from an earlier phase as planned, in which case you could implement the phases at the same time and analyze the data after all data collection is complete.
    • Consider your alternative plan when you construct a mixed methods design. The alternative plan can be a different mixed methods design or a mono-method design.

These lessons and tips are nothing new. However, I still see them as the most critical lessons I have ever truly experienced and understood (not only at the theoretical level), and they remain among the most critical principles in my current practice of mixed methods.

The American Evaluation Association is celebrating Mixed Methods Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we’re Kristy Moster and Jan Matulis. We’re evaluation specialists in the Education and Organizational Effectiveness Department at Cincinnati Children’s Hospital Medical Center.

Over the past year, our team has been engaged in the analysis of data from a three-year project with the Robert Wood Johnson Foundation focused on quality improvement training in healthcare. The data from the project includes information from surveys, interviews, knowledge assessments, observations of training, document analysis, and peer and instructor ratings of participants’ projects. Our task as a team was to pull all of the information together to create a clear, accurate, coherent story of the successes and challenges of quality improvement training at our institution. This work was also discussed as part of a roundtable at the AEA Conference in November 2011.

Hot Tip:

  • Create a visual framework. Guided by an example found in transdisciplinary science, we created a visual framework to represent the extensive data and data sources from the project, including their interconnections. Starting with the logic model, we identified a set of themes being addressed by the evaluation, and then matched individual survey items, interview questions, etc., to the themes. From there we created a framework to show connections between the data sources and themes. This framework helped to create a shared understanding of the data for our research team, some of whom were fairly new to the project when the analysis began. It also provided structure to our thinking and our work. For example, the framework helped us to ensure that all themes were addressed by multiple data sources and also to determine which data sources to target first for different phases of our analysis (in our case, those sources that addressed the most themes of highest interest). A small sketch of this kind of source-to-theme mapping appears after the figure below.
Figure: Partial example of the Measurement and Analysis Framework
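Here is a minimal sketch of the kind of source-to-theme mapping the framework formalizes; the source and theme names are hypothetical stand-ins rather than the actual project instruments. It checks that every theme is covered by more than one data source and ranks sources by how many themes they address:

from collections import defaultdict

# Hypothetical mapping of data sources to the evaluation themes they address.
source_to_themes = {
    "participant survey":   {"knowledge gain", "application of QI methods", "satisfaction"},
    "interviews":           {"application of QI methods", "organizational support"},
    "knowledge assessment": {"knowledge gain"},
    "project ratings":      {"application of QI methods", "project impact"},
    "training observation": {"satisfaction", "knowledge gain"},
}

# Invert the mapping to see which sources address each theme.
theme_to_sources = defaultdict(set)
for source, themes in source_to_themes.items():
    for theme in themes:
        theme_to_sources[theme].add(source)

# Flag themes that rest on a single data source.
for theme, sources in sorted(theme_to_sources.items()):
    note = "" if len(sources) > 1 else "  <-- only one source"
    print(f"{theme}: {sorted(sources)}{note}")

# Sources covering the most themes are candidates to analyze first.
for source, themes in sorted(source_to_themes.items(), key=lambda kv: -len(kv[1])):
    print(source, "covers", len(themes), "themes")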

Lesson Learned:

  • Consistency is crucial. By this we mean that the interconnectivity of all instruments and items needs to be well thought out. This is especially difficult in a multi-year evaluation by a research team with membership changing over time. As new instruments are created it is important to understand the connections to other instruments and the relevant themes to enable later comparison and combining of the data.

Resource:

The American Evaluation Association is celebrating Mixed Methods Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Our evaluation team from Loyola University Chicago, School of Education (Leanne Kallemeyn, Assistant Professor; Daniela Schiazza, doctoral candidate and project coordinator; and Ann Marie Ryan, Associate Professor) has been working on an external evaluation of a U.S. Department of Education Teaching American History grant. For two years, we used mixed methods to report GPRA indicators (surveys, tests, tracking databases) and to address the information needs of program providers (interviews, observations, case studies). We found that historians appreciated the approach of building arguments about implementation and impact from both qualitative and quantitative data sources, but we were only mixing at the level of interpretation. So, we experimented with conducting an integrated data analysis. Daniela used this opportunity to document and study our process for her dissertation.

Resources:

Lessons Learned:

  • Develop the question appropriately. Develop an evaluation question that points to the evaluand, and requires both qualitative and quantitative methods to address it.
  • Decide the purpose. Decide on the mixed methods purpose (refer to Chapter 6 in Greene’s book).
  • Use visual tools. Utilize a Venn diagram to display the overlapping and unique facets of the program that the qualitative and quantitative methods will address to guide integrated analyses.  Refer to it often.
  • Analyze carefully. Initially analyze each data source based on techniques in its own tradition.  Organize preliminary findings by evaluation question(s), displaying qualitative and quantitative data side-by-side to engage in iterative reflections. Include stakeholders in these reflections as they gain valuable insights.
  • Expect dissonance. Do not be concerned when quantitative and qualitative results do not corroborate.  Dissonance provides an opportunity to explore why the conflict exists, which can lead to new insights.  We found dissonance especially helpful during preliminary data analysis.
  • Map findings. When conducting a final integrated data analysis, consider ways in which the findings from one method can be mapped to the findings of the other. For example, we had four case studies of teachers. We conducted a cluster analysis of survey responses from all participating teachers and then identified which survey cluster each case study participant fell within (see the sketch after this list).
  • Be patient and creative. There are no roadmaps for integrated data analysis.  Not every analytic approach will yield useful results.  For example, in comparison to the cluster analysis, we did not find it as helpful to quantify codes from the case studies, and compare them to survey responses.
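As one illustration of the mapping step above (with fabricated survey data and invented teacher IDs; the actual study used its own instruments and clustering choices), mapping case study participants onto clusters derived from the full survey could look like this:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical survey data: 60 teachers, 4 survey scale scores each.
rng = np.random.default_rng(42)
teacher_ids = [f"T{i:03d}" for i in range(60)]
survey_scores = rng.normal(size=(60, 4))

# Cluster all survey respondents, then look up where the case study teachers land.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(survey_scores)
cluster_of = dict(zip(teacher_ids, kmeans.labels_))

case_study_teachers = ["T004", "T017", "T033", "T052"]  # invented case study sample
for t in case_study_teachers:
    print(t, "is situated in survey cluster", cluster_of[t])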

The American Evaluation Association is celebrating Mixed Methods Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Tayo Fabusuyi and Tori Hill, respectively Lead Strategist and Research Scientist at Numeritics, a research and consulting firm based in Pittsburgh, PA.

We conducted an evaluation of the Black Male Leadership Development Institute (BMLDI), a year-long program in Western Pennsylvania for high-school aged African American males. The BMLDI is designed to give participants access to Black male role models, provide opportunities for interaction within a supportive peer group, offer a challenging curriculum and equip the young men with leadership skills with a view towards positively impacting their perspectives and values.

Our evaluation strategy consisted of a mixed method, multi-phase approach with formative and summative components. In implementing the summative part of our strategy, we sought a framework robust enough to adequately capture how effective program activities were in achieving program goals and to provide insight into the structure and efficiency of those activities.

The framework we employed was a modified form of Grove et al.’s EvaluLead framework. The framework is premised on an open systems environment in which three interrelated forms of behavioral change at the individual level are examined: “episodic,” “developmental,” and “transformative.” These behavioral changes were analyzed using two forms of inquiry: “evidential,” meaning those measured with quantitative instruments, and “evocative,” those assessed through qualitative tools.
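A minimal sketch of the two-dimensional structure this implies is shown below, with hypothetical evidence entries rather than actual BMLDI findings; each piece of evidence is filed under a form of change crossed with a form of inquiry:

from itertools import product

forms_of_change = ["episodic", "developmental", "transformative"]
forms_of_inquiry = ["evidential", "evocative"]  # quantitative vs. qualitative evidence

# Create an empty cell for every combination of change form and inquiry form.
findings = {(c, i): [] for c, i in product(forms_of_change, forms_of_inquiry)}

# Hypothetical examples of where individual pieces of evidence would be filed.
findings[("episodic", "evidential")].append("pre/post leadership knowledge scores")
findings[("developmental", "evocative")].append("mentor interviews on peer-group support")
findings[("transformative", "evocative")].append("participant essays on changed values")

for (change, inquiry), items in findings.items():
    print(f"{change:15s} x {inquiry:10s}: {items}")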

This robust strategy allowed us to probe beyond program outputs to a more comprehensive framework that takes into consideration the broader influences that often affect outcomes of programs of this nature. The evaluation strategy also naturally lends itself to data triangulation, an attribute that helped reduce the risk of incorrect interpretations and strengthened the validity of our conclusions and of the recommendations we made regarding program changes going forward.

Lesson Learned:

  • Given the myriad factors that may influence program outcomes, evaluations of programs similar to the BMLDI are best carried out in an open systems environment. This also ensures that the evaluation process is flexible enough to allow for exit ramps and to capture unintended outcomes.

Hot Tips:

  • An equally robust data-gathering method is required to monitor progress made towards program goals and adequately capture program outcomes. We would recommend a two-dimensional evaluation framework: evaluation type x data type.
  • For a behavioral change evaluation, goals should be focused on contribution, not attribution. The emphasis should be to show that program activities aided in achieving outcomes rather than claiming that program activities caused the outcomes.

Rad Resources:

The American Evaluation Association is celebrating Mixed Methods Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Mika Yamashita, and I am a program chair of the Mixed Methods Evaluation Topical Interest Group (TIG). The Mixed Methods Evaluation TIG was founded in 2010 to be a space for members to “examine the use of mixed methods evaluation through reflective analysis of philosophy, theory and methodology that is developing in the field of mixed methods” (petition submitted to AEA in 2010). Evaluation 2012 will be our third year sponsoring sessions.

Mixed Methods Evaluation TIG members who presented at past conferences contributed this week’s posts. A majority of presentations focused on findings from mixed methods evaluations, analyses of data collection and analysis methods, and strategies used in evaluation teams, so posts for this week will cover these topics. On Monday, Tayo Fabusuyi and Tori Hill will highlight the framework used for the evaluation of a minority leadership program. On Tuesday, Leanne Kallemeyn and her colleagues at Loyola University will share lessons learned from and tips for conducting integrated analysis. On Wednesday, Kristy Moster and Jan Matulis will walk us through how their evaluation team members worked to analyze data from multiple sources. On Thursday, Hongling Sun will share lessons learned from conducting a mixed methods evaluation. Finally, on Friday, Terri Anderson will share her evaluation team’s experience using the National Institutes of Health’s guide, Best Practices for Mixed Methods Research in the Health Sciences, to understand an unexpected evaluation result.

Rad Resources: Listed below are resources I have found helpful for learning about mixed methods evaluation.

Hot Tips: 

The American Evaluation Association is celebrating Mixed Methods Evaluation TIG Week. The contributions all week come from MME members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

