AEA365 | A Tip-a-Day by and for Evaluators


I am Elizabeth Tully, the Online Toolkit Manager at the Johns Hopkins Bloomberg School of Public Health’s Center for Communication Programs (JHU·CCP). One of the toolkits that I work on regularly is the Measurement, Learning & Evaluation (MLE) Project’s Measuring Success Toolkit. This toolkit provides guidance on how to use data to plan a health program and to measure its success through monitoring and evaluation (M&E). Using data to design and implement health programs leads to more successful and impactful programs. Data can be used to solve health programming-related problems, inform new program design, assess program effectiveness and efficiency, and suggest evidence-based adaptations and improvements.

But this post is especially about the importance of designing useful resources for M&E practitioners – and the Measuring Success Toolkit is rad!

Hot Tip #1: Using the Toolkit. The Toolkit is meant to be used! It offers full-text documents and usable tools, provided either as uploaded files a user can download or as hyperlinks to other websites’ rad resources. It is organized both by steps in an M&E plan and by health topic. A handy Toolkit tutorial video is also available to help new users navigate the Measuring Success Toolkit.

Hot Tip #2: Curated Content Focuses on Use. The Measuring Success Toolkit team updates the toolkit with useful resources every quarter! The content is curated by M&E experts and includes guides, checklists, protocols, indicators and other tools that M&E practitioners can use in the field. While you won’t find peer-reviewed journal articles or lengthy end-of-project reports in this toolkit, you will find the tools and resources to help you plan for, monitor and evaluate a program that would be worthy of an esteemed journal. You’ll certainly be prepared to document your project’s success!

Rad Resources: Within the toolkit you’ll find plenty of great tools – 160 and counting! Here’s a quick list of our most popular (and most downloaded) resources on the site:

Sample Outline of an M&E Plan

Qualitative Research Methods: A Data Collector’s Field Guide

A Guide for Developing a Logical Framework

Please send suggestions for resources to include to contactus@urbanreproductivehealth.org. Our next update will be in mid-October!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi. We’re Sherry Campanelli, Program Compliance Manager and Laura Newhall, Clinical Training Coordinator, from the University of Massachusetts Medical School’s Disability Evaluation Services. As an organization, we are committed to ongoing quality improvement efforts to enhance our services for people with disabilities. Earlier, we wrote about common techniques that help a quality improvement (QI) team to be successful. Today we share some potholes and pitfalls we’ve encountered in group facilitation and our tips for negotiating them successfully:

Lessons Learned:

  • New problems or issues frequently arise in the middle of a QI project. Team members, management, or external events (such as changes in the industry) can generate issues unrelated to the original charge. This can be discouraging for the team members and leader and can delay completion of the project. The following may be helpful:
    • Reaffirm the team’s goals and mission, and review data as a group to ascertain whether the new issue should be addressed in this venue or in another way.
    • Allow team members to opt out of participating in the new task. Seek new members for the team as needed to address the new issue(s).
    • Keep a “hot” list of issues that arise to be addressed by future QI teams.
  • Recommendations from the team are not fully accepted. A less than enthusiastic response from decision-makers to a team’s recommendations is a challenge for any team.
    • Set expectations with the group up front that recommendations might be accepted, rejected or amended.
    • Sustain the group’s enthusiasm during the revision process by reminding them of the importance of their work and input regardless of the outcome.
    • Emphasize the positive feedback before sharing constructive feedback. Thank team members for their efforts.
    • Ensure that relevant decision-makers are regularly briefed so the team can make “mid-course corrections” toward options likely to be approved.
  • Difficulty achieving full team consensus. This can be due to dominating or defensive team member(s), incomplete information or team members needing more time for analysis.
    • Encourage subgroup and individual work on the issue between meetings.
    • Allow the team to live with ambiguity for a while to enable consensus to develop.
    • Document what’s already been decided and refer team members back to prior discussions.

Thoughts to Ponder:

“The best-laid plans of mice and men / Often go awry” – from Robert Burns’s poem “To a Mouse.” The QI team process does not always go smoothly; however, these unexpected challenges present opportunities for better overall outcomes.

As the 1939 British government motivational poster advised, the facilitator must “keep calm and carry on” through the potholes and pitfalls of the QI team process.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi. We’re Sherry Campanelli, Program Compliance Manager, and Laura Newhall, Clinical Training Coordinator, from the University of Massachusetts Medical School’s Disability Evaluation Services (DES). Although DES conducts evaluations of whether an applicant for public benefits can be found disabled, evaluation as a research endeavor is not our primary focus. Nevertheless, as an organization, we are committed to ongoing quality improvement efforts to enhance our services for people with disabilities. We use a team-based, iterative approach to define and address problematic functions and processes.

For example, we used the process described here to develop Quality Assurance systems for our clinical, clerical and technical support processes. We have also used this method to tackle caseload backlogs and to improve the processing of incomplete applications.

We’ve discovered over time, regardless of the issue or problem involved, that there are common techniques that help a quality improvement (QI) team be successful. We would like to share some of these lessons learned with you.

Lessons Learned:

  • Determine and clearly state the issues to be solved and team goals.
  • Involve key staff (line staff doing the work and managers supervising the work) in the development of any QI initiative. They are in “the know” about areas that may be problematic.
  • Incorporate non-judgmental facilitation to keep up the momentum. Key components include:
    • Involving all participants in decision making/discussion;
    • Keeping meeting minutes and agendas;
    • Keeping track of and sharing “to do” lists, “next steps” and progress towards goals;
    • Meeting on a regular and ongoing basis (don’t cancel meetings unless absolutely necessary);
    • Seeking management decisions and input as needed; and
    • Making sure you hear from the quiet folks in the room – they may need a little encouragement to speak up, but often offer great insights.

  • Utilize team members/subcommittees to perform specific tasks between meetings.
  • Utilize available qualitative and quantitative data.
  • Collect specific data, as necessary, to help define the problem and suggest solutions.
  • Do fact finding to support decision-making.
  • Maintain “living” working documents that capture decisions as they are made, to be incorporated into a final product.
  • Utilize pilot testing to determine feasibility and make changes (i.e., “fix bugs”) prior to full implementation.
  • Provide periodic communication to the rest of the department or organization during the project and at its conclusion.
  • Train all impacted staff on process improvements.
  • Conduct periodic assessments after implementation to assess success of the project.
  • Refine processes as new issues and changes occur.

Hot Tips:

  • Sometimes QI processes take longer than expected. “Keep going even when the going is slow and uncertain.” – G.G. Renee Hill
  • “To discover new ways of doing something – look at a process as though you were seeing it either for the first or last time.” – Mitchel Martin

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello, we are Linda Cabral and Jillian Richard-Daniels from the Center for Health Policy and Research at University of Massachusetts Medical School. Collecting and reporting data on a set of predefined measures is something that many evaluators are asked to do.  This typically quantitative process involves gathering data, often from multiple sources and stakeholders, to assess the amount, cost, or result of a particular activity. But have you ever thought about what goes into measure development, collection and reporting? A recent evaluation that we completed included interviews with the people involved in this process.

A federal grant to test a set of child health quality measures was awarded to a group of stakeholders in Massachusetts. An example of such a quality measure is the percentage of infants who reached age 15 months during the measurement year and who had six or more well-infant visits during the first 15 months of life. Given that this was the first time this particular set of measures was being collected, there was interest in learning about the feasibility, relevance and usefulness of the measures. Qualitative data collection in the form of interviews and focus groups was conducted with a variety of stakeholders, ranging from the people who defined and calculated the measures to the providers who were seeing measure results specific to their site.
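
To make the arithmetic behind such a measure concrete, here is a minimal sketch of how that example measure might be computed from visit-level records. The table layout, column names, and the derivation of the denominator from the visits file are assumptions for illustration only, not the grantees’ actual measure specification.

```python
import pandas as pd

def well_infant_visit_rate(visits: pd.DataFrame, measurement_year: int) -> float:
    """Sketch of the example measure: percentage of infants who turned 15 months
    old during the measurement year and had 6+ well-infant visits in their first
    15 months. `visits` is assumed to hold one row per well-infant visit with
    child_id, birth_date, and visit_date columns."""
    visits = visits.copy()
    visits["birth_date"] = pd.to_datetime(visits["birth_date"])
    visits["visit_date"] = pd.to_datetime(visits["visit_date"])

    # Denominator: children who turned 15 months old during the measurement year.
    # (In practice the denominator would come from enrollment data; here it is
    # derived from the visits table for brevity.)
    turns_15_months = visits.groupby("child_id")["birth_date"].first() + pd.DateOffset(months=15)
    denominator_ids = turns_15_months[turns_15_months.dt.year == measurement_year].index

    # Numerator: of those children, the ones with six or more visits before
    # reaching 15 months of age.
    in_window = visits["visit_date"] < (visits["birth_date"] + pd.DateOffset(months=15))
    visit_counts = visits[in_window].groupby("child_id").size()
    numerator = int((visit_counts.reindex(denominator_ids, fill_value=0) >= 6).sum())

    return 100.0 * numerator / len(denominator_ids) if len(denominator_ids) else float("nan")
```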

Lessons Learned:

  • Do your homework ahead of time – When talking to people involved in the nitty-gritty of data measurement and calculation, you need to have a solid understanding of the technical aspects involved so that you don’t spend the entire interview asking background questions.  Be comfortable with the subject matter.
  • Be flexible – The measure development process takes time. There can be unanticipated challenges or circumstances that can delay any component of the project. If interviews are planned for the end of a particular component, be flexible with the timing.
  • Orient interviewees, if necessary – Not all stakeholders, particularly consumers, will have a strong understanding of what a measure is.  In order to get the desired feedback, you may need to spend time providing some background and education before you start asking questions.

Hot Tips:

  • In a project that takes place over the course of a number of years with several different components, try to complete interviews when the information will be the most current for the interviewee and you.
  • Have a tracking mechanism set up to help you stay organized with your data collection. For us, this takes the form of an Excel spreadsheet containing fields such as interviewee contact information, dates of contact and data collection, and staff responsible for transcription and quality assurance; a minimal sketch of such a tracker follows this list.
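
For readers who want to picture such a tracker, here is a minimal sketch in Python/pandas that writes out an Excel workbook. The column names and example values are assumptions for illustration, not the authors’ actual template.

```python
import pandas as pd

# Hypothetical tracker columns, loosely following the fields described above.
columns = [
    "interviewee_name", "organization", "email", "phone",              # contact information
    "date_first_contact", "date_interview_scheduled", "date_interview_completed",
    "transcription_staff", "qa_staff", "qa_completed",                 # responsibility and QA
    "notes",
]
tracker = pd.DataFrame(columns=columns)

# Add one row per interviewee as outreach proceeds (values here are invented).
new_row = pd.DataFrame([{
    "interviewee_name": "Example Interviewee",
    "organization": "Example Health Center",
    "email": "example@example.org",
    "phone": "555-0100",
    "date_first_contact": "2014-03-01",
    "date_interview_scheduled": "2014-03-10",
    "date_interview_completed": None,
    "transcription_staff": "Staff A",
    "qa_staff": "Staff B",
    "qa_completed": False,
    "notes": "Needs orientation to the measures before the interview.",
}])
tracker = pd.concat([tracker, new_row], ignore_index=True)

# Write to Excel so the team can keep working in a shared spreadsheet
# (requires the openpyxl package).
tracker.to_excel("interview_tracker.xlsx", index=False)
```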

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Lisle Hites, Director of the Evaluation and Assessment Unit (EAU) at the University of Alabama at Birmingham (UAB). I’m writing to share my team’s experiences in conducting needs assessments.

We frequently have opportunities to work with our colleagues on campus to conduct needs assessments for grant-funded projects. One such example was a training grant through the School of Nursing, and we describe it here to highlight the value of gathering more than one perspective in assessing needs.

In 2012, CDC data revealed that the South is the epicenter of new HIV infections: compared with other regions, 46% of all new infections occurred there, with a higher percentage of women (24%) and African-Americans (58%) represented among the new infections. It is therefore critically important that healthcare providers receive HIV/AIDS training so they can provide HIV/AIDS primary care and meet current and future healthcare demands.

To establish workforce training capacity, we sent surveys to two key healthcare audiences: (1) potential training sites (Ryan White Grantees) and (2) future family nurse practitioners (FNPs). Responses identified both a shortage of trained HIV/AIDS healthcare providers and an interest among providers and students in establishing clinical training opportunities. Additionally, 78% of current FNP students enrolled at one research institution in the South resided within 60 miles of a Ryan White Grantee site in a tri-state region.

Lessons Learned:

  • The design of this needs assessment allowed us to consider the capacity of Ryan White Grantee sites to provide clinical training opportunities for FNP students.
  • The survey captured the interest and desire of FNP students to seek the skills necessary to provide HIV/AIDS primary care.

Despite the current and future needs for a trained healthcare workforce, healthcare providers in the Deep South still encounter many of the same attitudes toward people living with HIV/AIDS as were found in the early years of the epidemic; therefore, it was necessary to identify a pool of potential candidates for training (i.e., FNP students). At the same time, little was known regarding the capacity and willingness of Ryan White Grantee sites to provide an adequate number of opportunities to meet the training needs of these students. By considering both sides of the equation, we could accurately match the number of students and training sites to ensure a high degree of satisfaction and success for both parties.

Rad Resources: 

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi. We’re Judy Savageau and Kathy Muhr from the University of Massachusetts Medical School’s Center for Health Policy and Research. Within our Research and Evaluation Unit, we work on a number of projects using qualitative and quantitative methods as well as primary and secondary data sources. We’ve come to appreciate that different types of data from different sources need varying levels of data management and quality oversight.

One of our current projects is evaluating a screening program that requires primary care providers to screen children for potential behavioral health conditions. Among a random sample of 4,000 children seen for a well-child visit during one of two study years, we collected data both from medical records (primary data source: quantitative data and qualitative chart notes) and from administrative/claims data (secondary data source: solely quantitative). Given the nature of the data from the two sources, we implemented different data quality checks and cross-checks between them.
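
As one illustration of what such a cross-check can look like, here is a minimal sketch that compares demographic fields recorded in claims data against the same fields abstracted from charts. The file names and field names are hypothetical, not the project’s actual variables.

```python
import pandas as pd

# Hypothetical extracts: one row per child in each source.
claims = pd.read_csv("claims_demographics.csv")  # child_id, gender, race, ethnicity, primary_language
charts = pd.read_csv("chart_abstraction.csv")    # same fields, abstracted from the medical record

merged = claims.merge(charts, on="child_id", suffixes=("_claims", "_chart"))

for field in ["gender", "race", "ethnicity", "primary_language"]:
    claims_col = merged[f"{field}_claims"]
    chart_col = merged[f"{field}_chart"]

    # How often is the field missing in at least one source?
    missing_rate = (claims_col.isna() | chart_col.isna()).mean()

    # Where both sources have a value, how often do they agree?
    both_present = claims_col.notna() & chart_col.notna()
    agreement = (claims_col[both_present] == chart_col[both_present]).mean()

    print(f"{field}: missing in a source = {missing_rate:.1%}, "
          f"agreement when both present = {agreement:.1%}")
```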

Lessons Learned:

  • Claims data come from the insurance payer having already gone through the payer’s own internal data cleaning and data management processes. However, much of the patient demographic data is captured at the time of insurance enrollment and is not updated at the time of a clinical visit. Some data elements, especially gender, race, ethnicity and primary language, are often incomplete and remain out of date even after numerous clinical encounters. While a provider might ‘know’ this information when seeing a patient, it’s not necessarily updated in administrative datasets.
  • Many practices don’t collect demographic data in a uniform manner unless they’re required to report on it. Primary care providers are well attuned to their patients’ demographics in terms of needs for interpreters, cultural health beliefs, and age- or gender-specific anticipatory guidance. Unfortunately, the medical records data often had nearly as much missing data as the administrative claims data!
  • Cross-checking data between these two sources was an important step for us to take in this project, as we hypothesized that there might be differences in screening children for behavioral health needs. Assessing potential health service disparities was an important aim of this evaluation, given the interest in vulnerable populations.
  • While electronic medical records (EMRs) were in place in at least 60% of the practices where charts were abstracted, it was no surprise to find that EMRs vary from practice to practice. It was clear that projects such as this one might need to use text-based data within the chart notes to obtain vital information in order to assess potential disparities.

Hot Tip: Although data quality is key, find a balance between budgetary and personnel resources and the time required to cross-check data through multiple sources and/or impute missing data using a variety of techniques.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Humberto Reynoso-Vallejo, a private consultant in health services research. A few years ago, I was part of an exploratory study of Latino caregivers in the Boston area caring for a family member suffering from Alzheimer’s disease. The difficulties facing families coping with the disease have prompted the rise of support groups for diverse population groups. Support groups for racially/ethnically diverse caregivers were scarce and, in the case of Latino caregivers in the Boston area, nonexistent. To respond to this need, I tried to develop a support group for Latinos with the assistance of the Alzheimer’s Association. After several unsuccessful attempts, I conducted a focus group with four caregivers to identify barriers to participation. Findings indicated that caregivers faced a number of issues, including: lack of transportation; lack of available time to take off from other responsibilities; the absence of linguistically appropriate support groups; caring for other family members dealing with an array of health problems (multiple caregiving); and other personal and social stressors.

I designed an alternative, pragmatic support group model, which took the form of a radio program. The “radio support group” directly targeted caregivers’ concerns and aimed to:

a) Disseminate culturally relevant information, largely from the point of view of the caregivers themselves, either as guests on the program or as callers; and,

b) Reduce the sense of isolation that many caregivers feel on a daily basis as a result of their caregiving roles.

I facilitated the radio support group with the participation of caregivers, professionals and service providers. Four programs were aired, exploring topics such as memory problems, identifying signs of dementia, caregiver needs, and access to services. After each program aired, I called the 14 participating caregivers to explore their reactions and found that the majority of them had not been able to participate. Since the “live” radio support group was not accomplishing its original purpose of disseminating information and reducing caregivers’ sense of isolation, I decided to distribute edited audiotapes of the four programs to all caregivers. Overall, caregivers found the information useful and many established contact with others.

Lessons Learned:

  • This model of intervention, the radio support group, showed that pairing innovation with culturally relevant material is promising.
  • Research and evaluation should adapt to the particular needs and social context of Latino caregivers of family members with Alzheimer’s disease.
  • There is a need for more culturally appropriate types of interventions that mobilize caregivers’ own strengths, values, and resources.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hi, we are Monika Mitra and Lauren Smith from the Disability, Health, and Employment Policy unit in the Center for Health Policy and Research at the University of Massachusetts Medical School.  Our research is focused on health disparities between people with and without disabilities.

Evaluating a Population of People with Disabilities

In collaboration with the Health and Disability Program (HDP) at the Massachusetts Department of Public Health (MDPH), we conducted a health needs assessment of people with disabilities in Massachusetts.  The needs assessment helped us better understand the unmet public health needs and priorities of people with disabilities living in MA.  We learned a tremendous amount in doing this assessment and wanted to share our many lessons learned with the AEA365 readership!

Lessons Learned:

  • 3-Pronged approach

Think about your population and how you can reach people who might be missed by more traditional methodologies: In order to reach people with disabilities who may not be included in existing health surveys, we used two other approaches to complement data from the MA Behavioral Risk Factor Surveillance System (BRFSS). These included an anonymous online survey on the health needs of MA residents with disabilities and interviews with selected members of the MA disability community.

  • Leveraging Partnerships

Think about alternative ways to reach your intended population: For the online survey, we decided on a snowball sampling method, in which identified potential respondents in turn identify other respondents. It is a particularly useful methodology for populations that are difficult to reach and may generally be excluded from traditional surveys, although it can affect the generalizability of findings. HDP’s Health and Disability Partnership provided a network to spread the survey to people with disabilities, caregivers, advocates, service providers, and friends/family of people with disabilities.

  • Accessibility is Key

Focus on accessibility:  In an effort to increase the accessibility of the survey, Jill Hatcher from DEAF, Inc. developed a captioned vlog (a type of video blog) to inform the Deaf, DeafBlind, Hard of Hearing, and Late-Deafened community about the survey.  In the vlog, she mentioned that anyone could call DEAF, Inc. through videophone if they wanted an English-to-ASL translation of the survey.  Individuals could also respond to the survey via telephone.

Rad Resources:

  • Disability and Health Data System (DHDS)

DHDS is an online tool developed by the CDC providing access to state-level health data about people with disabilities.

  • Health Needs Assessment of People with Disabilities Living in MA, 2013

To access the results of the above-mentioned needs assessment, please contact the Health and Disability Program at MDPH.

  • A Profile of Health Among Massachusetts Residents, 2011

This report published by the MDPH contains information on the health of people with disabilities in Massachusetts.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi, we are Ann Lawthers, Sai Cherala, and Judy Steinberg, UMMS PCMHI Evaluation Team members from the University of Massachusetts Medical School’s Center for Health Policy and Research. Today’s blog title sounds obvious, doesn’t it? Your definition of success influences your findings. Today we talk about stakeholder perspectives on success and how evaluator decisions about what is “success” can change the results of your evaluation.

As part of the Massachusetts Patient-Centered Medical Home Initiative (PCMHI), the 45 participating practices submitted clinical data (numerators and denominators only) through a web portal. Measures included HEDIS® look-alikes, such as diabetes outcomes and asthma care, as well as measures developed for this initiative, e.g., high-risk members with a care plan. Policy makers were interested in whether the PCMH initiative resulted in improved clinical performance, and they also wanted to know, “Who are the high- or low-performing practices on the clinical measures after 18 months in the initiative?” The latter question could be about either change or attainment. Practices were more interested in how their activities affected their clinical performance.

To address both perspectives we chose to measure clinical performance in terms of both change and attainment. We then used data from our patient survey, our staff survey, and the Medical Home Implementation Quotient (MHIQ) to find factors associated with both change and attainment.

Lesson Learned: Who are the high performers? “It depends.” High performance defined by high absolute levels of performance disproportionately rewarded practices that began the project with excellent performance. High performance defined by magnitude of change slighted practices that began at the top, as these practices had less room to change. The result? The top five performers defined by each metric were different.
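
Here is a minimal sketch of the two definitions at work: ranking the same practices by attainment (the rate after 18 months) and by change (the 18-month rate minus baseline). The numbers are invented for illustration, not PCMHI data, but they show how the two lists of high performers diverge.

```python
import pandas as pd

# Invented performance rates for six practices on one clinical measure.
scores = pd.DataFrame({
    "practice": ["A", "B", "C", "D", "E", "F"],
    "baseline_rate": [0.82, 0.45, 0.60, 0.30, 0.75, 0.50],
    "rate_at_18_months": [0.85, 0.70, 0.72, 0.55, 0.78, 0.74],
})
scores["change"] = scores["rate_at_18_months"] - scores["baseline_rate"]

top_by_attainment = scores.nlargest(3, "rate_at_18_months")["practice"].tolist()  # ['A', 'E', 'F']
top_by_change = scores.nlargest(3, "change")["practice"].tolist()                 # ['B', 'D', 'F']

print("High performers by attainment:", top_by_attainment)  # rewards practices that started high
print("High performers by change:", top_by_change)          # rewards practices with room to improve
```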

Hot Tip:

  • Do you want to reward transformation? Choose metrics that measure change over the life of your project.
  • Do you want to reward performance? Choose metrics that assess attainment of a benchmark.
  • Each metric will yield a different list of high performers.

Lesson Learned: The practices wanted to know: “What can we do to make ourselves high performers?” Our mixed methods approach found that leadership and comfort with Health Information Technology predicted attainment, but that only low baseline performance predicted change.

Hot Tip: A mixed methods approach provides a rich backdrop for interpreting your findings and providing detail for stakeholders who need/want detail.

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. The contributions all this week to aea365 come from members who work with the Massachusetts Patient-Centered Medical Home Initiative (PCMHI). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings. I am Christine Johnson, the Director of Transformation and Quality Improvement for the Patient Centered Medical Home Initiative (PCMHI). I am from the University of Massachusetts Medical School’s Center for Health Policy and Research. Today’s post shares how using self-assessment medical home transformation tools with primary care practices can help practices self-evaluate throughout the transformation process and provide data for the evaluation team.

Medical home transformation involves multiple stakeholders: health insurance payers, practices, policy makers and those providing transformation technical assistance. When practices complete the same tool and then share their results, multiple stakeholders can literally be on the same page with a similar understanding of what has been done, what needs to be done, and identifying gaps. Self-assessment tools can be tailored to your individual project’s goals, or standardized tools can be used ‘as is’.

Hot Tips:

  • Use tools to monitor progress and design technical assistance. The results from the transformation tools not only help practices track and monitor their practice redesign; they also allow technical assistance and practice staff to discuss any differences in their perceptions of the practice change efforts, and they can be a key resource for designing further technical assistance.
  • Utilize health insurance payers as stakeholders. Payers can see progress being made that is often intangible and support the practices in building the necessary foundation that will eventually lead to clinical performance improvement.
  • Administer self-assessment tools multiple times throughout a project to highlight small, but encouraging, changes.

Lessons Learned: Self-assessment tools can:

  • Establish a practice’s baseline
  • Enable practices to understand where they are in their transformation compared to other practices
  • Guide and structure practices’ transformation, particularly if the transformation tool has both an actual and expected project status over time
  • Allow technical assistance staff to step in early to support practices that are struggling in their transformation

Hot Tip: Save yourself and the practices the time of developing and testing a new tool. Take a look at the growing number of tools already available (see links below) rather than creating your own.

Hot Tip: Once practices have some experience using a self-assessment tool, ask practices who are finding the tool useful and are successfully accomplishing their PCMH transformation to present to the other practices either via a conference call or a webinar.

There are no “perfect” online assessments, but our team suggests:

Qualis Site landing page

Transformation practice self-assessment tool

Medical Home Index

TransforMED MHIQ

Rad Resource: Measuring Medical Homes (http://www.medicalhomeimprovement.org/knowledge/practices.html)

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. The contributions all this week to aea365 come from members who work with the Massachusetts Patient-Centered Medical Home Initiative (PCMHI). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. 
