AEA365 | A Tip-a-Day by and for Evaluators

Category: Health Evaluation

Hi. We’re Sherry Campanelli, Program Compliance Manager, and Laura Newhall, Clinical Training Coordinator, from the University of Massachusetts Medical School’s Disability Evaluation Services (DES). Although DES conducts evaluations of whether an applicant for public benefits can be found disabled, evaluation as a research endeavor is not our primary focus. Nevertheless, as an organization, we are committed to ongoing quality improvement efforts to enhance our services for people with disabilities. We use a team-based, iterative approach to define and address problematic functions and processes.

For example, we used the process described here to develop Quality Assurance systems for our clinical, clerical, and technical support processes. We have also used this method to tackle caseload backlogs and to improve the processing of incomplete applications.

We’ve discovered over time, regardless of the issue or problem involved, that there are common techniques that help a quality improvement (QI) team be successful. We would like to share some of these lessons learned with you.

Lessons Learned:

  • Determine and clearly state the issues to be solved and team goals.
  • Involve key staff (line staff doing the work and managers supervising the work) in the development of any QI initiative. They are in “the know” about areas that may be problematic.
  • Incorporate non-judgmental facilitation to keep up the momentum. Key components include:
      o Involving all participants in decision making and discussion;
      o Keeping meeting minutes and agendas;
      o Keeping track of and sharing “to do” lists, “next steps,” and progress towards goals;
      o Meeting on a regular and ongoing basis (don’t cancel meetings unless absolutely necessary);
      o Seeking management decisions and input as needed; and
      o Making sure you hear from the quiet folks in the room – they may need a little encouragement to speak up, but they often offer great insights.

  • Utilize team members/subcommittees to perform specific tasks between meetings.
  • Utilize available qualitative and quantitative data.
  • Collect specific data, as necessary, to help define the problem and suggest solutions.
  • Do fact finding to support decision-making.
  • Maintain “living” working documents that capture decisions as they are made, to be incorporated into a final product.
  • Utilize pilot testing to determine feasibility and make changes (i.e., “fix bugs”) prior to full implementation.

  • Provide periodic communication to the rest of the department or organization during the project and at its conclusion.
  • Train all impacted staff on process improvements.
  • Conduct periodic assessments after implementation to assess success of the project.
  • Refine processes as new issues and changes occur.

Hot Tips:

  • Sometimes QI processes take longer than expected. “Keep going even when the going is slow and uncertain.” – G.G. Renee Hill
  • “To discover new ways of doing something – look at a process as though you were seeing it either for the first or last time.” – Mitchel Martin

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello, we are Linda Cabral and Jillian Richard-Daniels from the Center for Health Policy and Research at University of Massachusetts Medical School. Collecting and reporting data on a set of predefined measures is something that many evaluators are asked to do.  This typically quantitative process involves gathering data, often from multiple sources and stakeholders, to assess the amount, cost, or result of a particular activity. But have you ever thought about what goes into measure development, collection and reporting? A recent evaluation that we completed included interviews with the people involved in this process.

A federal grant to test a set of child health quality measures was awarded to a group of stakeholders in Massachusetts. An example of such a quality measure is the percentage of infants who reached age 15 months during the measurement year and who had six or more well-infant visits during their first 15 months of life. Given that this was the first time this particular set of measures was being collected, there was interest in learning about the feasibility, relevance, and usefulness of the measures. Qualitative data collection, in the form of interviews and focus groups, was conducted with a variety of stakeholders, ranging from the people who defined and calculated the measures to the providers who were seeing measure results specific to their site.
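To make the arithmetic behind such a measure concrete, here is a minimal sketch in Python of how a rate like this might be calculated from visit-level data. The column names and records are invented for illustration; the actual measure specification used in the grant was more detailed.

```python
import pandas as pd

# Hypothetical visit-level data: one row per well-infant visit.
visits = pd.DataFrame({
    "child_id": [1, 1, 1, 1, 1, 1, 2, 2, 3],
    "age_at_visit_months": [1, 2, 4, 6, 9, 12, 2, 6, 1],
})

# Hypothetical denominator: children who turned 15 months during the measurement year.
denominator = pd.DataFrame({"child_id": [1, 2, 3]})

# Numerator: children with six or more well-infant visits in the first 15 months of life.
visit_counts = (
    visits[visits["age_at_visit_months"] < 15]
    .groupby("child_id")
    .size()
    .rename("n_visits")
)
denominator = denominator.join(visit_counts, on="child_id").fillna({"n_visits": 0})
denominator["meets_measure"] = denominator["n_visits"] >= 6

rate = denominator["meets_measure"].mean() * 100
print(f"{rate:.1f}% of eligible infants had 6+ well-infant visits")
```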

Lessons Learned:

  • Do your homework ahead of time – When talking to people involved in the nitty-gritty of data measurement and calculation, you need to have a solid understanding of the technical aspects involved so that you don’t spend the entire interview asking background questions.  Be comfortable with the subject matter.
  • Be flexible – The measure development process takes time. There can be unanticipated challenges or circumstances that can delay any component of the project. If interviews are planned for the end of a particular component, be flexible with the timing.
  • Orient interviewees, if necessary – Not all stakeholders, particularly consumers, will have a strong understanding of what a measure is.  In order to get the desired feedback, you may need to spend time providing some background and education before you start asking questions.

Hot Tips:

  • In a project that takes place over the course of a number of years with several different components, try to complete interviews when the information will be the most current for the interviewee and you.
  • Have a tracking mechanism set up to help you stay organized with your data collection. For us, this takes the form of an Excel spreadsheet containing fields such as interviewee contact information, dates of contact and data collection, and staff responsible for transcription and quality assurance.
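For teams that prefer to set up such a tracker programmatically rather than by hand, here is a minimal sketch in Python with pandas. The field names mirror those listed above; the file name and example record are made up, and a hand-maintained Excel file works just as well.

```python
import pandas as pd

# Fields mirror the tracking spreadsheet described above.
tracker = pd.DataFrame(columns=[
    "interviewee_name",
    "contact_info",
    "date_contacted",
    "date_interviewed",
    "transcription_staff",
    "qa_staff",
])

# Add a (made-up) row as each interview moves through the pipeline.
tracker.loc[len(tracker)] = [
    "Jane Doe", "jdoe@example.org", "2014-03-02", "2014-03-18", "LC", "JRD",
]

# Save to Excel so the whole team can view and update it (requires openpyxl).
tracker.to_excel("interview_tracking.xlsx", index=False)
```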

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Lisle Hites, Director of the Evaluation and Assessment Unit (EAU) at the University of Alabama at Birmingham (UAB). I’m writing to share my team’s experiences in conducting needs assessments.

We frequently have opportunities to work with our colleagues on campus to conduct needs assessments for grant-funded projects. One such example was a training grant through the School of Nursing; we describe it here to highlight the value of gathering more than one perspective when assessing needs.

In 2012, CDC data revealed that the South is the epicenter of new HIV infections: 46% of all new infections occurred in the region, with women (24%) and African-Americans (58%) disproportionately represented among new infections. It is therefore critically important that healthcare providers receive HIV/AIDS training so they can provide HIV/AIDS primary care to meet current and future healthcare demands.

To establish workforce training capacity, we sent surveys to two key healthcare audiences: (1) potential training sites (Ryan White Grantees) and (2) future family nurse practitioners (FNPs). Responses identified both a shortage of trained HIV/AIDS healthcare providers as well as an interest by providers and students to establish clinical training opportunities. Additionally, 78% of current FNP students enrolled at one research institution in the south resided within 60 miles of a Ryan White Grantee site in a tri-state region.

Lessons Learned:

  • The design of this needs assessment allowed us to consider the capacity of Ryan White Grantee sites to provide clinical training opportunities for FNP students.
  • The survey captured the interest and desire of FNP students to seek the skills necessary to provide HIV/AIDS primary care.

Despite the current and future needs for a trained healthcare workforce, healthcare providers in the Deep South still encounter many of the same attitudes toward people living with HIV/AIDS as were found in the early years of the epidemic; therefore, it was necessary to identify a pool of potential candidates for training (i.e., FNP students). At the same time, little was known regarding the capacity and willingness of Ryan White Grantee sites to provide an adequate number of opportunities to meet the training needs of these students. By considering both sides of the equation, we could accurately match the number of students and training sites to ensure a high degree of satisfaction and success for both parties.


The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi. We’re Judy Savageau and Kathy Muhr from the University of Massachusetts Medical School’s Center for Health Policy and Research. Within our Research and Evaluation Unit, we work on a number of projects using qualitative and quantitative methods as well as primary and secondary data sources. We’ve come to appreciate that different types of data from different sources need varying levels of data management and quality oversight.

One of our current projects is evaluating a screening program that requires primary care providers to screen children for potential behavioral health conditions. Among a random sample of 4000 children seen for a well child visit during one of two study years, we collected data both from medical records (primary data source: both quantitative and qualitative chart notes) as well as administrative/claims data (secondary data source: solely quantitative). Given the nature of data from the two sources, we implemented different data quality checks and cross-checks between them.

Lessons Learned:

  • Claims data come from the insurance payer, having already gone through the payer’s own internal data cleaning and data management processes. However, much of the patient demographic data is collected at the time of insurance enrollment and is not updated at the time of a clinical visit. Some data elements, especially gender, race, ethnicity, and primary language, are often incomplete and remain so even after numerous clinical encounters. While a provider might ‘know’ this information when seeing a patient, it’s not necessarily updated in administrative datasets.
  • Many practices don’t collect demographic data in a uniform manner unless they’re required to report on it. Primary care providers are well attuned to their patients’ demographics in terms of needs for interpreters, cultural health beliefs, and age- or gender-specific anticipatory guidance. Unfortunately, the medical records data often had nearly as much missing data as the administrative claims data!
  • Cross-checking data between these two sources was an important step for us in this project, as we hypothesized that there might be differences in screening children for behavioral health needs. Wanting to assess potential health service disparities was an important factor in this evaluation, given the interest in vulnerable populations.
  • While electronic medical records (EMRs) were in place in at least 60% of the practices where charts were abstracted, it was no surprise to find that EMRs vary from practice to practice. It was clear that projects such as this one might need to use text-based data within the chart notes to obtain the vital information needed to assess potential disparities.

Hot Tip: Although data quality is key, find a balance between budgetary and personnel resources and the time required to cross-check data across multiple sources and/or impute missing data using a variety of techniques.
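To illustrate the kind of cross-check we mean, the sketch below compares one demographic field in two hypothetical extracts (claims versus chart abstraction) and flags disagreements and missing values. The field names and records are invented for the example.

```python
import pandas as pd

# Hypothetical extracts keyed on a common member ID.
claims = pd.DataFrame({
    "member_id": [101, 102, 103, 104],
    "primary_language": ["English", None, "Spanish", "English"],
})
charts = pd.DataFrame({
    "member_id": [101, 102, 103, 104],
    "primary_language": ["English", "Portuguese", None, "Spanish"],
})

merged = claims.merge(charts, on="member_id", suffixes=("_claims", "_chart"))

# Flag records where either source is missing, or where the two sources disagree.
merged["missing_either"] = (
    merged["primary_language_claims"].isna() | merged["primary_language_chart"].isna()
)
merged["disagree"] = (
    ~merged["missing_either"]
    & (merged["primary_language_claims"] != merged["primary_language_chart"])
)

print(merged[["member_id", "primary_language_claims",
              "primary_language_chart", "missing_either", "disagree"]])
```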

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Humberto Reynoso-Vallejo, a private consultant in health services research. A few years ago, I was part of an exploratory study of Latino caregivers in the Boston area caring for a family member with Alzheimer’s disease. The difficulties facing families coping with the disease have prompted the rise of support groups for diverse population groups. Support groups for racially/ethnically diverse caregivers were scarce, and in the case of Latino caregivers in the Boston area, nonexistent. To respond to this need, I tried to develop a support group for Latinos with the assistance of the Alzheimer’s Association. After several unsuccessful attempts, I conducted a focus group with four caregivers to identify barriers to participation. Findings indicated that caregivers faced a number of issues, including: lack of transportation; lack of available time away from other responsibilities; the absence of linguistically appropriate support groups; caring for other family members dealing with an array of health problems (multiple caregiving); and other personal and social stressors.

I designed an alternative and pragmatic model support group, which took the form of a radio program. The “radio support group” directly targeted caregivers’ concerns and aimed to:

a) Disseminate culturally relevant information, largely from the point of view of the caregivers themselves, either as guests on the program or as callers; and,

b) Reduce the sense of isolation that many caregivers feel on a daily basis as a result of their caregiving roles.

I facilitated the radio support group with the participation of caregivers, professionals, and service providers. Four programs were aired, exploring topics such as memory problems, identifying signs of dementia, caregiver needs, and access to services. After each radio program aired, I called the 14 participating caregivers to explore their reactions and found that the majority of them had not been able to participate. Since the “live” radio support group was not accomplishing its original purpose of disseminating information and reducing caregivers’ sense of isolation, I decided to distribute the edited audiotapes of the four programs to all caregivers. Overall, caregivers found the information useful, and many established contact with others.

Lessons Learned:

  • This model of intervention, the radio support group, showed that combining innovation with culturally relevant material is promising.
  • Research and evaluation should adapt to the particular needs and social context of Latino caregivers of family members with Alzheimer’s disease.
  • There is a need for more culturally appropriate types of interventions that mobilize caregivers’ own strengths, values, and resources.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hi, we are Monika Mitra and Lauren Smith from the Disability, Health, and Employment Policy unit in the Center for Health Policy and Research at the University of Massachusetts Medical School.  Our research is focused on health disparities between people with and without disabilities.

Evaluating a Population of People with Disabilities

In collaboration with the Health and Disability Program (HDP) at the Massachusetts Department of Public Health (MDPH), we conducted a health needs assessment of people with disabilities in Massachusetts.  The needs assessment helped us better understand the unmet public health needs and priorities of people with disabilities living in MA.  We learned a tremendous amount in doing this assessment and wanted to share our many lessons learned with the AEA365 readership!

Lessons Learned:

  • 3-Pronged approach

Think about your population and how you can reach people who might be missed by more traditional methodologies:  In order to reach people with disabilities who may not be included in existing health surveys, we used two other approaches to complement data from the MA Behavioral Risk Factor Surveillance System (BRFSS).  They included: an anonymous online survey on the health needs of MA residents with disabilities and interviews with selected members of the MA disability community.

  • Leveraging Partnerships

Think about alternative ways to reach your intended population:  For the online survey, we decided on a snowball sampling method, in which identified potential respondents in turn identify other respondents. It is a particularly useful methodology for populations that are difficult to reach and may generally be excluded from traditional surveys, though it can limit the generalizability of findings.  HDP’s Health and Disability Partnership provided a network to spread the survey to people with disabilities, caregivers, advocates, service providers, and friends/family of people with disabilities.
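For readers curious how a snowball sample can be monitored as it grows, here is a minimal, hypothetical sketch of tracking recruitment “waves” (who referred whom). It is illustrative only and is not the procedure we used.

```python
import pandas as pd

# Hypothetical respondent list: each record notes who referred the respondent.
respondents = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    "referred_by":   [None, None, 1, 1, 3, 4],  # None = reached directly via partners
})

# Assign a wave number: wave 0 = seeds, wave n = referred by someone in wave n-1.
wave = {}
for rid, ref in zip(respondents["respondent_id"], respondents["referred_by"]):
    wave[rid] = 0 if pd.isna(ref) else wave[int(ref)] + 1
respondents["wave"] = respondents["respondent_id"].map(wave)

# How many respondents were reached in each wave beyond the initial network?
print(respondents.groupby("wave").size())
```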

  • Accessibility is Key

Focus on accessibility:  In an effort to increase the accessibility of the survey, Jill Hatcher from DEAF, Inc. developed a captioned vlog (a type of video blog) to inform the Deaf, DeafBlind, Hard of Hearing, and Late-Deafened community about the survey.  In the vlog, she mentioned that anyone could call DEAF, Inc. through videophone if they wanted an English-to-ASL translation of the survey.  Individuals could also respond to the survey via telephone.

Rad Resources:

  • Disability and Health Data System (DHDS)

DHDS is an online tool developed by the CDC providing access to state-level health data about people with disabilities.

  • Health Needs Assessment of People with Disabilities Living in MA, 2013

To access the results of the above-mentioned needs assessment, please contact the Health and Disability Program at MDPH.

  • A Profile of Health Among Massachusetts Residents, 2011

This report published by the MDPH contains information on the health of people with disabilities in Massachusetts.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we are Ann Lawthers, Sai Cherala, and Judy Steinberg, UMMS PCMHI Evaluation Team members from the University of Massachusetts Medical School’s Center for Health Policy and Research. Today’s blog title sounds obvious, doesn’t it? Your definition of success influences your findings. Today we talk about stakeholder perspectives on success and how evaluator decisions about what is “success” can change the results of your evaluation.

As part of the Massachusetts Patient-Centered Medical Home Initiative (PCMHI), the 45 participating practices submitted clinical data (numerators and denominators only) through a web portal. Measures included HEDIS® look-alikes such as diabetes outcomes and asthma care, as well as measures developed for this initiative, e.g., high risk members with a care plan. Policy makers were interested in whether the PCMH initiative resulted in improved clinical performance, although they also wanted to know “Who are the high- or low-performing practices on the clinical measures after 18 months in the initiative?” The latter question could be about either change or attainment. Practices were more interested in how their activities affected their clinical performance.

To address both perspectives we chose to measure clinical performance in terms of both change and attainment. We then used data from our patient survey, our staff survey, and the Medical Home Implementation Quotient (MHIQ) to find factors associated with both change and attainment.

Lesson Learned: Who are the high performers? “It depends.” High performance defined by high absolute levels of performance disproportionately rewarded practices that began the project with excellent performance. High performance defined by magnitude of change slighted practices that began at the top, as these practices had less room to change. The result? The top five performers defined by each metric were different.
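A toy example (invented practice names and rates) illustrates how the two definitions can select different “top” practices from the same numerator/denominator data:

```python
import pandas as pd

# Invented baseline and 18-month performance rates (%) for five practices.
perf = pd.DataFrame({
    "practice": ["A", "B", "C", "D", "E"],
    "baseline": [92, 55, 70, 40, 85],
    "month_18": [94, 75, 80, 65, 88],
})
perf["change"] = perf["month_18"] - perf["baseline"]

top_by_attainment = perf.sort_values("month_18", ascending=False)["practice"].tolist()
top_by_change = perf.sort_values("change", ascending=False)["practice"].tolist()

print("Ranked by attainment:", top_by_attainment)  # ['A', 'E', 'C', 'B', 'D']
print("Ranked by change:    ", top_by_change)      # ['D', 'B', 'C', 'E', 'A']
```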

Hot Tips:

  • Do you want to reward transformation? Choose metrics that measure change over the life of your project.
  • Do you want to reward performance? Choose metrics that assess attainment of a benchmark.
  • Keep in mind that each metric will yield a different list of high performers.

Lesson Learned: The practices wanted to know: “What can we do to make ourselves high-performers?” Our mixed methods approach found leadership and comfort with Health Information Technology predicted attainment, but only low baseline performance predicted change.

Hot Tip: A mixed methods approach provides a rich backdrop for interpreting your findings and providing detail for stakeholders who need/want detail.

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. The contributions all this week to aea365 come from members who work with the Massachusetts Patient-Centered Medical Home Initiative (PCMHI). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Greetings. I am Christine Johnson, the Director of Transformation and Quality Improvement for the Patient Centered Medical Home Initiative (PCMHI). I am from the University of Massachusetts Medical School’s Center for Health Policy and Research. Today’s post shares how using self-assessment medical home transformation tools with primary care practices can help practices self-evaluate throughout the transformation process and provide data for the evaluation team.

Medical home transformation involves multiple stakeholders: health insurance payers, practices, policy makers and those providing transformation technical assistance. When practices complete the same tool and then share their results, multiple stakeholders can literally be on the same page with a similar understanding of what has been done, what needs to be done, and identifying gaps. Self-assessment tools can be tailored to your individual project’s goals, or standardized tools can be used ‘as is’.

Hot Tips:

  • Use tools to monitor progress and design technical assistance. The results from the transformation tools not only help practices track and monitor their practice redesign; they also allow technical assistance and practice staff to discuss any differences in their perceptions of the practice change efforts, and they can be a key resource for designing further technical assistance.
  • Utilize health insurance payers as stakeholders. Payers can see progress being made that is often intangible and support the practices in building the necessary foundation that will eventually lead to clinical performance improvement.
  • Administer self-assessment tools multiple times throughout a project to highlight small, but encouraging, changes.

Lessons Learned: Self-assessment tools can:

  • Establish a practice’s baseline
  • Enable practices to understand where they are in their transformation compared to other practices
  • Guide and structure practices’ transformation, particularly if the transformation tool has both an actual and expected project status over time
  • Allow technical assistance staff to step in early to support practices that are struggling in their transformation

Hot Tip: Save yourself and the practices the time of developing and testing a new tool. Take a look at the growing number of tools already available (see links below) rather than creating your own.

Hot Tip: Once practices have some experience using a self-assessment tool, ask practices who are finding the tool useful and are successfully accomplishing their PCMH transformation to present to the other practices either via a conference call or a webinar.

There are no “perfect” online assessments, but our team suggests:

Qualis Site landing page

Transformation practice self-assessment tool

Medical Home Index

TransforMED MHIQ

Rad Resource: Measuring Medical Homes (http://www.medicalhomeimprovement.org/knowledge/practices.html)

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. The contributions all this week to aea365 come from members who work with the Massachusetts Patient-Centered Medical Home Initiative (PCMHI). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. 

We are Linda Cabral and Laura Sefton from the University of Massachusetts Medical School’s Center for Health Policy and Research, and part of the PCMHI Evaluation Team whose work is being described all week. We want to share our team’s experience of being participant observers at ‘learning sessions’ and how it’s helped our overall multi-modal data collection efforts.

Staff from the 45 primary care practices in the PCMHI took part in seven day-long learning sessions over the course of the 3-year initiative. These sessions offered technical assistance to practice staff by bringing in experts in PCMH implementation and provided opportunities for practices to learn from each other and share lessons learned. Through our attendance, our team was able to observe the content being presented as well as the participants’ reactions to it, giving us a better understanding of what practices were working on in their transformation to becoming medical homes. Additionally, our observations helped us shape an interview guide for future site visits with the practices.

Hot Tips: Use a template to collect data in a standardized way. Each activity during a learning session had handouts or a PowerPoint presentation that contained information for the attendees. We developed a template to collect other relevant data from each session, which encompassed three main areas:

Methodology Notes – What is the format of the activity, e.g., panel presentation, group activity? Who are the people leading the activity?

Field Notes – What is happening in the activity? Who is in the audience? What is the level of participant engagement? What types of questions are being raised? How are these questions being answered?

Personal Notes – What are your (the evaluator’s) impressions of the activity?
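If your team records observations electronically, a simple structured record can keep the three areas consistent across observers. Below is a minimal sketch in Python; the fields follow the template above, and the example entry is hypothetical.

```python
from dataclasses import dataclass, asdict

@dataclass
class LearningSessionObservation:
    session_date: str
    activity: str
    methodology_notes: str   # format of the activity, who is leading it
    field_notes: str         # what is happening, audience, engagement, Q&A
    personal_notes: str      # the evaluator's own impressions

# Hypothetical example entry
obs = LearningSessionObservation(
    session_date="2012-10-04",
    activity="Panel: care coordination",
    methodology_notes="Panel presentation led by two practice coaches.",
    field_notes="~40 attendees; questions focused on staffing in small practices.",
    personal_notes="High engagement; follow up on staffing theme during site visits.",
)
print(asdict(obs))
```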

Use the opportunity to network with attendees. Explain why you, as an evaluator, are attending the session. Get participants’ thoughts on what would be important to evaluate. They may have ideas you hadn’t considered that could shape future data collection questions. We used their ideas in developing our interview guide for future site visits.

Hold an internal team debriefing meeting after each event. These meetings allowed the evaluation team to share information with each other so that we could all have an understanding of what happened during all activities at each learning session.

Lesson Learned: Attending the learning sessions gave the team a frame of reference that was valuable to completing future site visit interviews that were conducted as part of the evaluation. When interviewees referenced the learning sessions, the interviewer’s prior knowledge allowed for a mutual understanding and helped build rapport.

Rad Resource: This article from the online journal Qualitative Social Research describes participant observation as a data collection tool in more detail.

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. The contributions all this week to aea365 come from members who work with the Massachusetts Patient-Centered Medical Home Initiative (PCMHI). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. We are Valerie Konar, Carla Hillerns, and Michelle Landry from the University of Massachusetts Medical School’s Center for Health Policy and Research. Today, we share lessons learned from our evaluation work for the MA Patient Centered Medical Home Initiative.

The strength of many evaluation designs lies in the use of a rigorous control group. However, identifying practices that had received no exposure to medical home interventions was not possible, as most practices in Massachusetts had been exposed to some form of medical home knowledge. We therefore needed to secure a set of comparison practices that might be involved in medical home activities but were not receiving the same level of intervention as our study practices. Recruiting member practices for a comparison group and keeping them engaged over several years presented unique challenges.

How do you entice a busy primary care practice to sign on and complete the tasks requested of them as part of the comparison data collection process with little or no compensation?

Hot Tip:   Network! Use professional organizations and contacts to spread the word and encourage participation.

  • Reach out to practices that initially showed interest in the intervention portion of the project, but were not selected; they may be interested in participating in a different way.
  • Vary and repeat your recruiting efforts until you generate the necessary interest.
  • Explain WHY participation is so important.

Hot Tip:   Offer feedback on the results of practices’ efforts as compensation. This feedback can serve as a quality improvement tool or support other organizational goals.

  • If budgets allow, offer some form of compensation (e.g., small stipends) in acknowledgement of time and effort. Incrementally increase the stipend value over time to help encourage motivation to stay the course.

Engagement through the end of the project is key to successful comparison analyses. During the project’s life, how do you maintain the comparison group’s participation?

Hot Tip:   Relationships are key! Simplifying your point of contact will eliminate confusion. Assigning one contact person who is knowledgeable and accessible will go a long way to maintaining relationships.

Hot Tip:   Try to time requests so as not to coincide with busy periods.

  • Bundle requests when possible to minimize the number of communications.
  • Make deliverables easy to complete and accommodate requests, if possible (e.g., allow responses by mail and web).
  • Predictability helps! Provide advance reminders for task assignments.

Lesson Learned: Be mindful of what groups are able to provide. Being sensitive to the amount of time a task takes will increase your chance of receiving the necessary data.

Rad Resource:    RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints offers strategies for minimizing selection bias in a real-world context.

The American Evaluation Association is celebrating Massachusetts Patient-Centered Medical Home Initiative (PCMHI) week. The contributions all this week to aea365 come from members who work with the Massachusetts Patient-Centered Medical Home Initiative (PCMHI). Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
