We are Asma M. Ali of AA & Associates, LLC and Francis Kwakwa of the Radiological Society of North America in Chicago. We spend a great deal of our day jobs thinking about the challenges of evaluation and assessment in Quality Improvement (QI) in Continuing Education for health professionals.
Continuing Education (CE) must meet constantly changing regulatory and educational needs for approximately 8 million licensed healthcare professionals. Over the last 15 years, CE models for health professionals have evolved from profession-centered to team-centered, supporting enhanced learner performance and greater impact from educational programming. In recent decades, CE program models have also incorporated Continuous Quality Improvement goals of interprofessional and interdisciplinary teamwork and communication, supporting education aimed at improved patient outcomes.
To meet the needs of complex medical conditions, such as cancer or chronic health condition management, CE initiatives benefit from programs that are multidisciplinary and interprofessional in scope. Yet many CE initiatives in these areas remain discipline-specific in educational scope or target audience. Even when initiatives include non-MD members of the healthcare team, the metrics are often focused on medical and patient outcomes. In our work as evaluators of multidisciplinary CE QI metrics, we have noted the following considerations for evaluation of interdisciplinary QI in the health professions:
- Process orientation: Quality improvement measurement is focused on processes. From the development of metrics to the implementation of the QI plan, QI centers on processes rather than outcomes. QI metrics emphasize the development and implementation of improvement protocols and programs, rather than solely program outcomes and impacts.
- Improvement focus: Change is integral to quality improvement. All quality improvement is focused on improving process delivery. As such, deficiencies uncovered in pre-assessments and needs-assessment activities matter less than the improvement gains achieved throughout the program. Criterion-referenced assessments may serve better than norm-referenced assessments in these cases.
- Metric selection: Metric selection must speak to the goals of all team members. While patient outcomes (e.g., reduced A1C or increased patient knowledge) are important considerations, the processes and related metrics will differ across team members. For example, laboratory turnaround times can influence diagnosis time and treatment options for some cancers, yet they are often not measured in cancer treatment QI.
- Data reviews/education: Intrinsic to data-driven improvement is establishing regular opportunities to interpret data and learn together, as well as providing appropriate education for team members. Interdisciplinary interpretation and educational improvement sessions should include all disciplinary stakeholders to support the changes needed for program improvement.
- Continuing Education in the Health Professions Competency Areas
- Beginner’s Guide to Measuring Outcomes in CME
- Measuring the Effectiveness of CeHP Programs and Overall CeHP Program Impacts
- Measuring inter-professional education efforts
The American Evaluation Association is celebrating Health Professions Education Evaluation and Research (HPEER) TIG Week with our colleagues in the Health Professions Education Evaluation and Research Topical Interest Group. The contributions all this week to aea365 come from our HPEER TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.