AEA365 | A Tip-a-Day by and for Evaluators

CAT | Health Evaluation

Greetings! I am Naomi Hall-Byers, an Associate Professor of Psychology at Winston-Salem State University in North Carolina. I am also a 2017 Minority Serving Institution fellow. During last year’s annual conference, my cohort and I explored the intersection between the social determinants of health (SDOH) and culturally responsive evaluation (CRE). SDOH are social and environmental factors affecting an individual’s health and quality of life. This article focuses on one SDOH: social and community context. According to Healthy People 2020, social and community context may include factors such as social cohesion, civic participation, discrimination, incarceration, social networks, norms, and social capital. As an applied social psychologist with a background in public health, I am acutely aware of the importance of understanding social and cultural context. Below, I offer some thoughts on how to incorporate this SDOH into health-focused evaluations.

Lessons Learned: If we as evaluators want to make a bigger impact on health, we have to move away from focusing primarily on individuals. Individual behavior is important, but behavior still takes place within the context of the social environment. This context is both complex and intersectional, and it ultimately influences the program and its evaluation components.

Lesson Learned: It is important to situate CRE within elements of an evaluation framework. The key is to embed CRE throughout the evaluation process. One way to do this is to create and conduct an evaluation WITH the organization, and not FOR the organization. Understanding the cultural context in which the project/program operates, and being responsive to it, will create stronger cooperation, trust, collaboration, and engagement. This will ultimately produce better data, which can strengthen the organization, and the community it serves.

In closing, it is important for health-focused evaluators to seek to understand how each of the five SDOH areas intersects with CRE in ways that affect the health status of individuals and the communities in which they live.

Rad Resources: For more information on the social determinants of health visit Healthy People 2020: https://www.healthypeople.gov/2020/topics-objectives/topic/social-determinants-of-health, or the Centers for Disease Control and Prevention: https://www.cdc.gov/socialdeterminants/cdcprograms/index.htm

For more information on culturally responsive evaluation, and access to a plethora of resources, visit the Center for Culturally Responsive Evaluation and Assessment: https://crea.education.illinois.edu/

This week on aea365 is AEA Minority Serving Institution (MSI) Fellowship Experience week, and all of this week’s contributions come from AEA’s MSI Fellows. For more information on the MSI fellowship, see this webpage: http://www.eval.org/p/cm/ld/fid=230 Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi there! We’re Anne Vo, Ph.D., Director of the Keck Evaluation, Institutional Reporting, and Assessment (KEIRA) Office at the Keck School of Medicine of USC, and Jacob Schreiber, Evaluation Assistant at KEIRA. Today, we offer reflections on what we’ve learned about conducting evaluation within an academic medical center—an environment that offers rich opportunities to observe, conduct, and understand evaluation practice and policy.

Hot Tip #1: Standards Rule Healthcare, Medicine, and Medical Education

Medicine is a highly regulated field. Broad swaths of stakeholders—clinicians, clinical educators, school and hospital administrators—rely on standards to inform decision-making and drive practice. As such, systematic evaluation often manifests as rapid-turnaround monitoring of easily quantifiable outcomes (e.g., student academic performance and residency program match rates). Successfully “chasing numbers” enables organizations such as academic medical centers to communicate that standards of care and teaching are being met. Because standards offer a common language that stakeholders can use to think through pressing issues of the day, they also become the go-to frame of reference for decision-makers throughout the organization.


Hot Tip #2: Everything is “Evaluated,” Everyone is an “Evaluator”

Because standards drive practice in Medicine, evaluation tends to become a decentralized activity. Aspects of evaluative practice—from question formulation to data collection, monitoring, analysis, and synthesis—can often be divided among various stakeholder groups across an organization. This cascaded evaluation model emphasizes “local expertise” and echoes the “team values” to which healthcare teams aspire. It is reminiscent of development evaluations that organizations such as UNICEF and the UNDP strongly support. In the medical context, however, it is a model that tends to immerse clinical experts in monitoring processes while largely distancing them from actual evaluation.


Hot Tip #3: Democratic Decision Making is a Core Value

Decisions about how medical education is done are often made through committees and guided by accreditation standards, specifically LCME Standard 1 on Mission, Planning, Organization, and Integrity and Standard 2 on Leadership and Administration. Academic and administrative committees oversee and monitor the quality of Undergraduate Medical Education (what we know as the first four years of medical school). Many of the same stakeholders serve across committees as well as on the sub-committees and work groups within each. For evaluation to be meaningful, expect to have many of the same conversations with the same people at different levels. Most importantly, know each committee’s charge, its membership, and members’ roles and stances on the issues up for discussion.

Rad Resource:

Alkin, M. C., & Vo, A. T. (2017). What is the organizational, community, and political context of the program? In Evaluation Essentials: From A to Z (2nd ed., pp. 77–87). New York, NY: Guilford Press.


Hi! I’m David McCarthy, a 4th year medical student at the University of Massachusetts Medical School. I had the opportunity to get involved in the Prevention and Wellness Trust Fund (PWTF), a project run by the Massachusetts Department of Public Health that works to combat treatable chronic medical conditions by integrating clinical and community interventions. I chose to focus on the pediatric asthma intervention of the City of Worcester’s PWTF, which utilized a series of Community Health Worker (CHW) home visits. As part of this project’s evaluation, I interviewed CHWs and Care Coordinators about their experiences providing home visits for patients with pediatric asthma and their families. In this post, I summarize tips and tricks I learned that could help refine a community-based care model and serve as benchmarks for future care model evaluations.

Hot Tip: Let those with the contacts help with the networking

Initially, getting patients referred for enrollment in the intervention was difficult due to a lack of provider education about the program. The solution had two components. First, increasing the frequency of Worcester PWTF asthma workgroup meetings improved coordination between the groups involved and overall program engagement. Second, provider champions at each site reached out directly to other providers caring for patients in the focus population, which expanded the project’s reach. Eventually, referral numbers improved, with referrals coming in from nearly all care team members.

Hot Tip: Think outside of office hours when coordinating visits with families

We needed to be flexible in scheduling home visits outside of typical business hours, including weekends, to accommodate families’ schedules. CHWs also needed to be available to patients by cell phone for calls and text messages. This scheduling flexibility and availability helped build trust with families and improved retention of patients in the program.

Hot Tip: Consider care providers’ safety

As with any intervention that requires home visits or meeting parents and families in their own space, remember that the safety of study team members is paramount when going to unfamiliar sites. As part of this project, we provided personal safety training for CHWs entering patient homes. Where possible, a team of two CHWs conducted each home visit, and CHWs confirmed dates and times with families before each visit.

Lesson Learned: Account for the varied needs of patients and families

CHWs provided a standardized set of asthma management supplies to families at each visit, including medication pill boxes, trash cans, mattress and pillow covers, and vacuums. This was designed to incentivize families’ engagement and compliance with their asthma management plans. However, these supplies didn’t always match individual families’ needs. Future intervention efforts should tailor supply sets based on each family’s existing home environment.

Overall, our evaluation found that an integrated clinical program that addresses social determinants of health through CHWs is both an innovative healthcare delivery model and feasible to implement.



I am Kate Cartwright, a 2016 AEA Minority Serving Institution Fellow and an Assistant Professor of Health Administration in the School of Public Administration at the University of New Mexico in Albuquerque. I study racial and ethnic health equity in regard to healthcare access, quality, and outcomes.

As an evaluator who values health equity, I am deeply concerned by the imbalance in funding and research that prioritizes the health of underrepresented and underserved populations. Researchers and evaluators alike are able to follow best practices in the field. However, too often the “best” practices reify inequities, including practices that leave out underrepresented groups.

A provocative essay published in The Atlantic in the summer of 2016 investigates why health studies are frequently so white when our population is so diverse. The article offers several theories, but repeatedly reveals that best practices in research fail to hold researchers accountable for non-inclusive sampling strategies. A recent PLoS Medicine article notes that even though the 1993 National Institutes of Health (NIH) Revitalization Act mandates that federally funded clinical research prioritize the inclusion of women and minorities, the act has not yielded parity in clinical study inclusion (for example, less than 2% of National Cancer Institute funded cancer trials from 1993 to 2013 met the NIH inclusion criteria).

Lesson Learned: Design Inclusive Sampling Strategies

Evaluators must design evaluations which have inclusive sampling strategies if they hope to improve the efficacy, effectiveness, and equity of evaluations.
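To make this concrete, here is a minimal sketch of one routine check an evaluator might build into a sampling strategy: comparing a sample’s demographic composition against population benchmarks and flagging underrepresented groups. The group names, counts, population shares, and the 0.8 threshold below are all made up for illustration.

```python
# Compare a study sample's demographic composition against population
# benchmarks and flag groups whose representation ratio (sample share
# divided by population share) falls below a chosen threshold.

def representation_ratios(sample_counts, population_shares):
    """Return {group: sample_share / population_share} for each group."""
    total = sum(sample_counts.values())
    return {
        group: (sample_counts.get(group, 0) / total) / share
        for group, share in population_shares.items()
    }

def flag_underrepresented(sample_counts, population_shares, threshold=0.8):
    """Return groups whose sample share is below `threshold` of their population share."""
    ratios = representation_ratios(sample_counts, population_shares)
    return sorted(group for group, ratio in ratios.items() if ratio < threshold)

if __name__ == "__main__":
    sample = {"group_a": 420, "group_b": 40, "group_c": 40}           # enrolled participants
    population = {"group_a": 0.60, "group_b": 0.20, "group_c": 0.20}  # community shares
    print(flag_underrepresented(sample, population))  # ['group_b', 'group_c']
```

Running a check like this early, while recruitment can still be corrected, is the point: it turns inclusion from an after-the-fact lament into a monitored evaluation metric.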

Hot Tip: Always Include the Community as a Stakeholder

In one workshop on culturally responsive evaluation I attended at Evaluation 2016, some participants lamented that they would like to be more inclusive of community members when evaluating community health programs, but that they had to respond to the priorities of their stakeholders first. Thankfully, we were in a session with a great leader who gently, but firmly, challenged them (and all of us) to remember that community members must be counted as primary stakeholders in all evaluations.



We are Valerie Hutcherson and Rebekah Hudgins, Research and Evaluation Consultants with the Georgia Family Connection Partnership (GaFCP) (gafcp.org). Started with 15 communities in 1991, Family Connection is now the only statewide network of its kind in the nation, with collaboratives in all 159 counties dedicated to the health and well-being of families and communities. Through local collaboratives, partners come together to identify critical issues facing the community and to develop and implement strategies to improve outcomes for children and families. GaFCP strongly believes that collaboration and collective effort yield collective impact. Evaluation has always been a significant part of Family Connection, though capacity differs greatly across local collaboratives.

In 2013, GaFCP invited six counties to participate in a cohort focused on early childhood health and education (EC-HEED) using the Developmental Evaluation (DE) framework developed by Michael Quinn Patton (Patton, 2011. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use). GaFCP identified each county based on need and interest in developing an EC-HEED strategy, and each had the autonomy to identify collaborative partners, programs, and activities to create a strategy tailored to the county’s needs and resources. As evaluators, we recognized that the collaboratives and their strategy formation existed within a complex system with multiple partners and no single model to follow. The DE approach was the best fit for capturing data on the complexity of the collaborative process of developing and implementing strategies. DE allows for and encourages innovation, which is a cornerstone of the Family Connection collaborative model. Further, this cohort work gave us, as evaluation consultants, the unique opportunity to implement an evaluation system that treated understanding this complexity and innovation as being as important as collecting child and family outcome data. With DE, the evaluator’s primary functions are to elucidate the innovation and adaptation processes, track their implications and results, and facilitate ongoing, real-time, data-based decision-making. Using this approach, we were able to engage in and document the decision-making process, the complexity of the relationships among partners, and how those interactions affect the work.

Lessons Learned: Just a few of the lessons we’ve learned are:

  1. Participants using a DE approach may not recognize real-time feedback and evaluation support as “evaluation”. Efforts must be taken throughout the project to clarify the role of evaluation as an integral part of the work.
  2. Successful DE in a collaborative setting requires attention to the needs of individual partners and organizations.
  3. The DE evaluator is part anthropologist and thus must be comfortable in the emic-etic (insider-outsider) role, serving as a member of the team while also elucidating the team’s practice and work.

We’re looking forward to October and the Evaluation 2016 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG).

Hi all, I’m Mariel Harding, a Program Coordinator with Prevention Institute (PI). PI is a national non-profit dedicated to improving community health and well-being by building momentum for effective primary prevention. PI has 18 years’ experience advancing policy and supporting communities in improving environments for health and health equity.

When gathering data on health, we often simply measure individual conditions and fail to count the elements at the community level that shape health. Yet we know that access to quality education and housing, clean air, safe places to play, strong social networks, and more, are essential for health and well-being. For example, when identifying key metrics regarding diabetes, it’s not enough to measure blood sugar levels—we need to also look at walkability, the existence of safe parks and open spaces nearby, and food access.

PI’s recent report, Measuring What Works to Achieve Health Equity: Metrics for the Determinants of Health, represents a paradigm shift in helping communities count what matters when it comes to health, safety, and equity. The report lays out the determinants of health – including structural drivers, community determinants, and healthcare – that must be improved to achieve health equity. It also describes the methods and criteria we applied to identify health equity metrics.

We identify 35 recommended metrics for the determinants of health that could track progress toward achieving health equity. However, not all of the metrics presented in the report actually exist, because many important determinants of health equity are not regularly measured. Or, if they are, the data aren’t being compiled in meaningful ways. Where metrics didn’t exist, we suggest new metrics to fill the gap.

Lessons Learned

  • What we count reflects what we think matters. If health equity is important, we must note it, count it, measure it, and track it.
  • Good metrics foster an understanding of the problem and the solution. For example, measuring neighborhood access to healthy food vendors may prompt efforts to facilitate healthy eating that address the underlying causes of illness, going beyond education campaigns. Such efforts may include recruiting vendors, zoning changes, and/or community gardens.
  • Metrics should capture the public’s attention: they should be designed not only as measurement tools but also as communication tools that help inform the public about health inequity and what will reduce it. Composite measures, which combine multiple indicators, do this well because they express the complexity of the environments that produce health inequity.
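To illustrate how a composite measure can be assembled, here is a minimal sketch: normalize several community-level indicators to a common 0–1 scale, invert the ones where higher values are worse, and average them into a single index. The indicator names, data, and equal weighting are all assumptions chosen for demonstration, not PI’s actual methodology.

```python
# Build a simple composite community-health index: min-max normalize each
# indicator across communities, invert "higher is worse" indicators, and
# take the equal-weight average per community.

def min_max_normalize(values):
    """Scale a list of values to [0, 1]; a constant list maps to 0.5."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5] * len(values)  # no variation: neutral score
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(communities, indicators):
    """indicators: {name: (values_per_community, higher_is_better)}"""
    normalized = {}
    for name, (values, higher_is_better) in indicators.items():
        scores = min_max_normalize(values)
        if not higher_is_better:
            scores = [1 - s for s in scores]  # flip so higher always means healthier
        normalized[name] = scores
    return {
        community: sum(normalized[name][i] for name in indicators) / len(indicators)
        for i, community in enumerate(communities)
    }

if __name__ == "__main__":
    communities = ["A", "B", "C"]
    indicators = {
        "park_access_pct":   ([80, 40, 60], True),   # higher is better
        "food_desert_pct":   ([10, 30, 20], False),  # higher is worse
        "walkability_score": ([70, 50, 90], True),
    }
    print(composite_index(communities, indicators))
```

Even a toy index like this makes the communication point above concrete: a single number per community is easier to track and publicize than a dozen separate indicators, while the underlying components remain available to explain why a community scores low.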

Metrics can help clarify and measure the sources of inequity and foster understanding of the solutions and actions that can lower the cost of healthcare, keep all people healthy, and ensure equal opportunities to thrive.


Hello! I’m Julie Goldman, a librarian at the University of Massachusetts Medical School’s Lamar Soutter Library. I want to introduce you to a number of high-quality public health resources, most of them freely available, all of which can help you as evaluators understand and support your communities, your research, and your work. Public health information comes in many forms, from education materials, research articles, white papers, and policies to raw data on lifestyle characteristics, disease prevalence, and healthcare utilization – basically, all scientific output that applies to day-to-day life.

Here are a few takeaways when looking for public health information:

 Lessons Learned:

  • Public health is multidisciplinary; it is not just doctors and nurses! Other contributors include:
    • Health educators
    • Community planners and policy makers (e.g., local, regional or state health boards)
    • Scientists and researchers
    • Biostatisticians
    • Occupational health and safety professionals
  • Health promotion should happen everywhere – and include everyone!
    • Information should be available to all communities, domestic and international
    • Addressing health disparities should be a key focus. Examples include: healthcare access, infectious diseases, environmental hazards, violence, substance abuse, and injury.

Hot Tips: Evidence-based practices inform decisions:

  • Using the best available scientific evidence leads to informed decisions
  • Pro-active prevention can lead to measurable impact
  • Spending on prevention saves money long-term
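The last point is, at bottom, simple return-on-investment arithmetic. The sketch below shows the calculation with entirely made-up program costs and caseload numbers; it is an illustration of the logic, not real data from any program.

```python
# Illustrative return-on-investment arithmetic for a prevention program:
# savings come from cases (and their treatment costs) that the program averts.

def prevention_roi(program_cost, cases_averted, cost_per_case):
    """Return (net_savings, roi) where roi is net savings per dollar spent."""
    averted_costs = cases_averted * cost_per_case
    net_savings = averted_costs - program_cost
    roi = net_savings / program_cost
    return net_savings, roi

if __name__ == "__main__":
    # e.g., a $200k home-visit program averting 60 high-utilization case-years
    # at an average of $5,000 in treatment costs each (hypothetical numbers)
    net, roi = prevention_roi(program_cost=200_000, cases_averted=60, cost_per_case=5_000)
    print(net, roi)  # 100000 0.5
```

In practice, an evaluator would discount future savings and bound `cases_averted` with a comparison group, but the basic claim is this: when averted treatment costs exceed program cost, prevention pays for itself.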

Visit the American Public Health Association’s website for public health news, webinars and useful infographics like the one below that visually tell the public health story. Many of these (or similar ones from a variety of sources) can be used to help relay the messages of evaluation data.


Public Health Infographic, © 2016, American Public Health Association

Rad Resources: The list below provides a brief overview of many of the public health collections that offer freely available resources on populations, agencies, public health news, policy briefs, and much more.

  • Lamar Soutter Library Evidence-Based Public Health Portal.
  • A collaboration of U.S. government agencies, public health organizations, and health sciences libraries highlighting news, public health topics, policies, jobs, and education.
  • The National Library of Medicine, National Institutes of Health, offers many public health resources such as Haz-Mat, Toxnet, and IRIS.
  • A free, digital archive of scientific research and literature.
  • Explore and evaluate projects aimed at reducing racial and ethnic health disparities, from the Robert Wood Johnson Foundation.
  • The world’s most comprehensive collection of population, family planning, and related reproductive health and development literature.
  • An image-based review of world demographics and statistics.

Libraries at both public and private institutions, as well as public libraries, can assist with evaluation, research, and access to all levels of public health information. Many librarians are highly motivated to work with a research and/or evaluation team to help navigate public health data and resources. Re-read these two blog posts from previous years to learn more about collaborating with a librarian to explore the vast array of information that can help with the development, conduct, and analysis of evaluation projects: “Library Resources and the Important Role They Play in Evaluation Work” and “Today’s Librarian and Building an Evaluation Team.”



I am Elizabeth Tully, the Online Toolkit Manager at the Johns Hopkins Bloomberg School of Public Health’s Center for Communication Programs (JHU CCP). One of the toolkits I work on regularly is the Measurement, Learning & Evaluation (MLE) Project‘s Measuring Success Toolkit. This toolkit provides guidance on how to use data to plan a health program and to measure its success through monitoring and evaluation (M&E). Using data to design and implement health programs leads to more successful and impactful programs. Data can be used to solve health programming problems, inform new program design, assess program effectiveness and efficiency, and suggest evidence-based adaptations and improvements.

But this post is especially about the importance of designing useful resources for M&E practitioners – and the Measuring Success Toolkit is Rad!

Hot Tip #1: Using the Toolkit. The Toolkit is meant to be used! It offers full-text documents and usable tools, available either as downloadable files or as hyperlinks to other websites’ rad resources. It is organized both by steps in an M&E plan and by health topic. A handy tutorial video is also available to help new users navigate the Measuring Success Toolkit.

Hot Tip #2: Curated Content Focuses on Use. The Measuring Success Toolkit team updates the toolkit with useful resources every quarter! This means that the content is curated by M&E experts and includes guides, checklists, protocols, indicators, and other tools that M&E practitioners can use in the field. While you won’t find peer-reviewed journal articles or lengthy end-of-project reports in this toolkit, you will find the tools and resources to help you plan for, monitor, and evaluate a program worthy of an esteemed journal. You’ll certainly be prepared to document your project’s success!

Rad Resources: Within the toolkit you’ll find plenty of great tools – 160 and counting! Here’s a quick list of our most popular (and most downloaded) resources on the site:

Sample Outline of an M&E Plan

Qualitative Research Methods: A Data Collector’s Field Guide

A Guide for Developing a Logical Framework

Please send suggestions for resources to include to contactus@urbanreproductivehealth.org. Our next update will be in mid-October!



Hi. We’re Sherry Campanelli, Program Compliance Manager and Laura Newhall, Clinical Training Coordinator, from the University of Massachusetts Medical School’s Disability Evaluation Services. As an organization, we are committed to ongoing quality improvement efforts to enhance our services for people with disabilities. Earlier, we wrote about common techniques that help a quality improvement (QI) team to be successful. Today we share some potholes and pitfalls we’ve encountered in group facilitation and our tips for negotiating them successfully:

Lessons Learned:

  • New problems or issues frequently arise in the middle of a QI project. Team members, management, or external events (such as changes in the industry) can generate issues unrelated to the original charge. This can be discouraging for the team members and leader and can delay completion of the project. The following may be helpful.
    • Reaffirm the team’s goals, mission, and review data as a group to ascertain if the new issue should be addressed in this venue or in another way.
    • Allow team members to opt out of participating in the new task. Seek new members for the team as needed to address the new issue(s).
    • Keep a “hot” list of issues that arise to be addressed by future QI teams.
  • Recommendations from the team are not fully accepted. A less-than-enthusiastic response from decision-makers to a team’s recommendations is a challenge for any team.
    • Set expectations with the group up front that recommendations might be accepted, rejected or amended.
    • Sustain the group’s enthusiasm during the revision process by reminding them of the importance of their work and input regardless of the outcome.
    • Emphasize the positive feedback before sharing constructive feedback. Thank team members for their efforts.
    • Ensure that relevant decision-makers are regularly briefed so the team can make “mid-course corrections” toward options likely to be approved.
  • Difficulty achieving full team consensus. This can be due to dominating or defensive team member(s), incomplete information or team members needing more time for analysis.
    • Encourage subgroup and individual work on the issue between meetings.
    • Allow the team to live with ambiguity for a while to enable consensus to develop.
    • Document what’s already been decided and refer team members back to prior discussions.

Thoughts to Ponder:

“The best-laid plans of mice and men / Often go awry” – from a poem by Robert Burns. The QI team process does not always go smoothly; however, these unexpected challenges present opportunities for better overall outcomes.

As the British government’s 1939 motivational poster advises, the facilitator must “keep calm and carry on” through the potholes and pitfalls of the QI team process.


Hi. We’re Sherry Campanelli, Program Compliance Manager, and Laura Newhall, Clinical Training Coordinator, from the University of Massachusetts Medical School’s Disability Evaluation Services (DES). Although DES conducts evaluations of whether an applicant for public benefits can be found disabled, evaluation as a research endeavor is not our primary focus. Nevertheless, as an organization, we are committed to ongoing quality improvement efforts to enhance our services for people with disabilities. We use a team-based, iterative approach to define and address problem functions and processes.

For example, we used the process described here to develop quality assurance systems for our clinical, clerical, and technical support processes. We have also used this method to tackle caseload backlogs and to process incomplete applications more effectively.

We’ve discovered over time, regardless of the issue or problem involved, that there are common techniques that help a quality improvement (QI) team be successful. We would like to share some of these lessons learned with you.

Lessons Learned:

  • Determine and clearly state the issues to be solved and team goals.
  • Involve key staff (line staff doing the work and managers supervising the work) in the development of any QI initiative. They are in “the know” about areas that may be problematic.
  • Incorporate non-judgmental facilitation to keep up the momentum. Key components include:
    • Involving all participants in decision making/discussion;
    • Keeping meeting minutes and agendas;
    • Keeping track of and sharing “to do” lists, “next steps,” and progress toward goals;
    • Meeting on a regular and ongoing basis (don’t cancel meetings unless absolutely necessary);
    • Seeking management decisions and input as needed; and
    • Making sure you hear from the quiet folks in the room – they may need a little encouragement to speak up, but often offer great insights.

  • Utilize team members/subcommittees to perform specific tasks between meetings.
  • Utilize available qualitative and quantitative data.
  • Collect specific data, as necessary, to help define the problem and suggest solutions.
  • Do fact finding to support decision-making.
  • Maintain a “living” working document(s) as decisions are made to be incorporated into a final product.

  • Utilize pilot testing to determine feasibility and make changes (i.e., “fix bugs”) prior to full implementation.

  • Provide periodic communication to the rest of the department or organization during the project and at its conclusion.
  • Train all impacted staff on process improvements.
  • Conduct periodic assessments after implementation to assess success of the project.
  • Refine processes as new issues and changes occur.

Hot Tips:

  • Sometimes QI processes take longer than expected. “Keep going even when the going is slow and uncertain.” – G.G. Renee Hill
  • “To discover new ways of doing something, look at a process as though you were seeing it either for the first or last time.” – Mitchel Martin


 
