AEA365 | A Tip-a-Day by and for Evaluators


Hi, my name is Krista Collins, Director of Evaluation at Boys & Girls Clubs of America (BGCA) in Atlanta, GA. Over the past few years, after school program quality standards have become more prevalent across the field as a way to ensure that young people are engaging in safe and supportive environments that promote positive developmental outcomes. The design and implementation of Continuous Quality Improvement (CQI) processes has therefore increased rapidly as a methodology to monitor and improve program quality. While all are grounded in a similar feedback loop of design, test, and revise, a variety of CQI frameworks are being used within and across sectors; two common examples follow.

In 2012, the David P. Weikart Center for Youth Program Quality released the results of an empirical study to test the impact of its continuous improvement process, the Youth Program Quality Intervention (YPQI), on program quality in after school systems. The findings showed that YPQI had a significant positive impact on youth development practice and staff engagement, with outcomes sustained over time across multiple after school contexts. Within K-12 schools, quality improvement processes are often foundational to school reform efforts to turn around consistently low-performing schools. Studies have shown that when school reform includes a commitment to a specific strategy or plan (design), assessment of teacher and student performance (test), and opportunities for learning and improvement (revise), then positive impacts on teacher preparation, instruction, and student achievement are more likely (Hargreaves, Lieberman, Fullan & Hopkins, 2014; Hawley, 2006).

Lessons Learned: While CQI has garnered widespread support across industries, efforts to monitor and evaluate its effects have been limited by challenges associated with the highly contextualized and iterative nature of CQI. A report from the Robert Wood Johnson Foundation summarized that the continuous evolution of design, metrics, and goals makes it difficult to determine whether actual improvement has been made, and that the learnings gained have limited generalizability. These challenges, coupled with the long timeline required, have motivated the search for new quality improvement methods.

Hot Tip: In the healthcare space, the Institute for Healthcare Improvement has developed the Breakthrough Series Collaborative (BSC), an innovative approach to CQI that prioritizes the need for and value of rapid improvement, with an emphasis on the team structure and procedures needed for efficient implementation. Their own healthcare evaluations, as well as studies examining the impact of this methodology on Timely Reunification within Foster Care, have shown significant and timely improvements in service delivery, stakeholder engagement and outcomes, and cross-system collaboration, as well as reduced costs. These successes demonstrate the value of the BSC as a methodology to improve current CQI models and warrant its consideration and testing within the PreK-12 education and after school space. With the ever-increasing need to ensure that young people are exposed to the high-quality learning environments required to drive positive outcomes, the advantages of the BSC may provide a more efficient and robust solution for effective school reform and quality improvement efforts.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from Claremont Graduate University. Today we discuss the importance of embedding quality throughout an organization, drawing on our work promoting continuous quality improvement (CQI) in afterschool programs.

CQI systems involve iterative, ongoing cycles: setting goals for offering quality programming, using effective training practices to support staff learning and development, monitoring programs frequently through site observations and follow-up coaching for staff, and analyzing data to identify strengths and address weaknesses in program implementation. While implementing CQI within an organization is challenging, we have begun to engage staff in conversations about CQI.

Hot Tip: One strategy we used involved translating the California Department of Education’s “Quality Standards for Expanded Learning Programs” into behavioral language for staff. Using examples from external observations we conducted at the organization, we created four vignettes that described a staff member who displayed both high and low quality across selected quality standards. Site managers then responded to a series of questions about the vignettes, including:

  • Did the vignette describe high-quality or low-quality practice?
  • What is the evidence for your rating of high or low quality?
  • What specific recommendations would you give to the staff member to improve on areas identified as low quality?

At the end of the activity, site managers mentioned that the vignettes resonated strongly with their observations of their staff’s practices, and they discussed how they could begin implementing regular, informal observations and discussions with their staff to improve the quality of programming at their sites.

Hot Tip: Another strategy involved embedding internal observations into routine practices for staff. Over the years, we collaborated with the director of program quality to create a reduced version of our validated observation protocol, trained him on how to conduct observations, and worked with him to calibrate his observations with the external observation team. Results were summarized, shared across the organization, and were used to drive professional development offerings. Now, more managerial staff will be incorporated into the internal observation team and the evaluation process will continue and deepen throughout the organization. While this process generates action within the organization for CQI, it also allows for more observational data to be collected without increasing the number (and cost!) of external evaluations.
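
For readers curious what "calibrating" an internal observer against an external team can look like in practice, here is a minimal sketch, assuming hypothetical paired ratings on a shared 1–3 quality scale (the numbers are invented for illustration, not data from this project). It computes percent agreement and Cohen's kappa, two common agreement statistics:

```python
# Hypothetical sketch: comparing an internal observer's ratings against the
# external team's ratings of the same program sessions. The ratings and the
# 1-3 scale are invented for illustration.
from collections import Counter

internal = [3, 2, 3, 1, 2, 3, 2, 2, 1, 3]  # internal observer
external = [3, 2, 2, 1, 2, 3, 2, 3, 1, 3]  # external team, same sessions

n = len(internal)
observed = sum(a == b for a, b in zip(internal, external)) / n

# Expected chance agreement, from each rater's marginal distribution
p_int = Counter(internal)
p_ext = Counter(external)
expected = sum((p_int[c] / n) * (p_ext[c] / n)
               for c in set(internal) | set(external))

kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.0%}")   # 80% on this toy data
print(f"Cohen's kappa: {kappa:.2f}")          # ~0.69 on this toy data
```

In practice, the team would also review each disagreement together and repeat paired observations until agreement stabilized at an acceptable level.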

Rad Resource: Tiffany Berry and colleagues wrote an article detailing this process, “Aligning Professional Development to Continuous Quality Improvement: A Case Study of Los Angeles Unified School District’s Beyond the Bell Branch.” Check it out for more information!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I’m John Cosgrove, an evaluator who is committed to utilization-focused evaluation. I am currently working with community colleges around the country to improve evaluation efforts and the use of data for continuous improvement. Clients indicate they want evaluation and data to drive continuous improvement and decision-making. Although data collection is a good place to start, it alone won’t get the job done. In her excellent article, Data Don’t Drive, Alicia Dowd reminds us that data alone won’t lead to continuous improvement.

I remember sitting in a faculty session at the University of Michigan Assessment Institute and listening to Richard Alfred discuss the Craft of Inquiry. It was the end of the day and, with all apologies to Dr. Alfred, I must admit I was thinking more about crafting dinner plans than inquiry. But then he made a very simple, yet powerful statement: “You don’t make the pig fatter by simply weighing it every day.”

Assessment, evaluation, data collection—whatever you want to call it—must be more than keeping score. If we don’t learn something and then take action from what we learn, we are simply recording data for the sake of recording data. As colleges are further inundated with calls for evaluation data from stakeholders, including legislators and funding agencies, they would do well to structure such efforts within a meaningful culture of inquiry.

People engaged in the development of public questions and the thoughtful interpretation of data will drive continuous improvement. We should expand evaluation efforts to determine not only what works, but why it works. We offer the following framework to help link questions, data collection, interpretation, and action.

  • INQUIRE—What Do We Want To Know? Define the specific evaluation questions.
  • DISCOVER—What Do We Know? Identify data sources and methods of data collection.
  • INTERPRET—What Do the Data Tell Us? Work with stakeholders to analyze and interpret results.
  • DEVELOP—What Actions Need To Occur? Use results to develop strategies for continuous improvement and further evaluation.
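
To make the cycle concrete, here is a minimal sketch, assuming a hypothetical community college example (the dataclass, question, and entries are illustrative, not part of the framework itself). It shows how one evaluation question can stay linked to its data sources, interpretation, and actions through a full cycle:

```python
# Minimal sketch of the inquire-discover-interpret-develop cycle as a
# planning template. All names and entries are hypothetical, meant only to
# show how a question stays tied to data, interpretation, and action.
from dataclasses import dataclass, field

@dataclass
class InquiryCycle:
    question: str                                       # INQUIRE
    data_sources: list = field(default_factory=list)    # DISCOVER
    interpretation: str = ""                            # INTERPRET
    actions: list = field(default_factory=list)         # DEVELOP

cycle = InquiryCycle(
    question="Why do gateway-course pass rates differ across sections?",
    data_sources=["course grades", "section observations", "student focus groups"],
)
# ...after stakeholders review and discuss the data...
cycle.interpretation = "Pass rates track with use of early-alert advising."
cycle.actions = ["Expand early alerts to all sections", "Re-evaluate next term"]
print(cycle.question, "->", cycle.actions)
```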


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, I am Vanessa Hiratsuka, secretary of the Alaska Evaluation Network (AKEN) and a senior researcher at Southcentral Foundation (SCF), a tribally owned and managed regional health corporation based in Anchorage, Alaska, which serves Alaska Native and American Indian people.

As part of its Commitment to Quality, a key organizational value, SCF prioritizes continuous quality improvement (CQI), quality assurance, program evaluation, and research.

Although the strategies and tools used in CQI, quality assurance, program evaluation, and research are similar, these four functions do different things. One of our challenges is to help staff across the organization understand who does what. Because these four fields differ in aim and audience, exploring the goals of a project (aim) and who will use its findings (audience) provides a useful framework for determining where a project fits.

[Hiratsuka graphic: SCF projects mapped by aim and audience across continuous quality improvement, quality assurance, program evaluation, and research]

At SCF, improvement staff work directly with SCF department and clinic processes to develop and implement project performance measures and outcome indicators as well as help staff (audience) improve processes to better meet customer-owner needs and inform business directions (aim).  Quality Assurance staff conduct quality monitoring to ensure programs are complying (aim) with SCF processes and the requirements of our accrediting bodies (internal and external audiences).

SCF internal evaluators measure programs’ performance (aim) and provide feedback to programmatic stakeholders — including staff, leadership, and funders (audience). The SCF research department’s projects address questions of clinical significance to contribute to generalizable knowledge (aim) for use within SCF and for dissemination in the scientific literature around American Indian and Alaska Native health (audience).
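
As a loose, hypothetical illustration of this aim-and-audience framework (the category strings below paraphrase the descriptions above and are not an official SCF intake tool), a proposed project could be routed with a simple lookup:

```python
# Hypothetical sketch: routing a proposed project to the right function
# based on its aim and audience. The categories paraphrase the descriptions
# in the surrounding text; a real intake conversation would be richer.
def classify_project(aim: str, audience: str) -> str:
    if aim == "improve a process" and audience == "department staff":
        return "continuous quality improvement"
    if aim == "verify compliance" and audience == "accrediting bodies":
        return "quality assurance"
    if aim == "measure program performance" and audience == "program stakeholders":
        return "program evaluation"
    if aim == "generalizable knowledge" and audience == "scientific community":
        return "research"
    return "needs discussion: clarify the aim and audience first"

print(classify_project("improve a process", "department staff"))
```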

Lessons Learned:

  • Define the aim and intended audience early in the process! This helps identify the stakeholders, level of review, and oversight needed during all stages of a project, including development, implementation, and dissemination of findings.

  • Broadly disseminate findings! Findings and recommendations from all disciplines are only useful when they are shared. At SCF, findings are shared at interdivisional committee meetings and with staff who oversee the work of departments. Multipronged dissemination ensures involvement from all levels of SCF and supports innovation and the spread of new knowledge.

  • Project review can be complicated! At SCF, research projects must be vetted through a tribal concept review phase, an Institutional Review Board review, and finally a tribal review of the proposal. Later, all research dissemination products (abstracts for presentation, manuscripts, and final reports) are also required to undergo a tribal research review process. These take time, so it is important to understand the processes and timelines and build review time into your project management timelines.

Check out these posts on understanding evaluation:

  1. Gisele Tchamba on Learning the Difference between Evaluation and Research
  2. John LaVelle on Describing Evaluation

The American Evaluation Association is celebrating Alaska Evaluation Network (AKEN) Affiliate Week. The contributions all this week to aea365 come from AKEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

My name is Holly Lewandowski. I am the owner of Evaluation for Change, Inc., a consulting firm that specializes in program evaluation, grant writing, and research for nonprofits, state agencies, and universities. I worked as an internal evaluator for nonprofits for ten years prior to starting my business four years ago.

There have been some major changes in the nonprofit world as a result of the economic downturn, within the last four years especially. I’ve witnessed nonprofits that were mainstays in the community shut their doors because the major funding source they relied on for years dried up. Funding has become scarcer and much more competitive. Funders are demanding that grantees demonstrate strong outcomes in order to qualify for funding. As a result, many of my clients are placing much greater emphasis on evaluating outcomes and impact, and less on evaluating program implementation, in order to compete. The problem is you can’t have one without the other. Strong programs produce strong outcomes.

Here are some tips and resources I use to encourage my clients to think evaluatively to strengthen their programs and thus produce quality outcomes.

Hot Tips:

  • Take time to think. As an outside evaluator, I am very aware of the stress program staff and leadership are under to keep their nonprofits running. I am also aware of the pressure on nonprofits to produce in order to keep their boards and funders happy. What gets lost, though, is time to think creatively and reflect on what’s going well and what needs to be improved. Therefore, I build time into my work plan to facilitate brainstorming and reflection sessions around program implementation. What we do in those sessions is described in the following tips.
  • Learn by doing. During these sessions, program staff learn how to develop evaluation questions and logic models.
  • Cultivate a culture of continuous improvement through data sharing. Also at these sessions, process evaluation data is shared and discussed. The discussions are centered on using data to reinforce what staff already knows about programs, celebrate successes, and identify areas for improvement.

Rad Resources:

  • The AEA Public eLibrary has a wealth of presentations and Coffee Break Demonstrations on evaluative thinking and building capacity in nonprofits.
  • If you are new to facilitating adults in learning about evaluation, check out some websites on Adult Learning Theory. About.com is a good place to start.

The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week with our colleagues in the CEA AEA Affiliate. The contributions all this week to aea365 come from our CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


· · · · ·
