Hello! We are Cara Karter and Anne Farrell from Chapin Hall at the University of Chicago. We are passionate about generating evidence to drive policy and practice changes that can improve the services and supports available to families.
The Family First Prevention Services Act (FFPSA) has significant implications for the evaluation of programs delivered to families involved in the child welfare system. FFPSA requires programs and services intended to support families and prevent foster care placements to meet one of three evidence thresholds as rated by the Title IV-E Prevention Services Clearinghouse (see figure below). These thresholds are based on an independent review of evaluations using rigorous randomized controlled trial or quasi-experimental designs (see, for example, this evaluation of the Youth Villages Intercept program). This requirement will likely have a significant impact on the services provided to youth and families involved in the child welfare system, with ripple effects on the development and evaluation of programs more generally. Evaluators may be able to take advantage of this opportunity to inform the processes and quality of interventions brought to scale.
Hot Tips:
1. Build internal measurement capacity for rigorous evaluation. Rigorous evaluation designs are necessary to meet these thresholds, but they are also likely to face challenges. Conducting a randomized controlled trial (RCT) requires careful screening and recruitment, procedures that can be taxing or require frontline practice shifts that are difficult to make and maintain. High-quality non-experimental designs (observational studies) need to account for the referral practices of frontline workers, supervisors, and regions, along with the availability of services in the community. One way to prepare systems and services for this level of controlled evaluation is to implement high-quality systems for measurement and continuous quality improvement.
2. Emphasize and invest in continuous quality improvement. Effective continuous quality improvement (CQI) systems resemble programmatic research and follow a plan-do-study-act cycle. To obtain a well-supported evidence rating from the Title IV-E Clearinghouse, a program must provide a CQI plan. This requirement supports an ecological understanding of implementation: it’s been said that a well-supported treatment implemented in a new place is merely a promising one.
3. Involve internal stakeholders in evaluation design. In reflecting on the implementation of well-supported interventions in new settings, we also need to be mindful that cultural, social, linguistic, regional, and policy and practice variations all have implications for effective delivery of services. Internal stakeholders are invaluable in the development of evaluation plans because of their context expertise. Collaboration between evaluators and practitioners has been associated with improved outcomes for service-users and enhanced translation of findings for improving direct service and agency-level practices.
The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week. The contributions all this week to aea365 come from CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.