My name is Susan Fojas. As the Associate Commissioner for Performance Measurement, Monitoring, and Improvement at New York City’s Administration for Children’s Services (ACS), my goal is to improve the quality of services and outcomes achieved for children in foster care through evaluation and monitoring.
Hot Tip: A valuable part of the evaluation process is a monthly forum hosted by the Council of Family and Child Caring Agencies, the advocacy organization to which most foster care providers belong. The forum supports productive discussion of evaluation data in the context of practice issues among provider agencies, ACS, and the New York State Office of Children and Family Services. Discussion focuses on sharing performance improvement strategies and strengthening the evaluation system.
Lesson Learned: Productive discussions occur when we can dig into the practice issues that inform what the evaluation data mean. This happens both when interpreting exploratory data during the development of a new measure and when reviewing the past year's results. For instance, in looking at our shared ability to place siblings together in foster homes, we explored patterns of sibling placements in a way that we hadn't before. Sharing system-wide data that individual providers cannot normally access gave them a perspective beyond their own agency's experience, showing how placements differed across providers. We were able to have an informed discussion of the practice that shapes the data (the placement process, the experience for siblings, and barriers to placing siblings together) and of how to develop a performance measure that acknowledges where practice is and where we want it to go. Providers could then develop individualized approaches to sibling placements to achieve the goals we set for the system. Often we also identify system-wide challenges that require the public and private sides to partner in creating change, which can benefit practice at both the provider and system-wide levels.
Lesson Learned: Of course, discussion also centers on limitations common to evaluation work. We deal with the limits of legacy data systems that were never designed to produce performance indicators. We encounter unique situations in complex foster care cases that fall outside what standardized evaluations can capture. And we find that even a system serving 15,000 children can run into small sample sizes when measuring specific areas of practice. In addressing these issues, we acknowledge the ones we cannot change and evaluate as reasonably as we can; in the best of worlds, we move past them and focus on discussions that strengthen practice. The communication can sometimes be tough, but we are better as a system for it. The evaluation becomes stronger, which gives us the information we need to improve and move forward as a group.
The American Evaluation Association is celebrating GOV TIG Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOV TIG members and you can learn more about their work via the Government TIG sessions at AEA’s annual conference. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.