My name is Manolya Tanyu and I am a researcher at American Institutes for Research (AIR), a behavioral and social science research organization supporting education, educational assessment, health, international development, and workforce training. My former organization, Learning Point Associates, recently merged with AIR.
Lessons Learned: In evaluating afterschool programs, primarily 21st Century Community Learning Centers, I have observed a growing interest in accountability and in demonstrating program impacts, largely as a result of No Child Left Behind (NCLB) and the federal concern with showing student progress based on program participation. Most funded evaluations focus on program outcomes and assume a linear relationship between program activities and those outcomes. In other words, if you consider a logic model, the activities provided are expected to lead directly to the intended program outcomes, so the most commonly assessed indicators are the long-term outcomes at the last step of the logic model (e.g., whether students’ test scores or grades improve as a result of participating in the afterschool program). The truth is that many state or local evaluations are not designed with this logic model in mind, are costly and time-consuming, and provide little useful information to the staff working directly with children. Little consideration is given to what happens in the process of trying to achieve the expected outcomes, and funded programs receive inadequate guidance on how to build the high-quality program practices needed to serve youth well.
Hot Tip: Monitoring is in fact an integral part of evaluation, one that focuses on using data to inform continuous program improvement and program outcomes, yet it often does not receive the same emphasis from funders. Even so, many excellent programs and states choose to put their limited money and effort into ongoing monitoring of program implementation and impacts rather than into costly outcome evaluations conducted before the program is well implemented and mature. As the term suggests, the focus is on improving the quality of afterschool staff practices and supports and on providing feedback to strengthen implementation.
Cool Trick: To visualize this, consider program development as a circular process informed by data rather than a linear progression toward the outcomes prescribed by the logic model. The evaluator’s role in this process is collaborative, developing and changing with the needs and growing strengths of the stakeholders in the setting as they become more skilled in using data.
Rad Resources:
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.
Harvard Family Research Project: http://www.hfrp.org/
The American Evaluation Association is celebrating Educational Evaluation Week with our colleagues in the PreK-12 Educational Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our EdEval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.