My name is Lisa Chauveron. I am the Director of Research & Evaluation at The Leadership Program, an urban organization that serves 18,000 youth, 500 teachers, and 6,000 parents annually in 250 underserved New York City schools. I oversee all internal program evaluations, coordinate with outside consultants, and lead external evaluations for other organizations. We offer evaluative support to 15 programs annually, ranging from single-site to multi-site, from 30 participants to 3,000, and from new idea to established model program, with the scope and target of each as varied as their stages of development and evaluation readiness.
Of course, this challenge is not unique; both internal and external evaluators face similar demands. Stakeholder expectations for evaluation are often in conflict with the realities of the program development process: program developers may want large multi-site evaluations that demonstrate effectiveness before they have clearly identified the program's goals and outcomes, while conversely, scaled-up programs sometimes hesitate to invest resources in evaluation designs that could demonstrate program effects.
Rad Resource: To give voice to multiple stakeholders and to show how evaluation can help programs move from an idea to a formal, boxed program that can be implemented at large scale with high fidelity, we created a tool called the Roadmap to Effectiveness (downloadable from the AEA public eLibrary by clicking on its title in this post). The Roadmap creates a strategic space for addressing the process, politics, and challenges of evaluating and developing multiple programs with myriad needs.
It identifies seven stages of program development and lays out an evaluation goal for each:
(1) Exploratory: program idea and creation phase
(2) Laboratory: experimentation with idea formulation and program intention
(3) Development: development of the program model and components
(4) Replication: testing by the developer, then by non-developers
(5) Maintaining Excellence: model finalization and transition to Scale-Up
(6) Scale-Up: program effectiveness assessed at scale
(7) Boxing It: development of the model into a product that off-site purchasers can administer
Each stage has specific benchmarks, criteria, and quantitative and qualitative development tools and methods, exposing practitioners to a range of options for providing feedback valuable to different stakeholders.
Radder Resource: Check out our roundtable at the AEA Annual Conference in November, where feedback, suggestions, and challenges are welcomed to help make the tool universally applicable.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.