AEA365 | A Tip-a-Day by and for Evaluators


My name is Lisa Chauveron. I am the Director of Research & Evaluation at The Leadership Program, an urban organization that serves 18,000 youth, 500 teachers, and 6,000 parents annually in 250 underserved New York City schools. I oversee all internal program evaluations, coordinate with outside consultants, and lead external evaluations for other organizations. We offer evaluative support to 15 programs annually, from multi- to single-site, 30 participants to 3,000, new idea to established model program, the scope and target of each as varied as their stages of development and evaluation readiness.

Of course this challenge is not unique: internal and external evaluators alike face similar demands. Stakeholder expectations for evaluation are often in conflict with the realities of the program development process. Program developers may want large multi-site evaluations that demonstrate effectiveness before they have clearly identified the program's goals and outcomes, while, conversely, scaled-up programs sometimes hesitate to invest resources in evaluation designs that could demonstrate program effects.

Rad Resource: To give voice to multiple stakeholders and explain how to use evaluation to assist programs in moving from an idea to a formal boxed program that can be implemented at a large scale with high fidelity, we created a tool called the Roadmap to Effectiveness (downloadable from the AEA public eLibrary, by clicking on its title in this post). The Roadmap creates a strategic space for addressing the process, politics, and challenges of evaluating and developing multiple programs with myriad needs.

It identifies seven stages of program development and lays out an evaluation goal for each:

1. Exploratory – program idea and creation phase
2. Laboratory – experimentation with idea formulation and program intention
3. Development – development of the program model and components
4. Replication – testing by the developer, and then by non-developers
5. Maintaining Excellence – model finalization and transition to Scale-Up
6. Scale-Up – program effectiveness assessed at scale
7. Boxing It – developing the model into a product that can be administered by off-site purchasers

Each stage has specific benchmarks, criteria, and quantitative and qualitative development tools and methods, exposing practitioners to a range of options for providing feedback valuable to different stakeholders.

Radder Resource: Check out our roundtable at the AEA Annual Conference in November, where feedback, suggestions, and challenges are welcomed to help make the tool universally applicable.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

Hello again! I’m Brian Yates, your Treasurer, Director of the Program Evaluation Research Laboratory (PERL) at American University in Washington, DC, and a Professor there too.

A few weeks ago I wrote an AEA365 post on “Doing Cost-Inclusive Evaluation. Part I: Measuring Costs.” Measuring the costs of programs is only part of cost-inclusive evaluation, though. This week we focus on measuring the monetary outcomes of programs. Here are lessons, tips, and resources for your next evaluation.

Lesson Learned – Costs are not outcomes: BENEFITS are outcomes. Many is the time that I have heard seasoned evaluators and administrators say “program cost” when they meant “program benefit.” What I call the “outcome bias” prompts many people to see only what comes out of programs (outcomes), and not what goes into them (resources, measured as costs). In cost-inclusive evaluations, “benefits” means “outcomes that are monetary, or that can be converted into monetary units, i.e., that are monetizable.”

Lesson Learned – Examples? Benefits that really “count” for many funders and consumers of human services include: a) increased income to clients (and taxes paid to society) resulting from client participation in a program, plus b) savings to society resulting from decreased client use of costly health and other services, like emergency room visits and hospital stays.

Hot Tip – Convert effectiveness to benefits with simple multiplication. If you assure confidentiality before approaching a program’s clients, they’ll often tell you what health and other services they used in the past few months. Sample clients before, during, and after program participation to assess the program’s impact on clients’ use of other services, and validate their reports by checking with those services. Next, transform these impacts into monetary units: multiply a client’s frequency of service use by the cost of each service (the health service provider’s average fee for that service, for instance). Then compare the costs of services used before and after the program, and you’ve measured a potential program benefit that speaks louder than other outcome measures: cost savings produced by the program!
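The multiplication in this tip can be sketched in a few lines of code. Here is a minimal Python illustration; the service names, unit costs, and before/after use counts are all hypothetical examples, not data from any real program:

```python
# Hypothetical average unit costs for each service, in dollars
# (e.g., provider's average fee per emergency room visit).
UNIT_COSTS = {"er_visit": 1500.0, "hospital_day": 2500.0, "clinic_visit": 120.0}

def monetize(service_use):
    """Multiply each service's use frequency by its unit cost and sum."""
    return sum(UNIT_COSTS[service] * count for service, count in service_use.items())

# One client's (hypothetical) service use in the months before vs. after the program.
before = {"er_visit": 3, "hospital_day": 2, "clinic_visit": 4}
after = {"er_visit": 1, "hospital_day": 0, "clinic_visit": 6}

# Benefit = reduction in the cost of other services used.
savings = monetize(before) - monetize(after)
print(f"Estimated benefit (cost savings): ${savings:,.2f}")
```

Note that a benefit can be negative for some services (here, clinic visits went up), which is often desirable: shifting clients from emergency care to routine clinic care can still yield net savings overall.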

Lesson Learned – Wow finding: programs often pay for themselves — several times over, and quickly! (Look for specifics on how to analyze these cost-benefit relationships in a future AEA365.)

Lesson Learned – Just ’cause it has a dollar sign in front of it doesn’t make the number “better.” Benefits (and costs) are no more valid than the data from which they’re derived. The “GIGO” (Garbage In –> Garbage Out) principle applies here: invalid benefit data can lead to big mistakes about program funding.

Resource: For examples of measuring benefits and combining them with costs for program funding recommendations, see: http://www.wsipp.wa.gov/auth.asp?authid=2

