Greetings, AEA365 readers! Liz DiLuzio here, Lead Curator of the blog. To whet our appetites for this year’s conference in beautiful New Orleans, this week’s posts feature the perspectives of Gulf Coast Eval Network (GCEval) members, showcasing the uniqueness of doing evaluation in the Gulf South. Happy reading!
Hello! We are Ronjanett Taylor from America Reads-Mississippi (ARM), Sondra Collins from Mississippi’s Institutions of Higher Learning, and Jason Torres Altman from the TerraLuna Collaborative. We are writing to share our experience considering “Public Value” for the first time when evaluating statewide program outcomes.
What do we mean by “Public Value” as it pertains to our evaluation? For that, we’ll borrow a figure developed by Scott Chazdon, University of Minnesota-Extension Services, published in the Journal of Human Sciences and Extension.
ARM is an AmeriCorps program promoting early elementary literacy outcomes. From program director Mrs. Taylor’s perspective, the concept of evaluation had already grown in ARM’s consciousness from a task to a tool. The team had already embedded evaluation in daily work and widened partnerships to more comprehensively answer questions about how to serve, whom to serve, and who benefits. Previous evaluation efforts had naturally focused on the impact of member service on the students, the members, and the schools in which they serve. Recently, however, it was capturing public purpose, especially the broader impacts at the center of the Venn diagram, that caused us to think (and act) a little differently.
Evaluating for public value can mean investigating things like spillover, leveraging, ripple effects, and ROI. We sought to speak to an audience that understands worth primarily in financial terms, and we chose an economic cost/benefit study for that reason. From an evaluation consultant’s perspective, this was especially interesting: though we want to influence hearts and minds beyond those directly affected by programming, the right combination of expertise, time, and resources to do so often proves elusive.
What You Need
Help!: Perhaps for the average evaluator (or maybe just for Jason), it can feel out-of-our-lane, challenging, and even daunting to start thinking in universal terms like economic indicators. Teaming with someone experienced in economics eases the fear of putting a dollar figure on a program’s contribution to a community, and helps you avoid leaving a tremendous volume of mistakes in your wake.
Hot Tips and Cool Tricks
Lucky for us, Dr. Collins, a trained and experienced professional in that area, was available to keep us on track. Ask around; there may be someone like her in your orbit. If not, here are some things we’ve learned from her:
- Read studies similar to yours to see what public data is available, what measurement methods were used, and how others deal with missing or incomplete costs or benefits.
- As in all evaluative efforts, define your treatment and goals. Make an expansive list of “indirect benefits,” and note which of them can be measured.
- Determine benefits and assign a dollar value. Find reliable research that has determined that value for you. For example, if a program influences graduation, find a paper that shows how much more a high school graduate earns than a non-graduate.
- Consider your costs. Less fun for programming teams, but your services do have some direct costs. It’s important to include them in the analysis.
- Try to measure benefits (and costs) as close to the treatment as possible. For our K-3 programming, measuring high school graduation rates was tempting, but graduation is far removed from the treatment. Instead, we focused on 4th-grade test scores, relying on peer-reviewed literature to connect test scores and graduation.
- A short video on thinking about public value.
- Scott Chazdon, University of Minnesota’s Extension Services, led early conversations related to Public Value. And fortunately for the rest of us, he left a bevy of resources for our investigation.
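To make the arithmetic behind the tips above concrete, here is a minimal cost/benefit sketch in Python. All of the numbers are hypothetical placeholders, not ARM’s actual data; in a real study, the per-student benefit would come from peer-reviewed research that monetizes the outcome (as the tips suggest), and the costs would be the program’s actual direct costs.

```python
# Hypothetical cost/benefit sketch -- illustrative numbers only, not ARM's data.

students_served = 500          # hypothetical program reach
benefit_per_student = 1200.0   # hypothetical $ value per student, drawn from research
direct_cost_total = 400_000.0  # hypothetical direct program costs

# Monetize benefits, then compare against costs.
total_benefit = students_served * benefit_per_student
net_benefit = total_benefit - direct_cost_total
bc_ratio = total_benefit / direct_cost_total

print(f"Total benefit:      ${total_benefit:,.0f}")   # $600,000
print(f"Net benefit:        ${net_benefit:,.0f}")     # $200,000
print(f"Benefit-cost ratio: {bc_ratio:.2f}")          # 1.50
```

A ratio above 1.0 indicates monetized benefits exceed direct costs; the hard part, as noted above, is defensibly sourcing the dollar value assigned to each benefit, which is where an economist’s guidance matters most.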
We’re looking forward to the Evaluation 2022 conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to contribute to AEA365? Review the contribution guidelines and send your draft post to AEA365@eval.org. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.