Hello again! I’m Brian Yates, your Treasurer, Director of the Program Evaluation Research Laboratory (PERL) at American University in Washington, DC, and a Professor there too.
A few weeks ago I wrote an AEA365 on “Doing Cost-Inclusive Evaluation. Part I: Measuring Costs.” Measuring the costs of programs is only part of cost-inclusive evaluation. This week we’re focusing on measuring the monetary outcomes of programs. Here are lessons, tips, and resources for your next evaluation.
Lesson Learned – Costs are not outcomes: BENEFITS are outcomes. Many is the time that I have heard seasoned evaluators and administrators say “program cost” when they meant “program benefit.” What I call the “outcome bias” prompts many people to see only what comes out of programs (outcomes), and not what goes into them (resources, measured as costs). In cost-inclusive evaluations, “benefits” mean “outcomes that are monetary, or that can be converted into monetary units, i.e., that are monetizable.”
Lesson Learned – Examples? Benefits that really “count” for many funders and consumers of human services include: a) increased income to clients (and taxes paid to society) resulting from client participation in a program, plus b) savings to society resulting from decreased client use of costly health and other services, like emergency room visits and hospital stays.
Hot Tip – Convert effectiveness to benefits with simple multiplication. If you assure confidentiality before approaching a program’s clients, they’ll often tell you what health and other services they used in the past few months. Sample clients before, during, and after program participation to assess impacts of the program on clients’ use of other services, and validate their reports with checks against those services’ records. Next, transform these impacts into monetary units: multiply a client’s frequency of service use by the cost of each service (the average of health service providers’ fees for that service, for instance). Then compare the costs of services used before and after the program, and you’ve measured a potential program benefit that speaks louder than other outcome measures: cost savings produced by the program! (See the sketch below for the arithmetic.)
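For readers who like to see the arithmetic spelled out, here is a minimal sketch of that multiply-and-compare step. The service names, unit costs, and client numbers are purely hypothetical placeholders; in a real evaluation you would substitute your own fee schedules and client-reported frequencies.

```python
# Minimal sketch (hypothetical service names, unit costs, and client data)
# of converting changes in clients' service use into estimated cost savings.

# Assumed average cost per service episode, e.g., from provider fee schedules.
UNIT_COST = {"emergency_room_visit": 1200.00, "hospital_day": 2500.00}

def service_cost(use_counts: dict[str, int]) -> float:
    """Total cost of the services a client reported using in one period."""
    return sum(UNIT_COST[service] * count for service, count in use_counts.items())

def client_savings(before: dict[str, int], after: dict[str, int]) -> float:
    """Cost of services used before the program minus cost used after."""
    return service_cost(before) - service_cost(after)

# Illustrative client: 3 ER visits and 2 hospital days before the program,
# 1 ER visit and no hospital days afterward.
before_use = {"emergency_room_visit": 3, "hospital_day": 2}
after_use = {"emergency_room_visit": 1, "hospital_day": 0}

print(f"Estimated savings for this client: ${client_savings(before_use, after_use):,.2f}")
# -> Estimated savings for this client: $7,400.00
```

Summing these per-client savings across your sample (and averaging per client) gives the kind of benefit estimate described above, ready to be set alongside program costs.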
Lesson Learned – Wow finding: programs often pay for themselves — several times over, and quickly! (Look for specifics on how to analyze these cost-benefit relationships in a future AEA365.)
Lesson Learned – Just ’cause it has a dollar sign in front of it doesn’t make the number “better.” Benefits (and costs) are no more valid than the data from which they’re derived. The “GIGO” (Garbage In –> Garbage Out) principle applies here: invalid benefit data can lead to big mistakes about program funding.
Resource: For examples of measuring benefits and combining them with costs for program funding recommendations, see: http://www.wsipp.wa.gov/auth.asp?authid=2
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.