AEA365 | A Tip-a-Day by and for Evaluators

Costs, Effectiveness, Benefits and Economics

Hi! I’m Brian Yates. This is the fourth piece in a series of aea365 posts on using costs in evaluation. I started using costs as well as outcomes in my program evaluations in the mid-1970s, when I joined the faculty of the Department of Psychology at American University in Washington, DC. Today I’m still including costs in my research and consultation on mental health, substance abuse, and consumer-operated services.

Three other 365ers focused on evaluating costs, benefits, and cost-benefit of programs; there’s even more to cost-inclusive evaluation!

Lesson Learned: What if important outcomes of a program are not monetary, and cannot be converted into monetary units? Easy answer: do a cost-effectiveness analysis or a cost-utility analysis!

Cost-effectiveness analysis (CEA) describes relationships between types, amounts, and values of resources consumed by a program and the outcomes of that program — with outcomes measured in their natural units. For example, the outcome of a prevention program for seasonal depression could be measured as days free of depression. Program costs could be contrasted to these outcomes by calculating “dollars per depression-free day” or “average hours of therapy A versus therapy B per depression-free day generated.”
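As an illustrative sketch (all figures invented), the cost-effectiveness ratio described above is simply total program cost divided by total outcome units in their natural form:

```python
# Cost-effectiveness ratio: dollars per depression-free day (illustrative numbers).
program_cost = 120_000.0        # total program cost, in dollars
depression_free_days = 8_500    # total depression-free days generated across all clients

cost_per_dfd = program_cost / depression_free_days
print(f"${cost_per_dfd:.2f} per depression-free day")
```

The same ratio can be built for any natural outcome unit (drug-free days, graduations, hours of therapy), which is what makes CEA usable when outcomes resist monetization.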

Hot Tip: How to compare apples and oranges. “But how can you compare costs of generating one outcome with costs of generating another? Cost per depression-free day versus cost per drug-free day?!” No problem: compare these “apples” and “oranges” by bumping the units up one notch of generality, to fruit. Diverse health program outcomes now are measured in common units of Quality-Adjusted Life Years (QALYs), with a year of living with depression worth substantially less than a year of living without depression. This and other forms of cost-utility analysis (CUA) are increasingly used for health services funding.
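A minimal cost-utility sketch, assuming hypothetical utility weights (real analyses use published, validated weights, not the numbers below):

```python
# Cost-utility sketch: convert outcome-years to QALYs using utility weights.
# The weights here are illustrative placeholders, not published values.
UTILITY = {"depression": 0.65, "no_depression": 1.0}

def qalys(years_with: float, years_without: float) -> float:
    """Quality-adjusted life years for time spent with and without depression."""
    return years_with * UTILITY["depression"] + years_without * UTILITY["no_depression"]

program_cost = 120_000.0
# Suppose the program converts 2 years lived with depression into 2 healthy years.
gained = qalys(0.0, 2.0) - qalys(2.0, 0.0)
print(f"${program_cost / gained:,.0f} per QALY gained")
```

Because every health outcome maps onto the same QALY scale, cost per QALY lets funders compare a depression program against, say, a diabetes program.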

Lesson Learned: It’s easy to dismiss the use of costs in evaluation with “…shows the price of everything and the value of nothing.” Actually, cost-inclusive evaluation encompasses the types and amounts of limited societal resources used to achieve outcomes measured in ways meaningful to funders and other stakeholders.

More? Yes! Lately I’ve gained better understanding of relationships between resources invested in programs and outcomes produced by programs when I work with stakeholders to also include information on program activities and clients’ biopsychosocial processes. More on that later.

Rad Resources:

Cost-effectiveness analysis (2nd edition) by Levin and McEwan.

Analyzing costs, procedures, processes, and outcomes in human services by Yates.

Want to learn more? Brian will be presenting a Professional Development workshop at Evaluation 2014 in Denver, CO. Click here for a complete listing of Professional Development workshops offered at Evaluation 2014. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


My name is Jose Diaz, and I am one of the economists in Wilder Research conducting social return on investment analysis.

How many times have you been approached by a program leader asking for an evaluation, only to realize after a few minutes of talking that the program does not have enough data for you to conduct it? You know that from this point on it’s a very slippery road: data collection takes time and is expensive, and the already small budget for evaluation that the program had just became tiny and insufficient to pay for the research. So, is all lost, or is there an alternative?

Our team faces scenarios like this one very often. In these cases we suggest to our clients a type of analysis we call prospective social return on investment. This type of study uses secondary data to reasonably argue that the program in question is achieving its outcomes; we can then compute economic benefits and costs from the effect sizes and other existing data. This is not a completely new idea; the Washington State Institute for Public Policy has pioneered the use of meta-analysis to conduct economic analyses.

Hot Tips: Here are some considerations when conducting prospective return on investment:

  • Defining the intervention’s logic model during the preliminary stages of the analysis significantly helps the process.
  • Not all outcomes can be monetized directly; but the absence of an outcome may itself impose costs on society that can be estimated.
  • We always need to keep in mind the alternative against which we are measuring the outcomes; that is, what would be the characteristics and conditions of the target population in the absence of the program? Usually, the target population in the studies found in the literature review is not the same as the target population served by the program we are evaluating, so we need to define and describe the population to which we can apply the effect sizes found.
  • Along the way we will need to make assumptions that may be crucial to the final results; being conservative and reasonable about these assumptions is always good practice.
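The considerations above can be sketched as a simple prospective calculation that applies a literature-derived effect size to the served population. Every number and label below is hypothetical, and a real study would also discount future benefits to present value:

```python
# Prospective SROI sketch: benefits inferred from a published effect size (illustrative).
effect_size = 0.12          # e.g., reduction in dropout probability, from the literature
population_served = 400     # clients to whom the effect size plausibly applies
value_per_outcome = 9_000.0 # assumed societal benefit per dropout averted, in dollars
program_cost = 250_000.0    # total program cost, in dollars

benefits = effect_size * population_served * value_per_outcome
sroi_ratio = benefits / program_cost
print(f"Estimated benefits: ${benefits:,.0f}; SROI ratio: {sroi_ratio:.2f}")
```

Each input line is one of the assumptions the bullet points warn about: the applicable population, the monetizable value, and the conservatism of the effect size.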

Lesson Learned: The idea sounds good in theory, but in practice the devil is in the details. Social return on investment (prospective or actual) is an exercise in persuasion as much as a research effort, so you need to be your own worst enemy. Be mindful of the meaning and scope of the estimations produced. If you aren’t, someone else will be.

The American Evaluation Association is celebrating with our colleagues from Wilder Research this week. Wilder is a leading research and evaluation firm based in St. Paul, MN, a twin city for AEA’s Annual Conference, Evaluation 2012.

Hello, my name is Kori Kanayama of Kaoru Kanayama Consulting.  I’m going to share an approach to assessing a nonprofit organization’s profitability and mission impact as a springboard to organizational sustainability. I presented this assessment methodology at the Spring AZENet conference.

This methodology can be done by nonprofit leaders, and by internal or external evaluators. It requires two kinds of data.

The first type of data is financial, and the task is to determine if each of the organization’s programs is making or losing money. The second type of data is mission impact data where a decision must be made on which indicators will be used to measure the impact of each program.  Examples of indicators that could be used to assess impact include: Alignment with Core Mission, Program Excellence, Community Building Value, and Leverage.

After determining whether each program (or line of business) is making or losing money and whether the impact of each program is high or low, the data can be displayed on a simple quadrant matrix.  Each program will end up in one (or could straddle two) of the four quadrants which show implications for the programs:

High Impact / Low Profitability (HEART): Avoid making the choice to either shut it down or raise more money. Keep it, but control its costs. Too many of these make an organization unsustainable.

High Impact / High Profitability (STAR): Seems to run itself, but resist the temptation to loosen oversight. Invest in these programs because this is where strategic growth opportunities are.

Low Impact / Low Profitability (STOP SIGN): Instead of holding onto this program, close it down or give it away. If making one last effort, do it with a budget and a deadline. Do an analysis on the impact of closing a business line, and pay attention to its effect on shared administrative costs.

Low Impact / High Profitability (MONEY TREE): Programs in this category need to be nurtured to increase their impact. Until impact is increased, resist expanding, adding, or growing the program.
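Assuming each program has already been rated on impact and its net income computed, the quadrant placement can be sketched as a simple lookup (program names and figures below are hypothetical):

```python
# Matrix map: (high_impact, profitable) -> quadrant label.
QUADRANT = {
    (True,  False): "HEART",
    (True,  True):  "STAR",
    (False, False): "STOP SIGN",
    (False, True):  "MONEY TREE",
}

def classify(net_income: float, high_impact: bool) -> str:
    """Place a program in the matrix from its net income and impact rating."""
    return QUADRANT[(high_impact, net_income > 0)]

print(classify(net_income=-15_000, high_impact=True))   # HEART
print(classify(net_income=20_000, high_impact=False))   # MONEY TREE
```

The hard work, of course, is upstream: allocating income and expenses accurately and agreeing on the impact indicators, as the Hot Tips below describe.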

Hot Tip: Allocate income and expenses as accurately as possible. Trying to prop up unprofitable programs defeats the purpose of the analysis.  The point is to clarify the subsidizing relationships between business lines.

Hot Tip: This analysis can be incorporated into strategic planning, implemented at a board or senior staff retreat, or used as a stand-alone operational planning tool. See Jan Masaoka’s Blue Avocado blog post on alternatives to strategic planning which highlights the need to look at program impact and financial stability together.

Rad Resources:

Source Document: Nonprofit Sustainability by Jeanne Bell, Jan Masaoka and Steve Zimmerman.

My presentation at the AZENet conference.

The American Evaluation Association is celebrating Arizona Evaluation Network (AZENet) Affiliate Week with our colleagues in the AZENet AEA Affiliate. The contributions all this week to aea365 come from our AZE members.

Greetings all! I am Agata Jose-Ivanina and I am a Senior Associate with ICF International, a research and evaluation firm headquartered near Washington, DC. As the budget debate has been heating up and educational agencies have to make hard choices, we have seen an increased demand to include a cost component as part of program evaluation. When trying to measure and analyze costs, evaluators should follow four steps:

In Step 1, you need to identify all of the possible costs of a program. Using the “ingredients method,” list out all of the possible cost categories of a program, as well as the individual costs within each category. For example, an evaluation of an educational technology program might include costs for personnel, hardware, software, training, troubleshooting, and maintenance.

Hot Tip: When identifying cost ingredients, it is a good idea to work together with the program staff who know the most about the program’s operations.

In Step 2, you develop a calculation for each of the individual costs identified in Step 1. For example, to estimate the value of the time that teachers spent in training, you would multiply the length of the training by the number of teachers by the hourly stipend that teachers receive for participating in professional development.

Hot tip: In this step, write out a formula for each calculation. Writing out all formulae will help you understand what data you need to collect and all the assumptions you may need to make.

In Step 3, you collect each of the individual pieces of data that is necessary for Step 2 calculations. Evaluators can’t rely solely on program budgets when collecting cost data.  Budgets often bundle costs of several programs together, and it may be hard to isolate the information that you need. Moreover, budgets may not include items that were not explicitly paid for, such as the cost of office space or time from salaried employees.

Hot Tip: Receipts, interviews, market research, and surveys — a researcher may need to employ all these strategies to get at the total cost of the program.

Step 4: Once you have your data, it is just a matter of plugging them into the calculations you developed in Step 2, and adding them up to get a total program cost.
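Steps 2 through 4 can be sketched as one explicit formula per ingredient, summed into a total. The ingredients and figures below are hypothetical, following the educational-technology example:

```python
# Ingredients method sketch: one formula per cost ingredient, then sum (illustrative data).
ingredients = {
    "teacher training": 6 * 30 * 25.0,   # hours x teachers x hourly stipend
    "hardware":         30 * 350.0,      # laptops x unit price
    "software":         30 * 40.0,       # licenses x price per seat
    "maintenance":      12 * 200.0,      # months x monthly support fee
}

total_cost = sum(ingredients.values())
for item, cost in ingredients.items():
    print(f"{item:18s} ${cost:>10,.2f}")
print(f"{'TOTAL':18s} ${total_cost:>10,.2f}")
```

Writing each formula out, as the Hot Tip suggests, makes plain exactly which data points (hours, headcounts, unit prices) still need to be collected in Step 3.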

Rad Resource: If you are new to cost-benefit or cost-effectiveness analysis, the book to start with is Levin and McEwan’s Cost-Effectiveness Analysis: Methods and Applications. Not only does it explain the concepts very well, it has a wonderful bibliography.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Agata? She’ll be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.

Hi, I’m Gary Huang, a Technical Director and Fellow at ICF Macro, Inc. in Calverton, Maryland. My colleagues, Sophia Zanakos, Erika Gordon, Gary McQuown, Rich Mantovani, and I are presenting at AEA’s upcoming conference on improper payment (IP) studies. We conduct research and evaluation relating to benefit eligibility and payment errors under the rubric of IP. This kind of research, required by law (IPERA 2010, formerly IPIA 2002), is becoming increasingly important for improving government accountability and financial integrity.

Lessons Learned: To define benefit eligibility error and to make decisions on the data sources and methods to use to generate IP estimates, we must prioritize stakeholders’ different interests. This includes meeting the technical and statistical rigor required by the Office of Management and Budget (OMB), understanding the intricacies of program concerns at federal agencies, dealing with reluctance to cooperate among local agencies, and facing the logistical challenges of surveying program participants. Two types of data sources are used in IP studies: program administrative records and survey data.

Hot Tip: A comprehensive IP study of the assisted-housing programs at HUD involves a stratified sample survey and administrative data collection to generate nationally representative estimates of 1) the extent of erroneous rental determinations, 2) the extent of billing error associated with the owner-administered program, and 3) the extent of error associated with tenant underreporting of income. The extensive data collection effort requires coordination and data quality control to ensure data accuracy in tenant file abstraction, in-person CAPI interviewing, third party information, and data matching with Social Security and National Directory of New Hires databases.

Hot Tip: Some agencies conduct nationally representative surveys of individuals served and of entities paid for providing services. In some cases, these surveys bear close similarities to audits, and may be overt or covert, with the data collector posing as a customer. The Food and Nutrition Service (FNS) is increasingly emphasizing the use of administrative data to update estimates obtained from surveys. However, the administrative data are usually biased, and therefore must be adjusted. Statistical modeling for updating improper payment estimates seems a feasible and efficient alternative in IP studies.

Hot Tip: For the Centers for Medicare & Medicaid Services (CMS) to identify probable fraudulent claims and the resulting improper payments to health care providers, computer programs were developed to examine four years of Medicaid administrative claims data for all US states and territories, applying a variety of algorithms and statistical processes. Both individual health care providers and related institutions were reviewed. For such large administrative data analyses, evaluators struggle to understand various issues from technical, managerial, and political perspectives.

Rad Resources: Check OMB’s implementing guidance to all federal agencies (http://fedcfo.blogspot.com/search/label/IPIA) on IP measurement and policy and technical requirements for IP studies.

Want to learn more from Gary? Gary and his colleagues will be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.

My name is Stan Capela. I am the Corporate Compliance Officer for HeartShare Human Services, as well as the current chair of the American Evaluation Association’s Government Topical Interest Group (GOV TIG). The purpose of this aea365 post is to talk about corporate compliance and its relationship to program evaluation. It is also an opportunity to highlight significant issues relevant to the Government TIG. This is the first post in a week-long conversation on evaluation and government.

Lessons Learned: Several years ago, New York State created the Office of the Medicaid Inspector General (OMIG) to reduce fraud in Medicaid-funded programs. The statute focused on the need for all Medicaid-funded programs to establish eight anti-fraud elements. All organizations must have:

1. a corporate compliance policy;
2. corporate compliance program oversight;
3. education and training;
4. effective confidential communication;
5. enforcement of compliance standards;
6. auditing and monitoring of compliance activities;
7. detection and response; and
8. whistleblower provisions and protections.

The key is to ensure that systems are in place to provide ongoing monitoring of programs, educate staff on the code of conduct, ensure appropriate governance, and encourage staff to be cognizant of fraudulent activities and to report them.

Although internal program evaluators conduct ongoing evaluations, the corporate compliance role is one where there is a greater emphasis placed on orchestrating all evaluation activities in a way that reduces fraud, as well as risk to the organization. Further, there also is an emphasis on making sure the corporate compliance officer reports to the governing board and the CEO and President.

Lessons Learned: New York State has placed a great deal of emphasis on OMIG. The agency offers a wide range of webinars and tools. One very useful tool is a checklist for assessing organizations’ corporate compliance plans. It is available through OMIG compliance alert notes at its website: www.omig.ny.gov. As a result of these changes, organizations will increasingly expect individuals with program evaluation responsibilities to take on these tasks as part of their normal workload. In addition, this role reinforces the importance of ethics as part of the evaluator’s responsibilities, since one task focuses on ensuring appropriate ethical conduct throughout the organization.

Hot Tip: Finally, as GOV TIG Chair, I encourage you to attend our business meeting at the AEA annual conference on Thursday November 3rd at 8 am in Huntington B, where you will be inspired by David Bernstein who will reflect on methods to make evaluations more useful and long lasting for research sponsors and stakeholders. If you want to learn more about the TIG or want to play a more active role, contact me at stan.capela@heartshare.org.

The American Evaluation Association is celebrating GOV TIG Week with our colleagues in the Government Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our GOV TIG members and you can learn more about their work via the Government TIG sessions at AEA’s annual conference.

I am David Erickson, the Manager for the Center for Community Development Investments at the Federal Reserve Bank of San Francisco.

In the community development finance and socially-motivated investing worlds, there is universal agreement on the need for better social outcome measurement, but no consensus on how to do it. And yet the pressure to innovate in this area is coming from many sources: government, consumers, investors, and others.

Lessons Learned: A growing consciousness among consumers and investors about social and environmental issues is already changing the types of products and services that are available in the marketplace. Government, too, is seeking to change the ways it does business by providing more resources to programs that are proven to work and by directing funds away from programs that don’t. Xavier De Souza Briggs, Deputy Director of the Office of Management and Budget, captured this idea at a recent Federal Reserve conference where he explained that leaders in government are trying to change “the DNA of the federal government” so that it can take more risks and reward investments that yield better social outcomes. That change – both in the market and for government – requires better data on social impact.

If it is possible to use investments, by the government and socially-motivated investors, to improve society, the question is: how do you know they are succeeding? How do you do this when investments cover a wide range of issues? How do we agree on what constitutes impact, and what tools can be developed to track it? What role can government play in enabling standard setting and measurement?

Hot Tip: The Board of Governors of the Federal Reserve System and the Federal Reserve Bank of San Francisco are hosting a meeting titled “Advancing Social Impact Investments through Measurement: New Capital for Community Development” to tackle those questions on March 21, 2011 in Washington, DC.

Resource: Our journal, the Community Development Investment Review, recently devoted the full issue to measuring social impact, including articles exploring social metrics and investing, measuring nonfinancial performance, and the role of the federal government in measuring and defining social impact in the impact investing field. All articles are available for free download.


Hello again! I’m Brian Yates, your Treasurer, Director of the Program Evaluation Research Laboratory (PERL) at American University in Washington, DC, and a Professor there too.

A few weeks ago I wrote an aea365 post on “Doing Cost-Inclusive Evaluation. Part I: Measuring Costs.” Measuring the costs of programs is only part of cost-inclusive evaluation. This week we’re focusing on measuring the monetary outcomes of programs. Here are lessons, tips, and resources for your next evaluation.

Lesson Learned – Costs are not outcomes: BENEFITS are outcomes. Many is the time that I have heard seasoned evaluators and administrators say “program cost” when they meant “program benefit.” What I call the “outcome bias” prompts many people to see only what comes out of programs (outcomes), and not what goes into them (resources, measured as costs). In cost-inclusive evaluations, “benefits” mean “outcomes that are monetary, or that can be converted into monetary units, i.e., that are monetizable.”

Lesson Learned – Examples? Benefits that really “count” for many funders and consumers of human services include: a) increased income to clients (and taxes paid to society) resulting from client participation in a program, plus b) savings to society resulting from decreased client use of costly health and other services, like emergency room visits and hospital stays.

Hot Tip – Convert effectiveness to benefits with simple multiplication. If you assure confidentiality before approaching a program’s clients, they’ll often tell you what health and other services they used in the past few months. Sample clients before, during, and after program participation to assess the program’s impact on clients’ use of other services. Validate by checking with those other services. Next, transform these impacts into monetary units: multiply a client’s frequency of service use by the cost of each service (the average of health service providers’ fees for that service, for instance). Then compare the costs of services used before and after the program, and you’ve measured a potential program benefit that speaks louder than other outcome measures: cost savings produced by the program!
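The multiplication described above can be sketched like this, with hypothetical service frequencies and average fees:

```python
# Benefit sketch: monetize changes in a client's service use before vs. after a program.
# Unit costs below are illustrative average fees, not real figures.
UNIT_COST = {"er_visit": 1_200.0, "hospital_day": 2_500.0}

def service_cost(use: dict) -> float:
    """Total cost of a client's service use: frequency x average unit cost."""
    return sum(UNIT_COST[service] * n for service, n in use.items())

before = {"er_visit": 4, "hospital_day": 3}   # per client, 6 months before the program
after  = {"er_visit": 1, "hospital_day": 0}   # per client, 6 months after the program
savings_per_client = service_cost(before) - service_cost(after)
print(f"Estimated cost savings per client: ${savings_per_client:,.0f}")
```

Summing such per-client savings across the caseload yields the program-level benefit that can later be compared with program costs.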

Lesson Learned – Wow finding: programs often pay for themselves — several times over, and quickly! (Look for specifics on how to analyze these cost-benefit relationships in a future AEA365.)

Lesson Learned – Just ’cause it has a dollar sign in front of it doesn’t make the number “better.” Benefits (and costs) are no more valid than the data from which they’re derived. The “GIGO” (Garbage In –> Garbage Out) principle works here: invalid benefit data can lead to big mistakes about program funding.

Resource: For examples of measuring benefits and combining them with costs for program funding recommendations, see: http://www.wsipp.wa.gov/auth.asp?authid=2


Hi! I’m Brian Yates, Professor in the Department of Psychology, and Director of the Program Evaluation Research Laboratory (PERL), at American University in Washington, DC. I’ve also been the AEA Treasurer for the past 3 years, and am looking forward to serving for 3 more.

I’ve included cost as well as outcome measures in my quantitative and qualitative evaluations since the mid-1970s.

Lesson Learned – 1) Costs are not money. Money’s just a way to get access to the resources that make programs work. What matters for programs, and what I measure when I’m evaluating costs, are people’s time (clients’ as well as staff’s), space used, and transportation (often of clients to and from programs) … and not just total time spent working in the program, but the amount of time spent in the different activities that, together, are the program.

Hot Tip: When asking stakeholders about program costs, I make a table listing the major activities of the program (therapy, groups, education, for example) in columns and the major resources used by the program (staff and client time, office space, transportation, for example) in rows. Different stakeholders put the amount of each resource that they use in each activity, and then compare others’ entries with their own. Insights into program operations often ensue!
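A minimal sketch of such a resource-by-activity table, with hypothetical activities, resources, and one stakeholder’s entries:

```python
# Resource-by-activity cost table: resources in rows, program activities in columns.
# All activities, resources, and amounts below are hypothetical.
activities = ["therapy", "groups", "education"]
resources  = ["staff time (hrs)", "client time (hrs)", "space (sq-ft-months)"]

entries = {  # one stakeholder's estimates of resource use per activity
    "staff time (hrs)":     {"therapy": 120, "groups": 40,  "education": 20},
    "client time (hrs)":    {"therapy": 120, "groups": 160, "education": 60},
    "space (sq-ft-months)": {"therapy": 300, "groups": 300, "education": 100},
}

header = f"{'resource':22s}" + "".join(f"{a:>12s}" for a in activities)
print(header)
for r in resources:
    print(f"{r:22s}" + "".join(f"{entries[r][a]:>12d}" for a in activities))
```

Collecting one such table per stakeholder and comparing them, as Brian suggests, surfaces disagreements about where resources actually go.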

Lesson Learned – 2) The most valuable resources may not have a price. Many programs rely on volunteered time and donated space and materials: these often don’t come with a monetary price attached. One can assign a monetary value to these resources according to what the same time from the same person would be paid in a job, but the most important thing to measure is the amount of time, the capabilities of the person, and ways they spent their time.

Lesson Learned – 3) When measured only as money, cost findings are instantly obsolete and do not aid replication. Inflation can quickly make specific monetary values for program costs out of date and, all too soon, laughably low. Translating 1980 dollars into 2011 dollars is possible, but still does not inform planners as to what specific resources are needed to replicate a program in another setting.

Lesson Learned – 4) When presenting costs, keep resources in their original units. Yes, time is money … but it comes in units of hours to begin with. Report both, and your audience will learn not just price but what it takes to make the program happen.

Rad Resource: Here’s a free online, downloadable manual I wrote on formative evaluation of not only cost, but also cost-effectiveness and cost-benefit … and not just for substance abuse treatment! http://archives.drugabuse.gov/impcost/IMPCOSTIndex.html


My name is Steve Kymes and I am the Director of the Center for Economic Evaluation in Medicine (CEEM) at Washington University in St. Louis. At CEEM, we assist investigators conducting clinical or community-based research to design, implement, and conduct cost-benefit and cost-effectiveness studies. While it may often seem that conducting valid and credible economic evaluation is simply a matter of basic arithmetic, it always requires thoughtful planning and is best done with the benefit of a multidisciplinary team.

Hot Tip #1: Make sure you know why you are doing the economic evaluation. In the language of economic evaluation, we say that this is considering the “perspective of the analysis.” More pragmatically, you should always remember that the purpose of the program evaluation (and thus its economic component) is to influence one or more policy makers or stakeholders. That being the case, you need to be aware of what is important to that audience. Everything in the analysis (the definition of cost, how cost will be measured, the measures of effectiveness or benefit, and the methods of estimation used) is determined by the perspective of the decision maker. So this should be the very first question you answer before you do anything else.

Hot Tip #2: As with so much else in program evaluation, your work will be much easier and the results will have increased validity if you involve people from the organization being evaluated. However, it is even more critical in this area of evaluation than with others. Assessment of cost data often takes the evaluator out of his/her area of expertise, and may also be beyond the expertise of the primary contacts at the organization as well. Therefore, it is essential to make early contact with the accounting and operations staff of the organization to learn how resources are tracked, costing methods used, and how data can be extracted.

Hot Tip #3: Experts in economic evaluation go by a number of descriptions: some are economists, some are outcomes researchers, and often they have an MBA and a strong background in accounting or finance. Whoever they are, you will have a stronger result and avoid heading down many blind alleys if you involve an expert in your evaluation team from the very beginning. They will help you design the most efficient study, assist in interacting with the organization’s accounting team, and help you communicate your results to policy makers.
