AEA365 | A Tip-a-Day by and for Evaluators

TAG | costs

My name is Ellen Steiner, Director of Market Research and Evaluation at Energy Market Innovations, a research-based consultancy focused on strategic program design and evaluation for the energy efficiency industry – we work to create an energy future that is sustainable for coming generations.

Lessons Learned:

An increasingly common practice…

In energy efficiency program evaluations, telephone surveys have traditionally been the mode of choice. However, there are many reasons that evaluators are increasingly interested in pursuing online surveys, including the potential for:

(1) lower costs,

(2) increased sample sizes,

(3) more rapid deployment, and

(4) enhanced respondent convenience.

With online surveys, fielding costs are often lower and larger sample sizes can be reached cost-effectively. Larger sample sizes result in greater accuracy and can support increased segmentation of the sample. Online surveys also take less time to field and can be completed at the respondent’s convenience.
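To make the accuracy gain concrete, here is a minimal Python sketch of the usual margin-of-error calculation for a proportion; the sample sizes and the 50/50 proportion are illustrative assumptions, not figures from any particular survey:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes only (not figures from any particular study).
for n in (70, 400, 1500):
    print(f"n = {n:5d}  ->  margin of error = +/-{margin_of_error(n):.1%}")
```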

Yet be aware…

In contrast, there are still many concerns regarding the validity and reliability of online surveys. Potential disadvantages of online surveys include:

(1) respondent bias,

(2) response rate issues,

(3) normative effects, and

(4) cognitive effects.

Certain populations are less likely to have Internet access or to respond to an Internet survey, which poses a threat to generalizability. Although past research indicates that online response rates are often equal to or slightly higher than those of traditional modes, Internet users are increasingly exposed to online survey solicitations, so researchers must employ creative and effective strategies for garnering participation. In addition, there are normative and cognitive challenges related to not having a trained interviewer present to clarify and probe, which may lead to less reliable data.

Come talk with us at AEA!

My colleague Jess Chandler and I will be presenting a session at the AEA conference titled “Using Online Surveys and Telephone Surveys for a Commercial Energy Efficiency Program Evaluation: A Mode Effects Experiment,” in which we will discuss the findings from a recent study we conducted comparing online and telephone surveys. We hope you can join us and share your experiences with online surveys!

Hot Tips:

  • Email Address Availability – In our experience, if you do not have email addresses for the majority of the population from which you want to sample, the cost benefits of an internet sample are cancelled out by the time spent seeking out or trying to purchase email addresses.
  • Mode Effects Pilot Studies – Where possible, a best practice is to conduct a pilot study using a randomized controlled design in which two or more samples are drawn from the same population and each sample receives the survey in a different mode; this reveals the potential limitations of an online survey specific to the population under study (see the sketch below).
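For instance, the random-assignment step of such a pilot might look like the minimal Python sketch below; the customer IDs and the two mode labels are hypothetical placeholders, not details of our study:

```python
import random

def assign_modes(sample_frame, modes=("online", "telephone"), seed=42):
    """Randomly assign each sampled member to one survey mode so the mode
    groups are comparable draws from the same population."""
    rng = random.Random(seed)
    frame = list(sample_frame)
    rng.shuffle(frame)
    # Deal shuffled members round-robin into the modes.
    return {mode: frame[i::len(modes)] for i, mode in enumerate(modes)}

# Hypothetical participant IDs for illustration only.
groups = assign_modes([f"customer_{i:03d}" for i in range(200)])
print({mode: len(members) for mode, members in groups.items()})
```

Fielding the same instrument in each group then lets differences in responses and response rates across modes be compared directly.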

The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · · ·

AEA365 began on January 1, 2010. Before we promoted this resource, we reached out to dedicated authors who believed in the project in order to populate the site with starter content. Those who contributed in week 1 wrote for an audience of fewer than 10. One year later we have over 1500 subscribers and are re-posting the contributions from those trailblazers in order to ensure that they receive the readership they deserve. John was kind enough to update his for 2011!

My name is John LaVelle, and I am an advanced graduate student at Claremont Graduate University. An interest (and need) of mine is time management, since it’s so easy to let the day slip away. I’ll be sharing two resources on more effective time management.

Rad Resource: When you’re getting started in evaluation, it’s easy to forget how much time you spend on your project, and that information can be very valuable later on for when you’re estimating costs for future projects. So, I use a free program for the Mac called MyMacTime to help me keep track of my time investments. MyMacTime is available at: http://www.apple.com/downloads/macosx/productivity_tools/mymactime.html

Rad Resource: It’s sometimes easy to get wrapped up in a project and forget about the time. I use a free dashboard app for Apple called Prod Me, which plays chimes or other sounds at regular intervals to help me be more mindful of the time. ProdMe is available at: http://www.apple.com/downloads/dashboard/status/prodme.html


· ·

Hello again! I’m Brian Yates, your Treasurer, Director of the Program Evaluation Research Laboratory (PERL) at American University in Washington, DC, and a Professor there too.

A few weeks ago I wrote an AEA365 on “Doing Cost-Inclusive Evaluation. Part I: Measuring Costs.” Measuring the costs of programs is only part of cost-inclusive evaluation. This week we focus on measuring the monetary outcomes of programs. Here are lessons, tips, and resources for your next evaluation.

Lesson Learned – Costs are not outcomes: BENEFITS are outcomes. Many is the time that I have heard seasoned evaluators and administrators say “program cost” when they meant “program benefit.” What I call the “outcome bias” prompts many people to see only what comes out of programs (outcomes), and not what goes into them (resources, measured as costs). In cost-inclusive evaluations, “benefits” mean “outcomes that are monetary, or that can be converted into monetary units, i.e., that are monetizable.”

Lesson Learned – Examples? Benefits that really “count” for many funders and consumers of human services include: a) increased income to clients (and taxes paid to society) resulting from client participation in a program, plus b) savings to society resulting from decreased client use of costly health and other services, like emergency room visits and hospital stays.

Hot Tip – Convert effectiveness to benefits with simple multiplication. If you assure confidentiality before approaching a program’s clients, they’ll often tell you what health and other services they used in the past few months. Sample clients before, during, and after program participation to assess impacts of the program on clients’ use of other services, and validate these reports with checks against those other services’ records. Next, transform these impacts into monetary units: multiply a client’s frequency of service use by the cost of each service (the average of health service providers’ fees for that service, for instance). Then, compare the costs of services used before and after the program, and you’ve measured a potential program benefit that speaks louder than other outcome measures: cost savings produced by the program!
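To illustrate that multiplication, here is a minimal Python sketch; the service names, use frequencies, and unit costs are hypothetical placeholders rather than data from any evaluation:

```python
# Hypothetical unit costs and per-client service use; placeholders only, not evaluation data.
unit_costs = {"emergency_room": 800.0, "hospital_day": 1900.0}

use_before = {"emergency_room": 3, "hospital_day": 2}  # visits/days before program participation
use_after = {"emergency_room": 1, "hospital_day": 0}   # visits/days after program participation

def cost_of_services(use, unit_costs):
    """Monetize service use: frequency of each service times its unit cost."""
    return sum(freq * unit_costs[service] for service, freq in use.items())

savings = cost_of_services(use_before, unit_costs) - cost_of_services(use_after, unit_costs)
print(f"Potential benefit (cost savings) for this client: ${savings:,.0f}")
```

Summed across the sampled clients, those per-client savings become the program-level benefit estimate.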

Lesson Learned – Wow finding: programs often pay for themselves — several times over, and quickly! (Look for specifics on how to analyze these cost-benefit relationships in a future AEA365.)

Lesson Learned – Just ’cause it has a dollar sign in front of it doesn’t make the number “better.” Benefits (and costs) are no more valid than the data from which they’re derived. The “GIGO” (Garbage In –> Garbage Out) principle works here: invalid benefit data can lead to big mistakes about program funding.

Resource: For examples of measuring benefits and combining them with costs for program funding recommendations, see: http://www.wsipp.wa.gov/auth.asp?authid=2


· · · ·

Hi! I’m Brian Yates, Professor in the Department of Psychology, and Director of the Program Evaluation Research Laboratory (PERL), at American University in Washington, DC. I’ve also been the AEA Treasurer for the past 3 years, and am looking forward to serving for 3 more.

I’ve included cost as well as outcome measures in my quantitative and qualitative evaluations since the mid-1970s.

Lesson Learned – 1) Costs are not money. Money’s just a way to get access to the resources that make programs work. What matters for programs, and what I measure when I’m evaluating costs, are people’s time (clients’ as well as staff’s), space used, and transportation (often of clients to and from programs) … and not just total time spent working in the program, but the amount of time spent in the different activities that, together, are the program.

Hot Tip: When asking stakeholders about program costs, I make a table listing the major activities of the program (therapy, groups, education, for example) in columns and the major resources used by the program (staff and client time, office space, transportation, for example) in rows. Different stakeholders put the amount of each resource that they use in each activity, and then compare others’ entries with their own. Insights into program operations often ensue!
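Here is a minimal Python sketch of such a resource-by-activity table; the activities, resources, and hour amounts are hypothetical placeholders:

```python
# Rows are resources, columns are program activities; each cell holds the amount of a
# resource (hours here) that one stakeholder reports devoting to that activity.
activities = ["therapy", "groups", "education"]
resources = ["staff time", "client time", "office space", "transportation"]

# One stakeholder's hypothetical entries; in practice each stakeholder completes their own copy.
entries = {
    ("staff time", "therapy"): 12, ("staff time", "groups"): 6, ("staff time", "education"): 4,
    ("client time", "therapy"): 12, ("client time", "groups"): 18, ("client time", "education"): 8,
}

print(f"{'resource':<16}" + "".join(f"{a:>12}" for a in activities))
for r in resources:
    print(f"{r:<16}" + "".join(f"{entries.get((r, a), 0):>12}" for a in activities))
```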

Lesson Learned – 2) The most valuable resources may not have a price. Many programs rely on volunteered time and donated space and materials: these often don’t come with a monetary price attached. One can assign a monetary value to these resources according to what the same time from the same person would be paid in a job, but the most important things to measure are the amount of time, the capabilities of the person, and the ways that time was spent.

Lesson Learned – 3) When measured only as money, cost findings are instantly obsolete and do not aid replication. Inflation can quickly make specific monetary values for program costs out of date and, all too soon, laughably low. Translating 1980 dollars into 2011 dollars is possible, but still does not inform planners as to what specific resources are needed to replicate a program in another setting.
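The dollar translation itself is just a ratio of price indices; here is a minimal Python sketch, with index values that are approximate placeholders to be verified against published CPI figures:

```python
def adjust_for_inflation(amount, index_base_year, index_target_year):
    """Convert a cost from base-year dollars to target-year dollars using a price index ratio."""
    return amount * (index_target_year / index_base_year)

# Placeholder figures for illustration; substitute published CPI values before relying on this.
cost_in_1980_dollars = 50_000.0
cpi_1980, cpi_2011 = 82.4, 224.9  # approximate CPI-U annual averages; verify against BLS data
print(f"Roughly ${adjust_for_inflation(cost_in_1980_dollars, cpi_1980, cpi_2011):,.0f} in 2011 dollars")
```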

Lesson Learned – 4) When presenting costs, keep resources in their original units. Yes, time is money … but it comes in units of hours to begin with. Report both, and your audience will learn not just price but what it takes to make the program happen.

Rad Resource: Here’s a free on-line and down-loadable manual I wrote on formative evaluation of not only cost, but also cost-effectiveness and cost-benefit … and not just for substance abuse treatment! http://archives.drugabuse.gov/impcost/IMPCOSTIndex.html


· · · ·
