AEA365 | A Tip-a-Day by and for Evaluators

TAG | measuring outcomes

Hi! I’m Fran Butterfoss, President of Coalitions Work, a group that helps coalitions build and sustain community change efforts to promote health and prevent disease. Coalitions are powerful vehicles for building the skills of professionals and volunteers, thereby empowering them to advocate and act on behalf of priority populations within their communities. Since training and technical assistance on evaluation are requested more than any other topic, I’ll share some hot tips to jumpstart your coalition’s efforts to evaluate itself and its initiatives.

Hot Tips to Improve Coalition Evaluation 

  1. Let questions about your coalition and its strategies drive evaluation. Any evaluation should balance measures of how the coalition does its work with evidence that its strategies work. List questions that you have about your coalition, then collect data to answer them.
  2. Enlist partners’ help to build buy-in and cooperation. Evaluations that successfully engage community members are more likely to develop relevant evaluation methods and tools and to gain community credibility and participation in data collection efforts. For example, have members create short, frequent surveys that reduce respondent burden and maximize participation.
  3. Use innovative, qualitative evaluation methods. Traditional evaluation methods do not always capture the dynamic nature and outcomes of coalitions. As coalition strategies become more complex and concentrate less on individual behavior change, use multifaceted approaches across multiple levels that take community readiness into account. Relying more on qualitative methods that better represent the community and show how coalitions make a difference is a good start.
  4. Focus on practice-proven strategies and measurable outcomes. Coalitions are best suited to assessment and priority-setting, rather than to implementing projects. Concentrate on relevant health/social outcomes, as well as on how partnerships build capacity by improving outcomes related to participation, member diversity, leadership, networks, skills, and resources. Coalition sustainability may be evaluated by tracking outcomes such as community buy-in, infrastructure improvements, resource diversity, educational opportunities, and policy changes.
  5. Provide training and technical assistance. Appropriate training, technical assistance and resources for conducting effective evaluations should be made available, so coalitions can translate evaluation results into actionable tasks.
  6. Begin where you are. Most coalitions view evaluation as a formidable task. You may feel overwhelmed by technical tasks, time/financial costs, and concerns that you might fail. Start small and evaluate one aspect of your coalition from each of three levels (short-, intermediate-, and long-term) each year. Use and adapt others’ tools. Take advantage of existing data that can be evaluated at little or no cost. As examples, member diversity can be determined by assessing the roster, and attendance patterns can be derived from meeting minutes (see the sketch below). As confidence and skills grow, engage in new and more complex evaluation tasks.
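
To illustrate that last tip, here is a minimal sketch of mining such existing records with a short script. The file names, CSV layout, and column names are all assumptions made for illustration; adapt them to however your coalition actually stores its roster and meeting records.

```python
# A minimal sketch of low-cost evaluation from existing records.
# Assumed (hypothetical) inputs: roster.csv with a "sector" column, and
# attendance.csv with "meeting_date" and "member" columns from minutes.
import csv
from collections import Counter, defaultdict

def sector_diversity(roster_path):
    """Count members per sector to gauge the diversity of the roster."""
    with open(roster_path, newline="") as f:
        return Counter(row["sector"] for row in csv.DictReader(f))

def attendance_rates(log_path):
    """Share of meetings each member attended, from minutes-derived sign-ins."""
    meetings, attended = set(), defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            meetings.add(row["meeting_date"])
            attended[row["member"]].add(row["meeting_date"])
    return {member: len(dates) / len(meetings) for member, dates in attended.items()}

print(sector_diversity("roster.csv"))
print(attendance_rates("attendance.csv"))
```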

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

·

Hello again! I’m Brian Yates, your Treasurer, Director of the Program Evaluation Research Laboratory (PERL) at American University in Washington, DC, and a Professor there too.

A few weeks ago I wrote an AEA365 on “Doing Cost-Inclusive Evaluation. Part I: Measuring Costs.” Measuring the costs of programs is only part of cost-inclusive evaluation. This week we’re focusing on measuring the monetary outcomes of programs. Here are lessons, tips, and resources for your next evaluation.

Lesson Learned – Costs are not outcomes: BENEFITS are outcomes. Many is the time that I have heard seasoned evaluators and administrators say “program cost” when they meant “program benefit.” What I call the “outcome bias” prompts many people to see only what comes out of programs (outcomes), and not what goes into them (resources, measured as costs). In cost-inclusive evaluations, “benefits” means “outcomes that are monetary, or that can be converted into monetary units, i.e., that are monetizable.”

Lesson Learned – Examples? Benefits that really “count” for many funders and consumers of human services include: a) increased income to clients (and taxes paid to society) resulting from client participation in a program, plus b) savings to society resulting from decreased client use of costly health and other services, like emergency room visits and hospital stays.

Hot Tip – Convert effectiveness to benefits with simple multiplication. If you assure confidentiality before approaching a program’s clients, they’ll often tell you what health and other services they used in the past few months. Sample clients before, during, and after program participation to assess impacts of the program on clients’ use of other services. Validate their reports against those other services’ records. Next, transform these impacts into monetary units: multiply a client’s frequency of service use by the cost of each service (the average of health service providers’ fees for that service, for instance). Then, compare the costs of services clients used before and after the program, and you’ve measured a potential program benefit that speaks louder than other outcome measures: cost savings produced by the program!
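
As a minimal sketch of that arithmetic, the snippet below monetizes one client’s service use before and after a program. The service names, unit costs, and use counts are all made up for illustration, not drawn from any actual evaluation:

```python
# Hypothetical average provider fees per unit of service (not real figures).
UNIT_COSTS = {"er_visit": 1200.0, "hospital_day": 2500.0}

def cost_of_services(service_use):
    """Monetize service use: frequency of each service x its unit cost."""
    return sum(UNIT_COSTS[service] * n for service, n in service_use.items())

# One client's self-reported use over comparable periods, ideally validated
# against the providers' own records (counts invented for illustration).
use_before = {"er_visit": 3, "hospital_day": 2}  # before the program
use_after = {"er_visit": 1, "hospital_day": 0}   # after the program

savings = cost_of_services(use_before) - cost_of_services(use_after)
print(f"Potential benefit (cost savings) for this client: ${savings:,.2f}")
```

Summed across a sample of clients, such savings become the benefit side of the cost-benefit comparisons previewed in the next lesson.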

Lesson Learned – Wow finding: programs often pay for themselves — several times over, and quickly! (Look for specifics on how to analyze these cost-benefit relationships in a future AEA365.)

Lesson Learned – Just ’cause it has a dollar sign in front of it doesn’t make the number “better.” Benefits (and costs) are no more valid than the data from which they’re derived. The “GIGO” (Garbage In –> Garbage Out) principle applies here: invalid benefit data can lead to big mistakes about program funding.

Resource: For examples of measuring benefits and combining them with costs for program funding recommendations, see: http://www.wsipp.wa.gov/auth.asp?authid=2

· · · ·

My name is Emily Warn and I’m a senior partner at SocialQuarry, a social media consulting group that plans and analyzes online networks and social media for non-profits, foundations, and public agencies. Today I’ll share some hot tips about using social media in evaluations.

This past week Facebook surpassed Google in US visitors: proof, if any were needed, that social networking sites are rapidly transforming our offline friendship, family, and work circles into online communities.

It’s obvious to anyone who has tried to ignore ads on Facebook and Twitter that social media is changing how for-profit companies advertise, sell products, and build their brands. A plethora of tools exist (new ones seem to be announced every day) to measure a company’s return on investment (ROI) in social media. For-profit companies can measure the success of social media campaigns, search engine optimization efforts, customer conversions from browsing to buying, mentions on Twitter, etc.

Not-for-profit organizations are also investing in social media; most are dabbling in Facebook and Twitter, but they lack the tools to measure their ROI because, for the most part, their reasons for using social media are very different from those of for-profits. For example, tools developed to measure the success of advertising campaigns to sell handbags don’t always work to measure the success of advocacy campaigns to change policies. Plus, people who participate in advocacy campaigns are passionate about their cause and more likely to check related websites and social networking sites with a regularity that for-profits can only dream of.

Hot Tip: Here are some of the reasons why social networks can be a holy grail for many non-profits.  Non-profits can use social networks to help them:

  • Increase capacity by using networks to pool and share resources
  • Coordinate groups working on a common issue
  • Generate ideas and tap expertise to develop grant and advocacy strategies
  • Raise money for capital campaigns and causes
  • Increase a donor base and engage new donors
  • Identify leaders who can expedite learning and coordinate actions across a network.

Hot Tip: Using for-profit tools to measure non-profit outcomes requires defining fundamentally different key performance indicators (KPIs). For example, commercial companies can use web analytics tools to measure engagement with customers. They can define the number of units sold as a KPI for an ad campaign and measure progress against that goal by analyzing how many customers clicked through to their site and bought a product. Further analysis can reveal on which pages customers abandoned the process of stepping through an online shopping cart. Improving those pages improves customer engagement.

Non-profits could define a KPI for engagement as the number of network members who participate in collective actions. Instead of stepping through a shopping cart, web analytics tools could measure how many people stepped through the process of sending an email to a legislator, or signed up for a newsletter to stay informed about an advocacy campaign or stay connected with key people or organizations.
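
To make that concrete, here is a minimal sketch of the funnel arithmetic behind such a KPI. The step names and visitor counts are invented for illustration; any web analytics export that gives per-step counts would feed it the same way.

```python
# Hypothetical advocacy-campaign funnel: per-step visitor counts as a
# web analytics tool might export them (all names and numbers invented).
funnel = [
    ("campaign_page", 5000),  # landed on the campaign page
    ("email_form", 1800),     # opened the "email your legislator" form
    ("email_sent", 950),      # completed and sent the email (the KPI)
]

start = funnel[0][1]
for (step, count), (_, prev) in zip(funnel[1:], funnel):
    dropped = 1 - count / prev
    print(f"{step}: {count} ({count / start:.0%} of start; "
          f"{dropped:.0%} abandoned at this step)")
```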

I hope this gives you some ideas about how you can incorporate some online tools in your own evaluations!

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · ·

Susana Bonis on Logic Models and Outcomes

My name is Susana Bonis, and I am an advanced graduate student at Claremont Graduate University. I work with small to mid-size nonprofit organizations in fundraising, strategic planning, and evaluation. Most of the organizations I work with focus on outcome measurement, and many are looking to develop their own internal evaluation capacity. I introduce nearly all of them to logic models and how they can be used. The two resources I will highlight are “oldies but goodies”: they offer straightforward definitions, plenty of examples, and useful tools and worksheets; there’s no need to reinvent the wheel!

Rad Resource:  W. K. Kellogg Foundation guide to developing a logic model. The guide describes what a logic model is and how it can be used to direct evaluation efforts.  Fictitious examples help readers understand the processes of both developing a logic model and using it to frame evaluation questions.  Helpful tips are provided for establishing indicators to measure success.  The appendix offers logic model templates and checklists of important things to consider when constructing each part of the model.  Hard copies are available in English and Spanish.  http://bit.ly/WKKFoundationLMguide

Rad Resource:  The United Way’s Measuring Program Outcomes predates the Kellogg guide.  This is a step-by-step manual for health, human service, and youth- and family-serving agencies focused on specifying program outcomes, developing measurable indicators, identifying data sources and data collection methods, analyzing and reporting findings, and using outcome information.  It can be ordered for $5 (includes shipping and handling) from http://bit.ly/UW-OMRN.

·
