AEA365 | A Tip-a-Day by and for Evaluators


Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.

In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012 we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)3 nonprofit organizations nationwide responded to our 2012 survey.

Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:

1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibit what we feel are promising capacities and behaviors to meaningfully engage in evaluation.

2. The use of qualitative practices (e.g. case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g. compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.

3. 18% of nonprofits had a full-time employee dedicated to evaluation.


4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.

5. 100% of organizations that engaged in evaluation used their findings.


6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.

7. 82% of nonprofits believe that discussing evaluation results with funders is useful.

8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.

9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.

10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.

Rad Resource—The State of Evaluation 2010 and 2012 reports are available online at www.stateofevaluation.org for your reading pleasure.

Rad Resource—What are evaluators saying about the State of Evaluation 2012 data? Look no further! You can see examples from Matt Forti and Tom Kelly.

Rad Resource—Measuring evaluation in the social sector: Check out the Center for Effective Philanthropy’s 2012 Room for Improvement and New Philanthropy Capital’s 2012 Making an Impact.

Hot Tip—Want to discuss the State of Evaluation? Leave a comment below, or tweet us (@InnoNet_Eval) using #SOE2012!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · · ·

My name is Holly Lewandowski. I am the owner of Evaluation for Change, Inc., a consulting firm that specializes in program evaluation, grant writing, and research for nonprofits, state agencies, and universities. I worked as an internal evaluator for nonprofits for ten years prior to starting my business four years ago.

There have been some major changes in the nonprofit world as a result of the economic downturn, especially within the last four years. I’ve witnessed nonprofits that were mainstays in the community shut their doors because the major funding source they relied on for years dried up. Funding has become scarcer and much more competitive. Funders are demanding that grantees demonstrate strong outcomes in order to qualify for funding. As a result, many of my clients are placing much greater emphasis on evaluating outcomes and impact, and less on evaluating program implementation, in order to compete. The problem is that you can’t have one without the other. Strong programs produce strong outcomes.

Here are some tips and resources I use to encourage my clients to think evaluatively to strengthen their programs and thus produce quality outcomes.

Hot Tips:

  • Take time to think. As an outside evaluator, I am very aware of the stress program staff and leadership are under to keep their nonprofits running. I am also aware of the pressure on nonprofits to produce in order to keep their boards and funders happy. What gets lost, though, is time to think creatively and reflect on what’s going well and what needs to be improved. Therefore, I build time into my work plan to facilitate brainstorming and reflection sessions around program implementation. What we do in those sessions is described in the following tips.
  • Learn by doing. During these sessions, program staff learn how to develop evaluation questions and logic models.
  • Cultivate a culture of continuous improvement through data sharing. Also at these sessions, process evaluation data is shared and discussed. The discussions center on using data to reinforce what staff already know about programs, celebrate successes, and identify areas for improvement.

Rad Resources:

  • The AEA Public eLibrary has a wealth of presentations and Coffee Break Demonstrations on evaluative thinking and building capacity in nonprofits.
  • If you are new to facilitating adult learning about evaluation, check out some websites on Adult Learning Theory. About.com is a good place to start.

The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week with our colleagues in the CEA, an AEA affiliate. The contributions all this week to aea365 come from our CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


· · · · ·

Hello, I am Bernadette Sangalang, Evaluation Director at San Francisco AIDS Foundation (SFAF), a nonprofit that aims to reduce new HIV infections through education, advocacy, and direct services. Prior to SFAF I was an evaluation officer at a large philanthropic organization. I’ve found that, to make evaluation more useful to nonprofits, it helps to start the engagement with staff using simple tools and open conversations about their work.

Hot Tip: Assess current evaluation practices. Create a chart (see below) and discuss with staff their current evaluation activities.  Ask staff to list evaluation-related activities they currently do (e.g., logic models, surveys, reporting to funders) and rate how easy or difficult it is to implement the activity against the usefulness of the information they receive from the activity. If activities fall on the left side of the chart, engage in a discussion as to why they are not useful (and why they continue to do them), and whether the activities are worth revising or maintaining.
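If you want to recreate the chart digitally after the session, a quick sketch is possible in Python with matplotlib. The activity names and ratings below are hypothetical placeholders, not SFAF data; staff would supply their own.

    # Sketch of the ease-vs-usefulness chart; activities and ratings
    # are hypothetical examples to illustrate the layout.
    import matplotlib.pyplot as plt

    # Hypothetical ratings on 1-10 scales: (ease of implementation, usefulness)
    activities = {
        "Logic model": (4, 8),
        "Participant survey": (7, 9),
        "Funder report": (3, 2),
    }

    fig, ax = plt.subplots()
    for name, (ease, usefulness) in activities.items():
        ax.scatter(usefulness, ease, color="tab:blue")
        ax.annotate(name, (usefulness, ease),
                    textcoords="offset points", xytext=(5, 5))

    # Quadrant lines at the scale midpoints; low-usefulness activities
    # land on the left side, per the discussion prompt above
    ax.axvline(5.5, color="gray", linestyle="--")
    ax.axhline(5.5, color="gray", linestyle="--")
    ax.set_xlim(0, 11)
    ax.set_ylim(0, 11)
    ax.set_xlabel("Usefulness of the information")
    ax.set_ylabel("Ease of implementation")
    ax.set_title("Current evaluation activities")
    plt.show()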

Hot Tip: Find simple ways to make evaluation more meaningful to staff. Begin a conversation by asking what success looks like. Engage in a discussion about what their organization’s website would look like if it included a “Results” or “Our Impact” section. What does the organization want to say about the results of their work?  Having an open discussion and brainstorming about program success will likely help focus the evaluation efforts on the important information that they need and will use the most.

 

Lesson Learned: Don’t underestimate the power of logic models. While we know their limitations, logic models are a useful tool for nonprofits to articulate the relationships between activities and intended outcomes. The process alone of developing a logic model with staff creates ownership and a shared understanding of the program and what they hope to achieve. Also, including the organization’s overall goals in the logic model highlights the pathways linking individual program outcomes to those goals, helping staff better understand how their work contributes.

Rad Resource: Post-its! Post-its (or sticky notes) are an inexpensive, versatile resource to use in evaluation meetings. For the activity to assess evaluation practices described above, draw the chart on poster paper. Ask staff to write an evaluation activity on a Post-it, place it on the chart, and discuss their reasons for placing it in a particular quadrant. For developing logic models, write the headings on poster paper and use Post-its to fill in the logic model, shifting Post-its as needed as discussion with staff illuminates where each item belongs. Then bring the paper logic model back to the office to transcribe to an electronic version.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · ·

We are Ehren Reed and Johanna Morariu, Senior Associates of Innovation Network. We work with foundations and nonprofits to evaluate and learn from programs, projects, and advocacy endeavors. For more than fifteen years, Innovation Network has been an intermediary in the philanthropic and nonprofit sectors—our mission is to build the evaluation capacity of people and organizations.

For some time, the evaluation field has lacked up-to-date, sector-wide data about nonprofit evaluation practice and capacity. We thought that such information would not only be helpful to us as evaluation practitioners, but could also inform a wide variety of other audiences, including nonprofits, funders, and academics. The State of Evaluation project (www.stateofevaluation.org) is Innovation Network’s answer to this need. In May 2010 we launched a survey to a nationally representative sample of 36,098 nonprofits (all were 501(c)3 organizations) obtained from GuideStar. We received 1,072 complete responses from representatives of nonprofit organizations (for a response rate of 2.97%). Survey results are generalizable to all U.S.-based nonprofits, with a margin of error of plus or minus 4%.
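As a rough check on those figures, the textbook margin-of-error formula for a proportion takes only a few lines of Python. This is a sketch of the standard calculation only; the published plus-or-minus 4% may rest on design adjustments beyond this simple formula.

    # Back-of-the-envelope margin of error for a proportion at 95% confidence;
    # the report's published figure may use different design assumptions.
    import math

    n = 1072     # complete responses
    N = 36098    # nonprofits in the sampling frame (from GuideStar)
    p = 0.5      # most conservative assumed proportion
    z = 1.96     # z-score for 95% confidence

    fpc = math.sqrt((N - n) / (N - 1))          # finite population correction
    moe = z * math.sqrt(p * (1 - p) / n) * fpc  # about 0.03, i.e. roughly 3%
    print(f"Margin of error: +/-{moe:.1%}")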

Lessons Learned:
With a tip of the hat to David Letterman, here are the “Top Ten” highlights from State of Evaluation 2010: Evaluation Practice and Capacity in the Nonprofit Sector:

1. 85% of organizations have evaluated some part of their work in the past year.

2. Professional evaluators are responsible for evaluation in 21% of organizations. (For more than half of nonprofit organizations, evaluation is the responsibility of the organization’s leadership or board.)

3. 73% of organizations that have worked with an external evaluator rated the experience as excellent or good.

4. Last year, 1 in 8 organizations spent no money on evaluation. (Less than a quarter of organizations devoted the minimum recommended amount of 5% of their budget to evaluation.)

5. Half of organizations reported having a logic model or theory of change, and more than a third of organizations created or revised the document within the past year.

6. Quantitative evaluation practices are used more often than qualitative practices.

7. Funders were named the highest priority audience for evaluation.

8. Limited staff time, limited staff expertise, and insufficient financial resources are barriers to evaluation across the sector.

9. Evaluation was ranked #9 of a list of ten organizational priorities. Fundraising was #1, and research was #10.

10. 36% of nonprofit respondents reported that none of their funders supported their evaluation work. (Philanthropy and government sources are most likely to fund nonprofit evaluations.)

This report—State of Evaluation 2010—marks the first installment of this project. In two years, we will conduct another nationwide survey and update our findings. To learn more about the project, please visit www.stateofevaluation.org.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Ehren and Johanna? They’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio, Texas.

· · · · · ·

Hi! We are Johanna Morariu (from Innovation Network) and Debra Natenshon (from the Center for What Works). Today we will be sharing some tips on nonprofit rating systems.

Efforts to use common measures to assess and compare nonprofit performance have multiplied. Interest in comparing nonprofit performance is on a dramatic upswing, and new approaches seem to emerge frequently, ranging from sector-wide research on shared metrics to a variety of new rating systems.

Nonprofit rating systems are designed to offer an apples-to-apples comparison of organizations working towards vastly different missions, employing infinitely varied strategies. Primarily, the systems are intended to inform giving: sharing assessment of organizational effectiveness with large funders and individual donors.

The best-known nonprofit rating system is Charity Navigator. Charity Navigator assesses nonprofits on financial health, organizational efficiency, and organizational capacity (with information from the IRS Form 990). The assessments are conducted by Charity Navigator’s staff of “professional analysts [who have] examined tens of thousands of non-profit financial documents” (from the Charity Navigator website).

Lesson Learned: Through the work of organizations such as Charity Navigator (founded in 2001), the nonprofit sector has benefited from an increased focus on organizational efficiency and effectiveness. Additionally, rating systems stimulate dialogue about how to best compare organizations—resulting in more robust understanding (i.e. beyond finances) of what makes a high-performing nonprofit organization.

Other nonprofit rating systems, such as GreatNonprofits (founded in 2007), have joined the fray more recently. GreatNonprofits generates its ratings differently than Charity Navigator: it collects opinions and reviews from anyone who has interacted with the organization (e.g., clients, employees, volunteers, donors). This new approach is a manifestation of society’s acceptance of the democratization of information: everyone and anyone—not just the experts—can contribute valuable information.

Hot Tip: Currently there are two main nonprofit rating system approaches, as typified by Charity Navigator and GreatNonprofits. Both approaches strengthen the dialogue about nonprofit effectiveness, but fall short of providing a rigorous comparison of an essential component: nonprofit outcomes and impact.

Getting to an apples-to-apples comparison of nonprofit results would be great one day. Until then, we’re eager to push the dialogue forward in this area. At the AEA conference in November, we’ll be facilitating a think tank on this topic. We’ll share a current landscape of nonprofit rating systems, and participants will be asked to discuss questions such as:

  • Is it possible to develop meaningful common measures for a field as diverse as the nonprofit sector?
  • What can we learn from the experiences of fairly well-known, sector-wide approaches such as Charity Navigator, GreatNonprofits, etc.?
  • What is the effect of nonprofit rating systems on traditional program evaluation?

Tell us what you think: share your comments/questions, and we’ll include them in our session. Or better yet, join us as we build on this discussion in San Antonio!

The American Evaluation Association is celebrating evaluation in Not For Profits & Foundations (NPF) week with our colleagues in the NPF Topical Interest Group.  The contributions all this week to AEA365 will come from our NPF members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting NPF resources.

·

My name is Najah Callander and I am a Manager in Community Investment at the United Way of Greater Houston (UWGH). The mission of UWGH is to help children and youth reach their full potential, create strong families and safe neighborhoods and help them thrive, keep seniors independent and living in their homes, and support people who are rebuilding their lives after a crisis.

As a grantee and a grantor, UWGH uses evaluation and outcomes information to seek funds from individuals, foundations and corporations as well as to identify and invest in quality programs and collaboratives that are making an impact in people’s lives.

Over the last three years, UWGH has moved from simply measuring outcomes to managing them. This new focus is about helping the programs we fund in four areas: 1) to make sure all their clients are benefiting from the program equally; 2) to know how well they are doing; 3) to know why they are getting certain results; and 4) to improve their program and tell their story to stakeholders.

For many years, the non-profit community measured its effectiveness using outputs (counting units of productivity). UWGH’s new focus on continuous quality improvement emphasizes use of outcomes information to improve service delivery, maintain faithful implementation of program models, and strengthen resource development. Together with our partners, we strive to demonstrate results.

Hot Tip: One commonly cited barrier to program evaluation among non-profits is that staff do not have the time or expertise to do a good job. Utilizing a local college or university can unburden non-profit staff and provide valuable experience to university students. United Ways use students from colleges of social work, education, health, and business to do evaluation projects. In addition to hands-on experience with classroom-taught techniques, students can receive a stipend or have their results published by the non-profit.

Hot Tip: Consider creating affinity groups of like programs. Often in the non-profit world, peer programs and agencies have similar challenges and can benefit from shared learnings. Programs that compare their results with like programs, national benchmarks/standards and external experts are more likely to effectively implement changes that improve their programs and get results.  Sometimes outcomes management in an affinity group can help non-profits discover areas where they are models of service. Organizations can further refine their programs to focus on their core competencies and collaborate more effectively to meet additional client needs.

Rad Resource: Analyzing Outcome Information, by H.P. Hatry, J. Cowan & M. Hendricks, the Urban Institute, 2004. http://www.urban.org/UploadedPDF/310973_OutcomeInformation.pdf

 

Note: Mike Hendricks helped UWGH develop our approach to outcomes and our emphasis on Outcomes Management. This is a valuable article to refer to on your own journey!

The American Evaluation Association is celebrating evaluation in Not For Profits & Foundations (NPF) week with our colleagues in the NPF Topical Interest Group.  The contributions all this week to AEA365 will come from our NPF members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting NPF resources.

· ·

Hello, my name is Salvatore Alaimo and I am an Assistant Professor in the School of Public, Nonprofit and Health Administration at Grand Valley State University. I would like to share some tips on the evaluator’s role in evaluation capacity building with nonprofit organizations.

Evaluation Capacity Building (ECB) continues to gain momentum in the literature and in our profession thanks to scholars, researchers, and practitioners such as Baizerman, Compton, & Stockdill; Bamberger, Rugh, & Mabry; Boyle & Lemaire; Fetterman; Miller, Kobayashi, & Noble; Milstein, Chapel, Wetterhall, & Cotton; Patton; Preskill & Russ-Eft; Sanders; Stufflebeam; Volkov & King; and others. Nonprofits have been challenged with meeting demands for evaluation from foundations, government agencies, the United Way, and accrediting bodies, and face the question of what it takes to efficiently and effectively evaluate their programs.

These authors tell us that ECB is context dependent. The challenge we face as evaluators is determining what our specific role should be in ECB. Where is the line between helping a nonprofit organization develop evaluation capacity and becoming an enabler who contributes to co-dependency? Do we help the organization to continue without our assistance and work ourselves out of a job, or do we do just enough to get them started in the ECB process and leave them to continue to build capacity on their own? If we intervene too much, at what point are we taking on responsibilities and tasks best left for the organization’s stakeholders to build a culture for evaluation, mainstream it, and incorporate it into organizational learning?

These questions present challenges for our profession. There are tools we can use to navigate these dilemmas and incorporate into our decision making, striving to balance assisting nonprofits in ECB with leaving enough for them to enact on their own.

Hot Tip: I recommend two evaluation checklists by Stufflebeam and Volkov & King in the ECB category found on the Evaluation Center’s web site – http://www.wmich.edu/evalctr/checklists/checklistmenu.htm . I also recommend the program evaluation standards from the Joint Committee found on AEA’s web site at http://www.eval.org/EvaluationDocuments/progeval.html as well as the Guiding Principles for Evaluators at http://www.eval.org/Publications/aea06.GPBrochure.pdf . There are no magic pills or quick answers for working through the challenges of our role in ECB; however, if you use these documents together in your ECB work, I believe you will find them extremely helpful in making wise choices and sound decisions.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · ·

My name is Trina Willard and I currently exercise my evaluation and measurement skills as the Vice President of Transformation Systems, Inc., a management consulting firm. During my 15 years in the evaluation field, I have had wonderful opportunities to engage across all levels of service organizations, including work with executives, leadership teams and service delivery staff.

Working “in the trenches” has given me personal insight into successful evaluation strategies and a great appreciation for the challenges that service providers face daily in juggling multiple priorities. As a frequent consultant to nonprofit and government groups, I consistently find that these organizations are most successful when armed with a foundational understanding of evaluation. However, competing demands, particularly in relatively small organizations, can preclude attention to professional development on the evaluation front. In fact, evaluation is sometimes tackled, to a program’s detriment, as an afterthought, considered only after all other priorities are addressed. Consequently, I believe that building evaluation capacity in the nonprofit sector often “sticks” when it is presented as a process of incremental steps, created systematically over time. In addition, such organizations are often most receptive to a practical, applied approach to evaluation, as opposed to a predominantly academic perspective. I’d like to recommend a rad resource that nicely taps into both of these needs.

Rad Resource: Hallie Preskill & Darlene Russ-Eft (2005). Building Evaluation Capacity: 72 Activities for Teaching and Training. Thousand Oaks, CA: Sage Publications. http://www.amazon.com/Building-Evaluation-Capacity-Activities-Teaching/dp/0761928103

Preskill & Russ-Eft do a great job of translating evaluation models, approaches, and techniques into relatable, hands-on exercises. The book is light on narrative explanation; instead, it creates understanding through the direct implementation of tools and templates. As a trainer, I’ve used this resource repeatedly to illustrate evaluation principles for decision-makers and staff. One of my favorites: Activity 3, Evaluating Chocolate Chip Cookies Using Evaluation Logic. This exercise is always a hit at training events – a true example of learning in an enjoyable way! In addition, the text covers a wide variety of evaluation-relevant content, spanning topics such as ethics, political context, logic models, data collection, qualitative and quantitative analysis, budgeting for evaluation, and organizational buy-in. The layout easily facilitates training on one focused topic or, alternatively, the creation of a comprehensive training program.

I encourage you to give this resource a look. I’ll be interested to hear what you think!

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · ·

My name is Susan Kistler and I am the Executive Director for the American Evaluation Association. I contribute each Saturday’s post to the aea365 blog. This week I am writing from Atlanta at the Nonprofit Technology conference.

Do you work for or with nonprofit organizations? Have you experienced financial constraints that put technology purchases for evaluation beyond the budget?

Hot Tip: Take a look at TechSoup, the “technology place for nonprofits.” TechSoup has resources, training, a peer-to-peer community, and a donated technology program – TechSoup Stock. Their donated tech program gives nonprofits access to products from a range of big name (and not so big name) companies. Examples include the full Microsoft Office Suite including Access and Excel; ArcGIS from ESRI for spatial analysis; and Crystal Reports from SAP for data visualization and reporting. And the cost? Each product has an administrative fee, but most are well below even discounted retail prices. As an example, the full Microsoft Office 2007 suite is $20. Organizations do need to go through a relatively painless qualification process, and the eligibility criteria vary from product to product, but the resource is definitely worth checking out.

The opinions expressed above are my own and do not necessarily represent those of my employer, the American Evaluation Association.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· · · · · · ·

My name is Kristen Cici, and I am the owner of The Advancement Company (http://www.theadvancementcompany.com) and blogger at NonprofitSOS: http://www.nonprofitsos.com. I work primarily with nonprofit organizations and am interested in nonprofit capacity building.

Hot Tip: When most people think of evaluation, they think of evaluating a program or policy. I like to help people think of the other ways one can use evaluation in their work, such as a performance review. The tip I am going to share comes from my blog post “Want to know how your nonprofit is doing financially?” and will help you determine an organization’s financial sustainability.

One of the best ways to gain insight into how an organization is doing is to look at its Defensive Interval, which will tell you how long the organization could survive on its cash on hand. To calculate the defensive interval:

Defensive Interval (days) = (Cash + Marketable Securities) / (Annual Operating Expenses / 365)

Note: Marketable Securities are liquid investments, things that can be bought or sold with little effect on their price. They typically have maturities of less than a year; a six-month CD, for example, would be considered a marketable security.

Organizations should have at least 90 days’ worth. For more ways to look at how your organization is doing financially, check out “Want to know how your nonprofit is doing financially?” at NonprofitSOS and calculate the debt ratio, program expense ratio, and working capital ratio for your organization!  http://bit.ly/financialdefenseintervals
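To make the arithmetic concrete, here is a minimal Python sketch of the Defensive Interval calculation; the dollar figures are hypothetical, not drawn from any real Form 990.

    # Defensive Interval: how many days an organization could survive
    # on its liquid assets; all dollar amounts are hypothetical examples.
    cash = 150_000                      # cash on hand
    marketable_securities = 50_000      # liquid investments (e.g., a six-month CD)
    annual_operating_expenses = 600_000

    daily_expenses = annual_operating_expenses / 365
    defensive_interval = (cash + marketable_securities) / daily_expenses
    print(f"Defensive interval: {defensive_interval:.0f} days")  # about 122 days

    # The post suggests at least 90 days' worth of liquid assets
    print("Meets 90-day guideline:", defensive_interval >= 90)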

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

· ·
