AEA365 | A Tip-a-Day by and for Evaluators

Hello! I am Laura Beals and I am an internal evaluator at Jewish Family and Children’s Service (JF&CS). JF&CS is a large nonprofit social service agency located just outside of Boston, MA. As we are surrounded by 58 institutions of higher learning, we have many opportunities to leverage students and faculty in order to increase our evaluation capacity.

At Evaluation 2013, I presented with Jennifer Lowe, Director of Research at Crittenton Women’s Union, on how our organizations handle collaborations with students and faculty.

In this post, I am focusing on the process we have at JF&CS for evaluating (pun intended!) requests that come into the agency from students and faculty.

Lessons Learned:

  • Requests for research can come in many forms; examples include:
    • Students or faculty requesting staff assistance to distribute flyers or notify agency clients of the availability of a research project.
    • Students requesting access to agency staff or clients to observe a program session, distribute a survey, or take part in an interview for a class project.
    • Students or researchers asking for access to agency administrative or managerial staff to study nonprofit management practices.
    • Academic researchers wishing to gain access to client data or to current clients for scholarly research governed by an Institutional Review Board (IRB).
    • Practitioners or researchers wishing to gain access to clients to validate a clinical evaluative tool.
  • Nonprofit collaborations with students or faculty can have many benefits; for example:
    • Capacity: Tackle a project that might have been on the back burner
    • Knowledge: Learn more about the nonprofit and the field in which it operates
    • Inspiration: Be inspired by the methods or tools used
    • Reputation: Improve the nonprofit’s reputation through collaborations with well-known researchers or universities
  • These collaborations can also carry many risks; for example:
    • Ethics: Clients may not be treated in an ethical manner
    • Privacy: Client data privacy may be breached (e.g., HIPAA)
    • Resources: Staff or facilities may be overburdened
    • Reputation: The nonprofit’s reputation may be damaged through bad collaborations

Hot Tips:

  • Research requests from students or faculty can be a great way to increase nonprofit evaluation capacity, but there should be a process in place for reviewing and approving requests to ensure that the benefits outweigh the risks.
  • At JF&CS, students or faculty with a research request must complete an application; all requests for research are then reviewed by our internal evaluation and research department. Depending on the level of risk, varying levels of senior leadership must approve the request.

Rad Resource: The handout from this session, which explains the process in more detail, is available here in the AEA Public eLibrary.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello! We are Johanna Morariu, Kat Athanasiades, and Ann Emery from Innovation Network. For 20 years, Innovation Network has helped nonprofits and foundations evaluate and learn from their work.

In 2010, Innovation Network set out to answer a question that was previously unaddressed in the evaluation field—what is the state of nonprofit evaluation practice and capacity?—and initiated the first iteration of the State of Evaluation project. In 2012, we launched the second installment of the State of Evaluation project. A total of 546 representatives of 501(c)(3) nonprofit organizations nationwide responded to our 2012 survey.

Lessons Learned–So what’s the state of evaluation among nonprofits? Here are the top ten highlights from our research:

1. 90% of nonprofits evaluated some part of their work in the past year. However, only 28% of nonprofits exhibit what we feel are promising capacities and behaviors to meaningfully engage in evaluation.

2. The use of qualitative practices (e.g., case studies, focus groups, and interviews—used by fewer than 50% of organizations) has increased, though quantitative practices (e.g., compiling statistics, feedback forms, and internal tracking forms—used by more than 50% of organizations) still reign supreme.

3. 18% of nonprofits had a full-time employee dedicated to evaluation.

4. Organizations were positive about working with external evaluators: 69% rated the experience as excellent or good.

5. 100% of organizations that engaged in evaluation used their findings.

6. Large and small organizations faced different barriers to evaluation: 28% of large organizations named “funders asking you to report on the wrong data” as a barrier, compared to 12% overall.

7. 82% of nonprofits believe that discussing evaluation results with funders is useful.

8. 10% of nonprofits felt that you don’t need evaluation to know that your organization’s approach is working.

9. Evaluation is a low priority among nonprofits: it was ranked second to last in a list of 10 priorities, only coming ahead of research.

10. Among both funders and nonprofits, the primary audience of evaluation results is internal: for nonprofits, it is the CEO/ED/management, and for funders, it is the Board of Directors.

Rad Resource—The State of Evaluation 2010 and 2012 reports are available online for your reading pleasure.

Rad Resource—What are evaluators saying about the State of Evaluation 2012 data? Look no further! You can see examples here by Matt Forti and Tom Kelly.

Rad Resource—Measuring evaluation in the social sector: Check out the Center for Effective Philanthropy’s 2012 Room for Improvement and New Philanthropy Capital’s 2012 Making an Impact.

Hot Tip—Want to discuss the State of Evaluation? Leave a comment below, or tweet us (@InnoNet_Eval) using #SOE2012!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Welcome to the Evaluation 2013 Conference Local Arrangements Working Group (LAWG) week on aea365. I’m Will Fenn from Innovation Network, a Washington, D.C.-based monitoring and evaluation consulting firm specializing in advocacy and policy change evaluation.

There is an all-too-common situation that arises around evaluation — I’ll call it the evaluation merry-go-round. I saw this situation many times as a foundation program officer. Now that I work in a role fully focused on evaluation, my goal for the “State of Evaluation Practice” is to help organizations avoid the merry-go-round and promote evaluation that embraces data-based decisions and learning.

Lesson Learned—Let me explain how the evaluation merry-go-round often starts: A funder recognizes the importance of evaluation at a board meeting and approaches its grantees requesting data for an evaluation in the coming year. If the grantees have good data, evaluation moves along happily for both parties. But resources are often tight for grantees, and they may not have been able to capture good-quality data even if they are doing great work. The grantee offers what it has; the funder may then question the data, accept the incomplete data, or perform its own data collection, with the evaluation conclusions sent to the board.

The process is often uncomfortable for both sides and too often leaves grantees in no better position to improve operations through data-informed decision-making. In other words, funder and grantee go up and down through the evaluation process, but the ride ends with the organization in the same place.

Hot Tip: My experience is that the same scenario can play out successfully when funders and grantees cooperate, plan, and invest from the earliest stage to build capacity before the evaluation. A high level of engagement and planning from both parties is essential, and additional resources in terms of funding and expertise are highly recommended. Remember, data is not king; it only helps one ask the right questions. There is no substitute for investing time to understand the context around the data and to know which data to consider.

Rad Resources:

The Stanford Social Innovation Review article, Counting Aloud Together, shows an example of how to build evaluation capacity.

The “Collaborative Evaluation” section in the Learning from Silicon Valley article also offers great tips from the Omidyar Network’s experience on collaborative evaluation.

Also check out Innovation Network’s guide to evaluation capacity building.

Hot Tip—Insider’s advice for Evaluation 2013 in DC: For a quiet place to reflect on the day’s events, visit the Saloon at 1205 U Street, NW. The bar’s mission is to promote conversation, so it is free of TV screens and offers large communal tables upstairs.

We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). AEA is accepting proposals to present at Evaluation 2013 through March 15 via the conference website. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

Greetings! I’m Ann Emery from Innovation Network in Washington, DC.

Rad Resource – Emery Evaluation: Like many evaluators, I wear several hats – full-time evaluation consultant, part-time research methods graduate student, and 24/7 data nerd. My blog weaves these roles together:

  • I blog about my adventures as a nonprofit and foundations evaluator.
  • I share data files and preliminary results from research projects, like this evaluation use survey.
  • I’ve collected guest posts from more than 15 colleagues.
  • I’ve created more than 30 video tutorials in my Excel for Evaluators series.

Hot Tips – favorite posts: My most popular posts share resources, start conversations, and tell stories.

Lessons Learned – what I’ve learned: Want to write a great blog post? The best posts are short and sweet (not a full manifesto); contain photos, graphs, links, or embedded videos; and end with discussion questions so readers can respond with their own ideas.

Lessons Learned – why I blog: My reasons have evolved over time. I was initially inspired by Chris Lysy’s Ignite presentation about why evaluators should blog and the 2011 Bloggers Series on aea365. And, I simply needed more space – I couldn’t fully express myself in 140-character tweets @AnnKEmery any longer! Now, I blog to educate other evaluators (through my tutorials) and to educate myself (by collecting guest posts from different viewpoints).

Lessons Learned – why you should blog: Blogging makes you a better communicator (and, therefore, a better evaluator). I’ve also talked to evaluators whose blogs have led to invitations to write magazine articles, join committees, participate in podcasts, speak on panels, and turn their blog posts into a published book. Who knew that 400 words could open so many doors?

Lessons Learned – hesitant to start blogging? Most evaluators are concerned that blogging will be time-consuming. So, I conducted some super-serious research to test this hypothesis. Results indicate that, yes, it takes one million hours to write your first blog post. But, with practice, you’ll be writing blog posts in an hour or less. Stick with it!

emery2

This winter, we’re continuing our series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi everyone, my name is Brian Hoessler and I am an independent consultant in Saskatoon, Canada. Through my company, Strong Roots Consulting, I work with non-profit organizations and community-based initiatives to build their capacity through research, strategic planning, and evaluation.

Rad Resource – Strong Roots blog: The website for Strong Roots is also home to my professional blog, which I use to share ideas, resources, and news with the non-profit community in and beyond Saskatoon. It also provides a behind-the-scenes look at the ongoing development of my consulting business, which I launched this past July.

Lessons Learned – why I blog: My reasons for blogging include a bit of everything – sharing tips and resources, demonstrating my knowledge and skills to a new community (I’ve been in Saskatoon for less than a year), and supporting my own professional development. As someone new to the consulting field who is just becoming comfortable with the term “evaluator”, I also use my blog as a space to reflect on my practice and think about directions to take.

Hot Tips – favorite posts:

  • When Does It End? – This post demonstrates how connecting with an online community of bloggers can lead to fruitful thinking. A post by Chris Lysy at freshspectrum (via EvalCentral) prompted me to write about how a dose of evaluative thinking can help determine when a program is failing, even if things look good on the surface.
  • En Route – A demonstration of how I think through writing, in this case a reflection on the term “evaluator” and how I identify (or not) with the field.
  • AEA Conference Day 1 – When I attended my first AEA conference this past October in Minneapolis, I decided to post daily to share new ideas, resources, and personal insights. It was sometimes difficult to find the energy to write after a long day of workshops and sessions, but I’m glad that I kept it up!

Lessons Learned – what I’ve learned: I’ve found it useful to keep a couple of drafts or expanded outlines at hand – sometimes I’ll come up with an idea for a post but don’t feel like writing it out right then and there, or I’m in the mood for stringing words together but have nothing pressing to write about. Breaking the blogging process into two parts, idea generation and writing, can help lessen the anxiety of seeing that blank page!

This winter, we’re continuing our series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Holly Lewandowski. I am the owner of Evaluation for Change, Inc., a consulting firm that specializes in program evaluation, grant writing, and research for nonprofits, state agencies, and universities. I worked as an internal evaluator for nonprofits for ten years prior to starting my business four years ago.

There have been some major changes in the nonprofit world as a result of the economic downturn, especially within the last four years. I’ve witnessed nonprofits that were mainstays in the community shut their doors because the major funding source they relied on for years dried up. Funding has become scarcer and much more competitive. Funders are demanding that grantees demonstrate strong outcomes in order to qualify for funding. As a result, many of my clients are placing much greater emphasis on evaluating outcomes and impact, and less on evaluating program implementation, in order to compete. The problem is you can’t have one without the other. Strong programs produce strong outcomes.

Here are some tips and resources I use to encourage my clients to think evaluatively to strengthen their programs and thus produce quality outcomes.

Hot Tips:

  • Take time to think. As an outside evaluator, I am very aware of the stress program staff and leadership are under to keep their nonprofits running. I am also aware of the pressure on nonprofits to produce in order to keep their boards and funders happy. What gets lost, though, is time to think creatively and reflect on what’s going well and what needs to be improved. Therefore, I build time into my work plan to facilitate brainstorming and reflection sessions around program implementation. What we do in those sessions is described in the following tips.
  • Learn by doing. During these sessions, program staff learn how to develop evaluation questions and logic models.
  • Cultivate a culture of continuous improvement through data sharing. Also at these sessions, process evaluation data is shared and discussed. The discussions center on using data to reinforce what staff already know about programs, celebrate successes, and identify areas for improvement.

Rad Resources:

  • The AEA Public eLibrary has a wealth of presentations and Coffee Break Demonstrations on evaluative thinking and building capacity in nonprofits.
  • If you are new to facilitating adults in learning about evaluation, check out some websites on Adult Learning Theory. About.com is a good place to start.

The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week with our colleagues in the CEA AEA Affiliate. The contributions all this week to aea365 come from our CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Reid Zimmerman, a thirty-year veteran of the nonprofit sector in Minnesota and a member of the faculty at Hamline and Capella Universities, where I teach intro to research and evaluation courses in graduate nonprofit programs.

Even though my PhD is in Organizational Development and Effectiveness, I am often called, as a Certified Fund Raising Executive (CFRE), to assist an organization with raising money, a challenging task made exponentially more difficult because many nonprofits do not evaluate the work they do and have no basis in fact upon which to make their request for philanthropic support. Because nonprofit staff are so wrapped up in doing good work, many have never bothered to develop a theory of change, define or measure outcomes, or indicate the impact they have on the community. The sole basis for their work rests on counting activities and outputs.

Hot Tips:

Evaluation for most nonprofit leaders is a frightening endeavor.

  1. Direct-service program personnel are afraid that you are going to find something wrong with the work they do and that they will be disciplined or lose their jobs.
  2. Executive Directors are afraid to demand that staff evaluate their programming. EDs say they want to evaluate but are not necessarily willing to call the question and make sure data are collected regularly. They are also fearful that their own ability to analyze data will prove lacking. Often they think like program staff because that is where they began their careers.
  3. Accounting and admin staff are afraid that evaluation is too expensive and that the money would be better used for programming.
  4. Fundraising staff want the evaluation results to be exceptional but are afraid that they will only be average or, worse, demonstrate that improvement is necessary.
  5. Boards of Directors are afraid that they are overstepping their position by asking for or expecting program evaluation, and they don’t know what they want to see from an evaluation.
  6. Clients are often afraid that answering evaluation questions honestly may get them tossed out of the program.
  7. Foundation staff who support these programs with funding are afraid to demand evaluation because then they would have to read and use the evaluation reports, which adds another layer of work.

So, move slowly and with lots of reassurance.  Assuage nonprofit fears that evaluation is about finding fault and help them to realize it is about getting better. Those assurances will be needed.

To hear and discuss a case study of a derailed evaluation at a small nonprofit, meet us at the AEA conference.

The American Evaluation Association is celebrating Minnesota Evaluation Association (MNEA) Affiliate Week with our colleagues in the MNEA AEA Affiliate. The contributions all this week to aea365 come from our MNEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

My name is Susan Kistler and I am the American Evaluation Association’s Executive Director and aea365 regular Saturday contributor. I’ve written before about podcasts of interest to evaluators, but they were from peripheral fields. Today, I’m excited to see evaluators entering the fray.

Hot Tip – Podcasts are for everyone: The word ‘podcast’ is often associated with iPods, Apple’s ubiquitous device, but you do not need to own an iPod to listen to podcasts. You can listen right on your computer – each podcast has a play button for ready online listening.

Rad Resource – Story by Numbers Podcast: The Story by Numbers podcast comes out about monthly and is from Maria Gajewski and Ruth Terry. You might have seen Maria at the AEA 2011 or 2012 conferences. She’s an independent evaluation consultant at Changing River Consulting. The Story by Numbers podcast focuses on the nonprofit sector, and their most recent offering is “The DataViz Extravaganza,” a two-part interview with Johanna Morariu and Ann Emery of Innovation Network.

Hot Tip – More from Ann and Johanna: You can also listen and learn and talk and engage with Ann and Johanna next month at Evaluation 2012 – Ann’s presentation list here and Johanna’s here.

Rad Resource – Adventures in Evaluation Podcast: The Adventures in Evaluation podcast is hosted by two awesome Canadian evaluators, James Coyle and Kylie Hutchinson. You’ve probably heard from Kylie in particular; she’s a regular contributor to aea365 and one of AEA’s dedicated professional development workshop presenters, most recently via an eStudy on evaluation reporting. The Adventures in Evaluation podcast comes out every two weeks and focuses broadly on different aspects of the field of evaluation. Their most recent podcast features me! I had the opportunity to talk with James and Kylie this past week about what’s new at Evaluation 2012.

Lesson Learned – Accept the Invitation: I was a bit nervous about being on Kylie and James’ podcast, but it was easy and fun to be a guest. James and Kylie recorded their podcast via Skype; I was on my computer with my microphone headset and was quite pleased with how well the sound quality came out. Kylie and James were gracious hosts who made it easy.

 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Naomi Walsh. I am an independent consultant working primarily with nonprofits. Today, I want to share information about a favorite, free, online journal.

Rad Resource: NTEN is the Nonprofit Technology Network. Membership is only $85 per year, comparable to AEA’s, and if you work with nonprofits at all – even if you aren’t a tech guru – they are a great resource. They have lots of training opportunities and a lively community, much of it free for members.

Rad Resource: NTEN’s online journal, NTEN: Change (A Quarterly Journal for Nonprofit Leaders), is completely free! You need only fill out a very short subscription form. Their September issue is topical for evaluators in that the focus is “I Love Data.” It includes an interview with Mayur Patel of the Knight Foundation on tracking and demonstrating nonprofit impact, and a Feature Focus, “How Your Organization Can Embrace Data and Use What It Can Teach You,” by Katie Delahaye Paine.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, I am Edith Gozali-Lee, a research scientist at Wilder Research. I work primarily on research and evaluation projects related to education. I am currently working on a multi-site, longitudinal study of an early childhood initiative. The study includes three cohorts of school-based preschool program children in ten schools, five cohorts of community-based child care children in homes and centers, and comparison children with and without prior preschool experience. The study follows children from preschool to third grade. That’s a lot to track, making good data collection critical from the start.

Hot Tips:

These are a few coding tips that will help ensure good data-collection tracking (a brief sketch of one possible ID scheme follows the list):

  • Anticipate the different groups ahead of time and design an intuitive coding scheme, which makes data tracking and analysis easier in the following years
  • Use the categories or codes the schools themselves use, which makes analysis easier when you merge the data you collect with other student data collected by schools (demographic data and student outcomes)
  • Label all instruments (survey and assessment forms) with these codes prior to data collection to reduce post-collection coding work and data-entry errors
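
For illustration, here is a minimal sketch of what such an intuitive, group-aware ID scheme might look like in Python with pandas. The encoding (cohort, setting, site, child number) and all field names are hypothetical examples, not the scheme actually used in this study.

```python
# A sketch of an intuitive ID scheme for a multi-cohort, multi-site study.
# The fields and codes below are hypothetical illustrations.
import pandas as pd

def make_student_id(cohort: int, setting: str, site: int, child: int) -> str:
    """Build a readable ID encoding cohort, setting (S = school-based,
    C = community-based child care), site, and child number, e.g. 'C2-S03-017'."""
    return f"C{cohort}-{setting}{site:02d}-{child:03d}"

# Pre-generate IDs so instruments can be labeled before data collection.
ids = [make_student_id(2, "S", 3, n) for n in range(1, 4)]
collected = pd.DataFrame({"student_id": ids, "assessment_score": [88, 92, 75]})

# When school records are keyed to the same IDs, each year's merge is trivial.
school_records = pd.DataFrame({
    "student_id": ids,
    "grade": ["K", "K", "K"],
    "free_lunch_eligible": [True, False, True],
})

merged = collected.merge(school_records, on="student_id", how="left")
print(merged)
```

Because the group codes are embedded in each ID, later analyses can recover cohort or setting with a simple string split rather than a separate lookup table.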

Lesson Learned:

It is helpful to hold regular project debriefs to reflect on what works well and what does not. This will make the evaluation process go more smoothly and quickly the next time around.

Rad Resources:

For practical, research-based information, visit CYFERnet, the Children, Youth and Families Education and Research Network.

Resources for research in early childhood:

We are looking forward to seeing you in Minnesota at the AEA conference this October. Results of this study (along with other Wilder Research projects and studies) will be presented during a poster session: Academic Outcomes of Children Participating in Project Early Kindergarten Longitudinal Study.

The American Evaluation Association is celebrating with our colleagues from Wilder Research this week. Wilder is a leading research and evaluation firm based in St. Paul, MN, a twin city for AEA’s Annual Conference, Evaluation 2012. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
