I’m Maryfrances Porter, Ph.D., CEO and Founder of Partnerships for Strategic Impact, and one of the ImpactStory coaches. We have spent the last 15+ years coaching, training, and supporting small- to medium-sized nonprofits in telling masterful impact stories.
The nonprofits we work with are big enough to care about data, but not big enough to have internal data staff. They often hire consultants to do larger one-time projects: most often for grant writing or strategic planning facilitation, and sometimes they hire an evaluator to help them create a logic model or surveys. The nonprofits we work with are not looking to take a deep dive into whether their program works or to partner with scientists to do research. What they are looking for is honest, actionable feedback from clients that they can use in fundraising. And we think that’s ok.
Nonprofits are practitioners, not scientists. Nonprofits rarely have a mission to prove their programming works; their mission is to deliver excellent programming that helps people. Like a physician who prescribes an evidence-based treatment and then monitors their patient’s wellness, nonprofits should be using evidence-based interventions and then monitoring their clients’ improvement.
Nonprofits are not responsible for long-term impact. Nonprofits are not funded, as scientists are, to do longitudinal research. This is a great reason to use evidence-based interventions, which are expected, based on the science behind them, to result in long-term impacts. This means nonprofits can focus their precious resources on measuring the immediate impact for the people they serve.
Nonprofits must articulate why what they do is expected to work. Evaluation consultants are uniquely skilled in helping nonprofits do this! Sometimes it’s as easy as creating an annotated citation for a well-known evidence-based program (such as the Abecedarian Project). However, very often, this involves writing a solid defense for how an evidence-based program has been adapted for a specific population and/or pulling together scientific research that supports the approach (e.g., evidence-informed programming).
Nonprofits need support in understanding and using their data and telling their impact story. Anyone reading this knows that articulating why programming is expected to work is really just the first step in helping nonprofits figure out what immediate impact data to collect and how to use that data, not just for fundraising but, even more importantly, for monitoring and improving program delivery.
Here are a few resources anyone can use to articulate why programming is likely to work, all of which (and more) can be found in our Resource Library.
- The Promising Practices Network on Children, Families and Communities provided information on programs and practices that credible research indicated were effective in improving outcomes for children, youth, and families. Here are summaries of Programs That Work (as of June 2014, when the project concluded).
- US Department of Health and Human Services, Office of the Administration for Children and Families, Office of Planning, Research, and Evaluation: Self-Sufficiency, Welfare, and Employment Portfolio addresses innovative approaches for increasing economic self-sufficiency and reducing welfare dependency, including rigorous evaluations of promising employment strategies.
- Abt Associates’ Center for Evidence-Based Solutions to Homelessness covers all of the major areas that comprise the field of homelessness, including the diverse populations of people who experience homelessness and the policies and programs that are intended to serve these different groups.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
1 thought on “Lessons Learned from Day-to-Day, Real-Life Evaluation for Small- and Medium-Sized Nonprofits by Maryfrances Porter”
I work as a data analyst at a small nonprofit that is staffed by passionate, dedicated, and agile employees. While I understand the need for and purpose of adopting evidence-based programming, doing so can stifle innovation. Providers working on the ground are often the best suited to make quick adaptations and develop innovative new solutions. It is often years before a “promising practice” gets turned into an evidence-based program, and who bears the burden of producing that evidence? I feel like this is a major issue in our field. From my experience, evaluation that produces credible evidence tends to happen at the state level or in academic settings, and our mission is, as you say, to “deliver excellent programming that helps people.” How do we bridge that gap?