I’m Prentice Zinn. I work at GMA Foundations, a philanthropic services organization in Boston.
I went to a funder/grantee dialogue, hosted by Tech Networks of Boston and Essential Partners, about the tensions between nonprofits and funders over data and evaluation.
Funders and their grantees are not having an honest conversation about evaluation.
A few people accepted this dynamic as one of the existential absurdities of the nonprofit sector.
Others shared stories about pushing back when the expectations of foundations about measurement were unrealistic or unfair.
Everyone talked about the over-emphasis on metrics and accountability, the capacity limits of nonprofits, and the lack of funding for evaluation.
Others began to imagine what the relationship would be like if we emphasized learning more than accountability.
As we ended the conversation, someone asked my favorite question of the day:
“Are funders aware of their prejudices and power?”
Here is what I learned about why funders may resist more honest conversations with nonprofits about evaluation and data:
Business Conformity. When foundations feel pressure to be more “business-like,” they expect nonprofit organizations to conform to the traditional models of business strategy developed in the late 20th century. Modern management theory treats organizational strategy as if it were the outcome of a rational, predictable, and analytical process, when the real world is messy and complex.
Accountability and Risk Management. When foundations feel pressure to be accountable to the public, their boards, and their peers, they may exert more control over their grantees to maximize positive outcomes. Exercising fiduciary responsibility pressures funders to minimize risk by estimating probabilities of success and failure. They will put pressure on grantees to provide conforming narratives based on logic models, theories of change, outcome measurements, and performance monitoring.
Outcomes Anxiety. Funders increase their demands for detailed data and metrics that indicate progress when they get frustrated at the uneven quality of outcome information they receive from nonprofits.
Data Fetishism. Funders may seek data without regard for its validity, reliability, or usefulness because society promotes unrealistic expectations of the explanatory power of data. When data dominates our perception of reality, it may crowd out other ways of understanding what is going on.
Confirmation Bias and Overgeneralization. When foundations lack external pressures or methods to examine their own assumptions about evaluation, they may overgeneralize about the best ways to monitor and evaluate change and end up collecting evidence that confirms their own ways of thinking.
Careerism and Self-Interest. When the staff of foundations seek to advance their professional power, privilege, and prestige, they may favor the dominant models of organizational theory and reproduce them as a means of gaining symbolic capital in the profession.
Rad Resource: Widespread Empathy: 5 Steps to Achieving Greater Impact in Philanthropy. Grantmakers for Effective Organizations. 2011. Tips to help funders develop an empathy mindset.
Thank you for this very interesting piece on data collection and the tensions connected to the process. Like others who commented, I agree that data collection is vital to the existence of a program and will improve its outputs.
One of the most disappointing features you don’t mention is that Boston is the most college-dense region on the continent, yet nobody uses the universities, their students, their faculty, or their curricula to conduct regular impact analyses. MIT used to have an exceptional vehicle through its internship initiatives, but most of that has been re-absorbed by the departments. The shortest route to a credible, published article for a grad student’s application is evaluating a community innovation through surveys, interviews, and lit reviews. When will we see this again?
Good point, Joe.
The good news is that there is more going on in the region than most people realize. I am always surprised at how academic institutions are reaching out, and vice versa.
The other thing that you probably already know is that community-university partnerships are REALLY challenging to institutionalize over the long haul. They require skilled leaders and resources on both sides to do well.
Many of the nonprofit leaders I talk to are highly critical of the quick-and-dirty student internship model for evaluation. They are looking for deeper relationships, continuity, skilled mentoring, and authentic partnerships rooted in the mission, not just a publication for someone’s vitae.
Good blog post, very useful.
At the Institute for Community Health, we’ve found that some of the issues noted in this blog can be minimized with a participatory approach to evaluation that demonstrates the impact of the program rather than relying on canned metrics. This approach should focus on grant-making goals and effectively document the impacts of both the foundation and the grantees. A number of activities can be used:
Multi-level Logic Modeling and Evaluation Planning
Collaborate with foundation staff to develop an overarching, foundation-level logic model that depicts the underlying theory of how grant-making activities will accomplish stated goals.
Layered, Strategic Reporting
Develop a tailored set of evaluation measures for each grantee’s project that is derived directly from their evaluation plan and flows naturally from the work they are doing. This ensures that the reports submitted by grantees capture project-specific outcomes that are connected to the foundation’s key goals, making reporting a useful and relevant activity for both the grantee and the foundation.
Collecting and Sharing Lessons Learned
To help document the collective learning that occurs throughout the implementation of each grantee’s project, conduct qualitative interviews to gain a deeper understanding of grantees’ experiences. Summarize and share back the qualitative findings to ensure that future grantees can build upon the collective best practices and lessons learned to optimize their work.
For more information, see our blog entry from October, “The Foundation to Effective Reporting: A Utilization-Focused, Participatory Approach”.
Thank you for raising the issue of reporting/data/tension between funders and non-profits. I once heard a venture capitalist say, “Silicon Valley celebrates failure.” These “funders” appreciate hearing what doesn’t work, since that’s how they learn. They’d rather hire someone who had tried and failed than someone who had never failed.
Maybe if non-profit funders (and I’ve started to see this shift) required more proof of failure and lessons learned, we’d see a shift in reporting.
Thank you for your post. I took time to read it because the header said “exploring the tensions between nonprofits and foundations.” I valued your comments about the foundation tensions, but I didn’t see any discussion of nonprofits’ tendencies to resist data collection. Although I understand the comment about “learning” vs. “accountability,” I’ve become increasingly convinced that nothing can better help a nonprofit organization improve its outcomes than informed decision-making based on good data about its programs. I evaluate programs in a government setting, and one program underwent a major systems change with a foundation grant. It was challenging to get staff to appreciate how data can improve decision-making, but they made the shift. The program outcomes improved dramatically. Now that the grant is over, they sometimes revert to old ways of doing things, although they’re very willing to get back on track with data usage if reminded. It would be interesting if you could do another post about lessons learned about nonprofits and data.
Kali,
I am so glad you noticed!
I avoided the whole nonprofit sector’s resistance to data collection and evaluation on purpose and turned attention toward the role of funders.
Think about it. Philanthropic elites and their allies have probably written more on this than any other subject in the past 40 years.
Critiquing the beleaguered nonprofit sector has become a familiar and predictable parade of hand-wringing and finger-wagging. It seldom yields insight into the nature of the problem or asks nonprofit leaders, staff, board members, or community members what they think about the issue or how they see a more productive way forward.
And yes, I agree that measurement is essential, but I’m not convinced that more metrics and evaluation as they are currently deployed (or inflicted) are a sure path to organizational effectiveness.
Nonprofit leaders, even the ones who geek out on data and evaluation, know this!
I’d settle for monitoring, evaluation, and learning that is a bit more self-reflective than the nonsense we pretend passes muster in the field.