Hi AEA Members! I’m Tom David and I’ve been working in and around “organized” philanthropy for 30 years. About ten years ago I was asked to advise a large foundation on how to think about evaluating a planned 10-year multi-site, “place-based” initiative. As talking points for a conversation with them, I jotted down the following list, without much deep thought. Needless to say, they didn’t follow much (if any) of my advice at the time…and ended up making a minimal investment in evaluation. The kicker is that I have recently been invited to participate in a retrospective evaluation of the same initiative, now nearing its conclusion. I’ve been asking myself… is it just me or has the state of our art not advanced all that much in the past decade?
Lessons Learned:
- This is complex, generational work that requires patient, long-term investment, underpinned with a serious commitment to ongoing learning. There are no simple solutions and there are no simple metrics of success, despite how much we might wish for them.
- Evaluation and learning are complementary activities; however, it's possible to invest a great deal in evaluation without a commensurate payoff in learning. It's important to ask not only what has been accomplished (outcomes) but also why things turned out the way they did… and how that knowledge can be used to improve practice.
- It's difficult, if not impossible, to apply traditional evaluation research methods (e.g., experimental designs incorporating random assignment) to real-world community change efforts. Multi-method approaches that include in-depth qualitative analysis have generally proven most helpful for learning.
- The results of large-scale, multi-site evaluations are rarely, if ever, unambiguous. Achieving clear attribution (being able to take credit) for an observed change is usually impossible. Being able to claim a "contribution" to an observed change is usually the best one can do.
- Most community-based organizations have very limited capacity to collect and analyze data, yet we often expect them to take on that function without providing adequate support and resources.
- Low-income communities of color have rarely benefited directly from participating in large-scale evaluations, and in many cases are understandably weary of “being studied.” Making sure that they tangibly benefit from these efforts is essential if we expect them to view us as true partners.
- Evaluators often walk a difficult line: fulfilling the expectations of funders while also maintaining a cordial and collaborative relationship with communities.
- Foundations are often reluctant to provide the resources necessary to support genuinely thoughtful evaluation and learning efforts. Indeed, there seems to be a trend toward expecting more for less.
- Foundation culture is not generally supportive of learning. It is impatient, forward-focused, captivated by novelty, and reluctant to devote the time and energy necessary for reflection.
- Foundations are also typically ambivalent about sharing what is learned, particularly if the results don't match expectations. Staff don't feel rewarded for candor, and more evaluation reports end up in filing cabinets than on the web.
The American Evaluation Association is celebrating Community Development TIG Week with our colleagues in the Community Development Topical Interest Group. The contributions all this week to aea365 come from our CD TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.