LAWG Week: Will Fenn on the Evaluation Merry-Go-Round
Welcome to the Evaluation 2013 Conference Local Arrangements Working Group (LAWG) week on aea365. I’m Will Fenn from Innovation Network, a Washington, D.C.-based monitoring and evaluation consulting firm specializing in advocacy and policy change evaluation.
There is an all-too-common situation that arises around evaluation — I’ll call it the evaluation merry-go-round. I saw this situation many times as a foundation program officer. Now that I work in a role fully focused on evaluation, my goal for the “State of Evaluation Practice” is to help organizations avoid the merry-go-round and promote evaluation that embraces data-based decisions and learning.
Lesson Learned—Let me explain how the evaluation merry-go-round often starts: A funder recognizes the importance of evaluation at a board meeting and asks its grantees to supply data for an evaluation in the coming year. If the grantee has good data, the evaluation moves along happily for both parties. But grantee resources are often tight, and even grantees doing great work may not have been able to capture good-quality data. The grantee offers what it has; the funder may then question the data, accept the incomplete data, or conduct its own data collection, sending the evaluation conclusions to the board.
The process is often uncomfortable for both sides and too often leaves grantees no better positioned to improve operations through data-informed decision-making. In other words, funder and grantee go up and down through the evaluation process, but the ride ends with the organization in the same place.
Hot Tip—My experience is that the same scenario can play out successfully when funders and grantees cooperate, plan, and invest from the earliest stage to build capacity before the evaluation. A high level of engagement and planning from both parties is essential, and additional resources in the form of funding and expertise are highly recommended. Remember, data is not king; it only helps one ask the right questions. There is no substitute for investing time to understand the context around the data and to know which data to consider.
The Stanford Social Innovation Review article, Counting Aloud Together, shows an example of how to build evaluation capacity.
The “Collaborative Evaluation” section in the Learning from Silicon Valley article also offers great tips from the Omidyar Network’s experience.
Also check out Innovation Network’s guide to evaluation capacity building.
Hot Tip—Insider’s advice for Evaluation 2013 in DC: For a quiet place to reflect on the day’s events visit the Saloon at 1205 U Street, NW. The bar’s mission is to promote conversation, so it is free of TV screens and offers large communal tables upstairs.
We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). AEA is accepting proposals to present at Evaluation 2013 through March 15 via the conference website. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.