Hi, I’m Allison Van, Chair of the Environmental Program Evaluation (EPE) Topical Interest Group (TIG) and Executive Director of Spark: a centre for social research innovation at McMaster University.
The EPE TIG has spent the last year thinking about and discussing what environmental program evaluation is and should be in the midst of our climate crisis. This is what we have observed so far:
- We’re still mostly evaluating individual programs, rather than portfolios of aligned interventions that effectively leverage community assets and opportunities for mitigation or resilience building.
- Evaluations are mostly staying with their commissioners, rather than being utilized to rapidly iterate and improve the investments being made in addressing the climate crisis.
- Few are effectively considering cultural context and structural environmental injustice in evaluating the benefits of programs.
- Many of us are far from using the full range of tools and methods (e.g., drones, geographic information system mapping tools, structural equation models, contribution analysis, risk analysis, and value-for-money analysis) that are critical to drawing strong conclusions about the long-term impacts of environmental initiatives.
Sometimes we get lucky, and we have the clients, freedom, time, and money to do all of what we want and push our sub-field forward. But this is still the exception rather than the rule. To change how effective our work is, we have to change our frameworks and methods. Here are my ideas for what that change could look like:
- The exceptional existing efforts of Michael Quinn Patton with Blue Marble Evaluation and Beverly Parsons with Visionary Evaluation for a Sustainable, Equitable Future provide approaches that we can learn together and expand upon collectively.
- Intensive focus on cross-training with ecology and other disciplines that have methods in landscape analysis approaches can expand our skills.
- Within our TIG conference programming, a more explicit focus on deepening methods training is critical.
- Developing a database of publicly shareable evaluations of environmental programs can inform investment in solving the environmental crisis. To the extent that we can identify common types of evaluations and build sets of metrics that could be usable across efforts, we could build environmental program evaluations that are less one-off and more likely to have comparable impact measures and standards.
- We need to actively engage with environmentally focused funders to align their efforts so they become part of the push toward effective, community-engaged, rigorous impact evaluation that fully embraces the complexity, uncertainty and long time horizons inherent to the climate context.
What I want to know is how you see us being able to change the current state of environmental program evaluation. What are our strengths as a field? Where do we really need to dig in deep and learn?
If you want to explore these questions, I’d like to invite you to join the EPE TIG on April 29th any time between 2:00-3:30 EST at https://mcmaster.zoom.us/j/92013584241. In monthly Friday conversations throughout the spring, we have been exploring these and other issues and working to support each other as we each push forward in our own practice.
The American Evaluation Association is hosting Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to AEA365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.