My name is Valerie Williams and I am a Program Evaluator at the University Corporation for Atmospheric Research (UCAR). One of the programs I work with is Global Learning and Observations to Benefit the Environment (GLOBE), a worldwide, K-12 environmental science and education program.
Many environmental education programs struggle with the question of whether environmental education is a means to an end (e.g. increased stewardship) or an end in itself. This question has profound implications for how programs are evaluated, and specifically for the measures used to determine program success.
Hot Tip: Investing time at the outset to understand the history and evolution of the program is critical. Program evaluation must be based on a clear understanding of the program’s purpose, structure, and theory of change. Determining a program’s intended purpose may be challenging for programs with a long history. Over time, the program purpose may be reconceptualized without corresponding changes in design and/or theory of change. Former Vice President Al Gore’s 1992 book, Earth in the Balance, provided a clear description of the original vision and purpose of the GLOBE program:
Specifically, I propose a program involving as many countries as possible that will use schoolteachers and their students to monitor the entire earth daily. Even relatively simple measurement could, if routinely available on a more nearly global basis produce dramatic improvements in our understanding of climate patterns. (p.356)
This clarified the primary expected outcome of the program – dramatic improvements in our understanding of climate patterns; the primary activity to achieve that outcome – students monitoring the entire earth daily; and the central program design feature – involving as many countries as possible.
Lessons Learned: An evaluability assessment (EA) is always a good idea. EA is often considered useful in deciding whether to evaluate new programs, where the program’s readiness for evaluation is in question. However, EA offers other benefits regardless of the program’s developmental stage, including:
- Surfacing disagreements among stakeholders about the program theory, design and/or structure
- Highlighting the need for changes in program design
- Clarifying the type of evaluation most helpful to the program
Rad Resources:
- Evaluability Assessment: Examining the Readiness of a Program for Evaluation provides an overview of how to structure an EA and a list of findings that would indicate a program is not ready for evaluation. (free to download)
- Evaluability Assessment to Improve Public Health Policies, Programs, and Practices is a great article that reviews EA in the context of public health programs and offers useful illustrations of how EA was used.
- The Assessing the Feasibility and Likely Usefulness of Evaluation chapter in the Handbook of Practical Program Evaluation is one of the principal documents from a pioneer of this methodology, Joe Wholey.
The American Evaluation Association is celebrating Environmental Program Evaluation Week with our colleagues in AEA’s Environmental Program Evaluation Topical Interest Group. All contributions to aea365 this week come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi! I would also recommend utilizing an adaptive management systems framework when setting up and monitoring environmental education programs, and having grounding in ecological economics when considering outcome measures. What do others think?