AEA365 | A Tip-a-Day by and for Evaluators

TAG | sustainability

Kara Crohn and Matt Galport here – we’re consultants with EMI Consulting, an evaluation and consulting firm based in Seattle, Washington that focuses on energy efficiency and renewable energy programs and policies. More than ever, evaluators must consider how their clients’ programs impact the well-being of the communities and environments in which they are embedded. It is also important for evaluators to consider how their clients’ program goals relate to state, national, or global sustainability goals. In this post, we offer five types of systems-oriented sustainability metrics that evaluators can use to connect clients’ program contributions to broader environmental, economic, health, and social metrics of well-being.

But first, what do we mean by “sustainability”?

In this post, we’re not talking about the longevity of the program, but rather the extent to which a program’s outcomes, intended or otherwise, contribute to or detract from the future well-being of its stakeholders. We are also using an expanded definition of “stakeholders” that includes communities and environmental resources affected by the program.

Hot Tip:

Consider incorporating these five types of sustainability metrics into your next evaluation:

#1: Public health: The extent to which a program contributes to or detracts from the health of program and community stakeholders

#2: Environment and energy: The extent to which a program implements environmental and energy conservation policies that support resource conservation

#3: Community cohesion: The extent to which a program promotes or detracts from the vibrancy and trust of the communities in which it is embedded

#4: Equity: The extent to which a program contributes to or detracts from fair and just distribution of resources

#5: Policy and governance: The extent to which a program’s policies support civil society and democratic institutions and protect the disadvantaged

So, what would this look like in practice?

Here’s an example of how to connect program-specific metrics for a small, local after-school tutoring program to the broader set of social goals.
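One rough, hypothetical way to organize such a mapping (the program metrics and broader goals below are invented for illustration, not taken from the original example) is to pair each program-specific metric with the sustainability category and broader goal it feeds into:

```python
# Hypothetical sketch: mapping program-specific metrics for an after-school
# tutoring program to the five systems-oriented sustainability categories.
# All metric names and broader goals here are invented for illustration.

SUSTAINABILITY_MAP = {
    "public_health": {
        "program_metric": "Share of students reporting reduced stress about schoolwork",
        "broader_goal": "Community youth mental and physical health",
    },
    "environment_and_energy": {
        "program_metric": "Share of students walking, biking, or busing to tutoring sites",
        "broader_goal": "Reduced local transportation energy use and emissions",
    },
    "community_cohesion": {
        "program_metric": "Number of volunteer tutors recruited from the neighborhood",
        "broader_goal": "Trust and vibrancy of the local community",
    },
    "equity": {
        "program_metric": "Share of participating students from under-resourced schools",
        "broader_goal": "Fair and just distribution of educational resources",
    },
    "policy_and_governance": {
        "program_metric": "Parent and student participation in program governance",
        "broader_goal": "Civic engagement and protection of disadvantaged groups",
    },
}

if __name__ == "__main__":
    # Print each program-level metric next to the broader goal it connects to.
    for category, link in SUSTAINABILITY_MAP.items():
        print(f"{category}: '{link['program_metric']}' -> '{link['broader_goal']}'")
```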

Rad Resources:

Resources for municipal and global sustainability metrics:

Municipal: STAR Rating system for U.S. cities

Global: United Nations’ Sustainable Development Goals

Continue the conversation with us! Kara kcrohn@emiconsulting.com and Matt mgalport@emiconsulting.com.

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Jindra Cekan, PhD of Valuing Voices at Cekan Consulting, challenging us to reach for sustainability.

What is the overlap of sustainability and evaluation? OECD’s DAC Criteria for Evaluating Development Assistance comprise five criteria: relevance, effectiveness, efficiency, impact, and sustainability. “Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.” Valuing Voices research has found that organizations too rarely return after projects close to evaluate sustainability from the participants’ perspectives.

Involving participants during implementation can strengthen prospects for self-sustainability afterwards. Often follow-on projects are designed without learning from the recent past, including failing to ask participants what they feel they can self-sustain. Ninety-nine percent of international development projects are not evaluated after close-out, much less by the people we aim to serve; yet we need this feedback to know what to design next. Current estimates indicate that over $1.5 trillion in US- and EU-funded programming alone since 2000 remains unevaluated, so there is much to be learned.

Hot Tips: 

  • Plan for post-project evaluation processes from the beginning of a project.
  • Look at prospects for self-sustainability by interviewing participants at the end of a project (Ethiopia)
  • We have used Appreciative Inquiry/Rapid Rural Appraisal/empowerment evaluation and are hoping to use Outcome Harvesting with participants and partners. We also recommend “360 degree” interviews with other stakeholders (e.g., local authorities, other NGOs in the area) to see what activities were sustained, what unexpected impacts occurred, and how far they spread.

Lessons Learned: Current post-project evaluations are asking:

  1. Which outcomes were communities able to maintain and why or why not?
  2. Did activities continue through the community groups or partners or others? Why?
  3. If the activities did not continue, why not? Was there any learning in terms of design/implementation faults? Was ceasing activities always bad?
  4. What were unexpected outcomes and what led to new innovative (unforeseen) outcomes?
  5. What can we learn about capacity and motivations needed for sustainability? Resources? Linkages?
  6. What about knowledge management of results after close-out: who holds them, where, and for how long, so that others can learn from them?
  7. How is impact illuminated by the funding? What is the return on investment (see the rough sketch after this list)? How can we prioritize the activities that are most sustainable?
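For the return-on-investment question in item 7, one rough way to frame the arithmetic is to compare the value of benefits participants sustain after close-out against total project cost. All figures below are invented placeholders, and monetizing sustained benefits is itself a substantial evaluation task:

```python
# Rough framing of ROI on sustained outcomes. All figures are invented
# placeholders; a real calculation would use evaluated project costs and
# the monetized value of benefits that participants sustained themselves.

project_cost = 2_000_000            # total donor spending over the project, USD
annual_sustained_benefit = 350_000  # value of benefits still flowing after close-out, USD/year
years_sustained = 5                 # how long post-project benefits persisted

total_sustained_benefit = annual_sustained_benefit * years_sustained
roi = (total_sustained_benefit - project_cost) / project_cost

print(f"Sustained benefit over {years_sustained} years: ${total_sustained_benefit:,}")
print(f"ROI on sustained outcomes: {roi:.0%}")  # e.g. -12% here: sustained benefits did not repay cost
```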

We should do country- and locally-led research on which project outcomes were self-sustained and why, in order to learn across projects, donors, and borders. We need to know which activities are most self-sustained so that future designs and partnerships can plan for sustained exit and impact. Please join us!

The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. The contributions all this week to aea365 come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hello! My name is Anna Williams. I have worked on issues related to sustainable development as an evaluator, facilitator, and agent of change for 20+ years.

The concept of sustainable development took hold 25 years ago, but several challenges have hampered interpretation and analysis of progress, starting with these:

The future of sustainability. Advancing the well-being of girls and women is now commonly understood as a critical underpinning to sustainable development.

1. The oft-cited 1987 Brundtland Commission definition of sustainable development is “development that meets the needs of the present without compromising the ability of future generations to meet their own needs.”  This definition does not translate well into practical application, and it causes confusion.

2. Sustainability and social equity are often viewed as inherently at odds, even when they are inextricably tied and win-win solutions are available in several areas, such as maternal health, energy efficiency, and subsistence fisheries.  (This is not to discount the reality that some real tensions and tradeoffs do still need to be addressed.)

3. Efforts to identify sustainable development goals and indicators, and to measure and evaluate progress toward sustainable development, have struggled, and many faded into the background around ten years ago.

The good news: This past paradigm has faded, and in its place is the next era of sustainable development. 

According to the Marine Stewardship Council, about 1 billion people, largely in developing countries, rely on fish as their primary animal protein source.

There is now consensus that human equity and well-being are at the heart of sustainable development; that realizing environmental sustainability requires addressing extreme poverty, energy access, and maternal and reproductive health, among other fundamentals. In The Future We Want, the Rio+20 resolution adopted in July 2012, the UN General Assembly stated, “Eradicating poverty is the greatest global challenge facing the world today and an indispensable requirement for sustainable development. In this regard we are committed to freeing humanity from poverty and hunger as a matter of urgency.”

To prepare for a post-2015 agenda to succeed the Millennium Development Goals, efforts are underway to redefine sustainable development and create ways to understand progress toward it. One promising example of improved goals, targets, and indicators is the UN-commissioned Sustainable Development Solutions Network’s proposed framework for sustainable development, which has 10 goals, 30 targets, and 100 indicators. The first proposed goal is to end extreme poverty, including hunger. Other next-generation efforts are taking place at the global, national, and local levels. It is an inspiring time when the past 25 years are informing the next 25, and evaluation of sustainable development initiatives will be able to benefit greatly from these advances.

Rad Resources: Below are a few resources for next-generation indicators and evaluative analysis tools for sustainable development:

Climate change will disproportionately affect those who are least responsible. Prevention and adaptation are squarely humanitarian concerns.

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hi everyone!  I’m Yvonne M. Watson, an Evaluator in U.S. EPA’s Evaluation Support Division and Chair of AEA’s Environmental Program Evaluation Topical Interest Group.  As we celebrate Earth Week in April and prepare for the annual American Evaluation Association  (AEA) conference in October, the theme of sustainability looms large.

As I think about areas where organizations and individuals can make a significant difference in ensuring a sustainable future, consumer choice and green purchasing/procurement come to mind. The federal government’s role as the leading purchaser of green products is vital to ensuring a sustainable future. Equally important is the role that households and individuals play in this equation.

Lesson Learned: According to Fischer’s 2010 report, Green Procurement: Overview and Issues for Congress, at the institutional level federal government procurement accounts for $500 billion annually. Because of its size and purchasing power, the federal government’s influence on the market is broad, “affecting manufacturing (product planning and development), and purchasing (large institutions and States that mimic federal specifications) both nationally, and internationally.” Established in 1993, EPA’s Environmentally Preferable Purchasing (EPP) Program has two purposes: (1) achieve dramatic reductions in the environmental footprint of federal purchasing through the creation of guidelines, tools, recognition programs, environmental standards, and other incentives and requirements, and (2) make the overall consumer marketplace more sustainable through federal leadership. In 2011, the EPP program initiated an evaluation to examine changes in spending on green products across the federal government since 2001. The results indicate greater awareness of and more positive attitudes toward green procurement among the federal purchasers surveyed.

At the individual level, consumers not only vote with their feet – they vote with their purses and wallets too, through the purchase of food, cars, electronics, clothes and a host of other services. In addition, the prominence of green and eco-labels is a prime example of the manufacturing industry’s response to greater demand from consumers who look for green products.  During Earth Week, I encourage organizations, individuals and evaluators alike to take a step back and assess our individual and collective consumer purchasing decisions and the implications for a sustainable future.  After all, the purchasing choices we make today affect the future we have tomorrow.

Rad Resources: EPA’s Greener Products website provides information for consumers, manufacturers and institutional purchasers related to green products.

The EPP Evaluation Report is available here.

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Andy Rowe here; I evaluate sustainable development and natural resource interventions. I am convinced evaluation is facing a key adapt-or-decline juncture.

Connectivity is the mechanism that enables us to understand how interventions reach the public interest and produce effects in the natural system. Our siloed governance approaches come from cost and accountability structures in the for-profit sector. For-profits recognize the importance of connections to the larger mission and judge performance accordingly, and that mission now includes sustainability. Major corporations such as Mars and Walmart are acting decisively to ensure sustainable supply chains, which they judge essential to the survival of their businesses. We need to begin the process of incorporating sustainability into evaluation.

The story of how domesticated cats contribute to climate change illustrates how obscure but important these causal connections can be.

Lesson Learned: Domesticated cats living with humans, along with feral cats, are significant predators of songbirds, taking an estimated 40% annually. Birds carry the parasite Toxoplasma gondii; cats pick it up from their prey, and the parasite, harmless to the cat, departs in its stools, often in litter, which ends up in landfills. Landfills are often connected to the sea via groundwater and streams, and the parasites enter coastal waters, where bivalves ingest them. Sea otters love bivalves and ingest the parasite along with them; it attacks the otters’ brains. Poor otters.

Another system connects with our story. Fertilizer and waste from sewage treatment and other sources deliver nutrients to the sea, causing algal growth that weakens sea grasses. Otters counteract the effects of this excessive nutrient loading, keeping the sea grasses alive. Sea grasses are amazingly effective at storing carbon; with the help of otters, Pacific sea grasses store the equivalent of the annual carbon dioxide emissions of 3 to 6 million cars.

So, cats contribute to climate change via mechanisms that are far from transparent. As evaluators we need to attend to the connections from the intervention to important effects, including effects in the natural system. By tracing connectivity within and across systems, evaluation can play an important role in ensuring that interventions are designed and undertaken so that the world we leave for our grandchildren is at least as good as the world we inherited. It is time that sustainability becomes an expected element in evaluation. Several years ago the National Academy of Sciences gave sustainability science a room of its own; it is time now for sustainability to become a required element in our Standards.
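As a playful sketch of what tracing connectivity can look like in practice, the cat-to-otter-to-carbon story above can be written down as a small directed graph and every causal path enumerated mechanically. The edges below only paraphrase this post; they are an illustration of the bookkeeping, not an evaluation model:

```python
# Toy restatement of the cats-to-climate chain above as a directed graph,
# to illustrate tracing connectivity from a starting point in the human
# system to effects in the natural system. Edges paraphrase the post.

CAUSAL_LINKS = {
    "domestic and feral cats": ["songbird predation", "Toxoplasma gondii in litter"],
    "Toxoplasma gondii in litter": ["landfills"],
    "landfills": ["parasites in coastal waters"],
    "parasites in coastal waters": ["infected bivalves"],
    "infected bivalves": ["sea otter mortality"],
    "sea otter mortality": ["weakened sea grasses"],
    "weakened sea grasses": ["reduced carbon storage"],
}

def trace(start, graph, path=None):
    """Yield every causal path reachable from `start`."""
    path = (path or []) + [start]
    downstream = graph.get(start, [])
    if not downstream:
        yield path
        return
    for effect in downstream:
        yield from trace(effect, graph, path)

if __name__ == "__main__":
    for chain in trace("domestic and feral cats", CAUSAL_LINKS):
        print(" -> ".join(chain))
```

A fuller version would also add edges for the second system described above (nutrient loading and algal growth) and for the human-system links at which evaluations usually stop.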

Lesson Learned:  Take a look at sustainability in the for-profit sector:  1. Mars Corporation here and here and 2. Walmart here.

Rad Resources:  Otters and weeds:

Also, see “Sustainability Science: A Room of Its Own” by William C. Clark (2007).

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi all! I’m Juha Uitto, Deputy Director of the Independent Evaluation Office of the United Nations Development Programme (UNDP). I’ve spent many years evaluating environment and development in international organizations, like UNDP and the Global Environment Facility (GEF).

As we all know, evaluating sustainability is not easy or simple. Sustainability as a concept and construct is complex. It is by definition multidimensional, encompassing environmental, social, cultural, political, and economic dimensions. It cannot be evaluated from a single point of view or as just one dimension of a programme. Apart from the above considerations, sustainability also refers to whether the programme or intervention that is the evaluand is in itself sustainable. Sustainability evaluation must take all of the above into account.

At its simplest, sustainability evaluation would look into whether the intervention would ‘do no harm’ when it comes to the various environmental, social, cultural and other dimensions that may or may not be the main target of the programme. At this level, the evaluation does little more than ensure that safeguards are in place. The evaluation also has to look at whether the intervention itself was sustainable, i.e., whether it has developed exit strategies so benefits will continue beyond the life of the intervention.

But this is not enough. It is essential for evaluations and evaluators to be concerned with whether the evaluand makes a positive difference and whether it has unintended consequences. In environment and development evaluation a micro-macro paradox is recognized: evaluations show that many individual projects are performing well and achieving their stated goals; yet the overall trends are downward. There are lots of projects focused on protected areas and biodiversity conservation; still, we are facing one of the most severe species extinction crises ever. Many projects successfully address climate change mitigation in various sectors ranging from industry to transportation to energy; still, the global greenhouse gas emissions continue their rising trend. It is not enough for evaluators to focus on ascertaining that processes, activities, outputs and immediate outcomes are achieved.

Lessons learned: In evaluating environment and poverty linkages, one should never underestimate the silo effect. Sustainable development requires a holistic perspective but few organizations operate that way. People have their own responsibilities, priority areas, disciplinary perspectives, partners, networks, and accountabilities that often preclude taking a holistic perspective. Evaluators must rise above such divisions. An evaluation – such as the Evaluation of UNDP Contributions to Environmental Management for Poverty Reduction – can make a major contribution to how an organization acknowledges, encourages and rewards intersectoral and transdisciplinary cooperation.

Rad Resource: All UNDP evaluation reports and the management responses to them are available on a publicly accessible website, the Evaluation Resources Centre; independent evaluations are available from the Independent Evaluation Office of UNDP.

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Hi, I am Jindra Cekan, PhD, an independent evaluator with 25 years of international development fieldwork, at www.ValuingVoices.com.

What if we saw project participants as our true clients and wanted the return on investment of projects to be maximally sustained? How would this change how we evaluate, capture, and learn together?

Lesson Learned: Billions of dollars of international development assistance are spent every year and we do baseline, midterm and final evaluations on most of them.  We even sometimes evaluate sustainability using OECD’s DAC Criteria for Evaluating Development Assistance: relevance, effectiveness, efficiency, impact and sustainability.  This is terrific, but deeply insufficient. We rarely ask communities and local NGOs during or after implementation what they think about our projects, how to best sustain activities themselves and how to help them do so.

Also, very rarely do we return 3, 5, or 10 years after projects close and ask participants what is “still standing” that they managed to sustain themselves. How often do we take community members, local NGOs, or national evaluators as the leaders of evaluations of the long-term self-sustainability of our projects? Based on my research, 99% of international aid projects are not evaluated for sustainability or impact after project close by anyone, much less by the communities they are designed to serve.

With $1.52 trillion in US and EU foreign aid being programmed for 2014–2020, our industry desperately needs feedback on what communities feel will be sustainable now and on which interventions offer the likelihood of positive impact beyond the performance of the project’s planned (log-framed) activities. Shockingly, this feedback does not exist today.

Further, such learning needs to be transparently captured and shared in open-data format for collective learning, especially at the country and implementer level. Creating feedback loops between project participants, national stakeholders, partners, and donors that foster self-sustainability will foster true impact.

Hot Tip: We can start in current project evaluations. We need to ask these questions of men, women, youth, elders, and the richer and poorer members of communities, as well as of local stakeholders. Ideally we would request that national evaluators ask (and revise!) questions such as:

  • How valuable have you found the project overall in terms of being able to sustain activities yourselves?
  • How well were project activities transferred to local stakeholders?
      • Who is helping you sustain the project locally once it ends?
  • Which activities do you think you can least maintain yourselves?
      • What should be done to help you?
  • What activities do you wish the project had supported that would build on your community’s strengths?
  • Was there any result of the project that was surprising or unexpected?
  • What else do we need to learn from you to have greater success in the future?
Source: http://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Clara Hagens. I work for Catholic Relief Services (CRS) as the Regional Technical Advisor for Monitoring, Evaluation, Accountability and Learning in Asia. I’d like to share with you a guidance document we have developed to support project teams to operationalize monitoring and evaluation (M&E) plans, big and small, in various contexts.

Rad Resource: CRS’ Guidance on Monitoring and Evaluation covers a range of topics related to basic M&E concepts and to designing and implementing M&E activities. The topics include gender in M&E, random and purposeful sampling, developing qualitative data collection tools, M&E in emergencies, and community participation in M&E to name a few. Each topic is grounded in a set of standards to guide our M&E practice. The standards are accompanied by narrative to explain how each can be achieved, tips and good practices, examples, and planning tables and templates.

For example, the Guidance provides standards for Community Participation in M&E, which state that M&E systems should track the changes most important to communities and that communities should participate in data collection and in the interpretation of M&E results; tips are included for each step in the process. The standards for Planning and Conducting an Evaluation point to the importance of developing project-specific evaluation questions related to relevance, effectiveness, efficiency, impact, and sustainability. The Guidance also includes a simple evaluation planning table that helps teams link data collection methods and respondents to the evaluation questions.
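As a rough sketch of the idea behind such a planning table (the questions, methods, and respondents below are invented examples, not rows from the CRS template), the core move is simply to keep every evaluation question explicitly linked to its data collection methods and respondents:

```python
# Invented example of an evaluation planning table that links each
# evaluation question to data collection methods and respondents.
# The rows are illustrative, not drawn from the CRS Guidance itself.

PLANNING_TABLE = [
    {
        "evaluation_question": "How relevant were project activities to community priorities?",
        "methods": ["focus group discussions", "key informant interviews"],
        "respondents": ["community members", "local leaders"],
    },
    {
        "evaluation_question": "Which outcomes are communities likely to sustain after close-out?",
        "methods": ["household survey", "participatory ranking"],
        "respondents": ["participating households", "community groups"],
    },
]

if __name__ == "__main__":
    # Print the table so each question appears with its methods and respondents.
    for row in PLANNING_TABLE:
        print(row["evaluation_question"])
        print("  methods:     " + ", ".join(row["methods"]))
        print("  respondents: " + ", ".join(row["respondents"]))
```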

Source: http://www.crsprogramquality.org/publications/2013/4/8/guidance-on-monitoring-and-evaluation.html

The CRS Guidance on Monitoring and Evaluation is appropriate for project teams who are looking for additional hands-on support to further engage with their M&E systems. I hope you will find this useful!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello. We are Susan Shebby and Sheila Arens, evaluators from Mid-continent Research for Education and Learning (McREL). In this post, we want to share our experience with a district working to implement a federal grant in two schools.

When the grant was awarded, it received a great deal of attention throughout the district and community. The funding was unprecedented, and the potential opportunities available to students and teachers were remarkable. However, initial excitement waned once the work began. Schools were slow to implement planned interventions and teachers were frustrated by grant goals. Moreover, no champions for the program emerged at the district level so the intervention was largely forgotten by the district—except during reporting and budget periods. Well before the end of the grant period, district- and school-level administrators lost sight of the grant goals and looked to the next grant or funding source for inspiration. Grant activities supported by teachers and community partners were terminated after the final grant performance period.

Lessons Learned:

The district requested a “case study” about the development and implementation of large-scale grant initiatives that would yield recommendations for future initiatives in the district. Four themes repeatedly emerged as areas for improvement: communication and collaboration, leadership, consistency of policies and procedures, and sustainability. Evaluators can help clients plan for implementation and sustainability by incorporating the following strategies into planning and delivery discussions.

  • Communication and collaboration. Create a systematic process for collaboration and communication with staff at all levels of the district, as well as with outside partners. This communication should occur regularly and frequently from the inception of the grant to build awareness of grant activities and successes.
  • Leadership. Create structures that support consistency in leadership for grant initiatives, provide clear reporting structures, and build the leadership capacity of existing personnel. Multiple changes in leadership—especially at the beginning of a grant award—were perceived as particularly harmful given the limited time available to demonstrate grant impact.
  • Consistency of policies and procedures. Raise awareness of existing policies and procedures, and create structures that support coherence between these policies and procedures and grant initiatives. This might include involving key implementers and policymakers early in the planning process (i.e., during the grant writing process) to ensure they perceive grant goals as important and attainable.
  • Sustainability. Create structures early in the grant cycle to support programs after grant funding has concluded. In a five-year grant, meaningful discussions about sustainability should occur by Year 3 at the latest.

Rad Resources: These resources may be helpful as you work to support sustainability of initiatives.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PK12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Coming soon to an intervention near you – SUSTAINABILITY!

Andy Rowe here, writing from long experience and physically from our small farm on Salt Spring Island, off the coast of British Columbia, Canada.

Years ago, when I was involved in the theory and practice of rapid social change, many of us argued that gender and race were the main divisions created by those with power to forestall significant social change. When others stated that natural resources were also important, we called them Malthusians. We were wrong. Race and gender are the main mechanisms creating social inequality and hardship and constraining human change and improvement, but the human system exists only in the context of natural systems, which cannot continue to absorb our disregard and still provide us with what we need. We now see significant adjustments to adapt to these fundamental sustainability issues. Evaluation can either contribute to these changes or get left behind.

Many evaluators hold the implicit view that only the human system merits our attention. To a large extent this is because they accept that interventions should be evaluated against their intended outcomes and unintended and indirect effects. This means that evaluation works within the programmatic silos in which most interventions exist. But in this period of significant transformation, change agents, including evaluators, need to get in front of the curve and incorporate connectivity to other elements in the human and natural systems.

Example: within ten years, current models for locating and managing school sites will be unacceptable; sustainability requirements will have shifted expectations and standards. Decisions about school siting and site management will address the costs of building on valuable carbon sequestration sites, remediate the adverse effects of pollutant-carrying runoff from pavement, offset global warming impacts from roofs and from heating and cooling, and incorporate the incremental environmental and health costs of daily commuting. In other words, siloed siting and site management decisions disconnected from environmental, human health, and community effects will not be acceptable. For evaluation to contribute to positive change, it will need to span the boundaries of existing programmatic silos across diverse systems and elements.

Hot Tip: Think about everything that makes our existing program theories happen. What natural resources are required inputs to, and are affected by, an intervention? Example: schools require land and adversely affect water. Check out UN Natural Capital’s site: TEEB – The Economics of Ecosystems and Biodiversity.

Rad Resources: Check out the ISEAL Alliance’s stakeholder engagement process. Think about how many of the evaluation impacts we measure could meet the standards they have developed.

For school sites, start with EPA’s school siting guidance.

Google “Corporation and Sustainability” as in this example from Mars. Compare what this corporation is doing to the interventions you evaluate.

Source: http://www.teebweb.org/

The American Evaluation Association is celebrating Environmental Program Evaluation Week with our colleagues in AEA’s Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
