AEA365 | A Tip-a-Day by and for Evaluators


Hi, I’m Heather Dantzker, Past Chair of AEA’s Sustainability Working Group (Chair, 2015-2017). I currently serve as Program Co-Chair of AEA’s Environmental Program Evaluation TIG.

AEA’s environmental sustainability working group has been implementing recommendations from the Green Audit that AEA commissioned in 2015. The Green Audit provided a road map for moving forward to reduce AEA’s environmental impact and leverage its influence as an organization. The Green Audit was also conducted, in part, as a response to AEA’s Guiding Principles, which emphasize that evaluators have an obligation to honor the public good in our work.

AEA’s sustainability working group has pursued dual goals: how AEA can practically address sustainability challenges as an organization, and how AEA’s members can act as catalysts for sustainable action in their own organizations and in their work as evaluators.

The AEA sustainability working group co-sponsored an AEA Coffee Break last fall with the Environmental Program Evaluation TIG to discuss these goals and efforts. One outcome was giving AEA annual meeting participants the opportunity to purchase carbon offsets for travel to the meeting via Native Energy.

Results of the Green Audit showed that annual meeting travel is the single largest contributor to AEA’s carbon footprint. A post-meeting survey conducted by AEA found that while a majority of participants did not purchase or plan to purchase carbon offsets, over one-third of respondents (38%) either had purchased offsets or said they would consider purchasing them for future travel. This is a useful baseline: as AEA members have more opportunities to learn about sustainable actions, there is real room to reduce our collective carbon footprint.

Hot Tip:  Learn more about carbon offsetting at https://nativeenergy.com/.

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

My name is Matthew Ballew. I am finishing my doctoral degree at Claremont Graduate University in Social Psychology, and I intern with Kara Crohn of EMI Consulting, an evaluation and consulting firm that focuses on energy efficiency and clean energy programs and policies. Understanding how a program connects with its participants on an emotional level is important to assessing program success, including outcomes such as well-being and even behavior change. There is a long history of theorizing on the role of emotions in guiding people’s actions and, increasingly, recent research documents the powerful effects of different emotions on behavior. In this post, we offer a definition of “emotions,” three ways to understand their impact on energy-saving behaviors, and hot tips for evaluations of behavioral programs and communications.

In comparison to general affect or mood (i.e., overall good or bad feelings), emotions are specific feeling states that are clearly recognized and consciously linked to an object or event; they motivate people to act in certain ways. In the psychological literature, three ways to understand emotions include: (1) whether they are positive or negative, such as feeling happy for taking action or upset for not taking action; (2) whether they are considered moral emotions like pride or guilt; and (3) by their degree of behavioral specificity, e.g., emotions can be tied to specific actions like feeling positively about saving energy or to more general things like worrying about the environment.

emotions diagram
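To make the three categorizations above concrete, here is a minimal sketch of how an evaluator might code emotion items along those dimensions (valence, moral vs. non-moral, and behavioral specificity). The data structure and example entries are our own illustration, not a validated psychological instrument.

```python
from dataclasses import dataclass

# Hypothetical coding scheme for the three dimensions described above.
# Labels and entries are illustrative only.
@dataclass(frozen=True)
class Emotion:
    name: str
    valence: str        # "positive" or "negative"
    moral: bool         # moral emotions include pride and guilt
    specificity: str    # "action-specific" or "general"

emotions = [
    Emotion("pride in saving energy", "positive", True, "action-specific"),
    Emotion("guilt over wasting energy", "negative", True, "action-specific"),
    Emotion("worry about the environment", "negative", False, "general"),
]

# Group by valence to see which feelings a message might target.
positive = [e.name for e in emotions if e.valence == "positive"]
print(positive)  # ['pride in saving energy']
```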

 

Lesson Learned: In my dissertation, I assessed the extent to which these different categories of emotions influenced household energy conservation and broader environmental engagement, such as intentions to invest in energy-efficient technologies. Consistent with previous research, both positive emotions about saving energy and negative emotions about not saving energy were strong drivers of energy conservation behavior and other pro-environmental actions. Moreover, persuasive communications focusing on positive emotions (e.g., “Save Energy. Feel Proud.”) had a stronger effect on intentions to save energy and invest in energy efficiency than those focusing on negative emotions (e.g., “Save Energy. Don’t Be Guilty.”).

Hot Tips: Emotions are a hot topic in behavioral interventions. Incorporating strategies that leverage emotions—as well as measures of emotions—specifically tied to action (and inaction) strengthens the evaluation of behavioral programs and communications. When evaluating behavioral programs and communications, consider:

  1. Measuring emotions related to performing/not performing actions to indicate behavior change (e.g., expecting to feel positive emotions for volunteering)
  2. Including strategies to connect positive behaviors (e.g., purchasing efficient vehicles) to positive emotions like pride and, conversely, negative behaviors (e.g., wasting electricity) to negative emotions like guilt to foster behavior change
  3. Focusing strategies on positive emotions related to taking action; they seem to be especially promotive of behavioral persistence, leading to a virtuous and recursive cycle of positivity (i.e., “feeling good by doing the right thing”)
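One low-tech way to operationalize Tip 1 is to score action-specific emotion items separately for acting and for not acting. A minimal sketch, with invented item names and made-up 5-point responses:

```python
# Hypothetical survey items measuring anticipated emotions tied to
# performing vs. not performing a target behavior. Wording and data
# are invented for illustration.
responses = {
    "proud_if_I_save_energy": [5, 4, 4, 5],      # positive, action
    "pleased_if_I_save_energy": [4, 4, 3, 5],    # positive, action
    "guilty_if_I_waste_energy": [3, 2, 4, 3],    # negative, inaction
    "ashamed_if_I_waste_energy": [2, 2, 3, 3],   # negative, inaction
}

def scale_mean(items):
    """Average the per-respondent means across a set of items."""
    per_respondent = [sum(vals) / len(vals) for vals in zip(*items)]
    return sum(per_respondent) / len(per_respondent)

pos_action = scale_mean([responses["proud_if_I_save_energy"],
                         responses["pleased_if_I_save_energy"]])
neg_inaction = scale_mean([responses["guilty_if_I_waste_energy"],
                           responses["ashamed_if_I_waste_energy"]])
print(round(pos_action, 2), round(neg_inaction, 2))  # 4.25 2.75
```

Comparing the two scale means shows whether a sample anticipates stronger emotions for taking action or for failing to act, which can inform which framing (Tip 3) a program emphasizes.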

Rad Resource:

Rare focuses on collective pride to promote sustainability.

 

 

Continue the conversation with us! Matthew mballew@emiconsulting.com and Kara kcrohn@emiconsulting.com


 

Hi, I’m Rupu Gupta, Co-Chair of AEA’s Environmental Program Evaluation Topical Interest Group and Researcher at New Knowledge Organization, a non-profit research and evaluation think tank. As a conservation psychologist, my research interests lie in understanding how people perceive different forms of nature and how they relate to caring for it. Most recently, my evaluation work on community resilience projects has advanced these interests and highlighted the unique value our field brings to these efforts.

The realities of climate change and the potential for adverse environmental events call for new ways of thinking about projects intended to create mutually beneficial relationships between humans and nature. Resilience building is now a strong focus in this area, with the aim of creating communities that are able to prepare for, adapt to, and recover from extreme events. Moreover, engaging local residents, who have direct knowledge and experience of their communities, as active change agents needs to be an integral part of these efforts. For example, Second Nature, creator of the Climate Leadership Network in higher education, is piloting a collaborative project between community and university partners to develop shared resilience plans in three cities. Similarly, the New England Aquarium in Boston is closely involved in creating collaborations with local groups to build resilient communities.

These projects place a strong emphasis on the human experience of a changing environment. This is a significant departure from efforts that rely heavily on developing physical, natural, and structural infrastructure. As these emerging trends bring to the forefront human capacity and potential to adapt to a rapidly changing world, project evaluation has a lot to offer. The following insights offer some exciting ways to think about and redefine our role as evaluators:

Lessons Learned:

  • Group-level processes are important indicators of program impacts. Creating resilience plans collaboratively among a diverse group of stakeholders depends on the use of inclusive practices and developing mutual respect and appreciation of multiple perspectives.
  • Patience is key when anticipating outcomes within the project’s stipulated timeframe. A corollary of the previous lesson is that relationship building takes time, often longer than expected. It is prudent to set realistic expectations about measurable outcomes for partnership development and to refine them as the project proceeds.
  • Resilience holds distinct meanings for different groups within a project context. Evaluators need to create opportunities to understand the various ways in which stakeholders think about the concept in relation to their community role. This will enable a culturally responsive resilience effort, meaningful for all stakeholders.
  • Participatory action research is useful in engaging community members. This approach can facilitate agency and self-efficacy in the process of co-creating plans for a resilient community. This is especially empowering for marginalized groups who are often overlooked in decision-making.

Rad Resource:

Check out an evaluation report on the progress of a community-focused resilience effort at its midpoint.


Hello, I’m Jeff Danter, Senior Vice President at The Trust for Public Land. Peter Drucker famously said, “Culture eats strategy for breakfast.” This is especially true when considering the utility of evaluation and measurement for management decision-making. For evaluation and measurement data to be effective in influencing strategy, an organization must possess a management culture that values these types of data; that is, the culture must exhibit a certain readiness to receive and act on the results of evaluation strategies. Ultimately, leaders must seek to align culture, goals, strategy, and evaluation to effectively achieve organizational mission.

My organization has been building parks and conserving land for people for more than 40 years. Recently, we have sharpened the focus of our work in cities, with the goal of putting a quality park within a 10-minute walk of everyone living in urban areas (87% of all Americans). This goal represents a dramatic change in how we think about strategy. We have always known that quality parks deliver many benefits to communities and that the process of building parks can itself shape civic culture. Historically, then, mission success was measured as the completion of quality parks – no additional information was necessary.

Use of quantitative measurement tools was not initially embraced as core to the work of the organization. To fully utilize the power of these new tools, leadership needed to make changes that would align the culture with both the 10-minute walk goal and the quantitative techniques that measure goal attainment. This cultural change has been a critical component of implementing the new goal and strategies. Ultimately, it is the alignment of culture, goals, strategy, and data that leads to high mission performance. Attending to organizational culture, then, is a critical and often under-valued component of successful evaluation and measurement. Evaluators and senior leaders must collaborate on a shared vision of a culture that embraces quantification of mission outcomes.

Lessons Learned:

Cultural change was addressed by folding data from these tools into the daily workings of the organization. Project approvals, funding proposals, board presentations, and internal communications all required the presentation of these data. While this seems a simple change, it represented a significant departure from previous norms. Creating the new culture also created demand for new and better tools.

Rad Resources:

To support the new 10-minute walk goal, The Trust for Public Land has developed several sophisticated measurement tools that guide all phases of our work: choosing which communities to work in, selecting sites, engaging communities, designing parks, and assessing community benefits. These tools include cutting-edge mapping tools that incorporate social, environmental, and health data, as well as more traditional measurement and evaluation approaches. Cross-functional teams created these tools as new organizational strategies were developed.


Greetings from Caroline Stem, Coordinator of the Conservation Measures Partnership (CMP). CMP is a group of leading conservation organizations from around the world that recognizes the importance of a common language and process to facilitate learning within and across conservation efforts. CMP maintains a set of standards for designing, implementing, monitoring, and adapting conservation programs (Open Standards for the Practice of Conservation).

CMP Open Standards for the Practice of Conservation

This blog describes two Open Standards tools – conceptual models and results chains. In a world of urgent and changing threats, scarce resources, and incomplete information, these tools help teams focus efforts, create shared understanding, and provide a framework for learning.

In Step 1 of the Open Standards, teams (often with partners) develop a conceptual model that depicts the context within which they are working. The conceptual model shows, in a graphically succinct fashion, what a team aims to conserve and the key forces affecting those priority conservation “targets.” Conceptual models help teams discuss and agree upon their context, the priority threats they face, the drivers behind those threats, and the appropriate actions to ultimately improve conservation.

Teams can then use another diagrammatic tool, results chains, to illustrate the theory of change (using if-then logic) behind each priority action. Results chains help teams think critically about potential actions, their likely success, and the enabling conditions needed to ensure their success. Results chains also provide a clear framework for setting programmatic objectives and indicators and evaluating effectiveness.
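As an illustration of that if-then structure, a results chain can be represented as an ordered sequence of expected results, each paired with an indicator for evaluating whether the link holds. The chain below is invented for illustration and is not taken from an actual Open Standards plan (teams typically build these in tools like Miradi):

```python
# A results chain as a simple ordered structure: each link is an expected
# result plus the indicator used to test that link's if-then logic.
# Content is hypothetical, loosely themed on wildlife conservation.
chain = [
    ("Action: community outreach on snaring", "number of villages reached"),
    ("IF outreach occurs THEN awareness of wildlife mortality rises",
     "% of residents aware of snare impacts"),
    ("IF awareness rises THEN snaring near the park declines",
     "snares found per patrol-km"),
    ("THEN wildlife mortality from snares declines",
     "snare-caused deaths per year"),
]

for step, indicator in chain:
    print(f"{step}  [indicator: {indicator}]")
```

Writing the chain out this way makes explicit which assumption each monitoring indicator is meant to test, which is exactly how results chains support setting objectives and evaluating effectiveness.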

A real-world example: Painted Dog Conservation (PDC), based near Hwange National Park in Zimbabwe, is dedicated to reducing threats to African wild dogs. PDC’s programs are numerous and complex, and the team was looking for a structured approach to responding to key threats and evaluating their effectiveness. In 2011, PDC developed a new strategic plan using the Open Standards, conceptual models, results chains, and Miradi software.

PDC actively uses this plan, reviewing its results chains during quarterly and annual meetings to share and document data and discuss progress on priority actions, objectives, and goals. In December 2017, PDC held a workshop to assess which actions have worked, which have not, and why. The team is using the review results to evaluate the effectiveness of PDC actions since 2011 and to adapt PDC’s action and monitoring plans, laying the foundation for its 2018-2022 strategic plan.

PDC core team discussing results during the strategic plan review workshop (photo credit: Arlyne Johnson)

PDC progress data on its results chain (photo credit: Arlyne Johnson)

Rad Resources:

 


Mary Sutter

Hello! I am Mary Sutter, a Partner at Grounded Research and Consulting, LLC and have been evaluating the somewhat niche area of energy efficiency programs for my entire professional career (27 years). Recently, I was privileged to lead an extraordinary group of colleagues in our effort to codify an introductory certificate for evaluators in our field of energy efficiency evaluation.

Today, I will share a bit about why we chose to move towards a certificate and what we have accomplished to date.

Our niche evaluation area is not new – it has been around for approximately 40 years, as long as there have been energy efficiency programs. Four decades ago, social scientists and engineers adapted then-current program evaluation practices to fit energy efficiency programs and assess their impacts, and, like all of evaluation, the field continues to evolve.

Energy efficiency is now an $8-billion-a-year industry, and it has a unique requirement of relying mostly on independent third-party evaluators to assess impacts. Evaluators are in high demand, with many coming from humanities or engineering backgrounds and learning about evaluation on the job. Because of this, many new practitioners are unaware of the deep and robust discussions around program evaluation. We are beginning to overcome that hurdle through a certificate on the Foundations of Impact Evaluation (for energy efficiency evaluators).

Energy Efficiency Evaluator Certification

https://www.grounded-research.com/certification

Through a seven-month process, we met with our steering committee to derive the learning outcomes (rooted in standards and competencies) for this certificate.

Rad Resource:

If you want to see several good documents specific to the evaluation of energy efficiency programs, led by Mike Li at the Department of Energy (DOE), go to the SEE Action website. They provide knowledge and structure to energy efficiency programs and their evaluation.

Go here to read through the basics of impact evaluation for energy efficiency evaluators! You can see how we pull in the multiple dimensions of evaluation as put forward by AEA and tailor them to our specific energy efficiency industry.

It is a great way to see how others in the evaluation area are working to create solid and competent evaluators!


I’m Marti Frank, a researcher and evaluator based in Portland, Oregon. Over the last three years I’ve worked in the energy efficiency and social justice worlds, and it’s given me the opportunity to see how much these fields have to teach one another.

Working with environmental programs – and energy efficiency in particular – has taught me two lessons that can help evaluators do a better job of documenting these programs’ impacts.

Lessons Learned:

1) A program designed to address an environmental goal – for example, reducing energy use or cleaning up pollution – will almost always have other, more far-reaching impacts. As evaluators, we need to be open to these in order to capture the full range of a program’s benefits.

Example: A weatherization workshop run by Portland non-profit Community Energy Project (where I am on the Board) teaches people how to make simple, inexpensive changes to their homes to reduce drafts and air leaks. While the program’s goal is to reduce energy use, participants report many other benefits: more disposable income, reduced need for public assistance, less worry about paying bills, and more time to spend with family.

2) Not all people will be equally impacted by an environmental program, or even impacted in the same way. Further, there may be systematic differences in how, and how much, people are impacted.

Example #1: Energy efficiency programs assign a single value to energy savings, even though the same quantity of savings will mean very different things to different households, depending in large part on their energy burden (the percentage of income they spend on energy).

Example #2: A California energy efficiency program provided rebates on efficient household appliances, like refrigerators. Although the rebates were available to everyone, the households who redeemed them (and thus benefited from the program) were disproportionately wealthy and college-educated, relative to all Californians.
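Energy burden makes Example #1 concrete: identical dollar savings translate into very different relief depending on income. A quick sketch with invented household figures:

```python
# Energy burden = share of household income spent on energy. The same
# dollar savings matters far more to a high-burden household.
# All figures below are hypothetical.
households = [
    {"name": "A", "income": 25_000, "energy_cost": 2_500},
    {"name": "B", "income": 100_000, "energy_cost": 2_500},
]
savings = 500  # identical program savings, in dollars

for h in households:
    burden_before = h["energy_cost"] / h["income"]
    burden_after = (h["energy_cost"] - savings) / h["income"]
    print(h["name"], f"burden {burden_before:.1%} -> {burden_after:.1%}")
```

Household A’s burden drops by two full percentage points (10% to 8%), while B’s drops by half a point, even though the program books the same savings for both.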

Rad Resources:

I’ve found three evaluation approaches to be helpful in identifying unintended impacts of environmental programs.

Outcome harvesting. This evaluation practice encourages us to look for all program outcomes, not just those that were intended. Ricardo Wilson-Grau, who developed it, hosts this site with materials to get you started.

Intersectionality. This conceptual approach originated in feminist theory and reminds us to think about how differing clusters of demographic characteristics influence how we experience the world and perceive benefits of social programs.

Open-ended qualitative interviews. It’s hard to imagine unearthing unexpected outcomes using closed-ended questions. I always enjoy what I learn from asking open-ended questions, giving people plenty of time to respond, and even staying quiet a little too long. And, I’ve yet to find an interviewee who doesn’t come up with another interesting point when asked, “Anything else?”


Kara Crohn and Matt Galport here – we’re consultants with EMI Consulting, an evaluation and consulting firm based in Seattle, Washington that focuses on energy efficiency and renewable energy programs and policies. More than ever, evaluators must consider how their clients’ programs impact the well-being of the communities and environments in which they are embedded. It is also important for evaluators to consider how their clients’ program goals relate to state, national, or global sustainability goals. In this post, we offer five types of systems-oriented sustainability metrics that evaluators can use to connect clients’ program contributions to broader environmental, economic, health, and social metrics of well-being.

But first, what do we mean by “sustainability”?

In this post, we’re not talking about the longevity of the program, but rather the extent to which a program’s outcomes, intended or otherwise, contribute to or detract from the future well-being of its stakeholders. We are also using an expanded definition of “stakeholders” that includes communities and environmental resources affected by the program.

Hot Tip:

Consider incorporating these five types of sustainability metrics into your next evaluation:

#1: Public health: The extent to which a program contributes to or detracts from the health of program and community stakeholders

#2: Environment and energy: The extent to which a program implements environmental and energy conservation policies that support resource conservation

#3: Community cohesion: The extent to which a program promotes or detracts from the vibrancy and trust of the communities in which it is embedded

#4: Equity: The extent to which a program contributes to or detracts from fair and just distribution of resources

#5: Policy and governance: The extent to which a program’s policies support civil society and democratic institutions and protect the disadvantaged

So, what would this look like in practice?

Here’s an example of how to connect program-specific metrics for a small, local after-school tutoring program to the broader set of social goals.
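For instance, a hypothetical mapping for the tutoring example might pair each of the five categories above with program-level metrics (all metric names below are invented for illustration):

```python
# Hypothetical mapping from program-specific metrics of a small
# after-school tutoring program to the five systems-oriented
# sustainability categories described above.
sustainability_map = {
    "Public health": ["student stress levels", "after-school supervision hours"],
    "Environment and energy": ["shared-ride participation to sessions"],
    "Community cohesion": ["volunteer tutors recruited locally"],
    "Equity": ["share of low-income students served"],
    "Policy and governance": ["partnerships with the school district"],
}

for category, metrics in sustainability_map.items():
    print(f"{category}: {', '.join(metrics)}")
```

Even a rough mapping like this helps surface which categories a program already measures and which are blind spots.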

Rad Resources:

Resources for municipal and global sustainability metrics:

Municipal: STAR Rating system for U.S. cities

Global: United Nation’s Sustainable Development Goals

Continue the conversation with us! Kara kcrohn@emiconsulting.com and Matt mgalport@emiconsulting.com.


My name is Allison Van, and I am currently an evaluator with the Clinical and Translational Science Awards (CTSA) program at the University of North Carolina at Chapel Hill and the owner of Allison Van Consulting. Previously I managed The Pasture Project for Winrock International, an effort to build a movement among Midwest farmers to reintegrate livestock rotation for both greater profit and environmental benefit. The project benefitted from funders who were willing to take chances with us, allowing for a budget where resources could be shifted to account for new information or opportunities. Our strategies were highly diverse – demonstration sites on farms, supporting a collaboration of farmer educators, training dedicated farmers in public speaking – yet all were directed at influencing the decision-making of individual farmers. Some strategies were about direct influence, while others focused on building capacity; in both cases, results wouldn’t be seen for years and were highly dependent on external circumstances.

As both the program manager and the default evaluator, my goal was to test strategies relatively quickly, rigorously, and cheaply – then modify, end, or expand them within 6-18 months. I needed an approach the team could use to compare the development of different strategies so money could be funneled where it was most likely to make a difference. Understanding that the core challenge was one of budget allocation amid uncertainty and long time horizons was critical to selecting the right evaluation approach.

Rad Resources: The combination of Michael Quinn Patton’s Developmental Evaluation and E. Jane Davidson’s Real Evaluation were my constant guides to developing an evaluation approach and making decisions in the context of extreme uncertainty and long time horizons.

Hot Tip: There are profound trade-offs and opportunity costs in social change, making value for money a critical measure of program effectiveness. How programs invest their resources can be the most fundamental determinant of success. A bootstrap method combining cost-effectiveness analysis, the logic model for each strategy, and rubrics of early-stage indicators of behavior change allowed us to thoughtfully consider how to make and shift investments.
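The comparison at the heart of that bootstrap method can be sketched as cost per rubric point of early behavior change. Strategy names, costs, and scores below are invented for illustration, not actual Pasture Project data:

```python
# Compare strategies on cost per rubric point of early-stage
# behavior-change indicators. All figures are hypothetical.
strategies = {
    "demonstration sites": {"cost": 120_000, "rubric_score": 40},
    "farmer-educator network": {"cost": 60_000, "rubric_score": 30},
    "speaker training": {"cost": 20_000, "rubric_score": 8},
}

# Rank strategies from cheapest to most expensive per rubric point.
ranked = sorted(
    strategies.items(),
    key=lambda kv: kv[1]["cost"] / kv[1]["rubric_score"],
)
for name, s in ranked:
    print(f"{name}: ${s['cost'] / s['rubric_score']:,.0f} per rubric point")
```

A ranking like this does not decide funding by itself, but it gives the team a common yardstick for discussing where shifting dollars is most likely to make a difference.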


Hello, I am Marcie Weinandt, and I have been working with Minnesota’s rural and agricultural communities my entire career, as an elected official, program manager, and policy developer. My state, “The Land of 10,000 Lakes,” has had to face a hard truth: water quality in Minnesota is being threatened by agricultural field runoff. I currently serve as operations coordinator of the Minnesota Agricultural Water Quality Certification Program (MAWQCP), designed to bridge the urban/rural divide and protect water quality by providing the regulatory certainty farmers need and the assurance the public demands.

MAWQCP has pioneered a new model of conservation delivery that works on a field-by-field, whole-farm basis to identify and mitigate agricultural risks to water quality. Once a farmer has mitigated their farm’s risks to water quality, the farmer is eligible to become certified and sign a 10-year contract with the State indicating that the certified farmer will be in compliance with any new state water laws or rules. Through the contract, farmers receive the regulatory certainty they need to make long-term decisions, and the general public is assured that farmers are managing their operations to protect water.

Central to the program’s success is collaboration among Minnesota’s state agencies. The Minnesota Department of Agriculture, the Department of Natural Resources, the Pollution Control Agency, and the Board of Water and Soil Resources all support the program, uphold the contract’s provision of regulatory certainty, and are implementing additional benefits for MAWQCP-certified farmers within their respective agencies.

Because this intergovernmental program has several partners, funding streams, and constituents, we realized early on that it did not fit neatly into any single evaluation approach. Multiple evaluation methods were developed at inception to triangulate expected project outcomes. Formative Knowledge, Attitude, and Practice (KAP) surveys were used during the pilot phase to inform program direction and to set a baseline. Later, summative KAP surveys yielded a second database against which behavioral changes in specific watersheds could be measured over time. In addition, advisory committee members were interviewed, and a post-certification farmer survey was conducted. MAWQCP also gathers information on three other levels: environmental, participatory, and political.

Lessons Learned:

  • Farmers have a very high concern for water quality and especially for reducing soil erosion.
  • They are also concerned about public perception of agriculture.
  • The KAP study revealed that technical assistance from a trusted source mattered, and that financial assistance was appreciated but not necessary, for adopting and maintaining an agricultural conservation or management practice.

Rad Resources:

MAWQCP: Knowledge, Attitudes and Practices (KAP) Study Final Report, June 20, 2016

This KAP Study was conducted to better inform the implementation process of the MAWQCP.

