AEA365 | A Tip-a-Day by and for Evaluators


Greetings!  I am Tania Rempert, Evaluation Coach at Planning, Implementation, Evaluation Org.  This post is written together with my colleagues Molly Baltman from the McCormick Foundation, Mary Reynolds from Casa Central, and Anne Wells from Children’s Research Triangle.  We would like to share our experience speaking at the Office of Social Innovation White House convening on Outcomes Focused Technical Assistance (OFTA).

The purpose of this convening was to advance an outcomes mindset in government and across the public, private, and philanthropic sectors.  David Wilkinson shared the vision of OFTA: building the capacity of social service providers to use data to inform smarter service delivery and to implement evidence-based practices in local communities.  Wilkinson began the convening by pointing out,

“Government pays for 90% of the funding for social services in this country, but typically pays for outputs and compliance rather than outcomes and impact.  As a result, many social service providers do not have outcomes they are actively pursuing….and less likely to have consistent outcomes useful for comparison with their peers.” 

The White House Office of Social Innovation and Civic Participation would like to change that.  This convening was meant to draw attention to the technical assistance social service agencies need when tasked with measuring, reporting, and using outcomes.

Hot Tip: Principles of OFTA:

  • Identify the most important measurable outcomes
  • Implement evidence-based practices
  • Use data to inform research-based service delivery

We were asked to speak based on our experience with the Unified Outcomes Project.  We shared our experience of increasing grantees’ capacity to report outcome measures and use this evidence for program improvement, while streamlining the number of tools used to collect data across cohort members.  Our model emphasizes communities of practice, evaluation coaching, and collaboration between the foundation and 29 grantees to affect evaluation outcomes across grantee contexts.

Lessons Learned:

  • It takes at least two years to see measurable outcomes and to model the use of these data at the cohort level around shared outcomes.
  • Grantees are experts through lived experience. They use their community voice to determine specific strategies, and because they have the language and experience to take one another to the next level, a learning community develops organically when they are brought together.
  • The beauty of having an evaluation coach visit organizations on-site to provide technical assistance is that each organization has different needs when it comes to data-informed decision making.

We hope that this initial convening will encourage ongoing discussion and development of strategies in OFTA for evaluation practice and government policy making.  Since it is not a thing unless it has an acronym, let all of us in the evaluation community commit to “OFTA often!”


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Cheryl Peters and I am the Evaluation Specialist for Michigan State University Extension, working across all program areas.

Measuring collective impact of agricultural programs in a state with diverse commodities is challenging. Many states have an abundance of natural resources like fresh water sources, minerals, and woodlands. Air, water and soil quality must be sustained while fruit, vegetable, crop, livestock and ornamental industries remain efficient in yields, quality and input costs.

Extension’s outreach and educational programs operate on different scales in each state of the nation: individual efforts, issue-focused work teams, and work groups based on commodity types. Program evaluation efforts contribute to statewide assessment reports demonstrating the value of Extension Agricultural programs, including public value. Having different program scales allows applied researchers to align to the same outcome indicators as program staff.

Hot Tip: Just as Extension education has multiple pieces (e.g., visits, meetings, factsheets, articles, demonstrations), program evaluation has multiple pieces (e.g., individual program evaluation about participant adoption practices, changes in a benchmark documented from a secondary source, and impact assessment from modeling or extrapolating estimates based on data collected from clientele).

Hot Tip: All programs should generate evaluation data related to identified, standardized outcomes. What differs in the evaluation of agriculture programs is the evaluation design, including the sample and the calculation of values. Impact reports may be directed at commodity groups, the legislature, farming groups, and constituents. State Extension agriculture outcomes can use the USDA impact metrics. Additionally, 2014 federal requirements for competitive funds now state that projects must demonstrate impact within a project period. Writing meaningful outcome and impact statements continues to be a focus of the USDA National Institute of Food and Agriculture (NIFA).

Hot Tip: Standardizing indicators into measurable units has made aggregation of statewide outcomes possible. Examples include pounds or tons of an agricultural commodity, dollars, acres, number of farms, and number of animal units. Units are then reported by the practice adopted. Dollar values estimated by growers/farmers are extrapolated from research values or secondary data sources.
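
To illustrate, here is a minimal sketch, in Python, of how standardized indicator records might be aggregated by practice adopted and then extrapolated to dollar estimates using per-unit research values. The practice names, unit types, and dollar figures are hypothetical, not drawn from any Extension dataset.

# Hypothetical sketch: aggregate standardized outcome indicators by practice
# adopted, then extrapolate dollar impact from a per-unit research value.
# All practice names, units, and values below are illustrative only.
from collections import defaultdict

# Each record: (practice adopted, unit type, quantity reported)
records = [
    ("cover cropping", "acres", 120),
    ("cover cropping", "acres", 85),
    ("improved irrigation scheduling", "acres", 40),
    ("rotational grazing", "animal units", 60),
]

# Per-unit dollar values, drawn (hypothetically) from research or secondary sources
research_value_per_unit = {
    ("cover cropping", "acres"): 25.0,                  # $/acre
    ("improved irrigation scheduling", "acres"): 40.0,  # $/acre
    ("rotational grazing", "animal units"): 15.0,       # $/animal unit
}

totals = defaultdict(float)
for practice, unit, quantity in records:
    totals[(practice, unit)] += quantity

for (practice, unit), quantity in totals.items():
    dollars = quantity * research_value_per_unit[(practice, unit)]
    print(f"{practice}: {quantity:g} {unit}, estimated impact ${dollars:,.0f}")

The same structure carries over to a shared spreadsheet: one column per standardized unit, one row per practice, and a lookup table of extrapolation values that cohort members can reuse.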

Hot Tip: Peer learning through panels that demonstrate the scales and types of evaluation with examples has been very successful. There are common issues and evaluation decisions across programming areas. Setting up formulas and spreadsheets for future data collection, and sharing extrapolation values, has been helpful in keeping program evaluation efforts going. Surveying similar audiences for both outcomes and program needs assessment has also been valuable.

Rad Resource: NIFA provides answers to frequently asked questions, such as when to use program logic models, how to report outcomes, and how logic models are part of evaluability assessments.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Holly Lewandowski. I am the owner of Evaluation for Change, Inc., a consulting firm that specializes in program evaluation, grant writing, and research for nonprofits, state agencies, and universities. I worked as an internal evaluator for nonprofits for ten years prior to starting my business four years ago.

There have been some major changes in the nonprofit world as a result of the economic downturn, especially within the last four years. I’ve witnessed nonprofits that were mainstays in the community shut their doors because the major funding source they relied on for years dried up. Funding has become scarcer and much more competitive. Funders are demanding that grantees demonstrate strong outcomes in order to qualify for funding. As a result, many of my clients are placing much greater emphasis on evaluating outcomes and impact, and less on evaluating program implementation, in order to compete. The problem is that you can’t have one without the other: strong programs produce strong outcomes.

Here are some tips and resources I use to encourage my clients to think evaluatively to strengthen their programs and thus produce quality outcomes.

Hot Tips:

  • Take time to think. As an outside evaluator, I am very aware of the stress program staff and leadership are under to keep their nonprofits running. I am also aware of the pressure on nonprofits to produce in order to keep their boards and funders happy. What gets lost, though, is time to think creatively and reflect on what’s going well and what needs to be improved. Therefore, I build time into my work plan to facilitate brainstorming and reflection sessions around program implementation. What we do in those sessions is described in the following tips.
  • Learn by doing. During these sessions, program staff learns how to develop evaluation questions and how to develop logic models.
  • Cultivate a culture of continuous improvement through data sharing. Also at these sessions, process evaluation data is shared and discussed. The discussions are centered on using data to reinforce what staff already knows about programs, celebrate successes, and identify areas for improvement.

Rad Resources:

  • The AEA Public eLibrary has a wealth of presentations and Coffee Break Demonstrations on evaluative thinking and building capacity in nonprofits.
  • If you are new to facilitating adults in learning about evaluation, check out some websites on Adult Learning Theory. About.com is a good place to start.

The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week with our colleagues in the CEA AEA Affiliate. The contributions all this week to aea365 come from our CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



My name is Helen Davis Picher, and I am the Director of Evaluation and Planning at the William Penn Foundation. The William Penn Foundation is a regional foundation dedicated to improving the quality of life in the Greater Philadelphia region through efforts that foster rich cultural expression, strengthen children’s futures, and deepen connections to nature and community.

We consider ourselves a “strategic” grantmaker, meaning we have identified several areas in which we seek to achieve targeted changes using specific strategies or tactics (e.g., advocacy, demonstration projects).

Research shows, though, that while many foundations perceive themselves as strategic, they struggle to articulate, operationalize, and track strategy (Center for Effective Philanthropy, 2007).  In order to walk the walk and not just talk the talk, we developed a suite of tools that help us talk about our strategy and track progress toward our strategic goals.

Hot Tip: Program plans align specific funding strategies or tactics with our program objectives, the grantees who are doing the work, the resources dedicated to the work, and the short-term and long-term outcomes we are targeting.  Below is a short definition of each component; a hypothetical sketch of how these pieces might be tracked follows the list.

  1. Strategy: Tactics or activities we use to achieve targeted change – the “how.”
  2. Grants: By lining up our grantees’ activities to the program objectives and strategies they support, we can clearly see the body of work around a particular strategy and more easily gauge whether a prospective grant is really working toward our outcome goals.
  3. Resources: This allows us to keep track of how many resources are devoted in a given year to the tasks at hand.  This simple form of tracking surfaces any mismatch between resource level and target goal, ensuring that staff continually monitor whether work is adequately resourced for success.
  4. Short-term Outcomes: One-year changes or benchmarks toward longer-term outcomes are set and tracked.
  5. Long-term Outcomes: Longer-term outcomes mark the target accomplishments of several grants working in tandem and allow a realistic look at what we aim to achieve with our funding over the next several years.
  6. Report Card: Foundation staff report on their one-year outcomes at the end of each year.  This helps to ensure that progress is made toward reaching longer-term goals. It also ensures that new targets can be set if mid-course corrections are needed because external circumstances change, invalidating the original goals.
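
As an illustration only (the post describes the components but not the Foundation's actual tooling, and every field name and value below is hypothetical), a program plan could be captured in a simple structured record so that one-year report cards roll up against longer-term outcomes. A minimal Python sketch:

# Hypothetical sketch of a program plan record; all names and values are
# illustrative, not the William Penn Foundation's actual plans or data.
from dataclasses import dataclass, field

@dataclass
class ProgramPlan:
    strategy: str                                             # the "how": tactics used to achieve targeted change
    grants: list = field(default_factory=list)                # grantee activities aligned to the strategy
    resources: float = 0.0                                     # dollars devoted in a given year
    short_term_outcomes: dict = field(default_factory=dict)   # one-year benchmark -> met (True/False)
    long_term_outcomes: list = field(default_factory=list)    # multi-year targets for grants working in tandem

    def report_card(self) -> str:
        """End-of-year roll-up of one-year benchmarks."""
        met = sum(1 for achieved in self.short_term_outcomes.values() if achieved)
        return f"{met}/{len(self.short_term_outcomes)} one-year benchmarks met"

plan = ProgramPlan(
    strategy="Advocacy for expanded early-childhood funding",
    grants=["Grantee A policy analysis", "Grantee B coalition building"],
    resources=750_000,
    short_term_outcomes={"Policy brief published": True, "Coalition of 10 partners formed": False},
    long_term_outcomes=["State budget includes a new early-childhood line item"],
)
print(plan.report_card())  # e.g. "1/2 one-year benchmarks met"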

Hot Tip: The value of the program plan lies in both the product and the process.  Evaluation staff work with program officers to reach a common understanding. Throughout the process, there is a push to clarify objectives and define success through short- and long-term outcomes that are specific and measurable.

Rad Resource: Microsoft Visio makes the creation and update of a program plan easy.

The American Evaluation Association is celebrating evaluation in Not For Profits & Foundations (NPF) week with our colleagues in the NPF Topical Interest Group. The contributions all this week to AEA365 will come from our NPF members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting NPF resources.

