EEE Week: Cheryl Peters on Measuring Collective Impact

My name is Cheryl Peters and I am the Evaluation Specialist for Michigan State University Extension, working across all program areas.

Measuring the collective impact of agricultural programs in a state with diverse commodities is challenging. Michigan, like many states, has an abundance of natural resources such as fresh water, minerals, and woodlands. Air, water, and soil quality must be sustained while the fruit, vegetable, crop, livestock, and ornamental industries remain efficient in yields, quality, and input costs.

Extension’s outreach and educational programs operate at different scales in each state of the nation: individual efforts, issue-focused work teams, and work groups organized by commodity type. Program evaluation efforts feed statewide assessment reports that demonstrate the value of Extension agricultural programs, including their public value. Working at these different program scales allows applied researchers to align with the same outcome indicators as program staff.

Hot Tip: Just as Extension education has multiple pieces (e.g., visits, meetings, fact sheets, articles, demonstrations), program evaluation has multiple pieces (e.g., individual program evaluations of participants’ adoption of practices, changes in a benchmark documented from a secondary source, and impact assessments based on modeling or extrapolating estimates from data collected from clientele).

Hot Tip: All programs should generate evaluation data tied to identified, standardized outcomes. What differs in the evaluation of agriculture programs is the evaluation design, including the sample and the calculation of values. Impact reports may be directed at commodity groups, legislators, farming groups, and constituents. State Extension agriculture outcomes can use the USDA impact metrics. Additionally, 2014 federal requirements for competitive funds now state that projects must demonstrate impact within the project period. Writing meaningful outcome and impact statements continues to be a focus of the USDA National Institute of Food and Agriculture (NIFA).

Hot Tip: Standardizing indicators into measurable units has made aggregation of statewide outcomes possible. Examples include pounds or tons of an agricultural commodity, dollars, acres, number of farms, and number of animal units. Units are then reported by the practice adopted. Dollar values estimated by growers/farmers are extrapolated from research values or secondary data sources.
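To make that aggregation step concrete, here is a minimal sketch in Python of how standardized units might be summed by practice and converted to estimated dollars using shared extrapolation values. The practice names, acreages, and per-acre dollar figures below are hypothetical placeholders, not MSU Extension's actual values; in practice the extrapolation values would come from published research or secondary data sources, as the tip above describes.

```python
# A minimal sketch of aggregating standardized outcome indicators across
# programs. All practices, acreages, and per-unit dollar values here are
# hypothetical placeholders, not MSU Extension's actual figures.

from collections import defaultdict

# Hypothetical per-unit extrapolation values (estimated dollars per acre
# when a practice is adopted), standing in for research-based values.
EXTRAPOLATION_VALUES = {
    "reduced_tillage": 25.00,        # $ per acre (assumed)
    "integrated_pest_mgmt": 40.00,   # $ per acre (assumed)
}

# Each record is one program's evaluation result, already standardized
# into a measurable unit (acres, in this example).
reports = [
    {"practice": "reduced_tillage", "acres": 1200},
    {"practice": "reduced_tillage", "acres": 800},
    {"practice": "integrated_pest_mgmt", "acres": 450},
]

def aggregate(reports):
    """Sum standardized units by practice adopted, then extrapolate dollars."""
    acres_by_practice = defaultdict(float)
    for record in reports:
        acres_by_practice[record["practice"]] += record["acres"]
    return {
        practice: {
            "acres": acres,
            "estimated_dollars": acres * EXTRAPOLATION_VALUES[practice],
        }
        for practice, acres in acres_by_practice.items()
    }

if __name__ == "__main__":
    for practice, totals in aggregate(reports).items():
        print(f"{practice}: {totals['acres']:,.0f} acres, "
              f"~${totals['estimated_dollars']:,.2f} estimated impact")
```

The same logic reproduces easily in a spreadsheet, one column of standardized units, a lookup table of shared extrapolation values, and a multiplication, which is why setting up formulas once and sharing them across work teams pays off.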

Hot Tip: Peer learning through panels that demonstrate scales and types of evaluation with examples has been very successful. Common issues and evaluation decisions cut across programming areas. Setting up formulas and spreadsheets for future data collection, and sharing extrapolation values, has helped keep program evaluation efforts going. Surveying similar audiences on both outcomes and program needs has also been valuable.

Rad Resource: NIFA provides answers to frequently asked questions, such as when to use program logic models, how to report outcomes, and how logic models fit into evaluability assessments.

The American Evaluation Association is celebrating Extension Education Evaluation (EEE) TIG Week with our colleagues in the EEE Topical Interest Group. The contributions all this week to aea365 come from our EEE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
