Hi, we are Sophia Mansori, Tracy McMahon, Evangeline Ambat, and Leslie Goodyear from Education Development Center. We received a grant from the National Science Foundation (NSF) to do exploratory and foundational work experimenting with a community of practice as a way to engage with and build the capacity of NSF STEM evaluators. The primary goal of this project was to increase the capacity of evaluators to produce high-quality, conceptually sound, methodologically appropriate evaluations of NSF programs and projects, specifically in the area of STEM education and outreach. In this work, we hoped to connect NSF STEM evaluators with each other and with the wealth of resources, best practices, and lessons learned across the STEM education and outreach evaluation landscape.
The project was guided by Wenger et al.'s (2011) five-cycle conceptual framework of value creation in communities of practice and networks. At the core of the project was a group of about 20 stakeholders: NSF project and program evaluators, leaders of NSF-funded resource centers, and evaluators with expertise in systems-focused evaluation, culturally responsive evaluation, and federal program evaluation. Over the course of the project, we conducted a landscape study and a survey of NSF project and program evaluators, and held three Evaluation Community stakeholder meetings to discuss needs and approaches.
From our project evaluation, conducted by Alexis Kaminsky of Kaminsky Consulting, we learned that stakeholders found immediate and potential value in participating in the project, but the diversity within the group, in perspectives on evaluation, backgrounds, and experience conducting STEM evaluations, made it difficult to come up with clear action steps. Members of the group appreciated the opportunity to connect with and learn from each other, and concluded that building evaluator capacity may not be the primary lever for improving evaluation quality. By the end of the project, we identified three key issues that affected our ability to build a community of practice capable of improving evaluation quality:
- Evaluation is part of a system, heavily shaped by funder, principal investigator, and program needs, interests, and capacity.
- There is a lack of alignment between available resources and evaluator needs, with the majority of resources designed for novice evaluators.
- We need to look beyond building capacity of evaluators to building capacity of evaluation stakeholders and users.
As challenging as it may be with limited time and resources, it benefits evaluators to invest in building the capacity of project PIs and program staff with respect to evaluation. Developing their understanding of the what, why, and how of evaluation will enable them to better support the evaluation process and, more importantly, increase the value and use of evaluation results.
There are resources aimed at helping programs and principal investigators work with evaluators. These are helpful not just to share with evaluation stakeholders, but to use proactively in building their capacity.
Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects by CAISE is a great resource to give your projects as you begin to work on an evaluation.
Getting Started with Your Evaluation Toolkit by the NSF ATE evaluation resource center EvaluATE is a great resource for early evaluation planning and management.
The American Evaluation Association is hosting STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.