Hello from NASA Langley Research Center! We represent NASA Innovations in Climate Education (NICE), which provides funding for external institutions to carry out (and evaluate!) climate education projects. We’re Ann Martin (a postdoctoral fellow working on NICE evaluation internally, at the portfolio level), Monica Barnes (NICE project manager), Lin Chambers (project scientist), and Margaret Pippin (deputy project scientist).
The NICE team has the benefit of an embedded, internal evaluator, and today we’ll be sharing some of the lessons we’ve learned from the experience.
Here at NASA, we’re well aware that robust evaluation is a necessity for determining the true impact of a project, and that evaluation can also help shape a project and its strategy. An internal evaluator is somewhat rare within our context. How has NICE benefited from having an internal evaluator – and how could other federal STEM education initiatives benefit?
Lesson Learned: As a team, we’ve found it very helpful to have a go-to person who is focused on evidence and data. Programmatic issues, reporting requirements, and other short-turnaround requests are always on the management team’s radar, which doesn’t leave much time for monitoring and evaluation. Having an internal evaluator has also given NICE the opportunity to build a community among the project-level evaluators within our portfolio. The evaluation of climate change education is a relatively new, and rapidly changing, field, so we see community building as a key part of what an internal evaluator can do.
Hot Tip: An internal evaluator can also be helpful in a more informal way, providing a fairly independent and somewhat on-the-fly view of “how things are going.” While we may not formally evaluate basic programmatic elements like meetings and webinars, our conversations touch on ideas like, “What outcome are we hoping to achieve with this?”
Hot Tip: NASA is full of people who are used to thinking systematically and applying data to answer questions, so evaluation fits right in. Working on a team with an internal evaluator is a great opportunity to learn about evaluation and its relationship to STEM’s ways of knowing. When Ann returned from Evaluation 2013, this year’s AEA meeting in Washington, DC, the team discussed her poster on meta-evaluation, increasing their evaluation literacy. In turn, Ann (who also comes from a physical sciences background) has thought a lot about “translating” evaluation ideas into concepts that scientists and engineers recognize. She has learned how to reconcile evaluation “ideals” with NASA limitations and realities, leading to conversations about evaluation that get to a productive, valuable place more quickly.
Ann Martin is a leader in the newly formed STEM Education and Training TIG. Check out our TIG website for more resources and information.
The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.