My name is Cassie Bowman. As the coordinator of NASA’s Mars student intern program, I’ve looked for ways to continuously improve the experience for the student, teacher, and scientist participants. My tip is that empowerment evaluation, used in a modified format, can be an excellent way to improve a program whose membership constantly changes.
Though empowerment evaluation is most commonly used to build capacity and foster improvement among stakeholders and staff in a group or organization, NASA’s Mars Public Engagement Program has found it can be used very successfully to evaluate and improve a program with changing group membership. We have used empowerment evaluation in our Mars student intern program with each year’s group of students and teachers—the first year under the direct guidance of Dr. David Fetterman and subsequent years on our own (I guess that’s the capacity-building part!).
The standard model of empowerment evaluation involves three main steps (Fetterman, 2001):
- Step 1: “Developing a mission, vision, or unifying purpose”
- Step 2: “Taking stock or determining where the program stands, including strengths and weaknesses”
- Step 3: “Planning for the future by establishing goals and helping participants determine their own strategies to accomplish program goals and objectives.”
Though these steps sound highly conceptual, they are carried out in very concrete ways.

For Step 1, a group meeting face-to-face might draft (or edit) a mission/vision/unifying purpose together in real time. Because our student/teacher groups are distributed across the country, we instead have people submit their ideas electronically and share drafts back and forth before coming to consensus via teleconference.

For Step 2, the group first brainstorms a long list of program aspects/elements. Those meeting in person each receive 10 stickers and “vote” by placing stickers next to the aspects of the program they value most. Once this voting whittles the list down to a manageable number, participants rate each remaining aspect (usually on a scale of 1 to 10) and then discuss the ratings. In our case, online voting or survey software handles both the voting and the rating, and the discussions are held via teleconference.

Finally, as with the previous steps, Step 3, planning for the future, can be conducted in person or via distance technologies.
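To make the Step 2 mechanics concrete, here is a minimal sketch of the “taking stock” arithmetic: tally the sticker votes, keep the most-valued aspects, and average the 1–10 ratings that seed the discussion. All aspect names, vote counts, and ratings below are hypothetical illustrations, not data from our program.

```python
from statistics import mean

# Hypothetical brainstormed program aspects and the total sticker
# "votes" each received (names and numbers are illustrative only).
votes = {
    "mentor training": 9,
    "weekly teleconferences": 7,
    "data-analysis tasks": 6,
    "career exploration": 3,
    "field trips": 1,
}

# Whittle the long list down to the most-valued aspects.
TOP_N = 3
shortlist = sorted(votes, key=votes.get, reverse=True)[:TOP_N]

# Participants then rate each shortlisted aspect on a 1-10 scale
# (one list of ratings per aspect; again, invented for illustration).
ratings = {
    "mentor training": [8, 9, 6, 7],
    "weekly teleconferences": [5, 6, 7, 6],
    "data-analysis tasks": [9, 8, 8, 10],
}

# Average ratings seed the group discussion: a low average on a
# highly voted aspect flags something the group values but feels
# is going poorly.
for aspect in shortlist:
    print(f"{aspect}: {mean(ratings[aspect]):.1f}")
```

In practice, of course, the “computation” happens with stickers on a wall or in online survey software; the point of the sketch is only that the ratings, not the raw votes, are what the group discusses.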
We have found empowerment evaluation to be a very flexible tool: it allows us to “take stock” partway through the program and make some changes (as feasible) during the second half. We then conduct a full empowerment evaluation at the end of the program, with participants drawing on their experience to plan for a “future” in which they will not take part but that will benefit the next group of students and teachers. Over the years we have implemented and refined many of the suggestions generated by these evaluations, from more explicit training for scientist mentors, to the inclusion of teacher “facilitators,” to added opportunities for students to consider various STEM careers. This flexibility has resulted in a better experience for the students, teachers, scientists, and program coordinators in each year of the program.
This week’s posts are sponsored by AEA’s Collaborative, Participatory, and Empowerment Evaluation Topical Interest Group (http://comm.eval.org/EVAL/cpetig/Home/Default.aspx) as part of the CPE TIG Focus Week. Check out AEA’s Headlines and Resources entries (http://eval.org/aeaweb.asp) this week for other highlights from and for those conducting Collaborative, Participatory, and Empowerment Evaluations.