Donna Fujimoto-Saka on Evaluation Capacity Building: Top Five Lessons Learned From Technical Assistance Efforts
Posted by Sheila Robinson in Organizational Learning and Evaluation Capacity Building
My name is Donna Fujimoto-Saka, and I have been an evaluation specialist with the System Planning and Improvement Section (SPIS) within the Hawaii State Department of Education since 2007. In 2011, the internal evaluators were asked to support the evaluation needs of Hawaii’s Race to the Top (RTTT) grant. This post summarizes some of the lessons I learned about evaluation capacity building over the last two years.
1. Develop Common Ground: Establish a conceptual framework and interest in evaluation by providing access to basic evaluation training to program managers.
- When we procured professional development workshops for ourselves with national evaluation experts, we opened enrollment to RTTT program managers and others in the Department interested in evaluation. As a result, several project managers who attended the sessions asked for follow-up evaluation support from the internal evaluators.
2. Start Small: Set realistic expectations by identifying specific needs and uses of evaluation data and selecting one manageable component to assess.
- I would discuss the overall program with the project manager to develop context but tried to keep my assistance focused and concise. Activities included developing logic models, analyzing program data, creating professional development feedback surveys, and analyzing survey results. The overriding goal became maintaining the conversation and hands-on experience with evaluation by keeping the tasks manageable.
3. Work Together: Evaluators and project managers need to meet regularly and work collaboratively. Once both parties commit to a technical assistance project, set routines for written meeting agendas and minutes to document shared commitments and responsibilities, as well as progress over time.
- At each meeting, we followed a written agenda that included discussing and editing the working logic model, discussing issues and concerns, developing next steps, and setting the next meeting date.
4. Add Value: Work together but expect to do most of the evaluation work to facilitate progress and maintain momentum.
- Initially, we had hoped that with training, project managers would be able to develop logic models independently. However, the often-overburdened project managers did not always have time to “figure it out” on their own. Once the internal evaluators took over the task, we instantly became value-added free help that met project managers’ needs rather than creating one more thing for them to do.
5. Create Win-Win Scenarios: Both the evaluators and program managers should gain from the experience.
- Although most technical assistance efforts did not go through a complete evaluation cycle, we were able to gain valuable hands-on experience with evaluation capacity building while the project managers gained knowledge of and hands-on experience with evaluation.
Rad Resource: Learn more about Evaluation Capacity Building in A Multidisciplinary Model of Evaluation Capacity Building by Hallie Preskill.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.