Hi, I’m Chad Green, program analyst at Loudoun County Public Schools in northern Virginia. Over the past year I’ve been seeking out developmental evaluation (DE) practitioners in school districts throughout the U.S. and abroad. Recently I had the pleasure of interviewing Keiko Kuji-Shikatani (C.E.), an educator and internal evaluator with the Ontario Ministry of Education. She also helped launch the Credentialed Evaluator designation process for the Canadian Evaluation Society (CES).
Credentialed Evaluators (currently 394 in total) are committed to continuous professional learning, which, as Keiko explained, is also the focus of DE. More specifically, DE “supports innovation development to guide adaptation to emergent and dynamic realities in complex environments” (Patton, 2010). Keiko believes that DE is well suited to public sector work, where adaptation and innovation are the norm in service delivery given the changing realities of society.
Hot Tips:
- The best way to introduce DE, whether to program/policy staff or senior leadership, is to be conscious that DE is about learning and that, when properly applied, evaluation capacity building is happening 24/7.
- DE involves learning as you go, which requires evaluators to engage in systems thinking so they can zoom in and out as they work and continue to co-create innovative solutions to complex challenges.
- DE is not evaluation “lite.” Developmental evaluators must have a thorough knowledge of evaluation so they can facilitate user-centric use of the learning (i.e., a focus on utilization) gained from the DE approach in real time to tackle complex issues.
Keiko prefers to use conventional evaluation tools such as logic models to co-construct a theory of change with the team of stakeholders, resulting in a shared understanding of the evolving evaluand. What is unique here is that she insists on describing their ideas in full sentences, much like the clear language used in the AEA Evaluator Competencies, rather than in short phrases, so as to avoid the misunderstandings that arise easily when complexity is the norm in a system as large as hers.
Once the team members feel that the desired changes are plausible, she helps them co-construct the theory of action so that they can collaboratively embed evaluative thinking in the way they work and make the changes feasible. She then takes the team further into what the year looks like to identify (a) the forks in the road where evaluative rigor is fundamental and (b) the appropriate data collection methods, analysis, and user-centric use of data, so that DE, or “learning as we go,” becomes the way the team makes sense of changing circumstances.
Rad Resources:
- Foundational text on DE by Michael Quinn Patton (2010)
- Case study on DE use in the education sector by Keiko Kuji-Shikatani, Mary Jean Gallagher, Richard Franz, and Megan Börner (2015)
- Upcoming CES workshop on DE facilitated by Keiko Kuji-Shikatani, Wendy Rowe, and Megan Börner on utilizing DE as a change agent for the evaluand
- DE Toolkit by the Spark Policy Institute introducing DE principles, methods, case studies, and inquiry frameworks
The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
I made a few connections to this post and wanted to comment. I am currently taking a master’s course on program evaluation, and this article exemplified some of my readings. Our recent discussions have centered on the utilization of evaluation, and this is a terrific example of an evaluation that is highly useful to stakeholders because they are active participants. Shulha and Cousins (1997) wrote that “attending to the connection between context and evaluation purpose is pivotal” (p. 204). In the developmental evaluation you have described, this connection is readily apparent. Throughout the process, the dialogue between stakeholders and the evaluator allows for adjustments and clarification to ensure the evaluation is meeting the needs of the program. Keiko’s insistence on writing out stakeholder goals in complete sentences to prevent misunderstanding ensures that the evaluation starts off in the right direction. I would also think this helps build a strong and open relationship between the evaluator and stakeholders.
Another connection I made to my own learning was how conceptualizing builds focus for stakeholders. Involving them in the process must make the experience more meaningful and make it easier for them to receive the results of the evaluation. I wondered: in your experience, are educational stakeholders more responsive to the process and results of evaluation than those in the private sector?
Reference:
Shulha, L. M., & Cousins, J. B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195–208.