Hi, I’m Nora F. Murphy, co-founder of TerraLuna Collaborative, an evaluation cooperative in Minneapolis, MN, with some thoughts about practicing Developmental Evaluation. Like many, I eagerly read Michael Quinn Patton’s Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use from cover to cover. The approach excited me because it naturally taps into my chosen profession of evaluator and my past experience as a non-profit program manager. Upon reaching the last page I was left feeling inspired, but also as if I had read a riveting book about learning to ride a bike yet still had no idea how to actually ride one. So I took a deep breath and jumped right in, carrying Patton’s book with me and consulting it often. What follows are two things I do differently as a developmental evaluation practitioner.
Flexible and responsive reporting. Timing is everything. Working as a nonprofit program manager, I experienced what it’s like to get information at the wrong time—such as formative reports presented after all of the next year’s program planning has occurred. Because DE seeks to enhance innovation and support evaluation use, rather than producing findings at arbitrary time points, we share findings at times that naturally support development. We might share the findings of a relevant survey before a staff planning session, facilitate an activity at a stakeholder retreat about patterns we are observing, or present a PowerPoint report at a board meeting. As opportunities for stakeholder engagement emerge, we revisit and revise our reporting timeline.
Hot Tip: Our evaluation teams often keep a calendar of agencies’ events so we can plan evaluation activities accordingly, allowing us to support development by providing the right information at the right time.
Pay attention to the “it” that is being developed. Oftentimes the “it” (evaluand) being developed in a developmental evaluation is an approach, a set of strategies, or a collaboration, rather than a clearly defined program. With rapidly developing innovative approaches, it can be challenging to understand where the boundaries around the evaluand lie. And once you have it figured out, it might change. For example, if the evaluation is supporting the development of an innovative approach to working with students, should the evaluation focus on what happens in classrooms? In schools? In the district? In the community? And in community-driven or participatory work, who gets to decide? Engagement in the Adaptive Action cycle helps our team focus this line of inquiry.
Hot Tip: It’s helpful to periodically revisit the simple question: What is the “it” that is being developed?
Rad Resource: Adaptive Action: Leveraging Uncertainty in Your Organization by Eoyang and Holladay
This week, we’re diving into issues of Developmental Evaluation (DE) with contributions from DE practitioners and authors. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.