Hi, we are Tom Archibald and Jane Buckley with the Cornell Office for Research on Evaluation. Among other initiatives, we work in partnership with non-formal educators to build evaluation capacity. We have been exploring the idea of evaluative thinking, which we believe is an essential, yet elusive, ingredient in evaluation capacity building (ECB). Below, we share insights gained through our efforts to understand, describe, measure, and promote evaluative thinking (ET)—not to be confused with the iconic alien!
Lesson Learned: From evaluation
- Michael Patton, in an interview with Lisa Waldick from the International Development Research Center (IDRC), defines evaluative thinking as a willingness to ask: “How do we know what we think we know? … Evaluative thinking is not just limited to evaluation projects…it’s an analytical way of thinking that infuses everything that goes on.”
- Jean King, in her 2007 New Directions for Evaluation article on developing evaluation capacity through process use, writes “The concept of free-range evaluation captures the ultimate outcome of ECB: evaluative thinking that lives unfettered in an organization.”
- Evaluative thinkers are not satisfied with simply posing the right questions. According to Preskill and Boyle’s multidisciplinary model of ECB in the American Journal of Evaluation in 2008, they possess an “evaluative affect.”
Lesson Learned: From other fields
Notions related to ET are common in both cognitive research (e.g., evaluativist thinking and metacognition) and education research (e.g., critical thinking), so we searched the literature in those fields and came to define ET as comprising:
- Thinking skills (e.g., questioning, reflection, decision making, strategizing, and identifying assumptions), and
- Evaluation attitudes (e.g., desire for the truth, belief in the value of evaluation, belief in the value of evidence, inquisitiveness, and skepticism).
Then, informed by our experience with a multi-year ECB initiative, we identified five macro-level indicators of ET:
- Posing thoughtful questions
- Describing and illustrating thinking
- Active engagement in the pursuit of understanding
- Seeking alternatives
- Believing in the value of evaluation
Rad Resource: Towards measuring ET
Based on these indicators, we have begun developing tools (a scale, an interview protocol, and an observation protocol) to collect data on ET. These tools are still under development and have not yet undergone validity and reliability testing, which we hope to accomplish in the coming year. You can access the draft measures here. We welcome any feedback you can provide about these tools.
Rad Resource: Towards promoting ET
One way we promote ET is through The Guide to the Systems Evaluation Protocol, a text that is part of our ECB process. It contains activities and approaches that we believe foster ET, and thus internal evaluation capacity, among the educators with whom we work.
Tom and Jane will be offering an AEA Coffee Break Webinar on this topic on May 31st. If you are an AEA member, go here to learn more and register. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
8 thoughts on “Tom Archibald and Jane Buckley on Evaluative Thinking: The ‘Je Ne Sais Quoi’ of Evaluation Capacity Building and Evaluation Practice”
Hi Tom and Jane—thanks for sharing those great resources! One question: have you modified or used the ET Inventory to understand organizational culture with respect to ET? Or do you always do your analysis at the individual level? I understand the focus group protocol gets more at the organization level, but I am wondering whether you have used an inventory-like tool to assess an organization’s ET culture.
Thanks for your comments and question. We don’t exactly have an organization-level scale to assess evaluative culture, although we hope to work towards developing one. We have developed a survey to get at organizational evaluation capacity more generally, but that tool came about prior to our work on evaluative thinking/culture.
I also know of some other efforts in the area of organizational evaluation capacity measurement (such as those by Tina Taylor-Ritzler, Yolanda Suarez-Balcazar, and colleagues at the University of Illinois at Chicago).
In the future, we hope to explore more closely the relationship between evaluative thinking and organizational culture, which will require drawing from studies of social cognition and of the social relations of intellectual and institutional processes.
We’d love to hear any ideas or approaches you draw from related to efforts to better understand evaluative culture!