AEA365 | A Tip-a-Day by and for Evaluators


This is Jean King, University of Minnesota, chair of AEA’s Competencies Task Force, and John LaVelle, Louisiana State University, enthusiastic Task Force member. This week we and other members of the Competencies Task Force are excited to share with you our progress toward developing a set of AEA evaluator competencies. To put these efforts in context, a bit of background is in order. In 2015, the AEA Board approved the creation of an official task force to explore and refine a unified set of evaluator competencies as a next step in AEA’s continuing development. AEA members have previously endorsed three documents contributing to the professionalization of our field (Rad Resources all):

  • The Joint Committee’s Program Evaluation Standards, currently in their 3rd edition (2011, http://www.jcsee.org/program-evaluation-standards-statements)
  • AEA’s Guiding Principles for Evaluators (revised in 2004, http://eval.org)
  • AEA’s Cultural Competence Statement (2011, http://eval.org)

The proposed competencies will be the fourth such document.

As an initial step, AEA’s Task Force reviewed existing sets of general and subject-specific competencies for program, policy, and personnel evaluators to identify foundational competencies necessary to the diverse evaluation practice of AEA members. The resulting crosswalk suggested five broad domains of evaluator competencies: professional, methodology, context, management, and interpersonal.

Who’s on the Task Force? When we were appointed, special care was taken to include as diverse a group as possible, representing numerous segments of AEA’s membership. Here is the current list of members in addition to Jean and John:

  • Sandra Ayoo, University of Minnesota
  • Eric Barela, Salesforce.org, San Francisco, CA
  • Dale Berger, Claremont Graduate University
  • Gail Vallance Barrington, Barrington Research Group, representing the Canadian Evaluation Society
  • Nicole Galport, Claremont Graduate University
  • Michelle Gensinger, University of Minnesota
  • Robin Miller, Michigan State University
  • Donna Podems, OtherWISE: Research and Evaluation, Cape Town, South Africa
  • Anna Rodell, Collective Progress, Minneapolis, MN
  • Laurie Stevahn, Seattle University
  • Hazel Symonette, University of Wisconsin
  • Susan Tucker, Evaluation & Development Associates LLC
  • Elizabeth Wilcox, Education Evaluation Exchange, Golden Valley, MN

Lesson Learned. AEA is not alone in addressing professionalization. Similar discussions are growing in frequency and intensity around the world. Consider two examples. Our colleagues in the Canadian Evaluation Society offer the Credentialed Evaluator (CE) Program, currently the only formal credentialing available to evaluators anywhere (http://evaluationcanada.ca/ce). The European Evaluation Society and the United Kingdom Evaluation Society have joined forces to develop the Voluntary Evaluator Peer Review process, whereby evaluators will prepare portfolios for review by qualified peers, leading to purposeful professional development (http://www.europeanevaluation.org/events/ees-conferences-and-events/conferences/evalyear-2015x/vepr-project).

Hot Tip: Get Involved Now. From the beginning, we have worked hard to get feedback from AEA’s membership. If you are willing to contribute to the discussion, please go to the AEA website (www.eval.org), where you will see the link to the draft competencies. Send your thoughts, big or small, to us at competencies@eval.org.

The American Evaluation Association is celebrating AEA’s Competencies Task Force week. The contributions all this week to aea365 come from members of AEA’s Competencies Task Force. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is part of a series remembering and honoring evaluation pioneers leading up to Memorial Day in the USA (May 30).

I am Jean King, professor at the University of Minnesota and, like my colleague John McLaughlin, who collaborated with me on this In Memoriam, an original AEA member. I met Bob Ingle for the first time in New Orleans, LA when I served as the Local Arrangements Chair for the 1988 AEA conference. Bob had charged me with purchasing bottles of liquor for the Conference Chair’s suite—free-flowing alcohol being one of the perquisites of the role at that time—and Associate Conference Chair John McLaughlin delivered the heavy box to its rightful place. Bob let us hire a jazz band for one of the big receptions and even let us serve shrimp. Bob Ingle knew how to put on a conference. He also knew the field of program evaluation because he helped to create it.

Pioneering contributions:

With Bill Gephart, Bob was one of the founders of the Evaluation Network in the early 1970s, creating a national organization of professionals interested in advancing the practice of program evaluation. With the help of his ever-resourceful assistant Nan Blyth, he soon became responsible for planning and managing the Network’s national meetings.

When the Evaluation Network joined with the Evaluation Research Society to become the American Evaluation Association in 1986, Bob became one of its founding members. For AEA’s first ten years, he served as the Annual Conference Chair in a manner that only he could, seemingly enjoying his role as in-house curmudgeon, often with a twinkle in his eye. In his role as Conference Chair, Bob sat on the AEA Board and became a relentless advocate for member services. In recognition of his contributions to the organization, AEA established the Robert Ingle Service Award, presented annually to a member who has provided exceptional service to the organization and been instrumental in promoting its interests and operations.

Enduring contributions:

  1. In the founding years of AEA’s conference, Ingle ensured that one of its signature features would be the opportunity for as many members as possible to showcase their practice, share successes and concerns, and reflect on the future of the field. Bob Ingle was dedicated to sustaining an atmosphere of openness and collegiality.
  2. Bob may have cultivated his gruff image, but he couldn’t mask his kindness. Despite his well-known harrumphing, he genuinely cared about people and wanted the conference to engage as many as possible. Attending one of Ingle’s conference dinners where he held court was an indisputable delight.
  3. Bob worked long hours with us as program chairs ensuring a well-organized conference. The original conference schedule was developed in pencil—with countless erasures—on large sheets of tissue paper. Imagine the increase in productivity when Post-it notes were created.

The American Evaluation Association is celebrating Memorial Week in Evaluation: Remembering and Honoring Evaluation’s Pioneers. The contributions this week are remembrances of evaluation pioneers who made enduring contributions to our field. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We’re Jean King and Frances Lawrenz (University of Minnesota) and Elizabeth Kunz Kollmann (Museum of Science, Boston), members of a research team studying the use of concepts from complexity theory to understand evaluation capacity building (ECB) in networks.

We purposefully designed the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) case study research to build on insider and outsider perspectives. The project has five PIs: two outsiders from the University of Minnesota, who had little prior involvement with the network being studied; and three insiders, one each from the museums that led the network’s evaluation for over a decade (Museum of Science, Boston; Science Museum of Minnesota; and Oregon Museum of Science and Industry).

Lessons Learned:

Outsiders were helpful because

  • They played the role of thinking partner/critical friend while bringing extensive theoretical knowledge about systems and ECB.
  • They provided fresh, non-participant perspectives on the network’s functioning and helped extend the interpretation of information gathered to other networks and contexts.

Insiders were helpful because

  • They knew the history of the network, including its complex structure and political context, and could readily explain how things happened.
  • They had easy access to network participants and existing data, which was critical to obtaining data about the ECB processes CASNET was studying, including observing internal network meetings and attending national network meetings, using existing network evaluation data, and asking network participants to engage in in-depth interviews.

Having both perspectives was helpful because

  • The outsider and insider perspectives allowed us to develop an in-depth case study. Insiders provided information about the workings of the network on an on-going basis, adding to the validity of results, while outsiders provided an “objective” and field-based perspective.
  • Creating workgroups including both insiders and outsiders meant that two perspectives were constantly present and occasionally in tension. We believe this led to better outcomes.

Hot Tips:

  • Accept the fact that teamwork (especially across different institutions) requires extended timelines.
    • Work scheduling was individualized. People worked at their own pace on tasks that matched their skills. However, this independence resulted in longer-than-anticipated timelines.
    • Decision making was a group affair. Everyone worked hard to obtain consensus on all decisions. This slowed progress, but allowed everyone—insiders and outsiders–to be an integral part of the project.
  • Structure more opportunities for communication than you imagine are needed. CASNET work taught us you can never communicate too much. Over three years, we had biweekly telephone meetings as well as multiple face-to-face and subgroup meetings and never once felt we were over-communicating.
  • Be ready to compromise. Team members’ differing perspectives, owing in some cases to their positions inside or outside the network, regularly required accepting another’s viewpoint and compromising.

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is Jean King and Gayra Ostegaard Eliou, from the University of Minnesota, members of the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) research team. NSF funded CASNET to provide insights on (1) the implications of complexity theory for designing evaluation systems that “promote widespread and systemic use of evaluation within a network” and (2) complex system conditions that foster or impede evaluation capacity building (ECB) within a network. The complex adaptive system (CAS) in our study is the Nanoscale Informal Science Education Network (NISE Net), a network that has operated continuously for ten years and currently comprises over 400 science museum and university partners (https://player.vimeo.com/video/111442084). The research team involves people from the University of Minnesota, the Museum of Science in Boston, the Science Museum of Minnesota, and the Oregon Museum of Science and Industry.

This week CASNET team members will highlight what we’re learning about ECB in a network using systems and complexity theory concepts. Here is a quick summary of three lessons we learned about ECB in a network and systems readings we found helpful.

Lessons Learned:

  1. ECB involves creating and sustaining infrastructure for specific components of the evaluation process (e.g., framing questions, designing studies, using results). Applying a systems lens to the network we studied demonstrated how two contrasting elements supported ECB:
  • “Internal diversity” among staff’s evaluation skills (including formally trained evaluators, novices, thoughtful users, and experts in different subject areas) provided a variety of perspectives to build upon.
  • “Internal redundancy” of skill sets helped ensure that when people left positions, evaluation didn’t leave with them because someone else was able to continue evaluative tasks.
  2. ECB necessitates a process that engages people in actively learning evaluation, typically through training (purposeful socialization), coaching, and/or peer learning. The systems concepts of neighbor interactions and massive entanglement pointed to how learning occurred in the network. NISE Net members typically took part in multiple projects, interacting with many individuals in different roles at different times. Network mapping visually documented the “entanglement” of people from multiple museums and work groups, serving in numerous roles, that supported ECB over time.
  3. The degree of decision-making autonomy a team possessed influenced the ways in which–and the extent to which–ECB took place. Decentralized or distributed control, where individuals could adapt an evaluation process to fit their context, helped cultivate an ECB-friendly internal organizational context. Not surprisingly, centralized control of the evaluation process was less conducive to building evaluation capacity.

Rad Resources:

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is Jean King, professor of Evaluation Studies at the University of Minnesota and mother of the Minnesota Evaluation Studies Institute (MESI—pronounced “messy” because evaluation is that way). MESI began 20 years ago to provide high-quality evaluation training to all comers: evaluation practitioners, students, accidental evaluators, and program staff and administrators. We are fortunate to have had Minnesotans Michael Quinn Patton and Dick Krueger as regular MESI trainers from the beginning and, with funding from Professor Emerita Mary Corcoran, guest sessions from many of our field’s luminaries. Over the years MESI has taught me a great deal. This entry details three lessons learned.

Lesson Learned: Structured reflection is helpful during evaluation training. Experiential educators remind us that merely having an experience does not necessarily lead to change; reflection is the key to taking that experience and learning from it. At MESI plenaries we regularly build in time when the speaker finishes for people to “turn to a neighbor” (groups of 2 to 4–no larger) and talk about what they took as the main ideas and any confusions/questions they have. The reflection is easy to structure, and people engage actively. If appropriate, the facilitator can ask people to jot down their questions, which can become the basis of Q&A.

Hot Tip: I never ask an entire large group, “Are there any questions?” At the end of sessions in large conferences/training sessions, the facilitator/presenter will frequently ask the entire group if there are any questions. In these situations there is often an awkward pause, sometimes lasting long enough that people start glancing nervously at each other or at the door, and then someone who can’t stand the silence thinks of a question, raises a hand, and is instantly called on. Everyone breathes a sigh of relief. When I facilitate a session, I instead use the “turn to a neighbor” strategy (briefly—just a couple of minutes) so that everyone can start talking and generate potential questions. You can even call on people and ask what they were discussing in their small group.

Cool Trick: Create Top Ten lists as part of a meeting or training session. Since MESI’s inception, attendees have participated in an annual tongue-in-cheek Top Ten competition where they submit creative answers to a simile that describes how evaluation is like something else (e.g., the state fair, baseball, Obamacare). We provide prizes for the top three responses, and I am continually impressed with people’s cleverness. This year’s topic compared evaluation to interstellar space travel, and the final list is posted at www.evaluation.umn.edu. The Top Ten is a useful activity because it spurs creativity and helps a group come together around a common, low-key cause.

The American Evaluation Association is celebrating MESI Spring Training Week. The contributions all this week to aea365 come from evaluators who presented at or attended the Minnesota Evaluation Studies Institute Spring Training. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello everyone—Laurie Stevahn (Seattle University) and Jean King (University of Minnesota) here—continuing to grapple with issues relevant to program evaluator competencies (whether essential sets exist) and usefulness (if enhanced practice results). In fact, for over a decade we have been working on a formal set of evaluator competencies, trying to answer the daunting question of what knowledge, skills, and dispositions distinguish the practice of professional program evaluators. Applying what we’ve learned to the work of “qualitative evaluators” didn’t quite make sense because an evaluator is neither qualitative nor quantitative. As we learned in Evaluation 101, methods come second, once we’ve established an evaluation’s purpose and overarching questions; methods do not guide evaluations. So a competent qualitative evaluator is first and foremost a competent evaluator. But what is a competent evaluator?

Rad Resources: Reading through sets of competencies—there is a growing number of them around the world—can be a helpful form of reflection. We synthesized four competency taxonomies:

  1. Essential Competencies for Program Evaluators
  2. Competencies for Canadian Evaluation Practice
  3. International Board of Standards for Training, Performance, and Instruction (ibstpi) Evaluator Competencies
  4. Professional Competencies of Qualitative Research Consultants

Lesson Learned: Thankfully, there was overlap across the domain sets. We can say with considerable confidence that a competent evaluator demonstrates competencies in five areas:

  1. Professional—acts ethically/reflectively and enhances/advances professional practice.
  2. Technical—applies appropriate methodology.
  3. Situational—considers/analyzes context successfully.
  4. Management—conducts/manages projects skillfully.
  5. Interpersonal—interacts/communicates effectively and respectfully.

Lesson Learned: What distinguishes a competent qualitative evaluator? An enduring commitment to the qualitative paradigm. Qualitative evaluators understand and intentionally use the qualitative paradigm, choosing projects with questions that require answers from qualitative data. They need technical methodological expertise related to collecting, recording, and analyzing qualitative data.

Hot Tip: When using qualitative methods, focus on developing a special “sixth sense” to ensure a high-quality process and outcomes for qualitative studies. This is your ability to interact skillfully with a wide range of others throughout an evaluation to produce trustworthy and meaningful results. It involves interpersonal skills on steroids. A competent qualitative evaluator has to be attuned to social situations and skillfully interact with people in authentic ways from start to finish, knowing quickly when things are tanking.

Hot Tip: In the end, highly specialized sets of competencies unique to a particular evaluator role are less important than your commitment to engaging in ongoing reflection, self-assessment, and collaborative conversation about what effectiveness means in particular conditions and circumstances.

Rad Resource: Stevahn, L., & King, J. A. (2014). What does it take to be an effective qualitative evaluator? Essential competencies. In L. Goodyear, J. Jewiss, J. Usinger, & E. Barela (Eds.), Qualitative inquiry in evaluation: From theory to practice (pp. 139-166). Jossey-Bass.


The American Evaluation Association is celebrating Qualitative Evaluation Week. The contributions all this week to aea365 come from evaluators who do qualitative evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Jean King and Laura Pejsa, Minnesota Evaluation Studies Institute (MESI), here, with broad smiles on our faces. We are the proud coaches who are wrapping up this week of posts written by our creative student consultants about ways to evaluate a conference (using exit surveys of Ignite sessions, network visualization, Twitter, and video clips). Progressive educators long ago documented the value of experiential learning–“learning by doing”–and our experiences during this year’s AEA conference again provide support for the idea as a means of teaching evaluation. Thoughts about how to use a conference setting to engage evaluation students follow.

Hot Tips:

  • Create an evaluation team. Our experience at MESI confirms the value of having students collaborate on projects. Not only do they learn how to do evaluation tasks, but they also learn how to collaborate, an important skill set for evaluators, regardless of their eventual practice.
  • Encourage innovation. Our charge was to think broadly about conference evaluation. At our first meeting, students brainstormed many possible ways to collect data at the conference, no holds barred: the more creative, the better. As we sought to be “cutting edge,” technology played a role in each of the four methods selected.
  • Make assignments and hold people accountable. Social psychology explains the merit of interdependence when working on a task. We divided into four work groups, each of which operated independently, touching base with us as needed. Work groups knew they were responsible for putting their process together and being ready at the conference. As coaches, we did not micromanage.
  • Make the process fun. University of Minnesota students take evaluation seriously, but their conference evaluation work generated a great deal of laughter. In one sense it was high-stakes evaluation work (we knew people would use the results), but without the pressure of a full-scale program evaluation.

Lessons Learned:

  • Students can learn the evaluation process by collecting data at a conference or other event. Unlike programs, short-term events offer an evaluation venue with multiple data-collection opportunities and fewer complexities than a full-scale educational or social program.
  • A week-long conference offers numerous opportunities to engage in creative data collection. It is a comparatively low-stakes operation since most conference organizers opt for the traditional post-conference “happiness” survey, and any data gathered systematically will likely be of value.
  • Innovative data collection can generate conversation at an evaluation conference. Many people interacted with the students as they collected data. Most were willing to engage in the process.
  • Minnesota evaluation students really are above average. Garrison Keillor made this observation about Minnesota’s children in general, but this work provided additional positive evidence.

We’re learning all this week from the University of Minnesota Innovative Evaluation Team from Evaluation 2012. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Greetings from Jean King and Laura Pejsa from the Minnesota Evaluation Studies Institute (MESI) at the University of Minnesota. This week we will be introducing you to a crop of graduate student evaluators who (we think) made quite a splash at the AEA conference last month. If you attended, you may have seen one or more of them filming video interviews, conducting on-the-spot iPad surveys, tweeting “Aha” moments, or helping participants identify favorite sessions on the giant TIG visualization. If you were not with us at the conference this year, today’s post will give you some background on this project.

It all started with the local arrangements committee for the AEA conference; the committee wanted to add some sparks of evaluation throughout the week and document experiences not captured on the standard after-conference survey. We created a one-credit special course at the University of Minnesota titled “Creative methods for training and event evaluation” and invited students to join us for a grand experiment. The course and the conference activities would be developed based on the interests and ideas of the students in it.

At our first class meeting, we introduced the students to the goals and history of the conference, provided a place (and food) to come together, and gave them the following loose guidelines:

  • to both pilot and model creative ways of documenting conference experiences;
  • to provide some real-time feedback;
  • to make the evaluation process fun/engaging for conference participants;
  • to explore the potential of emerging technologies;
  • to provide meaningful, usable data to AEA;
  • and to make sure they still had time to attend and enjoy the conference themselves.

Hot Tips:

  • You don’t have to look much further than your own back yard for meaningful evaluation experiences for students. Instead of simulating or creating projects, check out the events that may already be happening where a little extra evaluation will go a long way.
  • When it comes to creative methods and technology, students can expand our thinking. Give them an opportunity with relatively low stakes, and watch the connections they make between tools they already know how to use, like social media, and the evaluation problem at hand.

This week we will be presenting you with more hot tips, cool tricks, rad resources, and lessons learned from this intrepid group of conference evaluators. Days 2-5 of this week will be written by our four student teams: Survey, Video, Network Visualization, and Twitter. We will wrap up the week with a post summarizing what we learned as instructors that may help others in designing meaningful, real-world evaluation experiences for novice evaluators.

We’re learning all this week from the University of Minnesota Innovative Evaluation Team from Evaluation 2012. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Courtney Blackwell, Heather King, and Jeanne Century from Outlier Research & Evaluation at the University of Chicago. For the last 3.5 years, we have been researching and evaluating computer science education efforts.

Computer Science (CS) is becoming a buzzword in education, with educators, policymakers, and industry developers promoting CS as key to developing 21st Century skills and a pathway to employment. While CS is not new to education, the spotlight on it is. In 2014, over 50 U.S. school districts, including the seven largest, pledged to make CS education available to all students.

Like all buzzwords, most people have their own vague idea of what CS means, but even experts working within CS education do not, yet, have a clear, agreed-upon definition. If evaluators are going to be able to accurately measure the effects of CS education efforts on teaching and learning, and accumulate knowledge and understanding, we need to have a clear definition of what “CS education” is. Until CS educators create shared definitions themselves, we, as evaluators, can do our part by ensuring our logic models, strategies, and measures clearly and specifically describe the innovation — computer science education — so that our work can inform others and further the field.

Lessons Learned: Evaluating an ill-defined intervention is not an uncommon problem. In the case of CS, however, the capacity to articulate that definition is limited by the state of the field. As evaluators, we have to find alternatives. In our evaluation of Code.org’s computer science education efforts, we ask students to provide their own definition of CS at the beginning of our questionnaires. Then, we provide a specific definition for them to use for the remainder of the questionnaire. This way, we capture student interpretations of CS and maintain the ability to confidently compare CS attitudes and experiences across students. Similarly, we begin interviews with teachers, school leaders, and district leaders by asking, “How do you define computer science education?”

Hot Tip: Always ask participants to define what they mean by computer science.

Rad Resource #1: A recent survey by the Computer Science Teachers Association (CSTA) found that high school leaders don’t share a common definition of CS education. This suggests that school leaders may promote their schools as providing “computer science” when in fact they are providing activities that would not be considered CS at the college and professional levels.

Rad Resource #2: Check out LeadCS.org, a new website about to be launched, for definitions of key terms in computer science education. The website offers a range of tools for K-12 school and district leaders and their partners who seek to begin or improve CS education programs.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, we are Tom Archibald and Jane Buckley with the Cornell Office for Research on Evaluation. Among other initiatives, we work in partnership with non-formal educators to build evaluation capacity. We have been exploring the idea of evaluative thinking, which we believe is an essential, yet elusive, ingredient in evaluation capacity building (ECB). Below, we share insights gained through our efforts to understand, describe, measure, and promote evaluative thinking (ET)—not to be confused with the iconic alien!

Lesson Learned: From evaluation

  • Michael Patton, in an interview with Lisa Waldick from the International Development Research Centre (IDRC), defines evaluative thinking as a willingness to ask: “How do we know what we think we know? … Evaluative thinking is not just limited to evaluation projects…it’s an analytical way of thinking that infuses everything that goes on.”
  • Jean King, in her 2007 New Directions for Evaluation article on developing evaluation capacity through process use, writes, “The concept of free-range evaluation captures the ultimate outcome of ECB: evaluative thinking that lives unfettered in an organization.”
  • Evaluative thinkers are not satisfied with simply posing the right questions. According to Preskill and Boyle’s multidisciplinary model of ECB in the American Journal of Evaluation in 2008, they possess an “evaluative affect.”

Lesson Learned: From other fields

Notions related to ET are common in both cognitive research (e.g., evaluativist thinking and metacognition) and education research (e.g., critical thinking), so we searched the literature in those fields and came to define ET as comprising:

  • Thinking skills (e.g., questioning, reflection, decision making, strategizing, and identifying assumptions), and
  • Evaluation attitudes (e.g., desire for the truth, belief in the value of evaluation, belief in the value of evidence, inquisitiveness, and skepticism).

Then, informed by our experience with a multi-year ECB initiative, we identified five macro-level indicators of ET:

  • Posing thoughtful questions
  • Describing and illustrating thinking
  • Active engagement in the pursuit of understanding
  • Seeking alternatives
  • Believing in the value of evaluation

Rad Resource: Towards measuring ET

Based on these indicators, we have begun developing tools (scale, interview protocol, observation protocol) to collect data on ET. They are still under development and have not yet undergone validity and reliability testing, which we hope to accomplish in the coming year. You can access the draft measures here. We value any feedback you can provide us about these tools.

Rad Resource: Towards promoting ET

One way we promote ET is through The Guide to the Systems Evaluation Protocol, a text that is part of our ECB process. It contains some activities and approaches which we feel foster ET, and thus internal evaluation capacity, among the educators with whom we work.

 

Tom and Jane will be offering an AEA Coffee Break Webinar on this topic on May 31st. If you are an AEA member, go here to learn more and register. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

