AEA365 | A Tip-a-Day by and for Evaluators

TAG | complex adaptive systems

Hello, we're Elizabeth Kunz Kollmann from the Museum of Science, Boston, and Marjorie Bequette from the Science Museum of Minnesota, two of the co-PIs for the study Complex Adaptive Systems as a Model for Network Evaluations (CASNET).

As a part of CASNET, we've coordinated meetings, document sharing, and data analysis across team members located at the University of Minnesota, Science Museum of Minnesota, Oregon Museum of Science and Industry, and Museum of Science, Boston. Creating a functional team across sites can be difficult and requires extra work. For example, we've all probably experienced some aspects of an awful conference call, like the ones in "A Conference Call in Real Life." However, over the three years of CASNET, we learned ways to make our nationwide team more effective, and we even still like each other!

Lessons Learned:

  • Meet regularly. Even if each institution has individual tasks, it’s best to meet on a regular basis to check in about the status of everyone’s work and ensure everyone is up to date on current and pending assignments.
  • Create structure. Have an agenda and a meeting leader, even for a brief meeting. This lets you avoid awkward phone silence while everyone tries to think of what else needs to be discussed.
  • Encourage chit-chat. Allow time (but not too much) for chit-chat, especially when new team members join the group and don’t really know who is who on a call. Use people’s names frequently, and encourage individuals to introduce themselves when they speak.
  • Facilitate document sharing. Sharing documents in multiple ways can be helpful. Some of our team members preferred email attachments while others preferred document sharing websites, so we used both.
  • Use common analysis software. Shared data analysis software is vital for carrying out analysis across coders at different sites.
  • Meet in-person for big topics. Having in-person meetings when making important decisions can help you work through big issues such as determining study findings. It’s worth the additional expense.

 

Rad Resources:

There are many free and paid services available to help facilitate communication and sharing across nationwide teams.

  • Conference calls and virtual meetings. Conference calls are great, and you can set them up through free services such as FreeConferenceCall.com. Also take advantage of virtual meeting tools such as Skype and Adobe Connect, which allow you to see other meeting participants and share documents.
  • File sharing. A variety of websites are available for sharing documents across sites. Google Drive and Dropbox are free options, while Basecamp is a paid option.
  • Data analysis. For qualitative research, there are now many ways to do collaborative coding with researchers in different locations. Two that we've used, each with its own strengths and weaknesses, are Dedoose and NVivo Server. When coders at different sites share a codebook, it also helps to check inter-coder agreement; a brief sketch follows this list.
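If your sites are coding against a shared codebook, one quick check before splitting up the coding is inter-coder agreement. Here is a minimal sketch in Python, using hypothetical coders and code labels rather than our actual data, that computes Cohen's kappa (a chance-corrected agreement statistic) with scikit-learn:

    # Minimal illustration with hypothetical data: two coders at different sites
    # applied the same codebook to the same ten excerpts.
    from sklearn.metrics import cohen_kappa_score

    coder_site_a = ["interactions", "decision-making", "structures", "stability",
                    "interactions", "agents", "decision-making", "structures",
                    "interactions", "stability"]
    coder_site_b = ["interactions", "decision-making", "structures", "stability",
                    "agents", "agents", "decision-making", "interactions",
                    "interactions", "stability"]

    kappa = cohen_kappa_score(coder_site_a, coder_site_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # values closer to 1 indicate stronger agreement

Packages such as Dedoose and NVivo offer their own coder-comparison features; the point is simply to confirm that coders at different sites are applying the codebook the same way before they analyze independently.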

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

We are Frances Lawrenz and Amy Grack Nelson, University of Minnesota, and Marjorie Bequette, Science Museum of Minnesota (where Amy works, too). We are members of the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) research team. When we started this project, complexity theory seemed exciting but daunting. What is complexity theory, you ask? Complexity theory, long used by biologists, ecologists, computer scientists, and physicists, has more recently been taken up as a framework for facilitating organizational and educational change. Davis and Sumara (2006) suggest that complexity theory can be used as a framework for understanding the conditions through which change can emerge, specifically stating that "complexity thinking has evolved into a pragmatics of transformation—that is, a framework that offers explicit advice on how to work with, occasion, and affect complexity unities" (p. 130).

To wrap our brains around complexity theory, we dug into the literature to understand characteristics of complex adaptive systems (CAS), with a focus on educational networks. Our literature review identified three broad categories of attributes: (1) those related to behaviors within a CAS, (2) those related to agent structure within the system, and (3) those related to the overall network structure.

We wanted to know if the network we were studying was, indeed, a complex adaptive system and, if so, how characteristics of a CAS affected evaluation capacity building within the system. This meant we needed to code our data through a complexity theory lens. We developed a coding framework based both on our extensive literature review and on characteristics of complex adaptive systems that emerged from our data. Our coding framework for complex adaptive systems ended up being organized into the following broad categories (one way to lay these out as a working codebook is sketched after the list):

  1. Interactions between agents within and outside of the system
  2. Decision-making practices within the system
  3. Structures within the system to do the work
  4. Aspects of system stability
  5. Characteristics of the agents
  6. Other codes as needed for the specific project
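To keep coders at multiple sites consistent, it helps to turn categories like these into a structured codebook that everyone works from. Here is a minimal sketch in Python of one way to organize the categories; the sub-codes shown are hypothetical placeholders, not our actual framework, which is linked under Rad Resources below:

    # Sketch of a structured codebook built from the broad categories above.
    # The sub-codes are hypothetical placeholders for illustration only.
    codebook = {
        "Interactions between agents": ["neighbor interactions", "external partnerships"],
        "Decision-making practices": ["centralized control", "distributed control"],
        "Structures to do the work": ["work groups", "shared tools"],
        "System stability": ["internal redundancy", "internal diversity"],
        "Characteristics of the agents": ["evaluation expertise", "role in the network"],
        "Other project-specific codes": [],
    }

    # Print category/code pairs, e.g., to paste into a shared codebook document
    # or import into analysis software.
    for category, codes in codebook.items():
        for code in codes:
            print(f"{category}: {code}")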

Rad Resources:

We found our literature review matrix and coding framework to be extremely helpful for breaking the concepts into chunks that could be identified in what people did on a day-to-day basis. We're excited to share our tools here, as we think they could be useful to anyone interested in studying evaluation within complex adaptive systems.

  • Matrix of the findings from our literature review of complex adaptive systems (umn.edu/site)
  • Our coding framework for complex adaptive systems in educational networks (umn.edu/theothersite)


The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


This is Jean King and Gayra Ostegaard Eliou, from the University of Minnesota, members of the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) research team. NSF funded CASNET to provide insights on (1) the implications of complexity theory for designing evaluation systems that "promote widespread and systemic use of evaluation within a network" and (2) complex system conditions that foster or impede evaluation capacity building (ECB) within a network. The complex adaptive system (CAS) in our study is the Nanoscale Informal Science Education Network (NISE Net), a network that has operated continuously for ten years and currently comprises over 400 science museum and university partners (https://player.vimeo.com/video/111442084). The research team involves people from the University of Minnesota, the Museum of Science in Boston, the Science Museum of Minnesota, and the Oregon Museum of Science and Industry.

This week CASNET team members will highlight what we’re learning about ECB in a network using systems and complexity theory concepts. Here is a quick summary of three lessons we learned about ECB in a network and systems readings we found helpful.

Lessons Learned:

  1. ECB involves creating and sustaining infrastructure for specific components of the evaluation process (e.g., framing questions, designing studies, using results). Applying a systems lens to the network we studied demonstrated how two contrasting elements supported ECB:
  • “Internal diversity” among staff’s evaluation skills (including formally trained evaluators, novices, thoughtful users, and experts in different subject areas) provided a variety of perspectives to build upon.
  • “Internal redundancy” of skill sets helped ensure that when people left positions, evaluation didn’t leave with them because someone else was able to continue evaluative tasks.
  2. ECB necessitates a process that engages people in actively learning evaluation, typically through training (purposeful socialization), coaching, and/or peer learning. The systems concepts of neighbor interactions and massive entanglement pointed to how learning occurred in the network. NISE Net members typically took part in multiple projects, interacting with many individuals in different roles at different times. Network mapping visually documented the "entanglement" of people across multiple museums, work groups, and roles that supported ECB over time (a simple sketch of this kind of mapping follows this list).
  3. The degree of decision-making autonomy a team possessed influenced the ways in which, and the extent to which, ECB took place. Decentralized or distributed control, where individuals could adapt an evaluation process to fit their context, helped cultivate an ECB-friendly internal organizational context. Not surprisingly, centralized control of the evaluation process was less conducive to building evaluation capacity.
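As an illustration of what this kind of mapping involves, here is a minimal sketch in Python using the networkx library, with hypothetical members and projects rather than NISE Net data. Membership records are projected into a person-to-person network whose edge weights count shared projects, one rough proxy for entanglement:

    # Minimal sketch with hypothetical members and projects (not NISE Net data).
    import networkx as nx
    from networkx.algorithms import bipartite

    memberships = [
        ("Alice", "Evaluation Work Group"),
        ("Alice", "Exhibit Team"),
        ("Ben", "Evaluation Work Group"),
        ("Ben", "Education Team"),
        ("Carla", "Exhibit Team"),
        ("Carla", "Education Team"),
    ]

    # Bipartite graph: one node set for people, another for projects/work groups.
    B = nx.Graph()
    people = {person for person, _ in memberships}
    B.add_nodes_from(people, bipartite=0)
    B.add_nodes_from({project for _, project in memberships}, bipartite=1)
    B.add_edges_from(memberships)

    # Project onto people: an edge means two members served on a project together,
    # weighted by how many projects they shared.
    people_net = bipartite.weighted_projected_graph(B, people)

    for u, v, data in people_net.edges(data=True):
        print(f"{u} -- {v}: {data['weight']} shared project(s)")

Drawing the projected graph at several points in time (for example, one map per project year) is one way to make visible how roles and relationships accumulate across the network.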


The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Pat Seppanen. I've worked as an evaluator now for more than 20 years. A good part of my practice has centered on evaluating complex change—initiatives designed to address human needs and community problems that do not fit into established program and policy categories. If concepts like non-linearity, emergence, adaptation, uncertainty, and coevolution describe the work of your clients, I bet you have been having a tough time applying traditional evaluation designs. I'd like to offer a few tips and resources regarding evaluation of complex systems, hoping that you can use them in your work.

Hot Tips:

  1. Evaluation and consultation are inseparable in complex change initiatives and need to be integrated. In my experience the work is 50% evaluation and 50% consultation to facilitate successive iterations of collaborative problem solving and mutual learning. Since I am strongest on the evaluation side, I have joined up with someone who is an experienced facilitator and coach to do this work.
  2. A situational analysis is a vital step in getting the work going. I usually build a situational analysis in as an activity in my work plan. The information you assemble will help you see patterns that will inform your design. For example, for one citywide initiative we are working on, the evaluation design is organized by the major buckets of work, and different types of data are generated based on the information needs of stakeholder groups operating at different levels of the system. In another initiative, we have organized data collection in terms of the different levels of the system: national, community, center, program, and individuals. In doing a situational analysis, I use versions of Human Systems Dynamics tools developed by Glenda Eoyang (see the resource below).
  3. A complex initiative may include components that are simple, complicated, and complex (see Chapter 4 of Patton's book, listed as a resource below, for a great discussion of the characteristics of these components)—some components may not benefit from developmental evaluation while others will. If the focus is on improving and stabilizing a component, then a formative evaluation design is needed. But if the focus is on "learning by doing," then I'd propose a developmental evaluation design.

Rad Resources:

I look to Michael Quinn Patton's book, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, as my primary resource.

A much shorter monograph by Hallie Preskill and Tanya Beer, Evaluating Social Innovation, concisely offers lessons about doing evaluation to support adaptation.

If you are looking to learn more about complex adaptive change, I recommend looking into Human Systems Dynamics.

The American Evaluation Association is celebrating Minnesota Evaluation Association (MN EA) Affiliate Week with our colleagues in the MN EA AEA Affiliate. The contributions all this week to aea365 come from our MN EA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, this is Barbara Heath and Aruna Lakshmanan of East Main Educational Consulting, LLC. Our group focuses on the evaluation of science, mathematics, and technology programs in K-12 schools and institutions of higher learning.

Most evaluation methods assume that the organization or program being evaluated is stable, controllable, and predictable. When the organization is complex, and constantly changing to adapt to its environment, a systems-based evaluation may be a better choice for the evaluator. A systems perspective provides the evaluation team with a framework to investigate beyond activities and their anticipated outcomes.

Our group has been applying systems tenets to evaluate the development of a multi-institutional, interdisciplinary collaborative. The framework provides the opportunity to see the collaborative as the developing, changing system that it is in reality.

Within the systems framework, our team has selected Eoyang's CDE Model, which blends tenets from Complex Adaptive Systems (CAS) and Human Systems Dynamics (HSD). This model represents the conditions that must exist for self-organizing behavior to occur: containers (C), differences (D), and exchanges (E). It provides an appropriate mechanism to demonstrate and capture the complexities of the collaborative in the differences and exchanges that take place across the boundaries of the containers within the system. Using this method has resulted in a more complete understanding of the collaborative, which, in turn, has improved the quality of information that we can provide to the client.

Hot Tip #1: Enter this process knowing that systems analysis takes a great deal of time. It requires good organizational skills, the ability to work with detail while being able to understand the big picture, and time to brainstorm.

Hot Tip #2: Our data analysis parallels the processes used for qualitative analysis: unitizing, categorizing, and linking categories to identify trends are essential steps in the process.

Hot Tip #3: Having graphics support is important. Reporting data and results from a systems analysis is challenging, and our results require extensive use of visualization methods to represent the interactions between the various components of the system and how they influence each other.

Rad Resource #1: Williams, B., & Imam, I. (Eds.). (2007). Systems concepts in evaluation: An expert anthology. Point Reyes, CA: EdgePress of Inverness.

Rad Resource #2: Systems workshops offered at the AEA Annual Conference.

The American Evaluation Association is celebrating Systems in Evaluation Week with our colleagues in the Systems in Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our Systems TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Systems resources. You can also learn more from the Systems TIG via their many sessions at Evaluation 2010 this November in San Antonio.
