AEA365 | A Tip-a-Day by and for Evaluators


We are Frances Lawrenz and Amy Grack Nelson, University of Minnesota, and Marjorie Bequette, Science Museum of Minnesota (where Amy works, too). We are members of the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) research team. When we started this project, complexity theory seemed exciting but daunting. What is complexity theory, you ask? Complexity theory, long used by biologists, ecologists, computer scientists, and physicists, has recently been taken up as a framework for facilitating organizational and educational change. Davis and Sumara (2006) suggest that complexity theory can be used as a framework for understanding the conditions through which change can emerge, specifically stating that “complexity thinking has evolved into a pragmatics of transformation—that is, a framework that offers explicit advice on how to work with, occasion, and affect complexity unities” (p. 130).

To wrap our brains around complexity theory, we dug into the literature to understand characteristics of complex adaptive systems (CAS), with a focus on educational networks. Our literature review identified three broad categories of attributes: (1) those related to behaviors within a CAS, (2) those related to agent structure within the system, and (3) those related to the overall network structure.

We wanted to know if the network we were studying was, indeed, a complex adaptive system and, if so, how characteristics of a CAS affected evaluation capacity building within the system. This meant we needed to code our data through a complexity theory lens. We developed a coding framework based both on our extensive literature review and on characteristics of complex adaptive systems that emerged from our data. Our coding framework for complex adaptive systems ended up being organized into the following broad categories:

  1. Interactions between agents within and outside of the system
  2. Decision-making practices within the system
  3. Structures within the system to do the work
  4. Aspects of system stability
  5. Characteristics of the agents
  6. Other codes as needed for the specific project
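For readers who manage their qualitative coding in software rather than on paper, here is a minimal sketch of how a codebook built around these categories might be represented and tallied. The category labels come from the list above; everything else (the Python structure, the example excerpts, and the tally_codes helper) is purely illustrative and is not part of the CASNET tools.

```python
# Illustrative codebook only: category labels are from the post; the data
# structure, example excerpts, and helper function are hypothetical.
from collections import Counter

CODEBOOK = {
    "interactions": "Interactions between agents within and outside of the system",
    "decision_making": "Decision-making practices within the system",
    "structures": "Structures within the system to do the work",
    "stability": "Aspects of system stability",
    "agent_characteristics": "Characteristics of the agents",
    "other": "Other codes as needed for the specific project",
}

def tally_codes(coded_segments):
    """Count how often each broad category was applied across coded excerpts.

    coded_segments: a list of (excerpt_text, [category_keys]) tuples.
    """
    counts = Counter()
    for _text, categories in coded_segments:
        for category in categories:
            if category not in CODEBOOK:
                raise ValueError(f"Unknown code: {category!r}")
            counts[category] += 1
    return counts

# Hypothetical coded excerpts, just to show the shape of the data.
segments = [
    ("Work groups met monthly with partner museums.", ["interactions", "structures"]),
    ("Each team decided locally how to adapt the evaluation plan.", ["decision_making"]),
]
print(tally_codes(segments))
```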

Rad Resources:

We found our literature review matrix and coding framework extremely helpful for breaking the concepts into chunks that could be identified in what people did on a day-to-day basis. We’re excited to share our tools here, as we think they could be useful to anyone interested in studying evaluation within complex adaptive systems.

  • Matrix of the findings from our literature review of complex adaptive systems (umn.edu/site)
  • Our coding framework for complex adaptive systems in educational networks (umn.edu/theothersite)

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


We’re Jean King and Frances Lawrenz (University of Minnesota) and Elizabeth Kunz Kollmann (Museum of Science, Boston), members of a research team studying the use of concepts from complexity theory to understand evaluation capacity building (ECB) in networks.

We purposefully designed the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) case study research to build on insider and outsider perspectives. The project has five PIs: two outsiders from the University of Minnesota, who had little prior involvement in the network being studied, and three insiders, one each from the museums that led the network’s evaluation for over a decade (Museum of Science, Boston; Science Museum of Minnesota; and Oregon Museum of Science and Industry).

Lessons Learned:

Outsiders were helpful because

  • They played the role of thinking partner/critical friend while bringing extensive theoretical knowledge about systems and ECB.
  • They provided fresh, non-participant perspectives on the network’s functioning and helped extend the interpretation of information gathered to other networks and contexts.

Insiders were helpful because

  • They knew the history of the network, including its complex structure and political context, and could readily explain how things happened.
  • They had easy access to network participants and existing data, which was critical to obtaining data about the ECB processes CASNET was studying, including observing internal network meetings and attending national network meetings, using existing network evaluation data, and asking network participants to engage in in-depth interviews.

Having both perspectives was helpful because

  • The outsider and insider perspectives allowed us to develop an in-depth case study. Insiders provided information about the workings of the network on an on-going basis, adding to the validity of results, while outsiders provided an “objective” and field-based perspective.
  • Creating workgroups including both insiders and outsiders meant that two perspectives were constantly present and occasionally in tension. We believe this led to better outcomes.

Hot Tips:

  • Accept the fact that teamwork (especially across different institutions) requires extended timelines.
    • Work scheduling was individualized. People worked at their own pace on tasks that matched their skills. However, this independence resulted in longer than anticipated timelines.
    • Decision making was a group affair. Everyone worked hard to reach consensus on all decisions. This slowed progress, but it allowed everyone, insiders and outsiders alike, to be an integral part of the project.
  • Structure more opportunities for communication than you imagine are needed. CASNET work taught us you can never communicate too much. Over three years, we had biweekly telephone meetings as well as multiple face-to-face and subgroup meetings, and we never once felt we were over-communicating.
  • Be ready to compromise. Team members’ perspectives differed, owing in some cases to their positions within and outside of the network, and we regularly needed to accept another’s perspective and compromise.

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is Jean King and Gayra Ostegaard Eliou, from the University of Minnesota, members of the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) research team. NSF funded CASNET to provide insights on (1) the implications of complexity theory for designing evaluation systems that “promote widespread and systemic use of evaluation within a network” and (2) complex system conditions that foster or impede evaluation capacity building (ECB) within a network. The complex adaptive system (CAS) in our study is the Nanoscale Informal Science Education Network (NISE Net), a network that has operated continuously for ten years and currently comprises over 400 science museum and university partners (https://player.vimeo.com/video/111442084). The research team includes people from the University of Minnesota, the Museum of Science in Boston, the Science Museum of Minnesota, and the Oregon Museum of Science and Industry.

This week CASNET team members will highlight what we’re learning about ECB in a network using systems and complexity theory concepts. Here is a quick summary of three lessons we learned about ECB in a network and systems readings we found helpful.

Lessons Learned:

  1. ECB involves creating and sustaining infrastructure for specific components of the evaluation process (e.g., framing questions, designing studies, using results). Applying a systems lens to the network we studied demonstrated how two contrasting elements supported ECB:
  • “Internal diversity” among staff’s evaluation skills (including formally trained evaluators, novices, thoughtful users, and experts in different subject areas) provided a variety of perspectives to build upon.
  • “Internal redundancy” of skill sets helped ensure that when people left positions, evaluation didn’t leave with them because someone else was able to continue evaluative tasks.
  2. ECB necessitates a process that engages people in actively learning evaluation, typically through training (purposeful socialization), coaching, and/or peer learning. The systems concepts of neighbor interactions and massive entanglement pointed to how learning occurred in the network. NISE Net members typically took part in multiple projects, interacting with many individuals in different roles at different times. Network mapping visually documented the “entanglement” of people from multiple museums, work groups, and numerous roles that supported ECB over time (see the sketch after this list).
  3. The degree of decision-making autonomy a team possessed influenced the ways in which, and the extent to which, ECB took place. Decentralized or distributed control, where individuals could adapt an evaluation process to fit their context, helped cultivate an ECB-friendly internal organizational context. Not surprisingly, centralized control of the evaluation process was less conducive to building evaluation capacity.
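As a rough illustration of the kind of network mapping mentioned in lesson 2, the sketch below projects a people-to-work-group affiliation network onto a people-to-people “entanglement” map using Python and the networkx library. The names, work groups, and affiliations are invented; this is not the CASNET team’s actual mapping data or tooling.

```python
# Illustrative only: invented people and work groups, not CASNET data.
import networkx as nx
from networkx.algorithms import bipartite

# Affiliation (two-mode) network: people connected to the work groups they join.
affiliations = [
    ("Alice", "Exhibits group"),
    ("Alice", "Evaluation group"),
    ("Bob", "Evaluation group"),
    ("Bob", "Outreach group"),
    ("Carmen", "Exhibits group"),
    ("Carmen", "Evaluation group"),
]

B = nx.Graph()
people = {person for person, _ in affiliations}
groups = {group for _, group in affiliations}
B.add_nodes_from(people, bipartite=0)
B.add_nodes_from(groups, bipartite=1)
B.add_edges_from(affiliations)

# Project onto people: edge weight = number of work groups two people share,
# a rough proxy for the "entanglement" described in the post.
entanglement = bipartite.weighted_projected_graph(B, people)
for u, v, data in entanglement.edges(data=True):
    print(f"{u} -- {v}: {data['weight']} shared work group(s)")
```

In practice, weights like these (shared work groups or shared projects) could be recomputed over time to show how entanglement, and with it opportunities for peer learning, grows or thins as people join and leave the network.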

Rad Resources:

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Kate McKegg, Director of The Knowledge Institute Ltd and a member of the Kinnect Group with Nan Wehipeihana. We want to share what we have learned about explaining developmental evaluation (DE).

Evaluation isn’t something that our clients or our communities fully understand, and it can create anxiety. So when we suggest that a client or community undertake a developmental evaluation, this can be extra puzzling for folks.

Rad Resource: We usually begin by reinforcing some key messages about what evaluation is:

[Image: key messages about what evaluation is]

Hot Tip: In our experience, stressing the importance of systematic, well-informed evaluative reasoning is a key step in convincing people that DE is evaluative, and not just some kind of continuous quality improvement process.

Hot Tip: We explain why we think DE is best suited to their situation, namely that:

  • There is something innovative going on: something is in development, and people are exploring, innovating, trying things out, and creating something they hope will make a difference.
  • The situation is socially and/or technically complex, and rapidly changing. People are experimenting with new ideas, new ways of doing things, new approaches, and different relationships and roles, and this is likely to be happening for a while.
  • There is a high degree of uncertainty about what is likely to work, in terms of process, practice, and outcomes. Which pathway the initiative might take is not yet clear, i.e., what the future holds is still unknown.
  • The situation is emergent, i.e., there are continually emerging questions, challenges, successes, and issues for people to deal with in real time.

Hot Tip:  Finally, we explain the key features of DE. We typically focus on the following 4 features:

  • DE has a systems orientation, i.e., understanding an evaluation challenge systemically involves paying attention to relationships, different perspectives, and boundaries; this orientation is ideally suited to working with complexity and emergence.
  • DE involves cycles of learning to inform action using real-time data, as part of an ongoing process of development: probing, venturing, sensing, learning, and re-learning.

Rad Resource: Adaptive action and reflection graphic:

[Image: adaptive action and reflection graphic]

  • DE typically has an emergent evaluation design, so that it can respond to changing needs, issues, and challenges as they arise.
  • With DE, the evaluator typically becomes part of the team, bringing together evaluative thinking and evidence in ways that help key stakeholders understand the quality and value of something in real time.

Rad Resource: The Australasian Evaluation Society (AES) Best Evaluation Policy and Systems Award, 2013, was for a Developmental Evaluation we conducted of He Oranga Poutama, a Māori sport and recreation initiative. You can read about it here.

This week, we’re diving into issues of Developmental Evaluation (DE) with contributions from DE practitioners and authors. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· ·

I am Michael Quinn Patton, author of Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. I am an independent consultant based in Saint Paul, Minnesota. I have been doing and writing about evaluation for over 40 years. This week features posts by colleagues and clients engaged in various developmental evaluation initiatives.

Rad Resource: Developmental evaluation (DE) informs and supports innovative and adaptive development in complex dynamic environments. DE brings to innovation and adaptation the processes of asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data to support project, program, product, and/or organizational development with timely feedback. The first chapter of the Developmental Evaluation book is available online.

Hot Tip: Understand the difference between formative and developmental evaluation. Developmental evaluation is NOT ongoing formative evaluation; this is a common confusion. Developmental evaluation supports adapting and changing an innovation for ongoing development. Formative evaluation supports improving a model and, as originally conceptualized, serves the purpose of getting ready for summative evaluation (Michael Scriven, 1967, “The Methodology of Evaluation”).

Hot Tip: Developmental evaluation is NOT the same as development evaluation.  This is another common confusion.  Development evaluation refers to evaluations done in developing countries. Some development evaluation is developmental, but by no means all.


Hot Tip: Developmental evaluation may be called by other names: adaptive evaluation, real time evaluation, or emergent evaluation.  I often hear from folks that they’ve been doing DE without calling it that.  Here’s an example just published in the journal EVALUATION.

Cool Trick: Go to the AEA Public eLibrary and search for developmental evaluation.  You’ll find lots of presentations and examples.

This week, we’re diving into issues of Developmental Evaluation (DE) with contributions from DE practitioners and authors. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 



Cameron Norman on The Evaluator-as-Designer

You might not think so, but I think you’re a designer.

My name is Cameron Norman and I work with health and human service clients doing evaluation and design for innovation. As the Principal of CENSE Research + Design, I bring together concepts like developmental evaluation, complexity science, and design to help clients learn about what they do and better create and re-create their programs and services so they can innovate for changing conditions.

Nobel Laureate Herbert Simon once wrote: “Everyone designs who devises courses of action aimed at changing existing situations into preferred ones”.

By that standard, most of us who are doing work in evaluation probably are contributing designers as well.

Lessons Learned: Design is about taking what is and transforming it into what could be. It is as much a mindset as it is a set of strategies, methods and tools. Designing is about using evidence and blending it with vision, imagination and experimentation.

Here are some key lessons I’ve learned about design and design thinkers that relate to evaluation:

  1. Designers don’t mind trying something and failing as they see it as a key to innovation. Evaluation of those attempts is what builds learning.
  2. When you’re operating in a complex context, you’re inherently dealing with novelty, lots of information, dynamic conditions, and no known precedent, so past practice will only help so much. Designers know that every product intended for this kind of environment will require many iterations to get right; don’t be afraid to tinker.
  3. Wild ideas can be very useful. Sometimes being free to come up with something outlandish in your thinking reveals patterns that can’t be seen when you try too hard to be ‘realistic’ and ‘practical’. Give yourself space to be creative.
  4. Imagination is best when shared. Design is partly about individual creativity and group sharing. Good designers work closely with their clients to stretch their thinking, but also to enlist them as participants throughout the process.
  5. Design (and the learning from it) doesn’t stop at the product (or service). Creating an evaluation is only part of the equation. How the evaluation is used and what comes from that is also part of the story, because that informs the next design and articulates the next set of needs.

I write regularly on this topic on my blog, Censemaking, which has a library section (http://censemaking.com/library/) where you can find more resources on design and design thinking. Design is fun, engaging, and taps into our creative energies for making things and making things better. Try it out and unleash your inner designer in your next evaluation.


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Cameron Norman and I am the Principal of CENSE Research + Design. My work brings together complexity science, design, and developmental evaluation into something I refer to as developmental design, which is about making decisions in the face of changing conditions.

Lesson Learned: At the heart of developmental evaluation are the concepts of complexity and innovation. Complexity is a word that we hear a lot, but we might not fully know what it means or how to think about it in the context of evaluation.

For social programs, complexity exists:

… where there are multiple, overlapping sources of input and outputs

… that interact with systems in dynamic ways

… at multiple time scales and organizational levels

… in ways that are highly context-dependent

Rad Resources: Complexity is at the root of developmental evaluation. So for those who are new to the idea or new to developmental evaluation, here are 7 resources that might help you get your head around this complex (pun intended) concept:

  1. Getting to Maybe is a book co-written by our good friend Michael Quinn Patton and offers a great starting place for those working in community and human services;
  2. Patton’s book Developmental Evaluation (ch. 5 in particular) is, of course, excellent;
  3. The Plexus Institute is a non-profit organization that supports ongoing learning about complexity applications for a variety of settings;
  4. The Tamarack Institute for Community Engagement has an excellent introduction page, including an interview with Getting to Maybe co-author Brenda Zimmerman;
  5. Ray Pawson’s new book The Science of Evaluation is a more advanced, but still accessible, look at ways to think about complexity, programs, and evaluation;
  6. My blog Censemaking has a library section with sources on systems thinking and complexity that include these and many more;
  7. The best short introduction to the concept is a video by Dave Snowden, How to Organize a Children’s Party, a cheeky illustration of complexity that I often use in my training and teaching.

Complexity is part theory, part science, and all about a way of seeing and thinking about problems. It doesn’t need to scare you, and these resources can really help get you into the right frame of mind to tackle challenging problems and use evaluation effectively as a means of addressing them. It might be complex, but it’s fun.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


This is Andy Rowe. I am an independent consultant operating from South Carolina in the US and from Hilton Beach, Ontario, in Canada. My first evaluation of a resource-related program was an evaluation of the Newfoundland Bait Service in 1985. Since then I have undertaken evaluations in many social and natural settings and on all continents. For the past decade most of my work has been on environmental and conservation efforts in the US and Western Pacific.

The hallmark of evaluation in resource, conservation, and environmental settings is that it occurs at the intersection of complex and linked natural and human systems. Broadly speaking, there are three programmatic classes of interventions (hence resource, environmental, and conservation): resource use is about human use of the natural environment for commercial, recreational, subsistence, and ceremonial purposes; conservation is about protecting the natural system from harmful resource use; and environment is generally about improving the state of both natural (e.g., improving water quality) and human (e.g., public health) systems. Evaluation thinking about these settings is still nascent, and most evaluations are undertaken by domain specialists with little or no evaluation training or experience.

The mechanisms of change are always found in the human system, usually transmitted through both systems and resulting in changes to both systems. Evaluating two complex intersecting systems is hard. For example, it is hard enough to feasibly and ethically apply experimental and quasi experimental designs in human systems, much harder when one has to control for two complex systems. Likewise, getting salmon to tell the story of their experiences while in the open sea for two to five years is often a challenge.

Hot Tip:

  • Engage clients/program officers in identifying the mechanisms of change and discussing sustainability.
  • Avoid oversimplifying; logic models and related approaches do not easily capture complexity.

Those commissioning evaluations usually acknowledge that both systems have a role, but they are most interested in results in the natural system. They usually have what could be termed a faith-based vision of change: for example, that peer-reviewed publications will lead resource managers and governments to change their policies, or that in a world of rapidly declining resources and growing inequality, enforcement is a sustainable approach against poaching.

Rad Resource: The work of Westray et al adapting Stacey’s complexity model is a useful framing tool when dealing with two complex systems. Their approach is invaluable as a descriptive frame and for discourse about the location of mechanisms of change. Click here for an illustration from our recent formative evaluation of the David and Lucile Packard Foundation Ecosystem Based Management Initiative.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.
