AEA365 | A Tip-a-Day by and for Evaluators


Hi, I’m Bethany Laursen, interdisciplinarity consultant and chair of the 2016 Systems in Evaluation TIG (Un)Conference coordinating committee. When SETIG was searching for a community learning activity, we chose the unconference format to demonstrate the systems principles our TIG emphasizes.  An unconference applies Open Space Technology to group meetings. Instead of a predetermined program with speakers and presenters, the participants themselves decide the topics to be discussed in breakout rooms when they arrive, and everyone is an equal contributor.

Participants loved our inaugural unconference. The follow-up evaluation showed that 95% of participants would attend another SETIG (Un)Conference, and 79% would recommend a similar-style event to others. You can run an unconference too!

Rad Resource: MIT’s UnHangout Platform is designed especially for virtual unconferences. It’s free, user-friendly, and their support team is great.

Lesson Learned: Since none of us had any prior experience running a purely “open” unconference format (one in which participants build the entire structure once they show up), we added a moderate amount of structure in hopes of enticing more people to participate. We predetermined 12 discussion topics and recruited a facilitator for each topic. However, we also gave participants the option to create new topics when they arrived online. Unlike traditional conferences, facilitators did not present on the topic but rather facilitated discussion and sharing. Participants were free to come and go from the breakout rooms as they pleased.

Rad Resource: We structured our unconference around the Principles of Open Space Technology:

  1. Whoever comes are the right people.
  2. Whatever happens is the only thing that could have.
  3. When it starts is the right time.
  4. When it’s over it’s over.
  5. The Law of Two Feet: If you find yourself in a situation where you are neither learning nor contributing, move somewhere where you can.

Lesson Learned: Many people did not test their connections before the first day, so some experienced technical problems that might have easily been avoided. This year, we plan to hold a dress rehearsal to facilitate technical troubleshooting before the event.

Hot Tip: If your TIG is considering an unconference, be sure to coordinate your planned date with AEA’s calendar of events.

Rad Resource: See this post for a more detailed write-up of our inaugural unconference, including a link to our evaluation results.

The American Evaluation Association is celebrating this week with our colleagues in the Systems in Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Systems in Evaluation TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I’m Jennifer Lawlor, volunteer promotions coordinator for the annual Systems in Evaluation (Un)Conference. The AEA Systems in Evaluation TIG invites evaluators around the world to participate in this collaborative learning event on June 21, 2016.

This year’s (Un)Conference theme will align with AEA’s annual conference theme, Evaluation + Design.  Unlike a typical conference, an unconference has no formal program or speakers and it’s fully online! The (Un)Conference will last for two hours, during which time participants will log on to the unconference platform and select a discussion topic or propose their own topic related to systems in evaluation. Some possible topics might include strategies for incorporating a systems approach in evaluation, using developmental evaluation as part of systems evaluation, and visualizing complex systems. Throughout the (Un)Conference, participants are encouraged to change groups and explore multiple topics.

The group-directed discussions work well for evaluators who are just starting to incorporate systems into their work as well as seasoned experts with something to share. This unique process creates a space where emergent learning can occur and adapt in real time to the needs of participants.

Rad Resource: Check out the (Un)Conference website for details as they emerge.   Registration is free, but space is limited! Reserve your spot here!

Rad Resource: To learn more about last year’s Systems in Evaluation (Un)Conference, see Bethany Laursen’s blog post.

Hot Tip: This year we’re going to hold a short dress rehearsal so participants will have a chance to test their audio and visual connection to the (Un)Conference software before the event begins. An unconference is not like a traditional webinar where you can log in at the last minute. We strongly recommend you take advantage of this opportunity when it’s announced to prepare for the big day!



Hello! I’m Kylie Hutchinson (a.k.a. @EvaluationMaven), independent evaluation consultant with Community Solutions Planning & Evaluation and volunteer member of the 2016 Systems in Evaluation (Un)Conference coordinating committee.

The day I completed my first real system map felt like a genuine accomplishment. A system map visually depicts the various elements present in a given system and identifies opportunities to intervene. I put a lot of time into developing the map and thoughtfully expanded it to legal-size paper for improved readability. Then I proudly presented it at my next advisory committee meeting, only to be met with a table of blank stares. People were polite, but I heard things like, “It’s…uh…kind of overwhelming.”

I stood there baffled. To me, the potential leverage points and strategies were practically jumping off the page! How could they not get it? Then it dawned on me that not everyone likes to learn through visuals like I do. Put simply, people have different learning preferences. Some prefer visual learning, some prefer auditory, and still others prefer learning kinesthetically by “doing.” Learning preferences are a massive area of study, but suffice it to say that if I wanted my stakeholders to understand and use the map, I needed to accommodate these preferences in some way.

Hot Tip:  For those who aren’t visual learners, consider reformatting your map into an accompanying table, with each element of the map comprising a row in the table.  For those with an auditory learning preference, walk them through the map discussing the various elements and connections present.  For those who prefer to learn kinesthetically, provide them with an opportunity to physically develop the map themselves in a group using post-it notes, crayons, and mural paper, or thumbtacks and string or rubber bands.

Hot Tip:  Print out your map as an 11 x 17 (A3) data placemat and bring copies to every meeting.  This encourages stakeholders to not ignore the map and gives them additional opportunities to review it, doodle on it, and use it.  Or print it out as a 36” x 48” poster and hang it in the lunch room with a pencil.


Lesson Learned: I am now convinced that systems maps and flow charts are primarily helpful to one group of people only: those who actually created them. So involve as many users as you can in their hands-on development. Then people can see for themselves why one element or arrow in the map might influence another.

Rad Resource:  To draw my systems map I used VensimPLE software.  With only a short video tutorial I was up and running with the basics.

Rad Resource:  You can also find resources for developing systems maps on this Pinterest page.  


I’m Srik Gopal and I co-lead the Strategic Learning and Evaluation practice at FSG, a social impact consulting firm. We engage with a variety of clients – foundations, nonprofits, and companies – in designing and implementing evaluations with a systems lens. In this post, I talk about using system mapping as a tool to support evaluation and learning.

System mapping is a visual way to represent a system’s components and connections. Popular types of system maps include actor maps, trend maps, concept maps, causal loop diagrams, social network maps, and stock and flow diagrams. We have used system mapping in our work with initiatives ranging from improving college access and success, to catalyzing change in early childhood, to strengthening local food systems.
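Of the map types above, stock and flow diagrams are the most directly quantitative: a "stock" accumulates the difference between its inflow and outflow over time. A minimal sketch of that idea, in plain Python with entirely made-up numbers (the participant counts and rates here are hypothetical, not from any real evaluation):

```python
# Hypothetical stock-and-flow sketch: a "stock" (say, enrolled
# participants) fed by a constant inflow and drained by an
# outflow proportional to the current stock.
stock = 100.0
inflow = 10.0        # units added per period
outflow_rate = 0.05  # fraction of the stock lost per period

history = [stock]
for _ in range(20):
    stock += inflow - outflow_rate * stock
    history.append(stock)

# The stock rises toward the equilibrium where inflow equals
# outflow: inflow / outflow_rate = 10 / 0.05 = 200.
print(round(history[-1], 1))
```

Even a toy model like this can anchor a conversation about where a diagram's feedback loops push the system over time.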

Hot Tip: Kumu is an interactive platform to create, display, and manipulate system maps. We have used it extensively for creating actor maps.  

Rad Resource:  The UK Open University Systems Group has developed various useful resources for different kinds of mapping and diagramming processes.

Rad Resource:  FSG has recently put together a guide to actor mapping plus a four-part blog series based on our experience in this area.

Lessons Learned:

  1. System mapping is more effective when it is part of a larger process of strategy, evaluation, and learning, rather than a stand-alone activity.
  2. The purpose of the system map is to tell a story about the system that is being studied.
  3. The process of a group coming together to create a system map is as important as, or more important than, the end product itself.
  4. Co-creation and iteration are the name of the game when it comes to system mapping.

Happy mapping!


I’m Brandon Coffee-Borden, an Associate at Community Science and Program Co-chair for the Systems in Evaluation TIG. In this post, I will discuss the use of social network analysis (SNA) as a way to describe and analyze systems.

SNA is an approach to understanding structures and connections through the use of network theory. Networks are described in terms of actors within a network (people, organizations, groups, or other entities), called “nodes,” and the relationships, interactions, or links that connect these actors, called “ties” or “edges.” Evaluators can use SNA to understand the structural characteristics of a system, identify key actors, examine the roles of different actors, and explore the presence and structure of subgroups within a system. SNA can provide insight into promising approaches for changing the structure, interrelationships, and flows within a system, and into how these changes could impact how the system functions. SNA can also help document and analyze how a system develops and evolves over time.
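To make the node/tie vocabulary concrete, here is a minimal sketch in plain Python (the actor names and ties are hypothetical) that computes degree centrality, one simple way to flag "key actors" in a network:

```python
from collections import defaultdict

# Hypothetical collaboration ties among five actors ("nodes");
# each pair is one undirected tie ("edge").
ties = [
    ("Agency A", "Nonprofit B"),
    ("Agency A", "Funder C"),
    ("Agency A", "School D"),
    ("Nonprofit B", "Funder C"),
    ("School D", "Clinic E"),
]

# Degree centrality: count the ties each actor has.
degree = defaultdict(int)
for a, b in ties:
    degree[a] += 1
    degree[b] += 1

key_actor = max(degree, key=degree.get)
print(key_actor, degree[key_actor])  # Agency A, with 3 ties
```

Real SNA tools compute many richer measures (betweenness, clustering, subgroup detection), but they all start from this same node-and-tie representation.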

Hot Tip: Gephi is free, open-source software for exploring and manipulating networks. It provides basic network statistics and allows the user to interact with a representation of the network to explore patterns and trends.

Rad Resource:  The American Evaluation Association’s Social Network Analysis in Evaluation Topical Interest Group provides great resources for training in SNA and information on how SNA can be useful for evaluators.

Lessons Learned:

  1. The reflection and participation of those involved in, and knowledgeable of, the system is critical for contextualizing the findings derived from SNA.
  2. An evaluator has to take great care in the collection, management, and analysis of network data. For instance, missing or erroneous data can distort the apparent structure of the network and lead to incorrect interpretations. Similarly, an evaluator has to consider how to appropriately specify and measure interrelationships within the system.
  3. As with any systems inquiry, boundary questions must be attended to when thinking about what lies within the network and what lies outside it to avoid excluding important elements of the network.


Hi! My name is Jan Noga and I am the owner of Pathfinder Evaluation and Consulting in Cincinnati, Ohio. I’m also a long-time member of the Systems TIG and user of systems thinking in my evaluation practice.

A systems perspective for evaluation provides incredible insight into the “how” and “why” of program outcomes, generating rich descriptions of complex, interconnected situations that help stakeholders and clients build a deeper understanding and inform choices for subsequent action. But, as much as we all might intuitively understand that we live in an interconnected world of multiple perspectives, agents, and boundaries, it can be challenging to get clients, stakeholders, and even other evaluators on board with using a systems approach to evaluation.

Why is this always so hard? I find a good deal of the resistance I encounter springs from the perception that systems theory and systems thinking are highly technical, specialized fields that are difficult to learn and to use. I say “systems” and folks immediately think “more complicated than we need” or “lots of math” or “expensive.”

The notion of systems thinking is intimidating for the uninitiated, but that doesn’t mean you should give up on using systems to inform your evaluation. What it does mean is learning how to “talk systems” in ways that are approachable, understandable, and well within the comfort zone of clients, stakeholders, and evaluation partners.

Hot Tip: Start with the familiar. You don’t have to come into an evaluation armed with an entirely new language or a raft of new tools drawn from the systems field. Instead, think about how you can use systems thinking to inform the design of the survey, observation, or interview you were going to do anyway. Many systems approaches are highly usable and lend themselves well to integration into an overall data collection plan without you ever having to use the term “systems.”

Rad Resource: The Systems Thinking Playbook by Linda Booth Sweeney and Dennis Meadows and Systems Concepts in Action: A Practitioner’s Toolkit by Bob Williams and Richard Hummelbrunner are rich sources of strategies and concepts for integrating a systems perspective into your evaluation design.

Hot Tip: Graphic organizers can go a long way toward easing resistance to the notion of systems in evaluation. Rich pictures, maps, process graphics, even non-linear logic models can all help illustrate systems thinking concepts in a way that is non-intimidating and friendly. But don’t go crazy, either. Simple and easy to follow are critical.

Rad Resource: A good example of a user-friendly model and graphic is the systems thinking iceberg. An excellent description of this model can be found here. When using this with clients, take out the word “systems” and you’ll be ready to go!



Greetings! This is Scott Pattison and Melanie Francisco from the Oregon Museum of Science and Industry and Juli Goss from the Museum of Science, Boston. We are part of the research team for the NSF-funded Complex Adaptive Systems as a Model for Network Evaluations (CASNET) project. Today we’re focusing on how project leaders and senior managers can use system-level thinking to support evaluation use and capacity building within project teams, institutions, or networks.

While a lot has been written about the importance of professional development and training strategies for fostering evaluation capacity building (ECB) at different levels, we’ve found that many system factors beyond training shape how evaluation is used and how evaluation knowledge, skills, and values spread across individuals and throughout organizations. In fact, in the right circumstances, ECB can be supported without explicit training. Here are recommendations from the CASNET team.

Hot Tips:

#1: Create a buzz! Express how much you value evaluation, share evaluation reports and findings, regularly participate in outside data collection opportunities, and connect with other projects with strong evaluation components. One of the biggest surprises in our research was the synergistic impact that many diffuse evaluation-related influences can have on an individual’s evaluation capacity. Study participants often shared stories about how the combined effect of these influences shaped their perspectives on and use of evaluation.

#2: Build teams for success and resilience. Create teams of individuals with different evaluation-related skills, experiences, and comfort levels. We found that the exchange of diverse experiences and knowledge contributed to strong evaluative thinking within the team. Even those with more evaluation experience benefited from the perspectives and knowledge of other team members.

Also, build in redundant expertise within your institution and your projects so that evaluation capacity building can continue even if one or two individuals move on. For example, sending at least two staff members to an evaluation training is a great way to ensure that the knowledge from that training persists and that training participants are able to motivate each other to share and act on what they have learned.

#3: Empower teams to take control. Communicate your expectation that evaluation and data-based decision making should be an integral part of the work at your institution or in your projects, but also explicitly empower groups to use knowledge and resources in ways that make sense to them.

We observed a strong shared value for evaluation communicated by project leaders and a clear expectation for teams to incorporate evaluation and team-based inquiry into their work. At the same time, there was a great deal of freedom in how team members and partners chose to meet these expectations. Groups adapted evaluation and team-based inquiry in diverse ways to meet their own needs and settings.

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is Jean King and Gayra Ostegaard Eliou, from the University of Minnesota, members of the Complex Adaptive Systems as a Model for Network Evaluations (CASNET) research team. NSF funded CASNET to provide insights on (1) the implications of complexity theory for designing evaluation systems that “promote widespread and systemic use of evaluation within a network” and (2) the complex system conditions that foster or impede evaluation capacity building (ECB) within a network. The complex adaptive system (CAS) in our study is the Nanoscale Informal Science Education Network (NISE Net), a network that has been operating continuously for ten years and currently comprises over 400 science museum and university partners (https://player.vimeo.com/video/111442084). The research team involves people from the University of Minnesota, the Museum of Science in Boston, the Science Museum of Minnesota, and the Oregon Museum of Science and Industry.

This week CASNET team members will highlight what we’re learning about ECB in a network using systems and complexity theory concepts. Here is a quick summary of three lessons we learned about ECB in a network and systems readings we found helpful.

Lessons Learned:

  1. ECB involves creating and sustaining infrastructure for specific components of the evaluation process (e.g., framing questions, designing studies, using results). Applying a systems lens to the network we studied demonstrated how two contrasting elements supported ECB:
  • “Internal diversity” among staff’s evaluation skills (including formally trained evaluators, novices, thoughtful users, and experts in different subject areas) provided a variety of perspectives to build upon.
  • “Internal redundancy” of skill sets helped ensure that when people left positions, evaluation didn’t leave with them, because someone else was able to continue evaluative tasks.
  2. ECB necessitates a process that engages people in actively learning evaluation, typically through training (purposeful socialization), coaching, and/or peer learning. The systems concepts of neighbor interactions and massive entanglement pointed to how learning occurred in the network. NISE Net members typically took part in multiple projects, interacting with many individuals in different roles at different times. Network mapping visually documented the “entanglement” of people from multiple museums and work groups, in numerous roles, that supported ECB over time.
  3. The degree of decision-making autonomy a team possessed influenced the ways in which–and the extent to which–ECB took place. Decentralized or distributed control, where individuals could adapt an evaluation process to fit their context, helped cultivate an ECB-friendly internal organizational context. Not surprisingly, centralized control of the evaluation process was less conducive to building evaluation capacity.

Rad Resources:


We’re Bethany Laursen and Kylie Hutchinson, volunteer members of the upcoming Systems in Evaluation Unconference coordinating committee. From October 13 to 15, 2015, 3:00 to 4:30 pm ET, evaluators everywhere will be able to participate in a unique online learning event: a virtual unconference hosted by the AEA Systems in Evaluation Topical Interest Group (SETIG).

If you’ve never attended an unconference before, you’ll find it quite different from regular conferences. Rather than following the conventional program of speakers and presenters, an unconference is a more loosely structured meeting that emphasizes the informal and emergent exchange of ideas between participants. Each day for 1.5 hours participants will log on at the indicated time and, following a short welcome and orientation, either join a specific discussion topic in a virtual breakout room or create their own. Share and build your knowledge and skills in systems evaluation without ever leaving the office! This learning opportunity is a direct response to the expressed desire of SETIG members for more networked learning opportunities outside of the AEA conference.

Rad Resource: For more information about how the unconference will work and proposed topics check out the unconference website.

Hot Tip: Registration is free but space will be limited so register early.

Hot Tip: Participants will need a Google account to participate, so don’t leave this until the last minute. The unconference will operate using MIT’s UnHangout so take five minutes to watch the orientation video on the unconference website.

Hope to see you online in October!



Kia Ora! I’m Bob Williams. In our book Systems Concepts in Action: A Practitioner’s Toolkit, Richard Hummelbrunner and I distinguished between describing situations, thinking systemically, and being systemic. I see these notions as describing three stages of a journey. As you read these three scenarios, pose yourself the following questions. How well do the scenarios describe my own journey? In what ways do the similarities and differences matter? Who or what can help me move further along my journey?

Describing situations (or systems). During this part of the journey you may be talking about systems as ‘real’ things, often big things (e.g., the health system or the school system). You have acknowledged that much of what you observe and describe is complex. You may have heard about holism and be trying to include everything in your evaluations. You are seeing how inter-relationships create observable and significant patterns. You are describing fresh differences that make a difference. On the other hand, you may feel overwhelmed by the sheer scale of what you need to consider. You are starting to worry about practicality and how to simplify in order to get your head around the vastness of it all.

Thinking systemically. At this point in your journey you may be simplifying by considering ‘systems’ less as real-life entities and more as mental models that help you think about ‘situations’. You are engaging with how different people ‘see’ the same situation in entirely different ways and learning more ways to set boundaries around your systemic thinking. You are probably looking at specific systems and complexity methods to help you with this process. You are applying some of these approaches and gaining deeper insights into how to evaluate messy situations. On the other hand, you may be frustrated by the range of methods and uncertain which ones work best in which circumstances.

Being systemic. You find that you intuitively understand inter-relationships, engage with multiple perspectives, and reflect deeply on the practical and ethical consequences of the boundary choices you make. You use these insights with existing evaluation approaches and rely less on specific systems methods. You probably realise that choosing the values that underpin your judgments of merit, worth and significance is a form of boundary setting.

Hot Tip: Every endeavour is bounded.  We cannot do or see everything.  Every viewpoint is partial.  Therefore, holism is not about trying to deal with everything, but being methodical, informed, pragmatic and ethical about what to leave out.  And, it’s about taking responsibility for those decisions.

Bob Williams received the 2014 AEA Lazarsfeld Award for contributions to “fruitful debates on the assumptions, goals and practices of evaluation.”
