AEA365 | A Tip-a-Day by and for Evaluators

TAG | Systems

Hello, I’m Roger A. Boothroyd from the University of South Florida. One thing I have learned from conducting mental health services research over the past 30 years: research repeatedly documents that approximately two-thirds of adults diagnosed with a mental health disorder have at least one physical health condition. It is also well known that comorbidity of mental health and substance abuse disorders is high, ranging between 35% and 45%. Further, many adults with mental illness are likely to be arrested. Finally, over half of adults with a mental health disorder do not receive treatment. Thus: 1) mental health issues seldom occur in isolation but typically co-occur with other conditions; and 2) among adults with mental health disorders who do enter treatment, these comorbid conditions often mean they are served simultaneously by multiple service systems.

For children and youth with emotional and behavioral challenges, the issue of simultaneous multiple service system involvement is even more complex. Children and youth attend school, so the educational system is necessarily involved. Often, they are involved with the child welfare and/or juvenile justice systems; and, of course, their families play a significant role in their day-to-day lives. Thus, the question for us as evaluators is: How can we realistically evaluate the effectiveness of a program or an intervention without assuming a more systems level evaluative perspective?

Lesson Learned: Some 20 years ago, I was involved in an evaluation that explored why so few adults with severe mental illness who sought vocational rehabilitation services received them and succeeded in obtaining jobs. Our evaluation used a systems thinking framework that involved modeling how individuals with severe mental illness entered and moved through the mental health and vocational rehabilitation systems. At the start of the evaluation, the prevailing hypothesis (mine included) was that there were not enough resources available for vocational rehabilitation services for adults with severe mental illness. Yet, when the cross-systems model was constructed, it showed that many adults with severe mental illness were in fact receiving vocational rehabilitation services. The real problem was the lack of a sufficient number of jobs for the adults who were trained, and the lack of jobs prevented them from exiting the vocational rehabilitation system. In fact, the model predicted that if more resources had been devoted to vocational rehabilitation services, the functioning of both systems would have gotten much worse. The answer was straightforward: open up more jobs. The county mental health and vocational rehabilitation departments worked with their Chamber of Commerce and local businesses to secure job placements for adults who had completed vocational rehabilitation training. As the flow of adults through these systems improved, the capacity to train other adults increased – all without new resources. This was my first introduction to systems thinking and to seeing firsthand the importance of assuming a broader evaluation perspective.
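
To make this dynamic concrete, here is a minimal stock-and-flow sketch in Python. It is purely illustrative: the stocks, flows, and numbers are invented and are not taken from the original model. The logic, though, is the one described above: when trained adults cannot exit the vocational rehabilitation system, adding caseload capacity only enlarges the backlog, while opening up jobs restores flow through both systems without new training resources.

```python
# Minimal, hypothetical stock-and-flow sketch; all parameters are illustrative only.

def simulate(months, referrals, caseload_capacity, job_openings):
    """One-month steps. People on the vocational rehabilitation (VR) caseload are
    either in training or trained-and-waiting; they leave only when placed in a job."""
    waiting_list = in_training = trained_waiting = employed = 0
    for _ in range(months):
        waiting_list += referrals
        # Trained clients exit the VR system only when a job opening exists.
        placed = min(trained_waiting, job_openings)
        trained_waiting -= placed
        employed += placed
        # Everyone in training this month finishes and joins the job queue.
        trained_waiting += in_training
        # New intake is limited by whatever caseload capacity is left over.
        free_slots = max(caseload_capacity - trained_waiting, 0)
        in_training = min(waiting_list, free_slots)
        waiting_list -= in_training
    return dict(waiting_list=waiting_list, trained_waiting=trained_waiting, employed=employed)

base = dict(months=24, referrals=30, job_openings=10)
print(simulate(caseload_capacity=60, **base))   # jobs are the bottleneck: caseload clogs, waiting list grows
print(simulate(caseload_capacity=120, **base))  # "more VR resources": the clog simply gets bigger
print(simulate(months=24, referrals=30, caseload_capacity=60, job_openings=30))  # more jobs: flow restored
```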

This week, evaluators from the Behavioral Health (formerly Alcohol, Drug Abuse, and Mental Health) Topical Interest Group will share their strategies, experiences, and insights gained from conducting behavioral health-related evaluations that assumed this broader systems-level perspective.

The American Evaluation Association is celebrating Behavioral Health (BH) TIG Week with our colleagues in the Behavioral Health Topical Interest Group. The contributions all this week to aea365 come from our BH TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is the beginning of a series remembering and honoring evaluation pioneers leading up to Memorial Day in the USA on May 30.

My name is Sara Miller McCune, Co-founder and Chair of Sage Publications. In 1975, Sage published the 2-volume Handbook of Evaluation Research, co-edited by Marcia Guttentag. That Handbook helped establish Evaluation as a distinct field of applied social science scholarship and practice. Marcia conceived the Handbook while serving as president of the Society for the Psychological Study of Social Issues (1971) and Director of the Center for Evaluation Research affiliated with Harvard University. She was a deeply committed feminist, ahead of her time in focusing on gender equity, women’s mental health, reduction of poverty, and intercultural dynamics. As we worked together to finalize the Handbook, I came to appreciate her vivacious personality, wonderful sense of humor, brilliant intellect, and feminist perspective, all of which came into play in conceptualizing the Handbook and seeing it through to publication. Our collaboration on the Handbook led to publishing her breakthrough work on “the sex ratio question” after her untimely death at the age of 45.

Pioneering and Enduring Contributions:

The Handbook articulated methodological appropriateness as the criterion for judging evaluation quality at a time when such a view was both pioneering and controversial. She wrote in the Introduction: “The Handbook provides the type of information that should lead to the consideration of alternative approaches to evaluation and, by virtue of considering these alternatives, to the development of the most appropriate research plan” (p. 4). The Handbook anticipated four decades ago the significance of context and what has become an increasingly important systems perspective in evaluation by devoting four chapters to the conceptual and methodological issues involved in understanding the relationships of individuals, target populations, and programs to “attributes of their environmental context” (p.6). She was surprised, like everyone else at the time, by the huge response to the book, but understood that it foretold the emergence of an important new field. The Handbook introduced a wide readership to evaluation pioneers like Carol Weiss and Donald Campbell. In addition, Marcia Guttentag led the founding of the Evaluation Research Society in 1976, AEA’s predecessor organization. It is altogether appropriate that the AEA Promising New Evaluator Award is named in honor of Marcia Guttentag.

Resources:

Derner, G. F. (1980). Obituary: Marcia Guttentag (1932–1977). American Psychologist, 35(12), 1138–1139.

Guttentag, M., & Secord, P. F. (1983). Too many women? The sex ratio question. Beverly Hills: Sage Publications.

Marcia Guttentag, Psychology’s Feminist Voices: http://www.feministvoices.com/marcia-guttentag/

Struening, E. L., & Guttentag, M. (Eds.). (1975). Handbook of evaluation research (2 vols.). Sage Publications.

The American Evaluation Association is celebrating Memorial Week in Evaluation: Remembering and Honoring Evaluation’s Pioneers. The contributions this week are remembrances of evaluation pioneers who made enduring contributions to our field. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

Kia Ora! I’m Bob Williams. In our book Systems Concepts in Action: A Practitioner’s Toolkit, Richard Hummelbrunner and I distinguished between describing situations, thinking systemically, and being systemic. I’ve come to see these notions as describing three stages of a journey. As you read the three scenarios below, ask yourself the following questions: How well do the scenarios describe my own journey? In what ways do the similarities and differences matter? Who or what can help me move further along my journey?

Describing situations (or systems). During this part of the journey you may be talking about systems as ‘real’ things, often big things (e.g., the health system or the school system). You have acknowledged that much of what you observe and describe is complex. You may have heard about holism and be trying to include everything in your evaluations. You are seeing how inter-relationships create observable and significant patterns. You are describing fresh differences that make a difference. On the other hand, you may feel overwhelmed by the sheer scale of what you need to consider. You are starting to worry about practicality and how to simplify in order to get your head around the vastness of it all.

Thinking systemically. At this point in your journey you may be simplifying by considering ‘systems’ less as real-life entities and more as mental models that help you think about ‘situations’. You are engaging with how different people ‘see’ the same situation in entirely different ways and learning more ways to set boundaries around your systemic thinking. You are probably looking at specific systems and complexity methods to help you with this process. You are applying some of these approaches and gaining deeper insights into how to evaluate messy situations. On the other hand, you may be frustrated by the range of methods and uncertain which ones work best in which circumstances.

Being systemic. You find that you intuitively understand inter-relationships, engage with multiple perspectives, and reflect deeply on the practical and ethical consequences of the boundary choices you make. You use these insights with existing evaluation approaches and rely less on specific systems methods. You probably realise that choosing the values that underpin your judgments of merit, worth and significance is itself a form of boundary setting.

Hot Tip: Every endeavour is bounded.  We cannot do or see everything.  Every viewpoint is partial.  Therefore, holism is not about trying to deal with everything, but being methodical, informed, pragmatic and ethical about what to leave out.  And, it’s about taking responsibility for those decisions.

Bob Williams received the 2014 AEA Lazarsfeld Award for contributions to “fruitful debates on the assumptions, goals and practices of evaluation.”

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hello, I am Brian Pittman, a Research Associate at Wilder Research. Our work involves many different topics, scopes, and stakeholders, but an increasing proportion of our projects deal with complexity. We therefore work to learn and apply the principles and practices of complex systems thinking. This post offers a brief primer on the concepts; it is not intended as a thorough explanation of complex systems theory.

Hot Tip: Identifying complexity.

First, it is important to understand when you are dealing with a complex system. The three primary characteristics of complex systems are:

  • Openness. Complex systems include many inter-related and interacting entities (including other systems) that are scalable (agent affects the system and system affects agent) and co-evolving.
  • Diversity. Complex systems have diverse and varying types of entities or agents.
  • Uncertainty. Unpredictable events may, and often do, occur.

These characteristics help to define systems capable of emergence. You may now be recognizing that some of your projects deal with complex systems, or even that the projects themselves are complex systems! The toy sketch below gives one concrete picture of emergence; after that, let’s look at some of the considerations for engaging complex systems.
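
To make “emergence” concrete, here is a toy Schelling-style model in Python: agents of two types on a line follow one simple local rule (move if fewer than 40% of nearby agents are like me), and a clustered, system-level pattern tends to emerge that no individual rule specifies. The model, rule, and parameters are invented purely for illustration and are not drawn from any Wilder Research project.

```python
import random

random.seed(1)

def neighbours(grid, i, radius=2):
    """Non-empty cells within `radius` of position i."""
    lo, hi = max(0, i - radius), min(len(grid), i + radius + 1)
    return [grid[j] for j in range(lo, hi) if j != i and grid[j] is not None]

def unhappy(grid, i, threshold=0.4):
    """An agent is unhappy if fewer than `threshold` of its neighbours share its type."""
    if grid[i] is None:
        return False
    nbrs = neighbours(grid, i)
    if not nbrs:
        return False
    like = sum(1 for n in nbrs if n == grid[i])
    return like / len(nbrs) < threshold

def step(grid):
    """Each unhappy agent relocates to a random empty cell."""
    empties = [i for i, c in enumerate(grid) if c is None]
    movers = [i for i, c in enumerate(grid) if unhappy(grid, i)]
    random.shuffle(movers)
    for i in movers:
        if empties:
            j = empties.pop(random.randrange(len(empties)))
            grid[j], grid[i] = grid[i], None
            empties.append(i)
    return grid

grid = [random.choice(["A", "B", None]) for _ in range(60)]
for _ in range(30):
    grid = step(grid)
print("".join(c or "." for c in grid))  # clusters of A's and B's tend to form: an emergent pattern
```

The same logic scales up: local interactions among diverse, connected agents can produce program- or community-level patterns that no single actor designed, which is why the mechanisms described next matter for evaluation.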

Hot Tip: Key considerations.

The following are the mechanisms for understanding and influencing complex systems:

  • Connections (aka relationships) represent exchanges between agents and determine the cohesiveness of the system.
  • Perspectives (aka differences) refer to the diversity of agents within the system and provide the “energy” the system needs to be dynamic.
  • Boundaries (aka containers) are what define the scope of the system and help to hold its components together in a pattern.

Lessons learned:

Three lessons we have learned about working with complex systems include:

  • Ask a different kind of evaluation question. First, what is the system and what are its patterns? The mechanisms above can help you answer this. Second, what patterns are wanted or needed? Third, how do we get there? Manipulate the mechanisms. Ask “Are we doing the right thing?” rather than “Are we doing things right?”
  • Provide quick and useful feedback. The evaluation questions are not answered only at the end of a project; they are ongoing explorations of a system that is always adapting and changing.
  • Adapt as needed. Complex systems are adaptive, so don’t be afraid to adapt your evaluation methods, tools, or plans based on your observations and understandings of the system.

Rad Resources:  

  • Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (2010) by Michael Quinn Patton. (Evaluation)
  • Getting to Maybe: How the World Is Changed (2007) by Frances Westley, Brenda Zimmerman, and Michael Quinn Patton. (Social change)
  • Evaluating System Change: A Planning Guide (2010) by Margaret B. Hargreaves. (Evaluation)

The American Evaluation Association is celebrating with our colleagues from Wilder Research this week. Wilder Research is a leading research and evaluation firm based in St. Paul, MN, one of the Twin Cities hosting AEA’s Annual Conference, Evaluation 2012. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Russell Cole, a researcher at Mathematica Policy Research. At Mathematica, we have used Social Network Analysis (SNA) in a number of systems change evaluations. The goals of system change interventions are to affect individuals, organizations, and communities, with changes in outcomes occurring at various levels. In our work, we have used SNA to examine relationships among organizations in these types of evaluations.

Rad Resource: A report on how to conduct systems change evaluation by Margaret Hargreaves includes a description of how SNA data can be used to define system boundaries and identify relationships for evaluation purposes (see Evaluating System Change: A Planning Guide).

Hot Tip: When using SNA for systems change evaluation, identify the key organizations involved in the system. If key organizations are missing or excluded from consideration, the observed network will be incomplete and potentially misleading. Evaluators should develop clear criteria and decision rules to define which organizations to include (and exclude) in the SNA. Transparent inclusion rules define the system boundaries and maintain a consistent representation of the organizations in the system over time. (See also Stacey Friedman’s AEA365 post this week on defining network boundaries.)

Lesson Learned – The organizations participating in a systems change initiative may change over time. During the planning stages for the initiative, consider how or why organizations might enter or leave the system, and hypothesize what impact these changes might have on inter-organizational relationships or performance. As a result of the systems change initiative, organizations not originally included in the system may become involved and may come to meet the evaluation’s inclusion rules. Similarly, some organizations originally in the population may dissolve or become obsolete and thus become ineligible members of the system. Documenting the entry and exit of organizations into and out of the systems change effort is one way to remain responsive to the population of interest for the evaluation. As recommended above, population changes should be based on transparent inclusion/exclusion criteria so that the observed social structure at each time point is grounded in a consistent definition. (See Evaluating Systems Change Efforts to Support Evidence-based Home Visiting: Concepts and Methods for an example of how a single inclusion rule is used to characterize the population of interest over time in Mathematica’s Supporting Evidence Based Home Visitation evaluation.)
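
As a hedged sketch of what this bookkeeping can look like in practice, the Python/NetworkX snippet below applies one consistent (and entirely hypothetical) inclusion rule at each of two waves, documents which organizations entered or exited the bounded system, and compares a simple network measure across waves. The organizations, ties, and rule are invented for illustration; they are not data from the Mathematica evaluation.

```python
import networkx as nx

# Hypothetical organization records for two waves of a systems change evaluation.
# The boolean encodes a single, transparent inclusion rule (e.g., "serves the
# target population AND participated in at least one initiative activity").
wave1 = {"County MH Dept": True, "Family Services": True, "Food Bank": False,
         "School District": True}
wave2 = {"County MH Dept": True, "Family Services": True, "Food Bank": True,
         "School District": False, "Housing Coalition": True}

ties_w1 = [("County MH Dept", "Family Services"), ("Family Services", "School District")]
ties_w2 = [("County MH Dept", "Family Services"), ("County MH Dept", "Housing Coalition"),
           ("Family Services", "Food Bank")]

def bounded_network(records, ties):
    """Keep only organizations that satisfy the inclusion rule, and only ties
    among included organizations, so the boundary definition stays consistent."""
    included = {org for org, ok in records.items() if ok}
    g = nx.Graph()
    g.add_nodes_from(included)
    g.add_edges_from((a, b) for a, b in ties if a in included and b in included)
    return g

g1, g2 = bounded_network(wave1, ties_w1), bounded_network(wave2, ties_w2)

# Document entry and exit rather than silently changing the population.
print("entered:", set(g2.nodes) - set(g1.nodes))   # {'Food Bank', 'Housing Coalition'}
print("exited: ", set(g1.nodes) - set(g2.nodes))   # {'School District'}
print("density wave 1:", round(nx.density(g1), 2))
print("density wave 2:", round(nx.density(g2), 2))
```

In a real evaluation the records and ties would come from surveys or administrative data, and the inclusion rule would be the one agreed on with stakeholders at the planning stage.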

The American Evaluation Association is celebrating SNA TIG Week with our colleagues in the Social Network Analysis AEA Topical Interest Group. The contributions all this week to aea365 come from our SNA TIG members and you can learn more about their work via the SNA TIG sessions at AEA’s annual conference. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Hi, I’m Jonny Morell and I’m looking for people interested in how agent based modeling can be combined with traditional evaluation methods.

For the past few years, I have been thinking and writing a lot about how evaluation can anticipate and respond to unexpected changes in programs. The difficulty as I see it is that many powerful evaluation designs have inherent rigidities that make it difficult to adapt them to new circumstances. For instance, there are designs that require well-validated psychometrically tested scales. There are designs that require maintaining boundaries among comparison groups. There are designs that require data collection (whether qualitative or quantitative) during narrow windows of opportunity in a program’s life cycle. There are designs that require carefully developed and nurtured relationships with a particular group of stakeholders. Many other examples are easy to find.

So, how can we keep these kinds of designs in our arsenal when there is a high probability that programs will change in such a way as to require a different evaluation design? Most of what I have been writing on this topic embeds specific data collection and research design methodologies in a theory that draws from elements of organizational behavior and complex adaptive systems. Any given specific method I advocate however, is well known and familiar.

Hot Tip: Lately I have been teaming with a computer scientist to test an approach that is less familiar in evaluation. He and I have been working on processes that will tightly integrate continual iterations of traditional evaluation with agent based modeling. Our hypothesis is that such integration will provide evaluators with leading indicators of program change. We have two contentions. First, that the longer the lead time, the greater the opportunity to adjust evaluation designs to changing circumstances. Second, that agent based modeling can provide information that will not come from other simulation methods.
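
A minimal sketch of one way such a loop could be wired together appears below. Everything in it is invented for illustration (the toy adoption model, the calibration routine, and the design-risk threshold); it is not the approach we are developing, only a picture of the general idea: observed adoption counts from the traditional evaluation re-calibrate a simple agent-based model after each wave, and the calibrated model is then run forward to flag, ahead of the observed trend, conditions that might require a change to the evaluation design.

```python
import random

random.seed(7)

def run_abm(n_agents, initial_adopters, influence, months):
    """Very simple agent-based adoption model: each month every non-adopter talks
    to a few random colleagues and adopts with probability influence * (share of
    adopters among those contacts)."""
    adopted = [i < initial_adopters for i in range(n_agents)]
    history = [sum(adopted)]
    for _ in range(months):
        current = adopted[:]
        for i in range(n_agents):
            if not current[i]:
                contacts = random.sample(range(n_agents), 5)
                peer_share = sum(current[j] for j in contacts) / 5
                if random.random() < influence * peer_share:
                    adopted[i] = True
        history.append(sum(adopted))
    return history

def calibrate(observed, n_agents):
    """Pick the influence value whose average simulated trajectory best matches
    the adoption counts observed so far in the traditional evaluation."""
    best, best_err = None, float("inf")
    for influence in [i / 20 for i in range(1, 21)]:
        sims = [run_abm(n_agents, observed[0], influence, len(observed) - 1) for _ in range(20)]
        avg = [sum(s[t] for s in sims) / len(sims) for t in range(len(observed))]
        err = sum((a - o) ** 2 for a, o in zip(avg, observed))
        if err < best_err:
            best, best_err = influence, err
    return best

observed_adopters = [5, 9, 16, 27]            # illustrative counts from evaluation waves to date
influence = calibrate(observed_adopters, n_agents=100)
projection = run_abm(100, observed_adopters[-1], influence, months=6)
print("calibrated influence:", influence)
print("projected adopters over next 6 months:", projection)
if max(projection) > 60:                      # illustrative, design-specific threshold
    print("Leading indicator: adoption may soon spill into comparison sites; revisit the design.")
```

The value of the loop is the lead time: if the projection crosses a design-relevant threshold several waves before the observed data do, there is still time to adjust comparison groups, instruments, or data collection windows.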

Hot Opportunity: We are now hunting for evaluators who have access to ongoing or incipient evaluations who may wish to work with us. Don’t be shy. Send me an email at jamorell@jamorell.com.

Rad Resource #1: For in depth coverage of the ideas in this post, check out Morell J.A. (2010) Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. Guilford Press.

Rad Resource #2: For more on these ideas, check out Morell J.A., Hilscher, R., Magura, S., and Ford, J. (2010) Integrating Evaluation and Agent-Based Modeling: Rationale and an Example for Adopting Evidence-Based Practices. Journal of Multidisciplinary Evaluation Vol 6, No 14. (http://survey.ate.wmich.edu/jmde/index.php/jmde_1/issue/view/30/showToc)

The American Evaluation Association is celebrating Systems in Evaluation Week with our colleagues in the Systems in Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our Systems TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Systems resources. You can also learn more from the Systems TIG via their many sessions at Evaluation 2010 this November in San Antonio.

Greetings,

Our names are Mehmet “Dali” Ozturk, Associate Vice President of Research, Evaluation and Development, and Kerry Lawton, Senior Research Specialist, from Arizona State University’s Office of the Vice President for Education Partnerships (VPEP). Our office works with P-20, public, and private sector partners to enhance the academic performance of students in high-need communities. Along with our colleagues, we work to develop sound evaluation designs that consider the many contextual factors that affect educational partnerships and their ability to promote systemic change and increase student achievement. We offer the following advice to evaluators:

Hot Tip #1: Build relationships with experts from across disciplines.

Educational systems focused on improving student outcomes are exceedingly complex. Student achievement is influenced by political and financial considerations that shape a school’s culture and learning environment, and it is further mediated by economic and societal factors affecting students outside of school. Because of this complexity, evaluators should seek assistance from experts across a variety of disciplines, including psychology, economics, sociology, and political science. Including multiple perspectives is likely to provide valuable insight into why a program or initiative was or was not successful, and into whether results are likely to hold if the program is replicated elsewhere.

Hot Tip #2: Ensure participation from stakeholders across the entire evaluated entity.

When evaluating K-12/university partnerships, the evaluation team should include administrators from both the university and the partnering entity, as well as teachers, school personnel, and, if possible, students and parents. Including teachers provides input from those closest to the actual work being done; including administrators provides information on the extent to which the program or initiative is moving the school toward its overall goals.

Hot Tip #3: Create rules of order to guide the actions of the evaluation team.

Just as programs and initiatives are subject to influence from organizational structure, diverse evaluation teams are subject to influence from group dynamics. This can be limiting, particularly when group members represent disparate fields, each speaking a different “language” and drawing from a different knowledge base. In these situations, the lead evaluator must ensure that each member understands their role in relation to the group and is willing to collaborate in support of the overall evaluation goals. We suggest that, during the initial meeting, team members mutually adopt a procedure for reaching consensus and for making decisions when consensus cannot be reached.

Rad Resource: For more on our office’s activities, go to our website (http://educationpartnerships.asu.edu/asu/index.shtml).

The American Evaluation Association is celebrating Systems in Evaluation Week with our colleagues in the Systems in Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our Systems TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Systems resources. You can also learn more from the Systems TIG via their many sessions at Evaluation 2010 this November in San Antonio.

Hello! My name is Jan Noga and I am an independent consultant and owner of Pathfinder Evaluation and Consulting in Cincinnati, Ohio, focusing on the evaluation of K-12 programs and support services at both the local and statewide levels. I am also the chair of the Systems in Evaluation TIG. The TIG is pleased to sponsor this week of AEA365 posts with thoughts and advice from several of our members on issues relevant to systems thinking and evaluation.

Hot Tip: We encourage you to attend some of the sessions sponsored by our TIG in San Antonio this November. We have a strong mix of sessions ranging from the applied to the conceptual as it relates to systems thinking and systems approaches in evaluation. Here is a link to the searchable schedule: http://www.eval.org/search10/search.asp

Hot Tip: Running into systems-related issues in your evaluation practice but not sure how to address them? Check out some of the professional development sessions offered this year that address one or more areas relevant to systems theory and thinking:

  • Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Monday and Tuesday)
  • Systems Thinking and Evaluation Practice: Tools to Bridge the Gap (Tuesday)
  • Advanced Topics in Concept Mapping for Evaluation (Wednesday)
  • Useful Tools for Integrating System Dynamics and System Intervention Elements into System Change Evaluation Designs (Wednesday)
  • Social Network Analysis: Theories, Methods, and Applications (Wednesday)
  • Purposeful Program Theory (Sunday morning)

Hot Tip: Come to the TIG Business Meeting at the AEA Conference. You are invited to attend the business meeting of the Systems in Evaluation TIG. Please join us on Thursday, November 11 from 4:30 pm – 6:00 pm in Lone Star B in the Grand Hyatt. The highlight of our meeting is a Meet and Greet the Authors panel discussion by TIG members Michael Patton, Patricia Rogers, Bob Williams, and Richard Hummelbrunner, authors of three new books appearing in press this year. It’s a great opportunity to learn more about the cutting edge of systems and evaluation.

Hot Tip: Want to get to know us better? Join us Thursday night at Boudro’s on the Riverwalk. The Systems in Evaluation TIG is hosting this Thursday Nights Out event during AEA. It’s a chance to enjoy good food and even better company. Watch for the announcement about sign-ups from AEA.

Rad Resource: Check out our website for news and a detailed list of TIG-sponsored sessions (http://comm.eval.org/EVAL/Systems_in_Evaluation/Home/Default.aspx).

The American Evaluation Association is celebrating Systems in Evaluation Week with our colleagues in the Systems in Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our Systems TIG members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting Systems resources. You can also learn more from the Systems TIG via their many sessions at Evaluation 2010 this November in San Antonio.
