The purpose of this blog is to offer my first impressions of the hot topics discussed in the sessions I attended at the 2022 American Evaluation Association (AEA) conference in New Orleans. By way of background, I am Anand Desai, Senior Fellow at Clarivate and Professor of Public Policy Emeritus at the John Glenn College of Public Affairs, Ohio State University. Before joining Clarivate, I served as Head of the Evaluation and Assessment Capability Section and Chief Evaluation Officer at the U.S. National Science Foundation.
Conferences are invigorating. I usually come back feeling energized, full of new ideas and novel linkages among old ideas. This conference lived up to my expectations, and it offered something I had not expected: how genuinely friendly, curious, interested, and attentive everyone was, not only in the sessions but also in the hallways. I do not know whether AEA conference attendees were always so and I had simply not noticed, or whether it was the city, post(?)-pandemic euphoria, the pleasure of bumping fists and elbows or shaking hands with three-dimensional versions of pixelated images, or simply the joy of debate and discussion in person. Whatever the reason (cause is so difficult to establish, right?), it felt like a joyous gathering of birds of a feather.
Most of the sessions I attended were organized by the Research, Technology, and Development (RTD) Topical Interest Group (TIG). I also made forays into sessions organized by the Systems in Evaluation, Research on Evaluation, and Data Visualization and Reporting TIGs. Setting my selection bias aside, one of the first things I noticed was the tremendous interest (people sitting on the floor and standing along the walls) in systems, complexity, and the use of artificial intelligence (AI) and machine learning (ML) methods, which appeared to be creating cracks in the mono-method monopoly of statistics as the method of choice for the collection and analysis of numerical information.
Lessons Learned
ML and AI tools are clearly mixed methods in that they can be used for summarizing and seeking patterns in numerical, textual, and visual data, but they are no panacea, yet. These methods are fast (velocity) and can handle large amounts of data (volume), but the question of inference (validity and veracity) still requires careful consideration and study. Although progress has been made on this front, researchers and practitioners will have to work closely together to demonstrate that the evidence such approaches generate can withstand careful scrutiny before the evaluation community embraces these methods.
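To make the "mixed methods" point concrete, here is a minimal, hypothetical sketch of ML-style pattern-seeking in evaluation data: clustering open-ended survey responses into tentative themes. It assumes Python with the scikit-learn library; the responses and the choice of two clusters are invented purely for illustration.

```python
# A minimal, hypothetical sketch: grouping open-ended survey responses
# into tentative themes with TF-IDF features and k-means clustering.
# The responses and the cluster count are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "The program helped me find a job in my field.",
    "Staff were responsive and the training was practical.",
    "I found employment within three months of finishing.",
    "The sessions felt rushed and hard to follow.",
    "Training materials were confusing and poorly organized.",
    "Instructors answered questions and tailored the content.",
]

# Turn free text into a numerical matrix (one row per response).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Group responses into k tentative themes; k is an analyst's choice.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for label, text in zip(kmeans.labels_, responses):
    print(f"theme {label}: {text}")
```

Even in this toy form, the sketch illustrates both the promise (fast summarization of text at scale) and the inference problem noted above: the clusters are statistical artifacts until an evaluator validates them against the underlying data.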
The ideas and language of complexity and systems have been part of the evaluation vernacular for some time[1], and ML and AI now offer tools for operationalizing the underlying concepts for the evaluation of complex adaptive systems and their outputs and outcomes. Before that can be done, however, evaluators will have to resolve how to address some of the key characteristics of complex adaptive systems. For instance, an explicit process will have to be established for defining system boundaries. If emergence is a fundamental feature of a complex system, then some form of contextualized attribution will have to be developed to address causality in such systems. Logic models will have to be adapted to accommodate program and project dynamics and feedback. And if theories of change are going to correspond to systemic change, then we will have to consider the implications of a systems perspective for data and data collection. Currently, most data are episodic and collected at the individual or component level. For system-level change we need data on processes, flows, interdependencies, and relationships, in addition to data on specific indicators of changes in variable values and the correlations among them.
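As one hypothetical illustration of what "data on processes, flows, interdependencies, and relationships" might look like in practice, the sketch below represents a program as a directed network rather than a set of independent indicators. It assumes Python with the networkx library; the component names, flow weights, and feedback loop are all invented for illustration.

```python
# A hypothetical sketch of system-level data as a network:
# nodes are program components, weighted edges are flows or dependencies.
# All names and weights here are invented for illustration.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("funder", "program", 1.0),
    ("program", "training", 0.6),
    ("program", "outreach", 0.4),
    ("training", "participants", 0.8),
    ("outreach", "participants", 0.5),
    ("participants", "community", 0.7),
    ("community", "program", 0.2),  # a feedback loop back into the program
])

# Relational measures shift attention from single indicators to structure:
# which components sit on the most pathways through the system?
# (Computed here without edge weights, for simplicity.)
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")

# Feedback loops (cycles) are one signature of a complex adaptive system.
print("feedback loops:", list(nx.simple_cycles(G)))
```

Nothing here is specific to networkx; the point is that relational representations make boundaries, feedback loops, and interdependencies explicit objects of analysis rather than afterthoughts.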
Early progress on some of the issues mentioned above was reported in the sessions I attended. But as always with research, exciting developments in perspectives and approaches lie ahead. I look forward to what colleagues have to offer in Indianapolis in 2023.
[1] New Directions for Evaluation, June 2021.
The American Evaluation Association is hosting Research, Technology and Development (RTD) TIG Week with our colleagues in the Research, Technology and Development Topical Interest Group. The contributions all this week to AEA365 come from our RTD TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.