Hi, I’m Jonny Morell and I’m looking for people interested in how agent-based modeling can be combined with traditional evaluation methods.
For the past few years, I have been thinking and writing a lot about how evaluation can anticipate and respond to unexpected changes in programs. The difficulty as I see it is that many powerful evaluation designs have inherent rigidities that make it difficult to adapt them to new circumstances. For instance, there are designs that require well-validated, psychometrically tested scales. There are designs that require maintaining boundaries among comparison groups. There are designs that require data collection (whether qualitative or quantitative) during narrow windows of opportunity in a program’s life cycle. There are designs that require carefully developed and nurtured relationships with a particular group of stakeholders. Many other examples are easy to find.
So, how can we keep these kinds of designs in our arsenal when there is a high probability that programs will change in such a way as to require a different evaluation design? Most of what I have been writing on this topic embeds specific data collection and research design methodologies in a theory that draws on elements of organizational behavior and complex adaptive systems. Any given method I advocate, however, is well known and familiar.
Hot Tip: Lately I have been teaming with a computer scientist to test an approach that is less familiar in evaluation. He and I have been working on processes that will tightly integrate continual iterations of traditional evaluation with agent-based modeling. Our hypothesis is that such integration will provide evaluators with leading indicators of program change. We have two contentions. First, that the longer the lead time, the greater the opportunity to adjust evaluation designs to changing circumstances. Second, that agent-based modeling can provide information that will not come from other simulation methods.
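To give readers unfamiliar with the technique a feel for what an agent-based model looks like, here is a minimal, hypothetical sketch (not our actual model): agents adopt an evidence-based practice once enough of their peers have, and the simulated adoption curve is the kind of trajectory that could serve as a leading indicator of program change.

```python
import random

class Agent:
    """A staff member who may adopt an evidence-based practice."""
    def __init__(self, threshold):
        # Fraction of peers who must adopt before this agent does
        self.threshold = threshold
        self.adopted = False

def step(agents):
    """One model iteration: agents adopt when enough peers already have."""
    frac = sum(a.adopted for a in agents) / len(agents)
    for a in agents:
        if not a.adopted and frac >= a.threshold:
            a.adopted = True

def run(n_agents=100, n_steps=20, seed=1):
    random.seed(seed)
    agents = [Agent(random.random()) for _ in range(n_agents)]
    for a in agents[:5]:          # seed a few early adopters
        a.adopted = True
    trajectory = []
    for _ in range(n_steps):
        step(agents)
        trajectory.append(sum(a.adopted for a in agents) / n_agents)
    return trajectory

# The shape of this simulated adoption curve (e.g., how quickly it takes off)
# is the kind of leading signal that might warn evaluators that the program,
# and thus the evaluation design, is about to change.
trajectory = run()
```

Real models in this line of work would be calibrated against ongoing evaluation data each iteration; this toy version only illustrates the basic mechanics of agents, rules, and emergent system-level behavior.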
Hot Opportunity: We are now hunting for evaluators with access to ongoing or incipient evaluations who may wish to work with us. Don’t be shy. Send me an email at email@example.com.
Rad Resource #1: For in-depth coverage of the ideas in this post, check out Morell, J.A. (2010). Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. Guilford Press.
Rad Resource #2: For more on these ideas, check out Morell, J.A., Hilscher, R., Magura, S., and Ford, J. (2010). Integrating Evaluation and Agent-Based Modeling: Rationale and an Example for Adopting Evidence-Based Practices. Journal of Multidisciplinary Evaluation, 6(14). (http://survey.ate.wmich.edu/jmde/index.php/jmde_1/issue/view/30/showToc)
The American Evaluation Association is celebrating Systems in Evaluation Week with our colleagues in the Systems in Evaluation AEA Topical Interest Group. The contributions all this week to aea365 come from our Systems TIG members, and you may wish to consider subscribing to our weekly headlines and resources list, where we’ll be highlighting Systems resources. You can also learn more from the Systems TIG via their many sessions at Evaluation 2010 this November in San Antonio.