Hi, I am Jim Van Haneghan, Professor of Professional Studies at the University of South Alabama.
I am writing today about Nate Silver’s book The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t. It is an intriguing book about statistical prediction and “big data.” For those who have not read the book, it covers a variety of topics, including economic forecasting, sports prediction, gambling, political polling, epidemics, terrorist attacks, and climate change.
1. The book provided me with both reminders of important habits of mind for evaluators and some new ways to think about data. For example, early on Silver talks about being “out of sample.” The idea is that the data we collect may not be the right data for the context we are addressing. As evaluators, we have to ask whether the logic model we are following leads us to data appropriate to the evaluation at hand. While this seems obvious, we often go into evaluation contexts with one set of expectations only to find those expectations changed, making the model we develop inaccurate. For example, I am currently rethinking my approach to school improvement evaluations because of changes in how schools are now evaluated in our state.
2. Another highlight was Silver’s description of Tetlock’s distinction between experts who are foxes and those who are hedgehogs. Hedgehogs thrive on a single big idea and limited data. Consequently, they are often terrible prognosticators (political pundits on TV, for example). Foxes, on the other hand, are more self-critical, look at data from a variety of perspectives, examine many sources, and draw more modest conclusions. I like to believe that evaluators act like foxes, examining a variety of data to make more informed decisions. Sometimes clients want us to act like hedgehogs, making bold predictions based on limited information. It is important to stay “fox-like” in such situations.
3. Another valuable discussion is Silver’s consideration of Bayesian probability as a way to improve prediction. He emphasizes paying attention to prior probabilities, adjusting the probabilities of outcomes as new information arrives, and focusing on the conditional probabilities of events. In some respects, I believe many evaluators are intuitive Bayesians. Attempts to use Bayesian analysis in evaluation are not new, but the book has led me to think about new ways to integrate this approach.
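For readers who want to see the mechanics, the Bayesian updating described above can be sketched in a few lines of Python. The function and the numbers are purely hypothetical illustrations (not drawn from Silver’s book): a prior belief that a program is effective, updated as positive evaluation evidence comes in.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior P(H | evidence) via Bayes' theorem."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical numbers for illustration:
#   prior belief the program is effective = 0.50,
#   a positive outcome shows up 80% of the time when it works,
#   but 30% of the time by chance when it does not.
posterior = bayes_update(0.50, 0.80, 0.30)
print(round(posterior, 3))   # 0.727

# New information arrives: yesterday's posterior becomes today's prior.
posterior2 = bayes_update(posterior, 0.80, 0.30)
print(round(posterior2, 3))  # 0.877
```

The second call is the whole Bayesian habit of mind in miniature: each round of evidence revises, rather than replaces, what we believed before.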
4. Another important lesson concerns the noise in our data. This is especially true in education, where the measures are psychometrically noisy and the data are sometimes not plentiful enough to distinguish the signal from the noise.
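A quick simulation makes the point concrete. The numbers here are hypothetical, chosen only for illustration: a real program effect of 2 points on a test scale with a 15-point standard deviation, measured on just 25 students per evaluation. Even with a genuine effect, small noisy samples frequently point the wrong way.

```python
import random

random.seed(1)

# Hypothetical setup: true effect of 2 points, 15-point score SD,
# and only 25 students observed in each simulated evaluation.
TRUE_EFFECT, SD, N = 2.0, 15.0, 25

def observed_effect():
    """One simulated evaluation: the mean gain across N noisy scores."""
    gains = [random.gauss(TRUE_EFFECT, SD) for _ in range(N)]
    return sum(gains) / N

estimates = [observed_effect() for _ in range(1000)]
negative = sum(e < 0 for e in estimates)
print(f"{negative / 10:.1f}% of simulated evaluations got even the sign wrong")
```

Roughly a quarter of these simulated evaluations would report a negative effect for a program that truly works, which is exactly the signal-versus-noise problem Silver warns about.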
5. Finally, Silver reminds us that the advent of “big data” does not change the need to attach meaning to data. The availability of more data does not relieve us of the need for rigorous interpretation.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.