AEA365 | A Tip-a-Day by and for Evaluators

July 19, 2013

Jim Van Haneghan on The Signal and the Noise in Evaluation

Hi, I am Jim Van Haneghan, Professor of Professional Studies at the University of South Alabama.

I am writing today about Nate Silver’s book The Signal and the Noise: Why So Many Predictions Fail, but Some Don’t. It is an intriguing book about statistical prediction and “big data.” For those who have not read it, the book covers a variety of topics, including economic forecasting, sports prediction, gambling, political polling, epidemics, terrorist attacks, and climate change.

Lessons Learned:

1. The book provided me with both reminders of important habits of mind for evaluators and some new ways to think about data. For example, early on Silver talks about being “out of sample.” The idea is that the data we collect may not be the right data for the context we are addressing. As evaluators, we have to ask whether the logic model we are following leads us to data appropriate to the evaluation at hand. While this seems obvious, we often go into evaluation contexts with one set of expectations only to find those expectations changed, making the model we developed inaccurate. For example, I am currently rethinking my approach to school improvement evaluations because of changes in how schools are now evaluated in our state.

2. Another highlight was Silver’s description of Tetlock’s ideas about experts who are foxes versus hedgehogs. Hedgehogs thrive on a single big idea and limited data. Consequently, they are often terrible prognosticators (political pundits on TV, for example). Foxes, on the other hand, are more self-critical: they look at data from a variety of perspectives, examine many sources, and draw more modest conclusions. I like to believe that evaluators act like foxes, examining a variety of data to make more informed decisions. Sometimes clients want us to act like hedgehogs, making bold predictions based on limited information. It is important to stay fox-like in such situations.

3. Another valuable discussion is Silver’s treatment of Bayesian probability as a way to improve prediction. He emphasizes paying attention to prior probabilities, adjusting the probabilities of outcomes as new information arrives, and focusing on the conditional probabilities of events. In some respects, I believe many evaluators are intuitive Bayesians. Attempts to use Bayesian analysis in evaluation are not new, but the book has led me to think about new ways to integrate this approach.

4. Another important lesson concerns the noise in our data. This is especially true in education, where the measures are psychometrically noisy and sometimes not plentiful enough to distinguish the signal from the noise.

5. Finally, Silver reminds us that the advent of “big data” does not change the need to attach meaning to data. The availability of more data does not relieve us of the need for rigorous interpretation.
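The Bayesian updating described in lesson 3 can be made concrete with a small sketch. Everything numeric here is invented for illustration (the prior, the likelihoods, and the evaluation scenario are assumptions, not anything from Silver's book): it shows how a prior belief that a program is effective might be revised as positive evaluation results come in year after year.

```python
# Hypothetical sketch of Bayesian updating for an evaluator.
# All probabilities below are invented for illustration only.

def bayes_update(prior, p_data_if_true, p_data_if_false):
    """Return P(hypothesis | data) via Bayes' rule."""
    numerator = prior * p_data_if_true
    evidence = numerator + (1 - prior) * p_data_if_false
    return numerator / evidence

# Start with a modest prior that the program works.
belief = 0.50

# Suppose each year's evaluation yields a positive result, and assume
# a positive result appears 80% of the time if the program works
# but 30% of the time even if it does not.
for year in range(3):
    belief = bayes_update(belief, 0.80, 0.30)
    print(f"After year {year + 1}: P(effective) = {belief:.3f}")
```

The point of the sketch is the habit of mind, not the numbers: each new result shifts the belief away from the prior, but never all the way to certainty, which matches the modest, fox-like conclusions described in lesson 2.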
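The signal-versus-noise problem in lesson 4 can also be illustrated with a toy simulation. The true gain, the noise level, and the cohort sizes below are invented assumptions chosen only to show the mechanism: with a noisy measure and few observations, a real effect is hard to see, while plentiful data lets the signal emerge.

```python
# Hypothetical sketch: why noisy measures can hide a real signal.
# We simulate test-score gains with a true gain of 2 points but a
# measurement SD of 15 (all numbers invented for illustration).
import random
import statistics

random.seed(1)

def observed_gain(true_gain, noise_sd, n_students):
    """Mean observed gain for one cohort of noisy measurements."""
    return statistics.mean(
        true_gain + random.gauss(0, noise_sd) for _ in range(n_students)
    )

# With only 25 students, the observed gain bounces around widely...
small = [observed_gain(2.0, 15.0, 25) for _ in range(5)]
# ...while 2,500 students pin it down near the true value of 2.
large = [observed_gain(2.0, 15.0, 2500) for _ in range(5)]

print("n=25:  ", [round(g, 1) for g in small])
print("n=2500:", [round(g, 1) for g in large])
```

A program with a genuine 2-point effect can look like a failure, or a triumph, in any one small noisy sample; distinguishing signal from noise takes either more data or better measures.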

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


3 comments

  • Jim Van Haneghan · July 23, 2013 at 11:25 am

    Chad

My reading of Silver suggests that you have to consider alternative models of how a program works, not just the one advocated by the developers of the program. Silver spends a lot of time advocating for paying attention to alternative models when talking about prediction. It would follow that those who think about multiple models of how something works would be more “foxlike.”

    Reply

  • Mike · July 23, 2013 at 8:13 am

Just to point out, Tetlock is a secondary source for the idea of “hedgehogs” and “foxes.” The modern origin of this distinction is Isaiah Berlin (who drew on the Greek poet Archilochus) in the eponymous essay “The Hedgehog and the Fox.”

    http://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox

    Reply

  • Chad Green · July 22, 2013 at 8:58 am

Related to #2, would evaluators who rely on a single logic model be considered hedgehogs, while those who use a multiplicity of models would be considered foxes?

    Reply
