



I would assume the win probability registers as "100%" instead of "99.9%" the moment the model predicts a one-in-two-thousand chance that Texas A&M would win, i.e., a 99.95% win probability for Northern Iowa. Maybe the win probability model actually read 99.9999%, in which case it is miscalibrated. But since it may have read something like 99.95%, and given what we know about how extreme the circumstances were, I don't see any reason to believe something is amiss. "100%" is literally a rounding error.
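The arithmetic behind that "rounding error" is easy to check. A minimal sketch, assuming a display that rounds the win probability to one decimal percentage point (exact fractions avoid floating-point edge cases right at the boundary):

```python
from fractions import Fraction

def displayed(win_prob: Fraction) -> str:
    """Round a win probability to one decimal percentage point,
    as a win-probability graphic plausibly would (an assumption)."""
    pct = round(win_prob * 100, 1)
    return f"{float(pct):.1f}%"

# A one-in-two-thousand chance of an A&M win leaves 99.95% for Northern Iowa.
uni_win = 1 - Fraction(1, 2000)

print(displayed(uni_win))              # 100.0%
print(displayed(Fraction(999, 1000)))  # 99.9%
```

Anything at or above 99.95% collapses to "100.0%" on screen, even though the model still leaves room for the upset.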


Do you deny that there can be genuine surprises in the world?

Perhaps humans, in making "rounding errors," end up being genuinely surprised, and thus panic and don't even consider options on the menu that would otherwise have allowed them to weather the storm (like throwing the ball all the way downcourt, as J. Williams suggested after the game). All Northern Iowa needed to do was use some clock and time would have run out for A&M, not commit four turnovers. What is the analogy in business life? Cash flow problems at a construction company hit with weather delays more severe than anticipated, or poor workmanship that has a domino effect. These issues -- some of which are quantifiable risk -- are not exhausted by quantifiable risk. Just as with our ignorance in general -- we can usefully distinguish between rational ignorance (I know I don't know), ignorance (what I think I know ain't so), and utter ignorance (I don't know that I don't know) -- I think we can distinguish between risk, manageable risk, and genuine uncertainty ... and these distinctions matter for some explanations and not for others.

99.9999% is a case of completely phony "precision." These probability numbers don't mean anything more objective than what odds the person projecting them thinks one ought to take in a bet. There just is no objective thing floating out there which is the "probability" of a one-off event. Mises is good on this point.

I believe it was very surprising. But you seemed to be implying that this somehow proved something.

If a model says something will happen 75% of the time, we shouldn't "want" it to always happen. The model is correct if what it claims will happen 75% of the time happens 75% of the time. Likewise, the model is correct if what it says happens 99.95% of the time happens 99.95% of the time. The remaining 0.05% should happen 0.05% of the time.
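That standard is calibration: among all the moments a model announces probability p, the event should occur with frequency about p. A toy simulation sketch (hypothetical numbers, not the actual win-probability model):

```python
import random

random.seed(0)

p = 0.9995       # the forecaster's claimed win probability
trials = 200_000

# Simulate a perfectly calibrated forecaster who says "99.95%" each time.
wins = sum(random.random() < p for _ in range(trials))
upsets = trials - wins

print(f"claimed {p:.2%}, observed {wins / trials:.2%}, upsets: {upsets}")
```

Out of 200,000 such forecasts we expect roughly 100 upsets, so witnessing one dramatic comeback somewhere across thousands of games is exactly what a well-calibrated model predicts.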

I can take this a step further. First, I don't think sports will provide good examples of what you are talking about, because they are so artificial and countable in comparison to the real world. Second, there is a more technical reason to be skeptical of this case specifically. Most would model genuine surprise as kurtosis (i.e., fat tails), but models with binary outcomes (e.g., a basketball game) are resistant to that issue. I no longer agree with Taleb on much of anything, but see his technical note on the problem:
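The binary-outcome point can be illustrated with a toy simulation: draw game margins from a fat-tailed Student-t distribution (df = 3, a stand-in I'm assuming, not anyone's actual model) and note that the win/lose record stays bounded no matter how wild the margins get:

```python
import random

random.seed(42)

def t3_margin() -> float:
    """One draw from a Student-t distribution with 3 degrees of freedom,
    built from Gaussians: t = Z / sqrt(chi-square(3) / 3)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return z / (chi2 / 3) ** 0.5

margins = [t3_margin() for _ in range(20_000)]
wins = [m > 0 for m in margins]   # the only thing a binary model records

# The margin distribution has wild extremes, but each outcome contributes
# at most 1 either way, so tail events cannot blow up the binary record.
print(f"largest |margin|: {max(abs(m) for m in margins):.1f}")
print(f"win frequency:    {sum(wins) / len(wins):.3f}")
```

However heavy the tails of the margin, the win indicator is capped at 0 or 1, which is roughly why binary-outcome models are less exposed to fat-tail surprises.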

Excellent article and comments.

