



Once computers can simulate a neural network, I believe your argument falls apart. Yes, there are many more permutations in an open field of soccer than on a chess board, but eventually machines will be capable of running the neural net that's in your head. It's an extremely difficult problem (you're going to need billions and billions of neurons running simultaneously, some with millions of connections to others, all being rewarded and reweighted in real time), but perhaps one that's no more than a century or so off.
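To make the computational claim concrete, here is a minimal sketch of what "running a neural net" means at the smallest scale: a single layer of neurons, each weighting its inputs, summing them, and squashing the result. The sizes and weights below are invented for illustration; the commenter's point is that scaling this up to brain scale, with live reweighting, is the hard part.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Squashing function mapping any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # One output per neuron: the squashed weighted sum of the inputs.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
            for ws in weights]

# Toy example: 3 inputs feeding 4 neurons (values are arbitrary).
inputs = [0.5, -1.0, 0.25]
weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(4)]
outputs = layer(inputs, weights)
print(outputs)  # four activations, each strictly between 0 and 1
```

A brain-scale simulation would replace these four neurons with billions, each with up to millions of incoming connections, and would adjust the weights continuously in response to reward — which is exactly where the engineering difficulty lies.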

I mostly agree with Brandon and think you overstate the problems involved with soccer-playing robots. I would note that there is still one purely intellectual game at which humans can beat computers: Go. But the day will almost surely come when computers beat even the best Go masters, just as they can now pretty much beat the top chess players.

The problem is not just the locality of tacit information, which is a problem, but the one that Roger Koppl and I have written about, which involves deeper levels of computability and places more serious limits on what computers can do. In effect this involves the infinite-regress models of non-computability that arise when the agent/planner attempts to account for the impacts of his or her actions on the system. This runs into Gödel/Hayek problems of self-reference, which are also not unrelated to the Lucas Critique in its higher formulation: trying to take account of how behavioral response functions change in response to policy changes.

Lucas cheated in how he got out of that by assuming rational expectations, which is a cheap cheat: just assume that people expect what will happen. Even with that, there remains the problem of multiple self-fulfilling prophetic equilibria, something Roger E.A. Farmer has recently been stressing in the econoblogosphere. How does one select which equilibrium to coordinate on? That is the focal point problem that got Tom Schelling his trip to shake hands with the King of Sweden, especially given the role he played in focusing the nuclear-armed part of the world on the point "No first use of nuclear weapons."

Anyway, these infinite-regress problems, which arise when one seriously tries to figure out best responses to a system that is itself trying to formulate best responses to what you do, end up in infinite do-loops with halting problems. Oooops! This is a deep problem that real-world central planners (and others following them) resolve by, at some point, simply throwing up their hands and rolling dice or throwing darts at boards.
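The regress can be sketched in a few lines of code. Everything here is invented for illustration (the action names, the toy response rule, the cutoff depth): each planner's best response depends on predicting the other's best response, which depends on the other's prediction in turn, so naive mutual recursion never bottoms out. The only way out is the one the comment describes, cutting the regress off and effectively rolling dice.

```python
import random

random.seed(42)

# Hypothetical planning problem with made-up actions.
ACTIONS = ["expand output", "hold steady", "cut output"]

# The depth at which the planner "throws up his hands."
MAX_DEPTH = 5

def best_response(planner, depth=0):
    """Planner's best response, which requires predicting the other
    planner's best response, which requires predicting this planner's
    best response... an infinite regress unless we cut it off."""
    if depth >= MAX_DEPTH:
        # Regress truncated: roll the dice / throw a dart at the board.
        return random.choice(ACTIONS)
    other = 1 - planner
    their_choice = best_response(other, depth + 1)
    # Toy response rule: counter the predicted choice (arbitrary).
    return ACTIONS[(ACTIONS.index(their_choice) + 1) % len(ACTIONS)]

print(best_response(0))
```

Without the `MAX_DEPTH` cutoff the recursion would never terminate, which is the halting problem in miniature; with it, the "optimal" plan is partly arbitrary, which is the point about real-world planners.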

Yes to all that Barkley said. On the soccer pitch there are only three dimensions of movement for the ball at a given moment. No innovations will change that. The rules are fixed. The players can move in only so many different ways. Soccer is a static universe. The space of possible games is vast but unchanging over time. There is no particular reason robots could not be programmed to make good choices consistently, and to do so in such a way as to regularly defeat fallible humans. Robots would be immune to mental fatigue, overconfidence bias, and egoism.

The "Austrian" argument on socialist calculation assumes a dynamic world in which you always have something new to learn. There is no calculation problem in a static world. Soccer is a static world.


I don't disagree with either Barkley or you, but I think you are both not seeing the arbitrage opportunity that exists between the literatures on computability and the philosophical points about the contextual nature of knowledge, and the distinction between syntactic knowledge and semantic knowledge. I'd like to explore the possible complementarities between these arguments.

However, I do want to disagree a bit about soccer (or any advanced sport), which requires on-the-spot judgments constantly --- this is my point about degrees of freedom (in the machine sense, obviously, not the statistical sense).



I think you may be right about the arbitrage opportunity. Certainly, computability is about syntax and not semantics.

On judgment in sports:
Humans have to use judgment. Machines can use algorithms. In some cases of judgment or intuition, the unconscious part of our thinking (be it pre-conscious, meta-conscious, or something else) may be running an algorithm and just reporting the output to the conscious mind. Thus, some cases of intuition may be perfectly algorithmic. It does seem unlikely that most of the judgment in sports would be algorithmic in that sense. But even assuming zero algorithmic content to such judgments, it may still be possible for a computer with its algorithms to make better decisions than an athlete, just as a computer can make better chess moves than a human chess master.

I think Boettke is right, and I would add that very important aspects of sports are deception and being able to anticipate what other players will do in response to what you do.

I was the neural net expert for a small software company years ago and was disappointed in what neural nets could do. They came nowhere close to the hype of the industry.

I would guess that a lot of the computer models used to forecast weather are of the neural-net type, and look at how bad they are. I would think predicting weather would be a lot simpler than playing soccer.

