
Comments


I don't think Klein is interpreting Caplan very well, partly because he's rather glibly mischaracterizing probability theory. In order to ascribe probabilities, you have to classify events. The interesting probability distribution of a poker hand isn't the probability of any one unique hand, it's the probability of certain classes of hands. Getting dealt a flush is surprising. Getting dealt a pair of fours and some middle-rank cards is not. So that is probably not a good line of response.

Perhaps a good example would be the Frisbee. Makers of pie pans had literally no idea that they would be thrown around for sport. And the first people who threw them around had no idea that they could be sold at a profit. It was when a random person offered WF Morrison 25 cents for a 5 cent tin that good old profit-and-loss calculation showed him which way to go with his life.

What if you pull out a fish?

What if there are no balls inside, but only liquid?

etc.

The assumption that the world is made up of "givens" which are "given" to us and in need only of statistical genius to "predict" is one of the great and constant fallacies.

It's a version of Hume's fallacy (or Bacon's fallacy), and it's a sickness deeply embedded in the marrow of modern economics.

"Getting dealt a flush is surprising. Getting dealt a pair of fours and some middle-rank cards is not."

This is true enough, but merely restates the problem. Subjective probabilities of both events are exactly the same (a person would pay nothing in a bet hinging on either of the events happening) and vanishingly small. According to Caplan, they should both be surprising. But as you point out, one is surprising while the other is not. So something besides low probability enters into the phenomenon of surprise, which was Klein's point. Contrary to Caplan, probability theory cannot explain surprise.

I still think Bryan did an excellent job in that debate. The point is, there's a difference between what we take to be the "expected value", which - to the extent that we are acting rationally - we are making our decisions off of, and what we "expect to be the value". "Expectation" is not used in probability like it's used in casual language.

My "expected value" for a lottery ticket is some low, but positive prize. But what I "expect to be the value" is zero. Surprise occurs relative to what you expect to be the value, not the expected value.

Klein talks about hands of cards, but I think he's somewhat mistaken here. The probability distribution we're concerned with is hands that mean something - not random collections of cards. A random collection of five cards has the same probability as any other random collection. But the family of hands we call a "royal flush" has a much lower probability than the family of hands we call "a pair". So of course we're surprised if we draw a royal flush. Surprise can definitely happen in cards - but the point is we're surprised at getting particular families of hands, not some random collection.
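Putting rough numbers on that (the hand-class arithmetic is standard; the script itself is just an illustration):

    from math import comb

    deck = comb(52, 5)       # 2,598,960 equally likely five-card hands

    p_specific = 1 / deck    # any one particular hand: ~3.8e-7
    p_royal = 4 / deck       # the class "royal flush": ~1.5e-6
    # the class "one pair": pick the paired rank, two of its four suits,
    # then three other ranks with any suit each
    p_pair = 13 * comb(4, 2) * comb(12, 3) * 4**3 / deck   # ~0.42

    print(p_specific, p_royal, p_pair)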

As with a lot of the issues raised in your debate with Bryan, I think there's a lot of piling on of mainstream neoclassicism that just doesn't hold up.

zb -
re: "Subjective probabilities of both events are exactly the same (a person would pay nothing in a bet hinging on either of the events happening) and vanishingly small. According to Caplan, they should both be surprising. But as you point out, one is surprising while the other is not"

But you're confusing the point of an expected value with the value you expect. If you're just talking about a random collection of cards, the expected probability of any particular combination is very low. Fine. But what hand do you "expect" to get? You don't expect any particular hand. Because every random collection of cards has the same extremely small probability of occurring. You have no combination that you expect to see, so of course you're not surprised.

I like how Klein puts it better than how Caplan puts it, let me add. But that's because Klein was focusing on the particular phrasing of the point, while Caplan was just trying to get across that neoclassicism is perfectly comfortable with surprise.

We act on a combination of "expected value" and "the value that we expect", depending on how rationally we're acting at the time. But we react relative to what value we expected to see.

"But you're confusing the point of an expected value with the value you expect."

To the contrary, I am pointing out that the two are distinct. And so is Klein. Contrary to that, Caplan asserts that you can reduce surprise to probability.

"...while Caplan was just trying to get across that neoclassicism is perfectly comfortable with surprise."

But the point is that it's not perfectly comfortable. It is reasonably comfortable, covering a range of events, but also creating anomalies.

"But we react relative to what value we expected to see."

Again: this is true, but what "we expect to see" cannot be reduced to subjective probability - it requires a conceptual pre-processing which will parse the universe of possibilities into interesting/relevant classes of events.

The bottom line is, you can't reduce a concept to a number - you can only reduce it to a number + another concept. Which often (though not always) defeats the whole purpose of reduction.

Daniel,

Read Larry Samuelson's "Modeling Knowledge in Economics" JEL (2004) and then we can see if the ideas discussed are being discussed in a way that meets the conventional discourse or not.

Pete

ZB, no, you're not pointing out EV and probability are different. You're flushing out the imprecision in what Caplan said with rhetorical pedantry while ignoring that it's quite obvious what he's talking about. He's obviously talking about events of some interest or significance. It's not interesting when a number comes up on the roulette wheel. It's interesting when it matches my number. From my standpoint, all the numbers that don't match mine are equivalent. If I have my chips on 5, I don't care about the probability of the ball landing on 12. I care about the probability of getting a 5 (1/36) versus the probability of not-five (35/36). If the ball lands on 12, that's merely the ball landing on not-five, which has a high probability and therefore isn't surprising.

I'm sure if you pointed out that the probability of a fly landing on any specific spot on a wall is very low, but we're not surprised when the fly lands, Caplan would refine his definition further to your satisfaction.

re: "To the contrary, I am pointing out that the two are distinct. And so is Klein. Contrary to that, Caplan asserts that you can reduce surprise to probability."

Bryan was responding to Pete, remember - not Dan. As I said, I like Dan's way of phrasing it better too.

I think if Bryan were to respond to Dan (which is the dialogue everybody's trying to construct here), I think he'd respond about how I did.

I have the same feeling about Pete's most recent post on Buchanan. These attempts to deconstruct mainstream neoclassical thinking often dumb it down quite a bit before they make the attack.

In the context of the Boettke/Caplan debate, Caplan stands up just fine. He understands Smith too. I like to think I do too.

Josh S - precisely.

We all need to remember that Bryan was making a basic point to Peter. If he gets around to responding to Dan, I'm sure he'd refine his point to about what we're saying.

Dumb theories don't last this long. The neoclassical mainstream might be surpassed one day, but it's not this dumb.

I find the exchange here interesting, despite the unbecoming conduct by Josh.

I corresponded with Bryan about the matter and expect he will not respond.

You're still not reading Larry Samuelson on modeling knowledge, are you?! Then we can talk about "dumbing down" neoclassical economics. You are a graduate student; don't blog, read the technical literature by the best people in the field. Along with Samuelson's paper from the JEL, look at the Makowski and Ostroy JEL paper as well, where they talk about knowledge and creativity in economic models and analysis.

There is a serious literature in the professional journals that it seems incumbent on you to learn if you want to talk about these things beyond the textbook presentation and blog-type discussions.

BTW, which post on Buchanan do you find so objectionable and why?

Peter - I downloaded Samuelson, but I'm not sure when I'll get a chance to read it.

I read plenty of technical literature - on the things I work on on a regular basis. It's not like I have a lot of time to read additional material.

There seems to be little incentive to read additional technical literature when simple points in the literature aren't even met with what seems to be a reasonable argument. Take the "Search vs. Browse" paper. If the argument is going to be stuck in "neoclassicists assume perfect knowledge" mode in looking at the classic search models (which I am familiar with), then that needs to be disputed first before we even get into issues in more recent decades.

I think I'm well read enough to engage in this particular blog discussion outside of my primary area of research. Certainly you and Bryan were contesting points that you didn't need to read Samuelson to talk about.

I want to reiterate that I liked Dan's presentation of the material. That just seems to me to be very close to Bryan's approach.

I'm talking about the Buchanan post you did today. I don't know if I'd go as far as you just did in calling it "objectionable". But I don't see some major divide in how market processes are thought of.

I was not taught that people are utility maximizers, for example. I was taught that people have preferences, and that when they are faced with choices between different options, the pattern of those choices can be characterized by a function. Nobody has a utility function and nobody goes out seeking to maximize utility. But we can construct utility functions that represent the outcomes of real-world choices in order to characterize preferences in theoretical applications.
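A toy sketch of what "represents" means here, with invented options and an invented ranking:

    # Observed choices reveal a ranking; any increasing relabeling of it
    # works as a "utility function". It summarizes the choices, it doesn't
    # cause them.
    revealed_ranking = ["apple", "banana", "cherry"]   # best to worst

    u = {opt: len(revealed_ranking) - i
         for i, opt in enumerate(revealed_ranking)}    # apple:3, banana:2, cherry:1

    def choose(options):
        # Predicting the choice as the u-maximizing option is bookkeeping
        # about the ranking, not a claim the agent computes u.
        return max(options, key=u.get)

    print(choose(["banana", "cherry"]))   # banana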

As Buchanan says, in this sense the market isn't just a means for maximizing utility - it is precisely in circumstances where we choose that we generate outcomes that can be characterized by a utility function. To say "people are out to maximize utility and markets are the best way to do that" puts the cart before the horse.

I agree with Buchanan on all that. My issue is that this is what I was taught the mainstream says. What bothers me is the bludgeoning of the mainstream, not the argument itself.

This is what we were taught, from the "technical literature" and the "best people" as you put it. I'm asking what I think is a reasonable question - how is this really a critique of the neoclassical mainstream? Sure, we teach undergrads that "people try to maximize utility". That explanation works. But that isn't the actual underpinning of choice theory. That's not what I learned from Walker or Corbae or Mas-Colell.

Certainly Buchanan puts more of a philosophical spin on it than you usually hear. But to my ears he is not challenging the mainstream neoclassical economics I was taught.

One also shouldn't be surprised to get a little pushback, Peter. "They don't accurately discuss the market process", "they don't accurately discuss surprise and unexpected outcomes", "they don't accurately understand Smith". These are not small accusations. And they are not accusations that get put to Austrians. Clearly some of us are not convinced by the claim. Perhaps answering our unconvincedness would be productive.

For example, my reading of Walker and Corbae makes me think that I already think what Buchanan says.

Perhaps it would be helpful to let me know what it is that Walker or Corbae miss in what Buchanan says. Because as far as I can tell, they don't miss any of the points he makes. So I'm at a loss for exactly what the issue is here.

My theory of surprise:

http://evolutionaryaesthetics.blogspot.com/2008/01/ii-instincts-education-and-human-brain.html

Seriously, Daniel, look at those papers by Samuelson and by Makowski and Ostroy (both survey articles summarizing the state of the art in the Journal of Economic Literature). I believe you will see what I am talking about. These ideas are very subtle, and the theorists understand this; those who don't work with theory think they can square the circle with words, and they CANNOT.

Vernon Smith also makes this point in Rationality in Economics; see especially his chapter on Asymmetric Information.

The argument I am making is based on a technical understanding of economic theory, not on a pragmatic reading of textbooks.

Perhaps the theorists are being too subtle -- see for example Franklin Fisher's work on the disequilibrium foundations of equilibrium economics -- but that is a different argument from claiming that it is technically acceptable to treat Knightian uncertainty as if it were Knightian risk without losing anything, or to treat the common knowledge assumption as if it were not critical.

But these issues occupy significant thinkers in the profession -- this isn't Austrian mumbo jumbo. Problems with excess demand, problems with unawareness, problems with favorable surprise: all raise thorny issues for the traditional apparatus of equilibrium theory. You don't read Mises, Hayek, or Kirzner to get that; you read Arrow-Rustichini.

There is only surprise if there are predictable patterns. And there is only surprise if there are misallocations, disequilibrium, and mismatches. I'm not sure how one can be surprised in mainstream economic thinking.

Professor Boettke,

It seems that every time your initial exposition is challenged by someone (usually Daniel Kuehn), you just refer to "deep" and "subtle" literature instead of providing a serious encounter. That is very depressing.

Given that I've got other literature to plow through associated with other problems.

Perhaps at the very least there's a synopsis of what's wrong with the counter-argument that Josh S. and I have provided here? That doesn't seem unreasonable.

And perhaps just a brief synopsis of what Buchanan says that Corbae, Walker, Mas-Colell, etc. don't already say?

I guess I'm just not sure what the point is of reading more just yet if nobody's pointed out what's wrong with the counter-arguments I have offered.

I am not evading any questions; I am seeking common ground based on the professional literature about a technical topic. You cannot subsume "surprise" into the traditional probability analysis and say you have captured "surprise". This is not Austrian hand waving; it is the admission of the theorists themselves. I am pointing to the survey literature that makes this point extremely well, and with some reference to original papers in the field as well.

Bryan Caplan, for whom I have tremendous respect and a deep personal affection, is being a modeling pragmatist on this point, and not taking his starting point from the technical literature. There is no "bucket" available in the standard technical literature (see the Samuelson paper I cite). Bryan is making a pragmatic point about modeling and what he can purchase with a set of tools he already knows; the cost of giving those tools up, or of finding new tools, is too high for him compared to the benefits he sees in them. OK, he can make that argument. But what he cannot then do is switch gears and make an essentialist argument that he has in fact captured surprise without changing the meaning of the word surprise.

This is NOT Austrianism, this is straightforward neoclassical theorizing -- read the literature IF you want to have THIS particular conversation. I personally think blogs are not the best place for this; seminar rooms and blackboards are. Blogs are "too cheap" a learning environment. The limits of neoclassical theory (traditional theory, that is) are reached -- read someone like Alan Kirman (EJ), or better yet Fisher (Econometrica), let alone V. Smith (see his Nobel lecture). What does theory look like when we don't allow the common knowledge assumption? What does theory look like when we try to capture learning within a world of Knightian uncertainty? (See Roman Frydman's AER paper on rational expectations equilibrium, or his book with Michael Goldberg, Imperfect Knowledge Economics, Princeton, 2007.)

If you are going to talk about these things seriously, you need to actually KNOW what it is that you are talking about. There is a literature on this, there are technical issues --- you cannot just absorb stuff into traditional models without changing the meaning of the words that you are working with.

Today we have a seminar by Nassim Taleb on anti-fragility, the follow-up to his The Black Swan. It is at 2:00. Daniel is more than welcome to come from DC to the seminar -- in Arlington -- and he can talk probability theory with Taleb, and ask questions about uncertainty and risk.

Calling on Roger Koppl to jump in here. I have not once in this conversation invoked Shackle, or Lachmann, or even Keynes -- though I have invoked Knight's distinction. No "radical subjectivism" has been mentioned as such; I have invoked the actual professional experts who work on these technical issues in economics. They have shown multiple times, and in the most respected outlets possible, that you cannot do what some are saying here they can do. Something has to give in the theoretical apparatus. There are pressure points in the structure; push on them, and the structure weakens considerably and in some instances falls. So these are not willy-nilly matters.

So again, if we are going to have a SERIOUS conversation, then let's get serious about the technical details of the argument. If we want to have an argument about being pragmatic and introducing a loose way to talk about these things, while not really committing to taking uncertainty, unawareness, disequilibrium, learning, etc. seriously, then let's continue down our current path.

Far from evading, I am asking for a serious encounter.

I don't think I quite understand this whole discussion. Daniel Kuehn seems to be supporting the view that "neoclassical" economics can handle surprise just fine. Basically, low probability events are surprising. Daniel wisely tweaks that basic statement to account for the difference between relatively low-level events that are all equally improbable and higher-level classes of events that are not equally probable. I think Pete is trying to point out that Daniel's response here is simply not informed by the current literature. The Samuelson paper runs through a lot of stuff about partitions and common knowledge and Bayesian updating precisely in order to conclude that this "neoclassical" apparatus "cannot accommodate unawareness." So the state of the art in "neoclassical" (i.e. mainstream) economics is that we know that standard probability tools just do not get at "unawareness" and the like, but we don't know what to do about it.

Thus, Bryan's attempt to model "surprise" as low probability is not consistent with at least a significant fraction of the best "neoclassical" thinking on the topic. That's a fair point. It supports the claim that Bryan erred in this particular and that Daniel's attempted defense of Bryan also misfires. All of these remarks point to the conclusion that Austrians are right to deny that "surprise" is basically just low probability. There is more to it by far, and some of the best "neoclassical" theorists are themselves saying so. Why is this such a hard point to put across?

The thing is, I actually agree with the Austrians about the general concepts of "surprise" and "radical uncertainty" being a potentially useful critique of statistical modeling of choice-making. I just think that Dan Klein's counterargument is rather bad, for the reasons stated above. I don't like bad arguments in general, whether they're for something I agree with or something I disagree with.

In fact, I could raise the same issue with Vernon Smith's argument. If I reach into the bowl and pull out a green ball, I'd definitely be surprised. However, there is indeed a probability that whoever filled up the bowl accidentally put a ball in there that wasn't supposed to be there. It's very, very low, of course, but it is nonzero. So is this really not an example of an "extreme, low-probability event"?

I think if a meteor hit Baltimore tomorrow, we would all be very shocked and surprised. We'd probably react in just as terrified a way as the armies in the Smith passage. But the probability of a meteor hitting a populated city on a given day is calculable. It's just extremely low, so low that we order our lives as though the probability were zero. It's entirely a foreseen danger, yet if it happened, there'd be a lot of panic.

Note that I am not saying Caplan is right. What I am trying to do is point out that dismissing his argument is not quite as easy as Klein seems to make it.

Josh: I don't think you're being fair to Dan Klein. If I have understood, you are not impressed by Dan's remark on how each hand is, in a sense, unexpected. Your objection is basically that a straight flush is very different than an equally improbable, but less valuable hand. But the thing is that Dan's point works for its intended purpose. Dan explicitly says, "The occurrence of a low-probability event might be necessary [for surprise], but it is not sufficient." That's pretty clear. And I don't see how your remarks do any damage to Dan's analysis.

It can be hard to know, sometimes, what people are getting at in a blog post. I wonder though if a comment on Bayesian methods might be in order. It's not a new insight that Bayesian methods assume that your event space includes what will happen and convergence on the truth requires that your prior put a non-zero weight on the truth. Everybody gets that. But not everybody takes it seriously. I think we should. Our priors don't always put positive weight on events that happen, and we often use models that do not include the truth in the event space.
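A minimal sketch of that convergence point, with an invented coin and prior: the true bias is 0.7, the prior puts zero weight on it, and updating piles onto the nearest live hypothesis instead of the truth.

    import random
    random.seed(0)

    true_p = 0.7
    # Discrete prior over candidate biases; 0.7 itself gets zero weight.
    posterior = {0.3: 0.5, 0.5: 0.5, 0.7: 0.0}

    for _ in range(10_000):
        heads = random.random() < true_p
        # Bayes: weight each hypothesis by its likelihood, then renormalize.
        posterior = {h: w * (h if heads else 1 - h) for h, w in posterior.items()}
        total = sum(posterior.values())
        posterior = {h: w / total for h, w in posterior.items()}

    print(posterior)  # ~{0.3: 0.0, 0.5: 1.0, 0.7: 0.0}: the truth stays at zero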

Complexity theory lets us go a bit further. We cannot always reduce a high-dimensional process to a low-dimensional model. Any low-dimensional model will omit events that might happen. Thus, if reality is sufficiently complex, it will appear to produce novel and surprising events even if it is "determinate" in some metaphysical sense. And it looks like the process is "sufficiently complex" if it achieves computational universality. That is, we get something that looks like novelty if we are at the top of a Wolfram-Chomsky hierarchy. Results of this sort tell us that the old mathematical equipment of "neoclassical" economics is simply not capable of "modeling" surprise and novelty.
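One way to see the low-dimensional point, though the construction here is an illustrative choice rather than anything in the comment above: Rule 110 (a cellular automaton proved computationally universal by Cook), with cell density as the low-dimensional summary. The same density is followed by different densities, so no model built on density alone can track the process, and its behavior looks novel from that vantage point.

    import random
    random.seed(1)

    # Rule 110 lookup table (Wolfram numbering).
    RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
               (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

    cells = [random.randint(0, 1) for _ in range(200)]
    successors = {}   # density -> set of densities observed one step later

    for _ in range(500):
        d = round(sum(cells) / len(cells), 2)
        cells = [RULE110[(cells[i-1], cells[i], cells[(i+1) % len(cells)])]
                 for i in range(len(cells))]
        successors.setdefault(d, set()).add(round(sum(cells) / len(cells), 2))

    # Densities with multiple successors: the one-dimensional summary loses
    # information the full process still carries.
    print({d: s for d, s in successors.items() if len(s) > 1})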

Roger is of course right in regards to complexity and surprise.

The brain has 10x more neurons feeding forward from the brain to the eye than go from the eye to the brain. It does this because once we are established in a place, our eyes just check to make sure everything is as it was -- that the patterns we predict to be there are really the patterns that are there. If they are not, we are surprised. Our attention is drawn to it. That's not about low probability, per se, but more a violation of predicted patterns.

Does neoclassical economics deal with this at all? I don't know, which is why I ask the question. I would argue that Kirzner implicitly does.

re: "So the state of the art in "neoclassical" (i.e. mainstream) economics is that we know that standard probability tools just do not get at "unawareness" and the like, but we don't know what to do about it."

But this is wrong. We talk about neoclassical agents approaching things with their own subjective probabilities. "Unawareness" is simply a deviation between their subjective probabilities and the objective probabilities that characterize the events we're concerned with. Reams of literature consider the reaction of neoclassical systems to unpredicted shocks. Accounting for surprise is trivial. I still am not clear exactly what the objection is to this.

We can, of course, define it better than just having inaccurate subjective probabilities, if we'd like. That's what Bryan and Dan get into. And as I've said I think Bryan did a great job of dealing with Pete's concerns, and I think Dan did a great job expanding the structure of how to talk about these things.

Troy is quite right to highlight the importance of pattern recognition. Our understanding from cognitive neuroscience, and from principles of complexity in general, is critical for thinking about exactly what the expectations and assumptions of an agent in a model are.

But the point is, that agent in the neoclassical model is acting on subjective preferences, probabilities, and expectations. The only way you can say neoclassical economics can't account for surprise is if you deny that what neoclassicals are talking about are subjective assessments and assert that instead they are talking about objective probabilities and values. Of course that's simply wrong.

Daniel,

You say, " 'Unawareness' is simply a deviation between their subjective probabilities and the objective probabilities that characterize the events we're concerned with."

Um, nooo . . . and the point is *not* a doubtful point of interpretation or anything like that. Your statement is false and betrays ignorance of the definition of "unawareness" given in, ahem, the Larry Samuelson paper Pete has been talking about. Note where the paper appeared: the Journal of Economic Literature. It is in part a literature review. And indeed the issue at hand was a part of the mainstream literature before Samuelson published his paper in the JEL. In particular, Rustichini and his co-authors had written on "unawareness" several years earlier. And here's the thing, Daniel: at the root of it is just the old problem of listability, which Shackle, Lachmann, and others used to talk about decades ago.

“It ain’t what you don’t know that hurts you. It’s what you know that ain’t so.”

Could someone please furnish a counter-argument rather than a citation? I genuinely do not know what your concern with the argument is.

btw - the unawareness point is different from the surprise point that this question started on. One can be unaware but not surprised, for example. I'm not sure what more there is to unawareness than not being aware of the actual state of things, which is all I'm suggesting it is. Where am I wrong on that? If it's not failing to be aware of the actual nature of things, what is it Roger?

re: "At the root of it is just old problem of listability, which Shackle, Lachmann, and others used to talk about decades ago."

Yep, and Keynes and Knight decades before that. I'm not claiming Shackle and Lachmann (or Boettke and Koppl for that matter) have it wrong. I'm asserting that neoclassicism isn't exactly floundering.

This whole line of attack reminds me of the more lefty types who talk to me about behavioral issues like hyperbolic discounting as a reason to scoff at neoclassicism.

If that's the case, then someone ought to tell David Laibson. He certainly seems to manage both these behavioralist insights and neoclassicism just fine.

That isn't to say that these non-neoclassical perspectives can't also talk about these things productively. They certainly can. But the fact that you can talk about these things outside the neoclassical tradition hardly implies that you can't talk about it within the tradition.

If you're so hip to the listability problem, Daniel, how can you say, "The only way you can say neoclassical economics can't account for surprise is if you deny that what neoclassicals are talking about are subjective assessments and assert that instead they are talking about objective probabilities and values"? And now you seem to boil your point down to the claim that "neoclassicism isn't exactly floundering." Well, that's a nicely vague and hard-to-refute claim. But, really, who said something about "neoclassicism" "floundering"?

First - nobody's ever accused me of being hip in any capacity.

Second - I get that you disagree. Can you give me a shred of a hint as to why, rather than continuing to ask how I could have possibly said such a thing? You do understand, don't you, why this is frustrating for me. I'm still not sure exactly what your issue even is.

I can't say I'm deeply familiar - I've read the Zappia chapter in your volume with Mongiovi.

Roger, there are two problems here:

a) Klein was trying to refute the basic notion of using probability to define/model "surprise," not just expose Caplan's definition as insufficiently rigorous. So define "surprise" as "extremely low-probability events of significance to the observer," which eliminates such trivialities as an arbitrary hand of cards, or the chance a rain drop of specific mass will fall on a specific petal of a specific flower at a specific moment in time, or a green marble randomly happening to be in a bowl you thought just had black and white marbles, and that whole line of criticism evaporates.

b) This was a lay-level discussion. You should be able to explain why Caplan is wrong without resorting to reference to a survey of technical literature. Okay, so it seems like this paper you're all talking about says neoclassical economics really didn't have room for the concept of "surprise" thus far. So? To my lay ears, Caplan's definition (modified for precision) seems pretty reasonable. And so far, no one's come up with an example or explanation of "surprise" that doesn't fit within that definition. If what you're saying about the Samuelson paper is true, then it sounds to me like Caplan could advance the state of neoclassical economics, not that probabilistic modeling in general can't handle the notion of "surprise."

Given that Ludwig von Mises could explain the subjective theory of value and marginal utility theory using a pair of new gloves, a piece of cake, and an apple, and Paul Krugman can explain trade in terms of hot dogs and buns, I don't think it's unreasonable to expect a decent refutation of Caplan's lay-level discussion.

@Daniel: Your quoted remark excludes the listability problem. Things happen that are not imagined in my model. When they do, I am surprised. Such events are not (in my model) low probability events. They are not even zero probability events. They are, well, surprises. Now please answer my question: But, really, who said something about "neoclassicism" "floundering"?

@Josh: Does listability make more sense now? Honestly, Josh, you came out swinging, saying Dan Klein was "glibly mischaracterizing probability theory." This remark certainly seems to say that you, unlike Dan Klein, really know your probability theory. Now you're saying you need a "lay-level discussion," which seems to say that you're not really very well informed on probability theory. Which is it?

Roger - you keep providing understandings of surprise that I concur with. I've long ago satisfied myself that at least on the nature of surprise I'm not at odds with you and Dan and Peter here.

The entire debate between Peter and Bryan was over whether neoclassicism could deal with problems like this.

re: "Now you're saying you need a "lay-level discussion," which seems to say that you're not really very well informed on probability theory."

Roger, what we're all concerned about is that you can't even trouble yourself to provide a straightforward reason why you think we're wrong. If you can cite a counter-argument, can't you at least summarize it? We're starting to wonder if you even have a counter-argument.

Heh, heh, heh. Let’s see if I’ve got this right. You agree with Dan, Pete, and me; it’s just that “neoclassicism” can handle surprise just fine. The point at issue was whether the traditional models of “neoclassicism” adequately model surprise. Bryan Caplan said “yes” and Pete and Dan said “no.” Apparently, you both agree and disagree that “neoclassicism” can “deal with problems like this.” Anyway, our conversation, such as it is, is all in writing right here on this thread, so readers can decide for themselves whether I “even have a counter-argument,” whether you’re being consistent, and so on. It seems unlikely that further engagement on this thread would be fruitful.

Bigger problem: Daniel seems to be asking that Roger prove that neoclassicism doesn't deal properly with surprise. You can't prove a negative. It is not up to Roger to show that it fails to deal properly with surprise; it is up to Daniel to prove that it does when Roger says it doesn't. If Daniel cannot come up with such proof, then (assuming a neoclassical actor capable of knowing everything necessary to achieve his goals) it suggests Roger is correct. Simply asserting "yes they do" won't cut it.

And if you agree entirely with someone's argument, it does make sense to send someone to read the original rather than restate it.

Roger, I have a BS and MA in math, so yes, I have a rudimentary grasp of probability. By "lay-level discussion," I mean sticking to examples and self-contained posts, not referencing people to academic literature.

I'm not sure why listability is a problem. If you identify something and assign it a probability of zero, you've removed it from the list of possible events, which is equivalent to not listing it at all. For example, I believe the probability that a pink elephant wearing a bikini and ridden by robot Hitler is currently stomping on my car is zero. I've conceived of it just now this moment. I've listed it. I would be quite surprised if it happened. There are lots of equally absurd things I could dream up, but won't, and if they happened, I'd be just as surprised.

To put it a different way, if I have a set of events, and I assign some of them a probability of zero, if I have actually assigned them a zero probability, my subsequent decision making and expectations are no different than if I haven't identified them at all.
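A toy version of that equivalence, with invented outcomes and payoffs:

    # Two models of tomorrow: one omits the absurd event entirely, the
    # other lists it with probability zero. Every expected-value
    # calculation agrees.
    omitted = {"rain": 0.3, "no rain": 0.7}
    listed  = {"rain": 0.3, "no rain": 0.7, "elephant stomps my car": 0.0}

    payoff = {"rain": -10, "no rain": 5, "elephant stomps my car": -1_000_000}

    def ev(model):
        return sum(p * payoff[outcome] for outcome, p in model.items())

    print(ev(omitted), ev(listed))   # 0.5 0.5 -> identical decisions either way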

As Caplan pointed out, though, a surprise birthday party is surprising. But is it something we can't list? We all know about them. So is it not something you'd consider a "surprise" in Boettke's sense?

And I'm with Daniel...I'd really like someone to explain why Caplan is wrong.

The standard model of knowledge builds on a state-space and assumes the agent can recognize which state she's in. You might model uncertainty as a partition on the state space; you always know which subset you're in, but may not know where you are within the subset. Up to some such partition, the agent's got the state space ex ante and knows where she is in the state space ex post. But if she is "unaware" of some event (= a subset of the *modeler's* state space), then you can't use the standard model. Clearly, we cannot expect posteriors to converge under unawareness. More than that, the agent will have to change her state space after the realization of an event she was unaware of. That need for a revised state space is not an implication of having a prior that put zero weight on the realized outcome. So there is a behavioral difference between the realization of a zero-probability event and the realization of an event that was not in the agent's state space. But standard theory has no model for updating the state space. "Surprise" may imply that you need to revise your state space. That is simply *not* the same as a low-probability event or even a zero-probability event. In other words, "surprise" is not the same as low probability, nor even the same as zero probability.
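A toy rendering of that partition model, with invented state names: conditioning on a realized state just selects the partition cell containing it, and when the realized state isn't in the agent's state space at all, the standard machinery simply runs out.

    # The agent's knowledge is a partition of HER state space; she is
    # unaware that any other states exist.
    agent_partition = [{"boom"}, {"bust", "stagnation"}]   # what she can discern

    def condition(partition, realized):
        # Standard updating: return the cell containing the realized state.
        for cell in partition:
            if realized in cell:
                return cell
        # Realized state is in no cell: the agent was unaware of it, and
        # the theory offers no rule for rebuilding the state space.
        raise ValueError("state outside the agent's state space -- revise it how?")

    print(condition(agent_partition, "bust"))   # {'bust', 'stagnation'}
    try:
        condition(agent_partition, "unimagined technology")
    except ValueError as e:
        print(e)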

Josh,

Look -- a lot of these issues are not the commonsensical notions you want to talk about, but are related to very specific claims within a technical literature on economic decision making. When you use a term such as "neoclassical approach" you invoke that technical literature, and the best practitioners within that literature recognize that these are thorny issues: when they attempt to fit "genuine surprise" into the model, the traditional models are not robust. Thus it requires additional repair work. This is what is going on in those papers. Nobody is arguing via citation; I am seeking common ground, because it is a mistake to talk in a commonsensical and casual way about "neoclassical" models of knowledge and decision making. The mistake is not recognizing the sensitivity of the model to these issues in terms of its ability to produce determinate results.

Your fly, your surprise party, and your pink elephant are examples of known unknowns; what these other issues are about are unknown unknowns --- think of a moment of radical innovation within an economy (the Post-it note, e.g.), or think of the history of economic growth from the caveman to modernity and how the various opportunities for gains from trade and gains from innovation were discovered. Some were a consequence of known unknowns, but many, in retrospect, are cases of unknown unknowns becoming known.

If we don't leave room for the unknown and unexpected in our theory (now I am moving beyond the technical literature and giving you some Austrianism), then we will not have scope in our theories for entrepreneurship and creative (as opposed to mechanical) learning.

But again, my insisting on actually reading the arguments in the literature on these issues --- not blog discussions, and not even casual online "debates" --- is an attempt to get some agreement on the words being used and the technical issues at stake. We can talk about listability, we can talk about unawareness, we can also talk about common knowledge assumptions, and about the necessary restrictions on surprise. These are not exclusively Austrian points; they are, as I have said, issues that theorists themselves recognize and try to cope with.
