Peter Boettke
Let's debate the merits of this statement by David Colander on the state of economics and how to fix it.
Written Testimony of David Colander
Submitted to the Congress of the United States, House Science and Technology Committee
July 20th, 2010
Mr. Chairman and members of the committee: I thank you for the opportunity to testify. My name is David Colander. I am the Christian A. Johnson Distinguished Professor of Economics at Middlebury College. I have written or edited over forty books, including a top-selling principles of economics textbook, and 150 articles on various aspects of economics. I was invited to speak because I am an economist watcher who has written extensively on the economics profession and its foibles, and specifically, how those foibles played a role in economists’ failure to adequately warn society about the recent financial crisis. I have been asked to expand on a couple of proposals I made for NSF in a hearing a year and a half ago.
I’m known in the economics profession as the Economics Court Jester because I am the person who says what everyone knows, but which everyone in polite company knows better than to say. As the court jester, I see it as appropriate to start my testimony with a variation of a well-known joke. It begins with a Congressman walking home late at night; he notices an economist searching under a lamppost for his keys. Recognizing that the economist is a potential voter, he stops to help. After searching a while without luck he asks the economist where he lost his keys. The economist points far off into the dark abyss. The Congressman asks, incredulously, “Then why the heck are you searching here?” To which the economist responds—“This is where the light is.”
Critics of economists like this joke because it nicely captures economic theorists’ tendency to be, what critics consider, overly mathematical and technical in their research. Searching where the light is (letting available analytic technology guide one’s technical research), on the surface, is clearly a stupid strategy; the obvious place to search is where you lost the keys.
That, in my view, is the wrong lesson to take from this joke. I would argue that for pure scientific economic research, the “searching where the light is” strategy is far from stupid. The reason is that the subject matter of social science is highly complex—arguably far more complex than the subject matter of most natural sciences. It is as if the social science policy keys are lost in the equivalent of almost total darkness, and you have no idea where in the darkness you lost them. In such a situation, where else but in the light can you reasonably search in a scientific way?
What is stupid, however, is if the scientist thinks he is going to find the keys under the lamppost. Searching where the light is only makes good sense if the goal of the search is not to find the keys, but rather to understand the topography of the illuminated land, and how that lighted topography relates to the topography in the dark where the keys are lost. In the long run, such knowledge is extraordinarily helpful in the practical search for the keys out in the dark, but it is only helpful where the topography that the people find when they search in the dark matches the topography of the lighted area being studied.
What I’m arguing is that it is most useful to think of the search for the social science policy keys as a two-part search, each of which requires a quite different set of skills and knowledge. Pure scientific research—the type of research the NSF is currently designed to support—ideally involves searches of the entire illuminated domain, even those regions only dimly lit. It should also involve building new lamps and lampposts to expand the topography that one can formally search. This is pure research; it is highly technical; it incorporates the latest advances in mathematical and statistical technology. Put simply, it is rocket (social) science that is concerned with understanding for the sake of understanding. Trying to draw direct practical policy conclusions from models developed in this theoretical search is generally a distraction to scientific searchers.
The policy search is a search in the dark, where one thinks one has lost the keys. This policy search requires a practical sense of real-world institutions, a comprehensive knowledge of past literature, familiarity with history, and a well-tuned sense of nuance. While this search requires a knowledge of what the cutting edge scientific research is telling researchers about illuminated topography, the knowledge required is a consumer’s knowledge of that research, not a producer’s knowledge.
How Economists Failed Society
In my testimony last year, I argued that the economics profession failed society in the recent financial crisis in two ways. First, it failed society because it over-researched a particular version of the dynamic stochastic general equilibrium (DSGE) model that happened to have a tractable formal solution, whereas more realistic models that incorporated purposeful forward looking agents were formally unsolvable. That tractable DSGE model attracted macroeconomists as a light attracts moths. Almost all mainstream macroeconomic researchers were searching the same lighted area. While the initial idea was neat, and an advance, much of the later research essentially amounted to dotting the i's and crossing the t's of the original DSGE macro model. What that meant was that macroeconomists were not imaginatively exploring the multitude of complex models that could have, and should have, been explored. Far too small a topography of the illuminated area was studied, and far too little focus was given to whether the topography of the model matched the topography of real-world problems.
What macroeconomic scientific researchers more appropriately could have been working on is multiple sets of models that incorporate purposeful forward looking agents. This would have included models with multiple equilibria, high-level agent interdependence, varying degrees of information processing capacity, true uncertainty rather than risk, and non-linear dynamics, all of which seem intuitively central to macroeconomic issues, and which we have the analytical tools to begin dealing with.1 Combined, these models would have revealed that complex models are just that—complex—and just about anything could happen in the macro-economy. This knowledge that just about anything could happen in various models would have warned society to be prepared for possible crises, and suggested that society develop a strategy and triage policies to deal with them. In other words, it would have revealed that, at best, the DSGE models were of only limited direct policy relevance, since changing the assumptions of the model slightly would change the policy recommendation of the model. The economics profession didn’t warn society about the limitations of its DSGE models.
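As an editorial aside: the point that slightly different non-linear models can behave in qualitatively different ways is easy to illustrate with a deliberately toy example that is not from the testimony. The sketch below uses the logistic map, a textbook non-linear dynamic system, to show how a small change in one parameter flips a model from settling at a stable fixed point to wandering chaotically. It assumes nothing beyond standard Python.

```python
# Minimal illustration (not from the testimony): the logistic map
# x_{t+1} = r * x_t * (1 - x_t) is a standard toy non-linear model.
# A small change in the parameter r flips its behavior from a stable
# fixed point to chaos, which is the qualitative point Colander makes
# about policy conclusions being fragile to changes in assumptions.

def logistic_trajectory(r, x0=0.2, steps=200):
    """Iterate the logistic map and return the last 20 states."""
    x = x0
    history = []
    for _ in range(steps):
        x = r * x * (1 - x)
        history.append(x)
    return history[-20:]

stable = logistic_trajectory(r=2.5)   # converges to a fixed point
chaotic = logistic_trajectory(r=4.0)  # bounces irregularly, never settles

# Under r = 2.5 the trajectory settles near the fixed point 1 - 1/r = 0.6,
# so the spread of its late states is essentially zero ...
spread_stable = max(stable) - min(stable)
# ... while under r = 4.0 the same model wanders across almost all of [0, 1].
spread_chaotic = max(chaotic) - min(chaotic)
print(f"spread at r=2.5: {spread_stable:.6f}")
print(f"spread at r=4.0: {spread_chaotic:.6f}")
```

Two models identical in form, differing only in one parameter, support opposite qualitative stories about the long run; a fitted version of either would "recommend" very different policy.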
The second way in which the economics profession failed society was by letting policy makers believe, and sometimes assuring policy makers, that the topography of the real world matched the topography of the highly simplified DSGE models, even though it was obvious to anyone with a modicum of institutional knowledge and educated common sense that the topography of the DSGE model and the topography of the real-world macro economy were nowhere near a close match. Telling policy makers that existing DSGE models could guide their search in the dark was equivalent to telling someone that studying tic-tac-toe models can guide him or her in playing 20th-dimensional chess. Too strong a reliance on DSGE models and reasoning led policy makers searching out there in the dark to think that they could crawl in the dark without concern, only to discover that there was a cliff they fell off, pulling the US economy with them.
Economists aren’t stupid, and the macroeconomists working on DSGE models are among the brightest. What then accounts for these really bright people continuing to work on simple versions of the DSGE model, and implying to policy makers that these simple versions were useful policy models? The answer goes back to the lamppost joke. If the economist had answered honestly, he would have explained that he was searching for the keys in one place under the lamppost because that is where the research money was. In order to get funding, he or she had to appear to be looking for the keys in his or her research. Funders of economic research wanted policy answers from the models, not wild abstract research that concluded with the statement that the model has little to no direct implication for policy.
Classical economists, and followers of Classical economic methodology, which included economists up through Lionel Robbins (See Colander, 2009), maintained a strict separation between pure scientific research, which was designed to be as objective as possible, and which developed theorems and facts, and applied policy research, which involved integrating the models developed in science to real world issues. That separation helped keep economists in their role as scientific economists out of policy.
It did not prevent them from talking about, or taking positions on, policy. It simply required them to make it clear that, when they did so, they were not speaking with the certitude of economic science, but rather in their role as an economic statesman. The reason this distinction is important is that being a good scientist does not necessarily make one a good statesman. Being an economic statesman requires a different set of skills than being an economic scientist. An economic statesman needs a well-tuned educated common sense. He or she should be able to subject the results of models to a “sensibility test” that relates the topography illuminated by the model to the topography of the real world. Some scientific researchers made good statesmen; they had the expertise and training to be great policy statesmen as well as great scientists. John Maynard Keynes, Friedrich Hayek, and Paul Samuelson come to mind. Others did not; Abba Lerner and Gerard Debreu come to mind.
The need to separate out policy from scientific research in social science is due to the complexity of economic policy problems. Once one allows for all the complexities of interaction of forward looking purposeful agents and the paucity of data to choose among models, it is impossible to avoid judgments when relating models to policy. Unfortunately, what Lionel Robbins said in the 1920s remains true today, “What precision economists can claim at this stage is largely a sham precision. In the present state of knowledge, the man who can claim for economic science much exactitude is a quack.”
Why Economists Failed Society
One of J.M. Keynes’s most famous quotes, which economists like to repeat, highlights the power of academic economists. He writes, “the ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.” (Keynes, 1936: 383) What this quotation misses is the circularity of the idea generating process. The ideas of economists and political philosophers do not appear out of nowhere. Ideas that succeed are those that develop in the then-existing institutional structure. The reality is that academic economists, who believe themselves quite exempt from any practical influence, are in fact guided by an incentive structure created by some now defunct politicians and administrators.
Bringing the issue home to this committee, what I am saying is that you will become the defunct politicians and administrators of the future. Your role in guiding research is pivotal in the future of science and society. So, when economists fail, it means that your predecessors have failed. What I mean by this is that when, over drinks, I have pushed macroeconomic researchers on why they focused on the DSGE model, and why they implied, or at least allowed others to believe, that it had policy relevance beyond what could reasonably be given to it, they responded that that was what they believed the National Science Foundation, and other research support providers, wanted. That view of what funding agencies wanted fits my sense of the macroeconomic research funding environment of the last thirty years. During that time the NSF and other research funding institutions strongly supported DSGE research, and were far less likely to fund alternative macroeconomic research. The process became self-fulfilling, and ultimately, all macro researchers knew that to get funding you needed to accept the DSGE modeling approach, and draw policy conclusions from that DSGE model in your research. Ultimately, successful researchers follow the money and provide what funders want, even if those funders want the impossible. If you told funders it is impossible, you did not stay in the research game.
One would think that competition in ideas would lead to the stronger ideas winning out. Unfortunately, because the macroeconomy is so complex, macro theory is, of necessity, highly speculative, and it is almost impossible to tell a priori what the strongest ideas are. The macroeconomics profession is just too small and too oligopolistic to have workable competition among supporters of a wide variety of ideas and alternative models. Most top researchers are located at a small number of interrelated and inbred schools. This highly oligopolistic nature of the scientific economics profession tends to reinforce one approach rather than foster an environment in which a variety of approaches can flourish. When scientific models are judged by their current policy relevance, a model that seems temporarily to match what policy makers are finding in the dark can become locked in, and its premature adoption as “the model” can preclude the study of other models. That is what happened with what economists called the “great moderation” and the premature acceptance of the DSGE model.
Most researchers, if pushed, fully recognize the limitations of formal models for policy. But more and more macroeconomists are willing to draw strong policy conclusions from their DSGE model, and hold them regardless of what the empirical evidence and common sense might tell them. Some of the most outspoken advocates of this approach are Varadarajan Chari, Patrick Kehoe and Ellen McGrattan. They admit that the DSGE model does not fit the data, but state that a model neither “can nor should fit most aspects of the data” (Chari, Kehoe and McGrattan, 2009, p. 243). Despite their admission that their model does not fit the data, they are willing to draw strong policy implications from it. For example, they write “discretionary policy making has only costs and no benefits, so that if government policymakers can be made to commit to a policy rule, society should make them do so.”
While they qualify this strong conclusion slightly later on, and agree that unforeseen events should allow breaking of the rule, they provide no method of deciding what qualifies as an unforeseen event, nor do they explain how the possibility of unforeseen events might have affected the agents’ decisions in their DSGE model, and hence the conclusions of the model. Specifying how agents react to unexpected events in environments where true uncertainty, not just risk, exists is hard. It requires what Robert Shiller and George Akerlof call an animal spirits model; the DSGE model does not deal with animal spirits.
Let’s say that the US had followed their policy advice against any discretionary policy, and had set a specific monetary policy rule that had not taken into account the possibility of financial collapse. That fixed rule could have totally tied the hands of the Fed, and the US economy today would likely be in a depression.
Relating this discussion back to the initial searching-in-the-light metaphor, the really difficult problem is not developing models; the really difficult policy problem is relating models to real-world events. The DSGE model is most appropriate for a relatively smooth terrain. When the terrain out in the dark where policy actually is done is full of mountains and cliffs, relying on the DSGE model to guide policy, even if that model has been massaged to make it seem to fit the terrain, can lead us off a cliff, as it did in the recent crisis. My point is a simple one: Models can, and should, be used in policy, but they should be used with judgment and common sense.
DSGE supporters’ primary argument for using the DSGE model over all other models is that their model has what they call micro foundations. As we discuss in Colander, et al. (2008), what they call micro foundations are totally ad hoc micro foundations. As almost all scientists, except macroeconomic scientists, fully recognize, when dealing with complex systems such as the economy, macro behavior cannot be derived from a consideration of the behavior of the components taken in isolation. Interaction matters, and unless one has a model that captures the full range of agent interaction, with full inter-agent feedbacks, one does not have an acceptable micro foundation for a macro model. Economists are now working on gaining insight into such interactive micro foundations using computer-generated agent-based models. These agent-based models can come to quite different conclusions about policy than DSGE models, which calls into question any policy conclusion coming from DSGE models that do not account for agent interaction.
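As an editorial illustration of why interaction matters, consider a toy threshold cascade in the spirit of Granovetter's classic model (a sketch invented for this point, not any model from the testimony): each agent acts once enough others are already acting. No agent-by-agent analysis in isolation predicts the aggregate outcome, and changing a single agent's threshold flips the economy-wide result.

```python
# A minimal threshold/cascade sketch (hypothetical, in the spirit of
# Granovetter's threshold model). Each agent acts once the number of
# agents already acting reaches that agent's personal threshold.
# Aggregate behavior emerges from interaction, not from any agent
# considered in isolation.

def cascade_size(thresholds):
    """Return how many agents end up acting, via fixed-point iteration."""
    acting = 0
    changed = True
    while changed:
        changed = False
        new_acting = sum(1 for t in thresholds if t <= acting)
        if new_acting != acting:
            acting = new_acting
            changed = True
    return acting

# Population A: thresholds 0, 1, 2, ..., 99. The threshold-0 agent acts,
# which triggers the threshold-1 agent, and so on: a full cascade.
full = cascade_size(list(range(100)))

# Population B: identical except the single threshold-1 agent is replaced
# by a threshold-2 agent. The cascade dies after the very first agent.
broken = [0, 2] + list(range(2, 100))
stalled = cascade_size(broken)

print(full, stalled)  # 100 vs. 1
```

Two nearly identical populations, radically different aggregate outcomes; a "micro foundation" that models each agent in isolation sees no difference between them at all.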
If one gives up the purely aesthetic micro foundations argument for DSGE models, the conclusion one arrives at is that none of the DSGE models are ready to be used directly in policy making. The reality is that given the complexity of the economy and lack of formal statistical evidence leading us to conclude that any particular model is definitely best on empirical grounds, policy must remain a matter of judgment about which reasonable economists may disagree.
How the Economics Profession Can Do Better
I believe the situation the macroeconomics profession has arrived at reflects serious structural problems in the economics profession and in the incentives that researchers face. The current incentives facing young economic researchers lead them both to focus on abstract models that downplay the complexity of the economy and to overemphasize the direct policy implications of those abstract models.
The reason I am testifying today is that I believe the NSF can take the lead in changing this current institutional incentive structure by implementing two structural changes in the NSF program funding economics. These structural changes would provide economists with more appropriate incentives, and I will end my testimony by outlining those proposals.
Include a wider range of peers in peer review
The first structural change is a proposal to make diversity of the reviewer pool an explicit goal of the reviewing process of NSF grants to the social sciences. This would involve consciously including what are often called heterodox and other dissenting economists as part of the peer reviewer pool as well as including reviewers outside of economics. Along with economists on these reviewer panels for economic proposals one might include physicists, mathematicians, statisticians, and individuals with business and governmental real world experience. Such a broader peer review process would likely encourage research on a much wider range of models, promote more creative work, and provide a common sense feedback from real world researchers about whether the topography of the models matches the topography of the real world the models are designed to illuminate.
Increase the number of researchers trained to interpret models
The second structural change is a proposal to increase the number of researchers explicitly trained in interpreting and relating models to the real world. This can be done by explicitly providing research grants to interpret, rather than develop, models. In a sense, what I am suggesting is an applied science division of the National Science Foundation’s social science component. This division would fund work on the appropriateness of models being developed for the real world.
This applied science division would see applied research as true “applied research” not as “econometric research.” It would not be highly technical and would involve a quite different set of skills than currently required by the standard scientific research. It would require researchers who had a solid consumer’s knowledge of economic theory and econometrics, but not necessarily a producer’s knowledge. In addition, it would require a knowledge of institutions, methodology, previous literature, and a sensibility about how the system works—a sensibility that would likely have been gained from discussions with real-world practitioners, or better yet, from having actually worked in the area.
The skills involved in interpreting models are skills that currently are not taught in graduate economics programs, but they are the skills that underlie judgment and common sense. By providing NSF grants for this interpretative work, the NSF would encourage the development of a group of economists who specialize in interpreting models and applying them to the real world. The development of such a group would go a long way towards placing the necessary warning labels on models, making it less likely that fiascos, such as the recent financial crisis, would happen again.
This is a version of the point I've been making for 15 years, and first presented at an HES meeting in 1996.
The marginal valuation logic of heterogeneous production coordination across time is mathematically intractable -- and the central / explanatory element in economics (entrepreneurial learning in the context of changing prices) is not capturable in any nomological law or statistical formula or bit of math or formal logic, etc.
I.e. the core of an economy is formally and numerically intractable and not subject to "scientific" measurement and/or "scientific" math functions -- the core involves open-ended / non-natural-kind causal components that operate over an open-ended and ever-changing set of physically and "law" defined instantiations (compare Hayek, 1952).
For more:
http://www.hayekcenter.org/friedrichhayek/hayekmyth.htm
Colander writes,
"[economics] failed society because it over-researched a particular version of the dynamic stochastic general equilibrium (DSGE) model that happened to have a tractable formal solution."
Posted by: Greg Ransom | September 16, 2010 at 12:45 AM
This Colander piece is just terrific.
I think he's absolutely nailed it.
Posted by: Greg Ransom | September 16, 2010 at 12:49 AM
My point above is simply that you can prove that the central valuation problem in economics is intractable, and you can prove that the central causal / explanatory element in economics (just like those in Darwinian biology) cannot be made into a variable that is specified by measurement and plugged into a math function that cranks out law-like or statistical laws, subject to inductive verification.
Indeed, you can show historically how economics has pursued a bogus scientistic conception of "economic science" -- and you can prove by logic and empirical demonstration that the whole effort is a logically and empirically necessary FAIL.
Both Hayek and Mises have made this case -- in a variety of ways.
Advances in mathematical economics and in philosophy have given us the tools to make the same arguments in more sophisticated form.
Following Colander, I believe we would get that if the funding and career incentives existed to encourage this.
Posted by: Greg Ransom | September 16, 2010 at 01:14 AM
The takeaway messages: 1) NSF funding corrupts scientific research by rewarding only research that produces the answers government wants, and 2) the economy is a complex system that requires complex-system models to understand it. Of course, the NSF isn't going to fund systems work, because if one sees the economy as a complex system, one cannot come up with activist policy recommendations, as the nature of complex systems is that one can never predict the outcome of interventions. Best, then, to leave it to its own devices.
I do find it interesting, though, that after he goes on about the problems of too-simple mathematical models and political interference in research, he then goes on to recommend that "Along with economists on these reviewer panels for economic proposals one might include physicists, mathematicians, statisticians, and individuals with business and governmental real world experience." Isn't the problem with economics as it is now practiced that it is too mathematical, that it too often tries to emulate the simplicity of physics, and that it is too often infected by the concerns of government? A better set of reviewers: anyone involved in complexity or process theories, biologists, anthropologists, sociologists, psychologists, philosophers, etc. who study complex systems and processes themselves. The first set of theorists I mention will of course include mathematicians and physicists -- and those who study complex systems will actually be able to make good judgments in the field.
Posted by: Troy Camplin | September 16, 2010 at 02:37 AM
It's a very good post - there are parts to agree and disagree with.
One of my biggest concerns is that it's exactly when you see people moving away from the math and the econometrics that you see them making bad mistakes about proper identification, mixing up correlation and causality, etc.
And he doesn't always seem to say "do less math" -- he specifically talks about how we have the mathematical tools to investigate non-linearities, multiple equilibria, and forward looking agents (I thought we already did these things anyway! We certainly worked with them in my macro courses).
So it's not clear exactly what the role of math is - but the point is you have to be innovative and think about what your math can't deal with - and that point is very well taken.
I think the problem that a lot of people don't fess up to is that trying to answer a question without math can potentially be riddled with even more problems. Not always, but it certainly can.
Posted by: Daniel Kuehn | September 16, 2010 at 06:02 AM
For example, this Mark Thoma post discussing a DSGE model has three of those points he raised - non-linearities, multiple equilibria, and forward looking agents. I'm not defending DSGEs as the way forward or anything, but I think people vastly overstate the issues with them. And the model that Thoma presents is CLEARLY selected because it fits the problem we're dealing with, not because it's the kind of math George Evans happens to know.
Posted by: Daniel Kuehn | September 16, 2010 at 06:05 AM
It's the use of math -- especially the wrong kinds of math (nonlinear, complexity, etc.) -- that has made economics the basket case of a social science it is today. It oversimplifies and cannot actually deal with social reality in its full complexity. Too often those who use math mistake it for reality rather than understanding it to be a precise approximation of reality -- and often a gross oversimplification. Which is fine for modeling a few simple processes, but utterly useless for understanding the true complexities behind human action.
Posted by: Troy Camplin | September 16, 2010 at 07:52 AM
Is it actually true that economists failed to predict the recent financial crisis? I seem to recall hearing lots of warnings from about 2006 onward.
And I myself started reducing my personal debt and buying gold as early as the 1st quarter of 2007 because of those warnings.
I'll give him this: insofar as some economists were deep in DSGE theory, maybe they failed to see the obvious warning signs of a classic high-credit, easy-money bubble.
Posted by: Kyle8 | September 16, 2010 at 07:55 AM
The first thing that struck me about this Colander testimony is the view of economics expressed through the lamppost joke. I have no problem with the view that the social world is extremely complex and therefore that it is more difficult to find and corroborate eternal truths and causal relationships than in e.g. physics. But what I understand from Colander's use of the joke is that economics seems to be a thoroughly and fundamentally inductive science. After all, to Colander it seems there is no reason for the economist to search in the light of the lamppost other than that it is easier to "see" there. The fact that he knows not at all where the keys are or where they could be clearly indicates that there is no theoretical framework that guides his research.
I didn't read further than the first section. Frankly, the initial discussion made me a little sick to the stomach. Maybe he elaborates on more interesting and important points later in the text. What I got out of reading the first part was that we've finally come to the point where economics has given up on its deductive roots (quite predictably, though undesirably). So let's just go out there and collect data blindly and then try to make sense of it all.
Why we would need to search in the light rather than stumble in the dark escapes me, however.
Posted by: Per Bylund | September 16, 2010 at 08:40 AM
Colander is arguing for a wiser, more thoughtful direction for NSF funding. That's like arguing for wiser, more thoughtful war efforts.
Posted by: Steve Miller | September 16, 2010 at 08:48 AM
Colander strategically avoids discussing the unwillingness of most economists to search under the truly beaming light: the adverse role of government on the economy under state capitalism. Of course, to do so would have jeopardized his own fund application and upset those congressmen that he was addressing with evidently simple jokes designed for evidently simple minds. And that is why most non-public choice economists steer clear and end up with mostly irrelevant economic analysis.
Posted by: Charles Rowley | September 16, 2010 at 09:12 AM
Charles -
Have public choice theorists been turned down for NSF grants? It seems like a lot of it would be interesting to them. Do you know if any public choice theorists have bothered applying?
I imagine Austrians would have it harder because the NSF economists on staff - for better or worse - might have concerns about their methodological approach. But it's not immediately obvious why the NSF wouldn't be interested in the other two legs of GMU econ: public choice and experimental.
Arnold Kling made the same point recently, citing Zandi as someone who gets government support because he tells the government what they want to hear. Here's the thing - Barro got NSF money for telling the government what it (purportedly) DIDN'T want to hear and Zandi - the one Kling cites - as far as I can tell didn't get any government money for producing his estimates.
For methodological reasons, yes - Austrians might get shut out. Aside from that I'm wondering if people are overstating the bias. What is the rejection rate for public choice economists vs. others? Do you know?
Posted by: Daniel Kuehn | September 16, 2010 at 09:58 AM
* And I should be careful about even calling the Austrian thing a "bias". NSF would probably say "we do not think that has scientific rigor". Peter and others may think that's a wrong assessment, but that doesn't make it "bias". I want to be careful about what I accuse NSF economists of, because they do a lot of good work.
Posted by: Daniel Kuehn | September 16, 2010 at 10:00 AM
Might be of interest:
Deficiencies of Austrian Economic thought:
http://conversationwithcrombette.blogspot.com/2010/09/deficiences-of-austrian-economic.html
Posted by: Red Bane | September 16, 2010 at 10:53 AM
It's worth noting that Philip Mirowski lays out how money from the military and the NSF played a significant role in funneling economic "science" into a singular concern with "economics under the lamp post".
See Philip Mirowski, _Machine Dreams_.
Posted by: Greg Ransom | September 16, 2010 at 11:46 AM
Colander gets the problem partially correct, but as others have noted he completely misses the problem of limiting macro to inductive reasoning.
I was taught in statistics that you always formulate theory first and then test the theory on the data. Otherwise, you are data mining, which statisticians hate because you come up with all kinds of spurious correlations.
Another problem is that mainstream modelers do in fact use deductive reasoning; they simply deny that they're doing it. Their models are based on Keynesian or neo-classical reasoning, which is the reason their models don't fit with reality. If the models were based on Austrian theory, they would fit well.
But Colander's solution, more empirical data and inductive reasoning, is worse than models based on Keynesian theory. Keynes wasn't a complete idiot, and he got some things right. But trying harder to derive theory from empirical data will do nothing but cause even greater confusion and a multiplication of theories.
I'm exhibiting a bit of hubris here, but indulge me, please. I think the way forward is for Austrians to become more interested in math, but not DSGE. Use structural equation modeling (SEM). It is designed to test competing theories and does well in other social sciences. I use it in survey analysis. You collect data, present your models, and SEM will tell you which model fits the data better.
Posted by: fundamentalist | September 16, 2010 at 11:57 AM
Pete,
I guess you liked David's testimony since you gave it so much space. I'd agree with that judgment too. Notice that he does not question the role of the NSF as an oligopolistic supplier of research support, which reflects his purpose in testifying. But in some sense the "real" problem is, once again, institutions. Given the NSF, however, his suggestions seem worth a try. It's just that it's hard to see where they're really going to make an enduring difference, particularly given the NSF's self-image as a supporter of "transformative" basic research.
Part of the problem is that you have better chances of getting funding if you promise to predict the economy rather than identify limits to prediction. A few years ago, Robert Axtell got a big NSF complexity grant with the goal of creating “an artificial economy that can be calibrated to real economies of today.” The hope is “to produce macroeconomic models that have high performance at predicting the near term unfolding of an economy, thus providing more accurate forecasts of economic evolution.” Admittedly, he does not aim past the “near term.” Still, he is saying complexity makes us smarter, not wiser. Sigh. The Austrian view has always been that complexity makes us wiser, not smarter. I don’t see where the NSF is likely ever to start preferring wisdom to smarts.
Posted by: Roger Koppl | September 16, 2010 at 01:51 PM
I happen to agree with Colander pretty much fully, but then I have been a frequent coauthor and co-conspirator with him, so this should not surprise anybody particularly.
fundamentalist,
I think you are misreading Colander. While he says "more induction compared to DSGE," he is not saying "induction only, no deductive theory." That is a misinterpretation. As he notes, the hard core DSGE people, such as Chari, Kehoe, and McGrattan, who are just astoundingly arrogant, as are their followers (see the lunatic nasties populating EJMR), admit that their models do not "fit the data," but it does not deflect them in the least from doing the same old same old and shooting off their mouths to all and sundry about how they have "the answer" and are the only people around doing "serious macro." In light of what has gone on recently, this is truly appalling, and Colander is absolutely right to nail them on this.
He is for more agent-based modeling and other approaches, with expanded views of theoretical underpinnings. Oh, and SEM certainly has its limits.
Posted by: Barkley Rosser | September 16, 2010 at 02:01 PM
Barkley, what method has no limits? Do you know of a better method for comparing theories?
Posted by: fundamentalist | September 16, 2010 at 02:10 PM
The debate between time-series methods, including Bayesian ones, and structural equations modeling methods, is an old and ongoing one.
Posted by: Barkley Rosser | September 16, 2010 at 03:03 PM
To continue the joke, an entrepreneur walks up and offers to sell the economist a flashlight and a map, and to rent to him a metal detector. The economist replies, "so sorry, I don't have funding for that".
The section about applied research is important. Models, if they are to be any good, need to be validated with historical data. Otherwise they are worthless. Econometrics, when done well, does a good job of fitting the historical record. Going forward into the future, it may provide some predictive power, just as an umbrella merchant checks the weather forecast for the chance of rain. Marketing people do this all the time.
Forecasting into the future requires confronting uncertainty and not just risk. True uncertainty necessarily includes the possibility of Nassim Taleb's "Black Swan" events. So, DSGE models, to be more credible, need to define the limits of their predictive scope. No DSGE model should be expected to forecast the full range of possible events in an uncertain world; it's impossible. However, Austrian theory can enlighten economists to these neoclassical blindspots.
Regarding peer review, the addition of 100 randomly selected taxpayers would add a healthy dose of common sense.
Last, let's not forget the approach of Sir John Cowperthwaite, finance minister of Hong Kong in the 1960s. He purposely refused to collect data in order to deprive the British "policymakers" of the opportunity to wreck the economic progress in Hong Kong.
Also, I just finished reading "The Economics of the Colour Bar" by South African economist William Hutt, written in 1964. Unless constrained by iron-clad constitutional limits, policymakers will always end up favoring sectional interests to the detriment of the general interest. The same applies to the NSF. It would be better if economic research were privately funded.
Posted by: Andrew | September 16, 2010 at 03:12 PM
The bogus choice between "induction" and "deduction" is PATHOLOGICAL -- this whole picture is part of the scientism which has led economics into failure.
The induction/deduction picture is a false picture of science.
Example -- you can't fit Darwinian biology into this picture.
In fact, you can't fit ANY science into this picture (see the work of Thomas Kuhn or Norwood Hanson or Karl Popper or F. A. Hayek).
The induction/deduction picture is the false picture of knowledge and science which economists have inherited from a failed tradition among the philosophers.
It's time the economists dumped the failed project of the philosophers -- and it's time they picked up the successful strategy for doing science exemplified by Darwinian biology, i.e. figure out what the empirical / causal problem is and focus on the empirical / causal mechanism that provides an explanation for that problem.
Posted by: Greg Ransom | September 16, 2010 at 03:48 PM
Barkley writes,
"He is for more agent-based modeling"
Big problem. We can prove that this is just more buzzing around the lamp post, like a mindless moth.
Why?
Because the central causal / explanatory element in economics CANNOT BE MODELED -- entrepreneurial learning / judgment in the context of changing prices and local conditions takes place outside of any nomological or statistical regularity, beyond any measurable physical / natural kinds, outside of the reach of pseudo-scientific formalism or statistical construct.
And then there is the whole problem of applying the logic of marginal valuation to heterogeneous production goods across time and with technological change ....
Posted by: Greg Ransom | September 16, 2010 at 03:57 PM
Barkley, are you sure you know what SEM is? It's not the same as the structural models used in econ for ages, like the Fair model at Yale, although there are some similarities. SEM is a distinct technique that incorporates many other techniques. It's part factor analysis and part linear regression, but it is based on a comparison of the actual versus the model-implied covariances. And it solves the equations through iterative algorithms, such as quasi-Newton methods. It's the use of the covariance matrices that makes comparison of models possible and accurate.
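[Ed.: To make the covariance-comparison idea concrete, here is a minimal sketch in Python using only NumPy and SciPy. The simulated data, model names, and parameter values are illustrative assumptions, not anything from the thread; a real analysis would use dedicated SEM software such as lavaan in R or semopy in Python.]

```python
# Two competing path models fit to the same sample covariance matrix by
# quasi-Newton minimization of the standard SEM maximum-likelihood
# discrepancy; the resulting fit values can then be compared.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate data from a causal chain: x -> y -> z.
n = 500
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)
z = 0.5 * y + rng.normal(scale=0.7, size=n)
S = np.cov(np.column_stack([x, y, z]), rowvar=False)  # sample covariances

def implied_chain(p):
    # Model A (x -> y -> z): two path coefficients, three variances.
    b_xy, b_yz, vx, vy, vz = p
    cxx = vx
    cxy = b_xy * vx
    cyy = b_xy**2 * vx + vy
    cyz = b_yz * cyy
    cxz = b_yz * cxy
    czz = b_yz**2 * cyy + vz
    return np.array([[cxx, cxy, cxz], [cxy, cyy, cyz], [cxz, cyz, czz]])

def implied_fork(p):
    # Model B (y <- x -> z): x is a common cause; no y -> z path.
    b_xy, b_xz, vx, vy, vz = p
    cxx = vx
    cxy = b_xy * vx
    cxz = b_xz * vx
    cyy = b_xy**2 * vx + vy
    czz = b_xz**2 * vx + vz
    cyz = b_xy * b_xz * vx
    return np.array([[cxx, cxy, cxz], [cxy, cyy, cyz], [cxz, cyz, czz]])

def ml_discrepancy(Sigma, S):
    # ML fit function used in SEM: F = log|Sigma| + tr(S Sigma^-1) - log|S| - p
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return 1e6  # penalize non-positive-definite candidates
    p = S.shape[0]
    return (logdet + np.trace(S @ np.linalg.inv(Sigma))
            - np.linalg.slogdet(S)[1] - p)

def fit(implied):
    # Quasi-Newton minimization, as SEM software does internally;
    # variances are bounded away from zero to keep Sigma well defined.
    res = minimize(lambda p: ml_discrepancy(implied(p), S),
                   x0=[0.5, 0.5, 1.0, 1.0, 1.0], method="L-BFGS-B",
                   bounds=[(None, None), (None, None),
                           (1e-3, None), (1e-3, None), (1e-3, None)])
    return res.fun

fit_chain = fit(implied_chain)
fit_fork = fit(implied_fork)
# The chain model (the true data-generating process) should fit
# markedly better, i.e., yield a smaller discrepancy.
print(f"chain model discrepancy: {fit_chain:.4f}")
print(f"fork model discrepancy:  {fit_fork:.4f}")
```

The model whose implied covariance matrix comes closest to the sample covariance matrix (the lower discrepancy) is preferred, which is the head-to-head comparison of theories described above.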
Greg: "The induction/deduction picture is a false picture of science."
This seems to violate Hayek in "Counter-Revolution" and his Nobel speech, as well as Mises' first section in Human Action, unless I misunderstand you.
Posted by: fundamentalist | September 16, 2010 at 04:33 PM
Re: Daniel Kuehn's "forward-looking agents," i.e. entrepreneurs, Mises says in _Human Action_ that entrepreneurs do not look at the world through the eyes of an historian. I quoted the line in a paper written in college.
Maybe the smartest guys in the room at Enron did, when they were proofing their goofy, that'll-fool-'em, 2001 annual report, but no one in the real world does. There is no such thing as a backward-looking agent. As a friend of mine puts it, "climb down from your high chair."
Posted by: Bill Stepp | September 16, 2010 at 05:50 PM
Colander has made a major mistake if he thinks it is a good idea to separate pure and applied research.
"What I’m arguing is that it is most useful to think of the search for the social science policy keys as a two-part search..."
Roger Koppl might have picked up on that but I don't have time to read all the comments again.
The mistake has a long history and it was enshrined in the success stories about scientific research leading the way in civilisation and economic progress etc. This story has been subjected to devastating critique by Terence Kealey in "The Economic Laws of Scientific Research".
The bottom line is that theory and practice should never be separated, pure and applied research should synergise, as they did in Australian rural research.
http://catallaxyfiles.com/2010/09/16/4-well-spent-kealey-on-civilisation-and-free-trade/
In fairness to Colander, he may be heading in that direction with his final suggestions.
Posted by: Rafe Champion | September 16, 2010 at 06:01 PM
Greg writes,
"It's time the economists dumped the the failed project of the philosophers -- and it's time they picked up the successful strategy for doing science exemplified by Darwianian bioloyg, i.e. figure out what the empirical / causal problem is and focus on the empirical / causal mechanism that provides an explanation for that problem."
Could you share some links where this idea is elaborated in more detail?
Posted by: Daniil Gorbatenko | September 16, 2010 at 06:01 PM
Fundamentalist, read Hayek's "The Theory of Complex Phenomena" or his "Degrees of Explanation" for a contrary view.
Hayek moved a long way from his early 1940s work in his 1950s publications.
Hayek started in the early Wittgenstein / Carnap / Mach / Mill world like many others -- he finished by exploding all that, and embracing folks who participated in the explosion party, e.g. Popper, the later Wittgensteinians, Ryle, Polanyi, the systems theory people, non-linear phenomena people, even recommending Kuhn to Popper (!).
Hayek ends up embracing Bartley and the whole attack on the justificationist / demonstrative knowledge / Euclidean ideal tradition.
Posted by: Greg Ransom | September 16, 2010 at 08:33 PM
Greg Ransom, it behooves someone of your caliber & scope in the field of philosophy of science to have F. S. C. Northrop among your repertoire. Would you please put on your reading list at least his _The Logic of the Sciences and the Humanities_ and _The Meeting of East and West_, and get back to us at some point your opinions of how Northrop fits into your scheme of things vis-a-vis, e.g., your two posts above (September 16, 2010 at 03:48 PM & 03:57 PM) - Darwin, induction/deduction, etc.? I trust that after getting Northrop under your belt, you would put him right alongside "Thomas Kuhn or Norwood Hanson or Karl Popper or F. A. Hayek."
By the way, I noticed that the Google page rank of the JSTOR feed to the Austrian economics-related Northrop article I cited elsewhere here (in the Comments to "Some Clarifications," September 01, 2010 at 03:40 PM) went to the very top a week after my citation, and remains there to this day - instead of references to it by other articles previously. I wonder if that Google page rank jump is attributable solely to activity from Coordination Problem readers.
Apologies in advance, Greg, if you're already quite aware of Northrop, but from your reply (September 01, 2010 at 05:15 PM) to my aforementioned Comment, it sounded like you weren't, and indeed by proxy from my characterizations were placing Northrop among the scientistic figures you so rightly criticize.
At least try-on for size that one Israel M. Kirzner-footnoted Northrop article, and see if you don't find yourself nodding your head, "Yes!":
"The Impossibility of a Theoretical Science of Economic Dynamics," Quarterly Journal of Economics, November, 1941, reprinted as ch. XIII in his _The Logic of the Sciences and the Humanities_
Posted by: George Machen | September 16, 2010 at 08:36 PM
fundamentalist,
Sorry, I knew both Sewall Wright, the inventor of SEM back in the 20s, and Arthur Goldberger, its greatest exponent in more recent years, personally. I will simply note that there are a variety of criteria that can be used to evaluate SEMs. There is no single "ah ha, this is it!" criterion. The arguments are very much ongoing.
Posted by: Barkley Rosser | September 16, 2010 at 08:39 PM
Greg, But Hayek's Nobel speech, the Pretense of Knowledge, was written in 1974 and he seems to make the deduction/induction argument there. And counter-revolution was written in 1952 and he decries the application of natural science methods to economics.
Barkley, they must have been interesting men. Yes, I use SEM regularly and use several criteria, but no matter which set of criteria you use, you will be able to compare different models attempting to describe the data. Without SEM, you're left with just comparing error in forecasting. There is no method for comparing models that has no problems. SEM is a powerful technique designed expressly for comparing models. I think if Austrians embraced it, it would go a long way toward getting people to take Austrian econ more seriously.
Posted by: fundamentalist | September 16, 2010 at 09:01 PM
Read Kealey and find out how every war, starting with the Civil War, was used to push the barrow of centralised state control of research. We are standing in the consequences, and all of the band-aid suggestions from David Colander are just trying to keep alive a terminally ill patient. Abolish the NSF and get with the GMU program of theoretically sophisticated, applied multi-disciplinary fieldwork.
This is not a paid advertisement for GMU, and I have no blood or pecuniary relationship with Peter Boettke or anyone else associated with him or it.
Posted by: Rafe Champion | September 16, 2010 at 09:08 PM
Daniil, see the work of David Hull, Ernst Mayr, Alex Rosenberg and Larry Wright on the explanatory strategy, non-reducibility, and explanatory autonomy of Darwinian biology and teleological phenomena more generally.
Michael Ruse and many others demonstrated, by its failure, the absurdity of attempting to put Darwinian biology into the nomological-deductive model of the "received view".
The last person I'm aware of who really tries to vindicate whatever is left of the "received view" of science tradition of Nagel & Hempel (the induction/deduction model), along with the deep spirit of Mill/Hume empiricism, is Alex Rosenberg -- and while Rosenberg's command of the literature and talent in applying it is unquestioned and often pathbreaking, it's hard to find anyone in the field who still buys into the old project, which suffers from unending counterexamples and pathologies.
Posted by: Greg Ransom | September 16, 2010 at 09:56 PM
The Counter-Revolution essays were written in the mid 1940s.
"counter-revolution was written in 1952"
The Pretense address is a work in persuasion using popular language and broad categories, although it contains deep insights and claims.
Read the essays I've mentioned above for meatier stuff.
Posted by: Greg Ransom | September 16, 2010 at 10:01 PM
Hayek says the dominant view is a false view even of NATURAL science.
Here Hayek was influenced by Popper.
Note that Hayek came to reject even Popper's account.
Posted by: Greg Ransom | September 16, 2010 at 10:04 PM
Greg, are those essays available on the internet? I couldn't find them on your site.
Posted by: fundamentalist | September 16, 2010 at 10:36 PM
Fundamentalist,
Those essays are collected in Hayek's _Studies_ and in his _New Studies_.
Both are out of print -- which is a real shame.
Posted by: Greg Ransom | September 16, 2010 at 11:58 PM
Hayek's account of the relation of tautological formal constructs to the explanation of empirical patterns in our experience can be found in the first few chapters of his _The Pure Theory of Capital_, with supplemental material in his essays, "Economics and Knowledge", "The Use of Knowledge in Society", "Degrees of Explanation", and "The Theory of Complex Phenomena".
Note that this account is specific to economics, but Hayek suggests something similar is applicable to global brain theory and Darwinian biology.
Posted by: Greg Ransom | September 17, 2010 at 12:03 AM
Interesting read. I'm not sure I'm following him on his spotlight metaphor and his explanation of "pure research", but I like it.
I think the second proposition could be a great opportunity for AE, but is rather unlikely. The guys developing these models know all too well that anyone would come to the conclusion that these models are irrelevant. No honest macroeconomist would pretend that rocket trajectory math, and in general papers without a single line of text, can really teach us anything about economics. It's about perfecting the models for the sake of perfecting the models. It seems unlikely they would give grants to people to discredit their equations and basically tell them what they already know deep down inside.
Posted by: Mathieu Bédard | September 17, 2010 at 03:25 AM
Many thanks, Greg!
Posted by: Daniil Gorbatenko | September 17, 2010 at 08:25 AM
Daniel:
My impression is that public choice scholars of the Virginia School have not fared well at NSF. I know that Gordon Tullock and Jim Buchanan were unsuccessful there. During my long period as Editor of Public Choice, NSF grant acknowledgment was restricted to public choice scholars more favorable to government: some from Maryland, some from Harvard, some from Cal-Tech, some from California universities. But those known to be critical of government never cited NSF. Now what I cannot be sure about, other than for Buchanan and Tullock, is whether many of them applied. Mostly, they would favor private monies, if available, although NSF, at least in principle, might have been viewed as more neutral than the various federal government departments.
Actually, answering your question might be a worthwhile public choice research topic.
Posted by: Charles Rowley | September 17, 2010 at 09:05 AM
Some overall comments: (1) Colander's statement is very good, especially as a statement submitted to a Congressional Committee. This needs to be a "politically astute" document; (2) The public choice aspect of NSF funding is quite important. These grants are one way the government keeps the "top" intellectuals happy -- whatever they do is "fine." We (the State) cannot afford an alienated intellectual class.
Consider for a moment that there is no way that the *theoretical* research done by economists has any likelihood of producing "social benefits" and yet it is funded.
My view is that the NSF will fund whatever is done at the top universities -- and their satellites -- just to maintain good relations with the intellectuals with the most influence.
Posted by: Mario Rizzo | September 17, 2010 at 09:44 AM
I'd also recommend Nassim Taleb's testimony:
http://democrats.science.house.gov/Media/file/Commdocs/hearings/2009/Oversight/10sep/Taleb_Testimony.pdf
Taleb discusses the pseudo-science of "measured risk" which came out of the Grad Schools and took over Wall Street.
Posted by: Greg Ransom | September 17, 2010 at 12:33 PM
For anyone near Harrisonburg, Taleb is speaking this afternoon on the JMU campus at 3:30 PM in Grafton Stovall theater on "Towards a Black Swan Proof Society."
Posted by: Barkley Rosser | September 17, 2010 at 01:05 PM
Mario, Colander seems to think that the system that created the problem could turn around and fix it (essentially by folding up its tent and going away). That is like expecting the Welfare System to reform itself from the inside.
If a project can't get funding from a successful entrepreneur (a market entrepreneur, not a political operator), send it to the maths school and see if they find it interesting.
Posted by: Rafe Champion | September 17, 2010 at 05:31 PM
Colander has a point. Anyway, I have to tell you, I really enjoy this blog and the insight from everyone who participates. I find it refreshing and very informative. I wish there were more blogs like it. I felt it was about time I posted; I've spent most of my time here just lurking and reading, but today for some reason I just felt compelled to say this.
Posted by: Kristine | September 18, 2010 at 09:02 AM
Sure, it's all about the funding; but let's just maintain the funding the way it is (i.e., NSF controlled) with some superficial tweaks an interested party suggests (i.e., Colander). What was Einstein's definition of insanity? It seems to me that economics is clinically as well as experimentally insane. There seems to be an easy first step here: (1) try something else, and/or (2) stop doing the same thing and expecting a different outcome.
Posted by: John Kihn | September 19, 2010 at 04:40 PM
The state-funded research industry is too big to fail.
Posted by: Rafe Champion | September 19, 2010 at 07:26 PM