
You might want to look at this article that has some suggestions for the evolution of peer-review process incorporating blogs etc.

Gus di Zerega made the same argument a few years ago at the Liberty and Power blog. I didn't believe it then, and I don't now.

Did peer review keep bad science, like Keynesian economics, from gaining a big mindshare of the academic economics Establishment?
Peer review is no better than the peers doing the reviewing. It's potentially weaker, in my view, to the extent it excludes outsiders, who might well have good criticisms of the insiders. Why are they, the self-anointed ones, any better than anyone else?
In the age of bloggers, the argument for peer review is even weaker.

Polanyi explicitly said that it did not matter whether funding comes from the government or from private parties, as long as these don't try to interfere with research, as long as they allow scientists to function as a Free Republic of Science.

But this abstracts away from a crucial institutional form, namely the source(s) of funding, and what the influence of this economic form is on the way such institutions as peer review in fact function.

Polanyi relied on an invisible hand argument, but he failed to see that the outcome of invisible hand processes is determined in part by the economic order in which they operate.

Specifically, different economic orders (of science) create different incentives or disincentives, and opportunities for entrepreneurship in science (either by individual scientists or by property-owning scientific entrepreneurs). Some economic orders, for example, may be more conducive to political rather than market entrepreneurship, or to destructive or unproductive rather than productive entrepreneurship (for the terms see Boettke and Coyne 2009, "Context Matters," in 'Foundations and Trends in Entrepreneurship').

(Relative) self-government of scientists under public funding will result in different outcomes than (relative) self-government of scientists under private funding.

(shameless self-promotion: I'm trying to substantiate these intuitions in my MA thesis)

Thanks Pete.

Perhaps I have had a bit of a false impression.

I think there are still some open questions here, questions highlighted by the climate science scandal. First among them is the question of whether science operates the same in the sciences of simple phenomena -- like Polanyi's own field, chemistry -- as it does in the essentially complex sciences, such as climate science, Darwinian biology, psychology / global brain theory, or economics. For one thing, the relation between "data" and "modeling" is VERY different.

Similarly, "testing" plays a very different role.

One question is whether peer review is a reliable engine of scientific progress in the sciences of complex phenomena the way Polanyi and others see it to be in the simple sciences.

There is some evidence that peer review -- in conjunction with the guild structure of the academy -- has served to insulate bad or regressive science from criticism and competition, and to replicate it, within a number of the most important complex sciences.

For example, we have seen in the universities of continental Europe how Darwinian biology was excluded from science even into the 1970s by the dominant "peers" in the biological labs of France and other major countries.

We have seen behaviorism exclude other rivals in American research labs across decades -- only for this paradigm to come tumbling down decades after criticism from OUTSIDE of the dominant scientific peer group had exposed fatal scientific problems with the paradigm.

And this fact of the significance of criticism and development from "outside" of the peer group is significant -- we've seen this even in economics with the work of e.g. Kahneman and Tversky, publishing in non-economic journals. Ostrom's work might fit in this same category.

Of course, I don't expect peer review to be perfect, and I do see it as playing various roles, some of them positive.

But we might ask: does peer review in economics work more like that in physics and chemistry journals, or more like that in English, climate science, or sociology journals? Or perhaps somewhere in between.

It's at least an open question whether peer review serves the same function in economics as it does, say, in chemistry or physics.

I'm not advocating against peer review, I'm pointing to questions about the use of peer review or Polanyi's "Republic of Science" model as a criterion for identifying good "science" or good scientific practice when we look outside of chemistry, physics and the other sciences of simple phenomena.

As someone actively involved in dealing with the peer review process on a busy and ongoing basis, just a few remarks.

1) Much of economics journal refereeing has been double-blind for some time. If anything, people are urging that we go back to single-blind, although I have resisted doing so, because with the web it is now so easy to find out who the authors are. Most papers are up on websites somewhere or other, and I have even had fellow editors argue that if a paper is not publicly available somewhere, it is probably not worth publishing. However, I continue to think that double-blind is still better, if only for the appearance of anonymity.

2) I think that there is a difference between economics and harder sciences (although climatology is a somewhat softer "hard science"). We rarely have slam dunk falsifications of hypotheses that simply do the trick and everybody agrees and goes home and shuts up. There is usually an out, so that when ideology is involved, as it so frequently is, the apparent "losers" can make alternative arguments or claims for exceptions and keep on keeping on with whatever their claims may be. In the hardest of the hard sciences, facts have a better chance of eventually winning out, even when games are being played, as they have been in the climatology debates on both sides.

Oh, as a final note, Pete, please do not link to John Lott ever again as a commentator on how peer reviewing or anything else involving ethics in publishing or research should be done. This sock puppeteer is one of the most ethically challenged of all economists, with a long track record of suing people who disagree with him over utter garbage, and who has been challenged on numerous occasions by others for having attempted various kinds of data manipulations to satisfy ideological ends. Especially on this climategate business, there are simply gobs of other sources who can provide relevant info on this without going to someone as utterly unreliable as him.

Dr. Bob Higgs - prescient as always:

Notice the date, Sept., 2007.

In my field (I'm an electronic engineer, like Garrison :-D, and got my PhD in 2008) peer reviewing works quite well.

In engineering there is no politics (no ideology), and reality checks are easy because an oscilloscope can falsify a theory in a way that a vector autoregression will never do.

Despite this, there are three problems.

1. Peer reviewing is unpaid. Busy people will not waste time making good reviews. I did my first official ones recently (one paper was machine-translated and I couldn't understand anything :-D), and it is a time-consuming process. No surprise if reviewers occasionally say things that are obviously wrong, or fail to read the text. This is a minor problem in that there is probably some "reputation" incentive to be serious in reviewing, and probably some psychic gains in terms of fulfilling moral duties.

2. Peer reviewing tends to persistently impose strange biases on the way research is done. For instance, in electronic engineering, manufacturability plays NO role in publications. Many papers (the majority?) will never work in a real design (other than computer simulation). There are instances in which this is not important (I'm ready to say that a real novel idea can't be ready for production, but it's rare), but when a paper says "I can do better than the state of the art, but only if the weather is good," there is something weird. The cognitive "bias" of electronics peer reviewing is industrial airheadedness. In economics it is probably a fetishism for econometric tests or formal models, I'm not sure.

3. Sometimes peer reviewing becomes a power struggle. This occurs when there are big players (ok, I haven't read Koppl's paper, just kidding) in a small niche. Reviewers may be a closely knit group that tries to prevent outsiders from gaining a reputation by publishing, and I think sometimes they reject papers in order to publish similar ideas afterwards (this is something I've heard but haven't seen).

Despite these shortcomings, I have no idea of how to do without peer reviewing.

Convergence to truth is slow, at least in economics, probably because evidence is rare. However, for the same reason, convergence will be slow under whatever selection mechanism. If the truth were evident, organized science (as a sociological entity) would be useless. It's because it is difficult to discard the chaff that some selection mechanism is needed. The more difficult the process of discarding, the less effective the selection mechanism. And this is true for every mechanism: choosing the least bad is a matter of "comparative institutional analysis" :-)

in what ways is the market for knowledge distorted by the institutional structure of state-funded research? does hayek's observation about the process by which the 'worst get on top' in the domains of bureaucracy and politics have no application whatsoever to the 'market process' of rivalry within academe?

do the moral beliefs of senior scientists shape the results produced, such that peer review can produce very different results in a world of pure competition for prestige rather than one constrained by the holding up of truth as a transcendent value?


"in what ways is the market for knowledge distorted by the institutional structure of state-funded research?"
it may result in different incentive structures that affect choices made at the margin. Productive entrepreneurship may for example become more difficult if there are only a small number of funding agencies that, moreover, tend to be populated by entrenched (although possibly perfectly sincere, noble and respectable) interests.

"does hayek's observation about the process by which the 'worst get on top' in the domains of bureaucracy and politics have no application whatsoever to the 'market process' of rivalry within academe?"

i think it does have such an application, although I think the Polanyian transcendental ideal of Truth (a cultural norm or institution) does work to moderate this to some extent, which makes science perhaps somewhat less vulnerable to such problems than politics per se.

"do the moral beliefs of senior scientists shape the results produced, such that peer review can produce very different results in a world of pure competition for prestige rather than one constrained by the holding up of truth as a transcendent value?"

I am not sure about the distinction you make there: ideally, the competition for prestige goes hand in hand with (interesting) truth as a transcendent value. People give you credit for a contribution in part if they can use that contribution in their own pursuits. The question is what kinds of pursuits tend to be rewarded more in a given institutional environment. Certain environments may for example be more conducive to change, to challenges to established beliefs or paradigms, than others. Sometimes this may be a good thing (the wholesome role of dogma in science should not be underestimated) and sometimes not a good thing (entrenched interests may be sincerely committed to their own ideas vs. an upcoming but still very underdeveloped challenge and use their power to not fund such a competitor, while a scientific entrepreneur may in fact believe in the new paradigm and support it to receive credit later from perhaps a new generation of scientists)

This is about Al Gore and Ban Ki-moon stating openly that this is about one-world governance. This is about having a global, non-elected corporate conglomerate tax you on anything and everything you do: your travel, the food you eat (cows fart methane; they'll tax the farmers for that too). Every mammal on the planet expels carbon dioxide, and every green thing consumes it. It's plant food. Three words: READ THE TREATY!! Stop being socially driven bleeding-heart idiots and WAKE UP! The treaty clearly states there will be taxes imposed on travel, gas, flying, and money transfers, and a new system of carbon credit derivatives that you will be forced to buy to offset your carbon footprint will be forced on you! Copenhagen will solve nothing; you can keep polluting all you want as long as you buy market-driven carbon derivatives to "offset" your breathing. But what if your local power plant can't afford the extra credits? What if you can't? Well, the treaty also calls for an "administrative body to ensure compliance," as stated under their structure for this new world government. So you will be without power and transportation because of it. This has nothing to do with the earth! They are openly trying to rob you blind and control every aspect of your life, and you think it's about CO2.

Ok, a friend of mine confirmed my point (3): there is a conference in a small niche of electronic engineering where for years only "insiders" have been published, and there appears to be a risk of being plagiarized.

I further add, as some have noted, that climatology is epistemologically more similar to economics than to engineering: objective facts are scarce, theories are exceedingly complex, political considerations are important, and ideological tenets are strong.

However, these problems would be the same under whatever alternative selection processes, not specific to peer reviewing.


Last I checked "the treaty" focuses on cap and trade, not taxes, and, no, they are not the same thing, even if they eventually have some equivalences. You sound like Sarah Palin simultaneously declaring in yesterday's WaPo that climategate has "proven" that "global warming is a hoax" while also declaring that she dealt with "real global warming" in Alaska, which she knows is due entirely to "natural cycles."

Regarding the refereeing business, much depends on the specific area or topic. In some there is an in-group that dominates everything and cites itself, approving papers by other members of the in-group, while excluding any wannabe outsiders. In others, there are sharp and even nasty conflicts, sometimes having become personal, with people in the opposing groups trying their darnedest to block publication of papers by people in the other group, while encouraging their guys. Some other fields are more neutral and, well, scientific.

Now, much depends on how much the editor in question knows about the field and its players. Editors, especially of general journals, regularly encounter papers in areas in which they are not expert, which is why they need coeditors and associate editors, although of course these can be parties to conflicts (as can the editor himself or herself).

But, when an editor is knowledgeable about the area that a paper is in and knows that there are disagreements of whatever sort, and the editor is not a partisan of one side or the other (or at least wishes to play fair, even if actually favoring one side or the other), then an obvious strategy is to invite three referees, one presumed to be somewhat favorable, one presumed to be somewhat critical, and one more in the middle. This makes it easy if they all agree one way or the other about the paper. Of course, if they disagree, well, that is what makes life interesting for editors...

Gatekeepers, incentives and guild structure in science -- some things to think about:

It's worth considering that the formal process of peer review was trashed and essentially set aside during the publication process of Debreu's landmark 1954 paper. The paper didn't pass peer review. It was rejected by one of two reviewers. The other reviewer admitted he was not competent to pass judgment on the paper -- and hadn't spent enough time with it to determine whether it was mathematically sound, let alone economically sound. A third reviewer was brought in. He also admitted he wasn't competent to evaluate the paper, and hadn't spent enough time with it anyway. He nonetheless advocated publication.

Everyone knew the paper was written by Debreu.

On Debreu's mere mathematical reputation the editor decided to publish.

That was peer review in 1954.


Arrow was also a coauthor of the 1954 paper, and is and was a very smart guy (and still alive). Yes, the handling of that one is a very bizarre case indeed, not the standard then or now. Of course, Lionel McKenzie published a paper proving the same thing independently at the same time, in a much more easily understood way (indeed the way it is shown in many textbooks), but rarely gets any credit for it (it is always "Arrow-Debreu general equilibrium," unless Roy Weintraub is writing), and has never gotten a Nobel, in contrast to both Arrow and Debreu.

Regarding what has happened in climatology, the dominance of the "warmers" over the "coolers" was established by the mid-1970s, with a nearly even debate going on in the early part of that decade (when global temperature was still falling). Part of the reason the victory of the warmers has been so strong since is that indeed global temperature has risen since then, as they predicted. In this debate, unfortunately, neither side has behaved well, however.

Barkley -- right, but it was trust in Debreu's mathematical ability which mattered in the "peer review" process.

Weintraub tells the story in his essay "Equilibrium Proofmaking" in his _How Economics Became a Mathematical Science_.

Barkley - I think you're talking about two things: first, peer review, and second, anonymity in that process. Why not have full disclosure of referees to ensure that the peer review is undertaken properly?

Part of the point, isn't it, is that Debreu really didn't have any "peers," did he? He was working in a domain of overlap where there was no one else who shared his combined abilities.

In a world of narrow specialization, and in a world of entrenched research programs, this is very often the case -- and especially true when you are dealing with revolutionary or breakthrough work at the margins of understanding.

One difficulty with the idea of "peer review" is that there are often few true "peers" competent or interested in a new paradigm busting research effort.

How many "peers" did Hayek have for his _The Sensory Order_? Joaquin Fuster in 2009 gives a good appreciation of what Hayek was up to and what his achievement is.

How many really "got it" in 1952, or worried about the issues Hayek was working on?

Maybe -- sort of -- a handful, working on all sorts of alternative or very narrow projects that really only touched in partial ways on Hayek's global project.

When Wittgenstein provided his response to Russell, Russell abandoned the field of technical logic, and confessed he couldn't operate on the same intellectual level.

For his later work, Wittgenstein had to create his "peers" -- and the early "Wittgensteinians" weren't such great "peers" in many ways; lots of them didn't "get it" so well, or understand what the point was.

How many "peers" did Darwin really have? All sorts of "Darwinians" understood his work differently than Darwin -- Darwin was lucky in having Wallace as a genuine "peer" and independent discoverer.


Oh, that one is easy. Suppose the author is very powerful and prominent? How many referees are going to stand up and criticize rather than kiss ass, if their names will be revealed? (And, don't kid yourself, there are plenty of prominent figures who can be quite vengeful.) That said, there are certainly costs to blind refereeing, although good editors can catch some of them. Of course, one of the worst vices is referees demanding that authors cite lots of papers by them, which is sometimes justified, but of course also tends to give away their identities. Excessive amounts of this one make me want to puke, and I see it a lot (pass the barfbag, please).


For once you and I are on the same page (it does happen occasionally, :-)). I published a review essay in a symposium on Weintraub's book in the JPKE some years ago, with a rejoinder from him. Roy makes the point that the evolution of the use of math in econ reflects a similar evolution in math itself. In the 1950s, the ultra-formal Bourbaki School in France was at the height of its influence and prestige, and Debreu was the only economist who was a student directly of the top Bourbakists in Paris, and that indeed did put him in a position of essentially having no peers, although John von Neumann, who was the first to use fixed point theorems in economics some time earlier, was more than a peer. But they did not send it to him, and he was by then busy working for the US military and inventing automata.

I can tell you that indeed for an editor one of the hardest situations is when a paper does come in that is simply by itself, without anybody obvious around to serve as a capable referee. These are some of the times when editors must simply use their own judgment, even if they do not entirely know what they are doing. Sometimes these decisions are gut calls in the end.

Barkley - that is the argument I realise. But given the Climategate fiasco, and it is a fiasco, the peer review process is now under a lot of scrutiny - I am not as sanguine as Pete.

"And, don't kid yourself, there are plenty of prominent figures who can be quite vengeful" - I agree and understand, but this is exactly the corruption that climategate has exposed.

On a related matter, but not due to climategate, there was this proposal in the Australian newspaper last week.

Von Neumann was not the only one working for the military -- so were most of the math economists...


Regarding von Neumann, he simply did it at a level that was orders of magnitude beyond any of the regular math economists (and keep in mind that economics was pretty low on his totem pole; he was really a mathematician who also dabbled in physics, econ, computer science, and so on). Even as he lay dying of stomach cancer, there were something like four top military officers hovering over him, getting the last little tidbits out of him.

Yes, climategate has put the spotlight on peer review, and it is showing what goes on. But as I have indicated, it is really up to editors to keep the game players in line, if they can.

The link is not very informative. We already have the internet, and arxiv is clearly an important outlet, especially in math, phys, comp sci, and even quant fin. What really matters is citations, and if people start citing your paper, it really does not matter if or where it is published. So, this option is already there.

Journal publishing is not solely in the hands of the commercial publishers such as Elsevier and Springer. Many societies publish their own journals, some of which are the very top journals, such as those published by the AEA, the Econometric Society and the Royal Economic Society, as well as publishers tied to universities, such as the University of Chicago (JPE), Harvard (QJE), not to mention such outfits as Oxford University Press, Cambridge University Press, and MIT.

What the commercial publishers did, or the better ones anyway, was to create specialty journals that did not previously exist and which have since become quite prestigious, such as the Journal of Econometrics and the Journal of Public Economics, rumored to be the targets of the four new journals being put out by the AEA (which so far do not appear to be doing all that hot, frankly). Eventually their monopoly profits will get bid away and other outlets will prosper, but some of those journals remain important.

There are also new electronic journals, such as those put out by the Berkeley Electronic Press. However, I think they blew it by putting out too many journals. I only see citations happening to papers in a few of their journals.

The higher level game here is the role of published journal articles in getting people tenure. As long as that stays in place, then journals of whatever stripe, will still play a role, and many universities insist that it is only peer-reviewed journal articles that count. So, there is another game going on here.

BTW, some years ago Ted Bergstrom pointed out that referees are even more seriously ripped off than taxpayers in providing free services to the commercial publishers. He called for a boycott of the most expensive ones, but my observation is that this went basically nowhere, although once in a long while I will encounter someone who will refuse to referee for JEBO because it is published by Elsevier.

You've simply identified more pathology, Barkley.

Academics are narrowly specialized and are asked to evaluate for tenured positions people they aren't competent to evaluate. Publication becomes a substitute metric for evaluating "good science" -- and what generates easy publication comes to dominate any sub-specialty. This in part explains how economics is driven by fashion in a way more similar to philosophy than chemistry. Hard problems and research programs that don't spit out endless publications get dropped.

You even see this in biology. Almost no one does Ernst Mayr / David Hull type fundamental work on the global Darwinian conceptual framework -- everyone does easily publishable field research or lab work. A committee was actually set up to fight this problem. As far as I'm aware, nothing really came of the effort.

In philosophy a dead-end mistake can generate thousands of easy publications; meanwhile, for generations most of these publication generators ignored the work of Wittgenstein, which is deep and seminal but which produces no simple crank-'em-out publications.


Now now, you know that I do not do things "simply" but "complexly" :-).

There are fads in physics and chemistry as well, although they may be more tied to "real" immediate problems. While there is a push to publish a lot, in most departments, and certainly in the higher ranked ones, publishing in higher ranked journals counts for more than sheer numbers of pubs, and in those departments, citations will also count, also with ones in higher ranked journals counting more.

There have indeed been some cases of people getting tenure in the very top departments who have published very little, but have had lots of high-powered citations of major unpublished papers, although this is pretty rare. One example within the last few decades in economics would be Michael Woodford, who bounced back and forth between Chicago and Princeton (where he now is) on the basis of a fairly small set of very long and heavily cited papers, finally getting one of them published in Econometrica ("Learning to believe in sunspots," 1990). What is odd about his case is that those papers were really out there and heterodox, but that after he got tenure he "went conventional" and became one of the most important godfathers of the now-in-ruins-but-recently-dominant "New Keynesian Synthesis Rational Expectations DSGE" models that are still pretty much deeply entrenched in the basements of most central banks.

Thanks Barkley - that's a good reply.

Barkley -- all true, but mostly complexity that easily fits my story in a non-blog-post-length version.

To use the economics profession as an example of the influence of special interests on the professionalism of a science, consider Stigler’s 1976 essay.

Stigler observed that economists whose skills and experiences are congenial to interest groups become leaders of opinion. Those economists whose skills and experience are not congenial to interest groups become writers of letters to provincial newspapers.

The same goes for the so-called hard sciences. Members of that profession are no more blessed with integrity than any other scientists.

Competition drives out policy views for which there is insufficient interest group demand. No jobs; no grants.

The economics profession also shows that sciences can take long detours away from the path to greater truth.

The ascendancy of Keynesian macroeconomics up until the 1970s, when stagflation showed it to be an empirical failure on a grand scale, is an example.

Peer review and other professional safeguards of standards did not help leading monetarists or others get published or get a fair hearing in textbooks.

The socialist calculation debate, the resistance to the emergence of public choice, and the resistance to a more realistic analysis of the performance of communist economies are other examples.

The global theorizing / explanation issue is a real one. Watch Ostrom's Nobel lecture, esp. at about 15:00 into the talk. She points out that the academic community cranked out thousands of detailed studies of spontaneous orders / common property problems, but no one looked at them systematically to make sense of them at a global theory / explanation level. It seems that only research grants from the National Research Council gave academics the incentive and institutional support to pursue this work -- the Ph.D. / tenure / peer review process provided the incentive for the endless thousands of specialized individual studies, but failed to provide any incentive for anyone to read this material and use it to theorize about / explain spontaneous order and common property at a global level.

The tenure / peer review / Ph.D. process incentivized endless unread research and did not lead to cumulative knowledge -- in Ostrom's view.

It is ironic that peer review is coming under fire when the errors arose from the use of a non-peer-reviewed article. What this suggests is that we need to be even more careful about using any data that has NOT been peer reviewed.
