Where Should We Put Economic Empiricism on the Hubris-Humility Spectrum?
by Peter Dorman (originally published at Econospeak)
A bit of a kerfuffle has broken out over the claim that, as economics gets more empirical, it also gets more reliable. Russ Roberts says that, in the name of empiricism, economists are trotting out contested results to adjudicate questions that are vastly more complicated than their methods can allow for, and that they should acquire a bit more humility instead. Balderdash, says Noah Smith: whatever lack of humility is evinced by researchers who jump the gun in empirical work is swept away by the tsunami of hubris issuing from those who have only vague, unsubstantiated assumptions about how the world “really” works. Tyler Cowen rebuts, arguing that motivated, hubristic reasoning can seize on empiricism just as readily as any other academic raw material. Smith retorts, going halfway to meet Cowen, but expressing optimism that empirical methods carry their own antibody against motivated bs and will pull knowledge in a more realistic direction over time.
So what do I say? (I thought you’d never ask.) First, Roberts is simply recycling, for a forgetful age, the now-ancient claim of Hayek regarding the Fatal Conceit. I agree that it is desirable to not be Fatally Conceited, although it is widely recognized, I think, that, as he drew it, Hayek’s circle of unknowability is both too wide (not respecting degrees of evidence) and too narrow (sweeping assumptions about market process). Roberts can challenge me on this if he wants, but I see Fatal Conceit-ism in just about every paragraph of his post.
Meanwhile, I think really empirical economics would place intrinsic barriers to hubris and ideological cherry-picking, since its methods would be inherently self-critical. For instance, cataloging all plausible explanations for a possible relationship between X and Y and identifying potential markers for them in the evidence, as I suggest here, would be a prophylactic against overconfident “empirical” claims about a researcher’s favored theory. Similarly, serious attention to issues of proxy measurement would give pause to those eager to read in a sweeping interpretation of what is often just indirect evidence. I have also expressed concern over methods based on assumptions that impose a priori uniformities on the economic relationships one tries to estimate.
So my position is that Roberts and Cowen are more right, and Smith less, than I would like, mainly because what passes for empiricism in economics at present is often deficient in an empiricist, self-critical spirit and methodology. At the same time, the debates over topics like the minimum wage, the effects of charter schools on educational outcomes and the like are on a vastly higher plane when they are about data sets and analytical assumptions than the certitude of my unquestioned beliefs against the certitude of yours. It’s also a cheap and not altogether forthcoming dodge to respond to econometric disputes with a flip “There is never a clean empirical test that ultimately settles these issues.” (Roberts) That’s a Merchants of Doubt epistemology.
The most all-pervasive factor in US economics is non-measurable because it is totally missing: not just organized labor, but, more trenchantly (never used that word before, so I hope I got it right), the missing criminal (and for the most part civil) enforcement mechanism protecting organized labor from being beaten to death by employer market mechanisms (read: firing organizers and joiners).
The only way to measure the impact of the absent enforcement mechanism (union busting has been illegal in this country for 80 years) might be to compare outcomes (political as well as economic) with economies in which labor is free to collectively bargain without a firestorm of resistance to same.
I’m thinking of continental Europe foremost, where collectively bargained contracts are universal — including the all-important centralized bargaining, so Walmart cannot come in and gut good supermarket contracts in a race to the bottom.
Walmart in point of fact closed 88 big boxes in Germany where it could not compete successfully on quality alone.
French Canada, right next door, is a good example, having sector-wide labor agreements, as do Argentina and Indonesia, I believe.
Anecdote: For a couple of weeks at San Francisco’s Marriott Hotel on Fourth Street there was a union demonstration on the sidewalk. The leader would chant: “Hotel Marriott, you’re no go; sign that contract like you should — San Francisco should beware; Hotel Marriott is unfair.”
You can imagine how much good this did.
A concierge I was taking to work further up Market Street one Sunday morning in my cab told me that part of the deal with the city for allowing the Marriott to build there was that the union would be allowed.
WHY, OH WHY, DID A UNION NEED TO BE ALLOWED? Why cannot people simply collectively bargain if they wish, AS FEDERAL LAW HAS SPECIFIED SINCE 1935? Oh, I get it: no penalty for destroying the only process by which the majority of Americans can exercise economic and political strength.
Measure US wages and democracy v. Germany — to get back on topic.
This is why we have economists — to make the meteorologists look good.
It appears that statistics are at least part of our problem.
See February 2014: http://www.nature.com/news/scientific-method-statistical-errors-1.14700
This article from Nature discusses p-hacking. But if the data are statistically significant, then what is the problem? Evidently, not every statistically significant result is true.
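The arithmetic behind p-hacking is easy to see. A quick sketch (my own illustration, with hypothetical numbers, not from the Nature article): if a researcher tests k independent hypotheses that are all actually false, each test still clears p < 0.05 by chance 5% of the time, so the chance of finding at least one spurious "significant" result grows rapidly with k.

```python
# Illustrative sketch: probability of at least one false positive when
# testing k independent true-null hypotheses at significance level alpha.
# Each test is "significant" by chance with probability alpha, so the
# chance that at least one fires is 1 - (1 - alpha)^k.
def prob_at_least_one_false_positive(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(k, round(prob_at_least_one_false_positive(k), 3))
```

With 20 shots at the data, the odds of at least one spurious "finding" are roughly 64% — which is why trying many specifications and reporting only the significant one is so corrosive.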
See April 2015: http://thelancet.com/journals/lancet/article/PIIS0140-6736(15)60696-1/fulltext
From Richard Horton, editor-in-chief of The Lancet, April 2015:
“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”
“One of the most convincing proposals came from outside the biomedical community. Tony Weidberg is a Professor of Particle Physics at Oxford. Following several high-profile errors, the particle physics community now invests great effort into intensive checking and re-checking of data prior to publication. By filtering results through independent working groups, physicists are encouraged to criticise. Good criticism is rewarded. The goal is a reliable result, and the incentives for scientists are aligned around this goal. Weidberg worried we set the bar for results in biomedicine far too low. In particle physics, significance is set at 5 sigma—a p value of 3 × 10–7 or 1 in 3·5 million (if the result is not true, this is the probability that the data would have been as extreme as they are). ”
That is quite a condemnation coming from the editor of a major medical journal. Is any field other than physics using a p-value of .0000003?
Or is the rest of the scientific community content with a p-value of .05 or 5 in one hundred or 5%?
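The gap between the two thresholds is enormous. A back-of-envelope sketch (my own illustration; the one million figure is hypothetical) of the expected false positives among a batch of tests where the null hypothesis is actually true:

```python
# Expected false positives among a (hypothetical) batch of one million
# tests of true-null hypotheses, at the two thresholds under discussion.
null_tests = 1_000_000       # hypothetical number of true-null tests

alpha_conventional = 0.05    # the p < .05 convention
alpha_five_sigma = 3e-7      # the 5-sigma threshold Horton quotes

false_pos_conventional = null_tests * alpha_conventional
false_pos_five_sigma = null_tests * alpha_five_sigma

print(false_pos_conventional)  # ~50,000 spurious "findings"
print(false_pos_five_sigma)    # ~0.3 spurious "findings"
```

At the conventional threshold, pure noise yields tens of thousands of publishable-looking results; at 5 sigma, essentially none.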
No wonder the truth wears off! See: http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off
But the explanation behind the data may be just as flawed. After all, some part of our leadership believes that ‘supply creates its own demand’.
Welcome to Angry Bear “thethe”