This is unusual: I find myself disagreeing with both Noah Smith and Mark Thoma. Moreover, they argue something I have thought myself. To be frank, I think the cause of my disagreement is Noah Smith’s provocatively strong language.
It’s best to click the link and read the whole post, “The reason macroeconomics doesn’t work very well.”
But Smith makes the main claim very clearly and concisely (in noted contrast to this verbose lead-up to the snappy quote):
“Fundamentally, I think the problem is: Uninformative data.”
Uninformative data are certainly a problem. It is true that there aren’t many macroeconomic data, and none of them are the product of a controlled experiment. But I think that stressing that problem incorrectly absolves the macroeconomics profession.
The data are few partly because macroeconomists have decided to ignore almost all data. First, models are confronted only with post-World War II data. This isn’t because no prewar data are available; it is because the prewar data overwhelmingly reject the models. Second, the models are confronted with data only from a few developed countries — 2008 was treated as unprecedented because the precedents from before World War II and from the third world were ignored. Third, only a few time series are considered relevant, even though the models have implications for other time series. Finally, for some reason only quarterly data are used, even though many series are available at higher frequency.
The problem isn’t just that there aren’t enough data to test the models. The problem is that the models are grossly, massively false.
Some excerpts and comments:
“the imaginative construction of theoretical models is not the biggest problem in macro – we can build reasonable models to explain just about anything. The biggest problem in macroeconomics is the inability of econometricians of all flavors (classical, Bayesian) to definitively choose one model over another” -Thoma
I disagree with “reasonable”. I think the models (definitely including mine) are all clearly unreasonable. Actually, I think this is agreed: the models are models and are assumed to be false. The assumptions are assumed to be false. It is hoped that the models are useful. Importantly, this means that unless and until the data show a model is useful, there is no reason to think it is. But the default is the reverse: if I can argue that a model might be useful, then I can use it until the data prove that it is grossly misleading — the burden of proof is on the economy to prove beyond reasonable doubt that our models aren’t useful.
I disagree with “just about”.
Finally, I disagree that the role of empirical work is to “choose one model over another”. I think there is a stage in science when it is best not to have a working hypothesis — to look at the world with an open mind and try to come up with a hypothesis. I think macroeconomics is at that stage.
OK, over to Smith. He wrote, among other things:
“2. Macroeconomists should try to stop overselling their results. Just matching some of the moments of aggregate time series is way too low of a bar. When models are rejected by statistical tests (and I’ve heard it said that they all are!), that is important. When models have low out-of-sample forecasting power, that is important. These things should be noted and reported.”
to which I say Yes Yes Yesssss.
Then he went on: “Plausibility is not good enough. We need to fight against the urge to pretend we understand things that we don’t understand.”
I don’t quite agree with that statement. I do agree with the following paraphrase: “Implausibility is not good enough. We need to fight against the urge to pretend we understand things that we don’t understand.”
I don’t think anyone says that standard economic models are plausible. I think the conventional view is that they are false by definition, but might be useful approximations.