Wren-Lewis on Romer on Solow on Lucas & Sargent
People who know more than I do are discussing the rational expectations revolution and the division of macroeconomics into hostile fresh-water and salt-water schools of thought.
I am very favorably impressed by this post by Simon Wren-Lewis, which you should probably just read.
Wren-Lewis quotes Paul Romer.
“Lucas and Sargent were right in 1978 when they said that there was something wrong, fatally wrong, with large macro simulation models. Academic work on these models collapsed.”
The question I want to raise is whether for this strand … reform rather than revolution might have been better for macroeconomics.
First two points on the quote above from Paul. Of course not many academics worked directly on large macro simulation models at the time, but what a large number did do was either time series econometric work on individual equations that could be fed into these models, or analyse small aggregate models whose equations were not microfounded, but instead justified by an eclectic mix of theory and empirics. That work within academia did largely come to a halt, and was replaced by microfounded modelling.
Second, Lucas and Sargent’s critique was fatal in the sense of what academics subsequently did (and how they regarded these econometric simulation models), although they got a lot of help from Sims (1980). But it was not fatal in a more general sense. As Brad DeLong points out, these econometric simulation models survived both in the private and public sectors (in the US Fed, for example, or the UK OBR). In the UK they survived within the academic sector until the latter 1990s when academics helped kill them off.
I am not suggesting for one minute that these models are an adequate substitute for DSGE modelling. There is no doubt in my mind that DSGE modelling is a good way of doing macro theory, and I have learnt a lot from doing it myself. It is also obvious that there was a lot wrong with large econometric models in the 1970s. My question is whether it was right for academics to reject them completely, and much more importantly avoid the econometric work that academics once did that fed into them.
It is hard to get academic macroeconomists trained since the 1980s to address this question, because they have been taught that these models and techniques are fatally flawed because of the Lucas critique and identification problems. But DSGE models as a guide for policy are also fatally flawed because they are too simple. The unique property that DSGE models have is internal consistency. Take a DSGE model, and alter a few equations so that they fit the data much better, and you have what could be called a structural econometric model. It is internally inconsistent, but because it fits the data better it may be a better guide for policy.
What happened in the UK in the 1980s and 1990s is that structural econometric models evolved to minimise Lucas critique problems by incorporating rational expectations (and other New Classical ideas as well), and time series econometrics improved to deal with identification issues. If you like, you can say that structural econometric models became more like DSGE models, but where internal consistency was sacrificed when it proved clearly incompatible with the data.
These points are very difficult to get across to those brought up to believe that structural econometric models of the old fashioned kind are obsolete, and fatally flawed in a more fundamental sense. You will often be told that to forecast you can either use a DSGE model or some kind of (virtually) atheoretical VAR, or that policymakers have no alternative when doing policy analysis than to use a DSGE model. Both statements are simply wrong.
There is a deep irony here. At a time when academics doing other kinds of economics have done less theory and become more empirical, macroeconomics has gone in the opposite direction, adopting wholesale a methodology that prioritised the internal theoretical consistency of models above their ability to track the data. An alternative – where DSGE modelling informed and was informed by more traditional ways of doing macroeconomics – was possible, but the New Classical and microfoundations revolution cast that possibility aside.
Did this matter? Were there costs to this strand of the New Classical revolution?
Here is one answer. While it is nonsense to suggest that DSGE models cannot incorporate the financial sector or a financial crisis, academics tend to avoid addressing why some of the multitude of work now going on did not occur before the financial crisis. It is sometimes suggested that before the crisis there was no cause to do so. This is not true. Take consumption for example. Looking at the (non-filtered) time series for UK and US consumption, it is difficult to avoid attaching significant importance to the gradual evolution of credit conditions over the last two or three decades (see the references to work by Carroll and Muellbauer I give in this post). If this kind of work had received greater attention (which structural econometric modellers would almost certainly have done), that would have focused minds on why credit conditions changed, which in turn would have addressed issues involving the interaction between the real and financial sectors. If that had been done, macroeconomics might have been better prepared to examine the impact of the financial crisis.
I agree with Wren-Lewis that the complete scrapping of large macro simulation models meant throwing the baby out with the bathwater. I don’t agree with everything written in the post. There is no doubt in my mind that DSGE modelling is a bad way of doing macro theory and that atheoretic VARs are a bad way of forecasting.
Current DSGE models are too complex to grasp intuitively and do not fit the data well. They don’t yield good forecasts and don’t clarify thought. I agree with the implied negative answer to Brad’s rhetorical question “is there a case for investigating models we (a) do not understand that (b) do not fit the world?”
Yet extreme measures are used to keep modern DSGE models much simpler than the old Keynesian large macro models (which didn’t work and were rightly abandoned). In particular, models with no housing sector are used to try to understand the Great Recession. The profession has agreed that the issue to be studied is finance, not house prices. Why? Dean Baker’s argument that the key issue is the housing bubble, not the financial crisis, has been ignored, not refuted.
Here I think larger DSGE models in which residential investment and non-residential investment weren’t lumped together could be written. Since the models can’t be grasped intuitively and are solved by computers, the only cost would be more programming work.
In contrast, VAR models absolutely must have few variables. The apparent absence of theory rests on extremely strong common factor restrictions. I think more explicit assumptions motivated by common sense would work better.
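To make the variable-count constraint concrete, here is a minimal back-of-the-envelope sketch; the variable counts, lag length, and sample size are round numbers I have chosen for illustration, not anyone’s actual specification.

```python
# Illustrative arithmetic on why unrestricted VARs must stay small.
# All numbers are assumptions chosen for this example, not estimates from any study.

def var_coefficient_count(n_vars: int, n_lags: int) -> int:
    """Total coefficients in an unrestricted VAR(p) with intercepts:
    each of the n_vars equations has 1 intercept plus n_vars coefficients per lag."""
    return n_vars * (1 + n_vars * n_lags)

OBS_PER_SERIES = 160  # roughly 40 years of quarterly data (assumed)

for n_vars in (3, 6, 10, 20):
    total = var_coefficient_count(n_vars, n_lags=4)
    per_equation = 1 + n_vars * 4
    print(f"{n_vars:>2} variables, 4 lags: {per_equation:>3} coefficients per equation "
          f"({total} in total) against {OBS_PER_SERIES} observations per equation")
```

With ten variables each equation already has 41 free coefficients against roughly 160 quarterly observations, and with twenty it has 81; that is why VAR users either keep the system very small or impose shrinkage and exclusion restrictions, which are themselves strong, if unstated, theoretical choices.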
Wren-Lewis notes that his favored approach is rejected based on appeals to the Lucas critique. He also says that UK Keynesians responded to the Lucas critique by assuming rational expectations. I must insist that the assumption of rational expectations is not an appropriate response to the Lucas critique. By itself it imposes no intellectual discipline: any behavior at all is consistent with utility maximization. In practice it is not combined with the requirement that assumptions about tastes and technology be plausible, realistic, or consistent with micro data, which would be absolutely necessary for it to do any good. It would not have prevented the Phillips curve error, which is, in fact, almost entirely a myth. The standard case for assuming rational expectations has no merit.
I can’t resist noting that Wren-Lewis presented his example of the costs of insisting on microfoundations (that it made macroeconomists ignore the anomalous increase in UK and US consumption) in a post responding to the questions “What have microfoundations done for us?” and “What have Keynesians learned since Keynes?”
“As the decade progressed, UK consumers started borrowing and spending much more than any of these equations suggested. Model based forecasts repeatedly underestimated consumption over this period. Three main explanations emerged of what might be going wrong. In my view, to think about any of them properly requires an intertemporal model of consumption.”
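For readers who want to see what “an intertemporal model of consumption” means here, the workhorse textbook formulation is sketched below, with a credit limit added; this is offered as an illustration, not as the exact specification Wren-Lewis has in mind.

```latex
% Textbook intertemporal consumption problem with a borrowing limit (illustrative only).
% a_t: assets, y_t: income, r_t: real interest rate, b_t: credit limit (easier credit = larger b_t).
\begin{aligned}
&\max_{\{c_t\}}\; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t u(c_t)
 \qquad \text{s.t.}\quad a_{t+1} = (1+r_t)\,a_t + y_t - c_t, \qquad a_{t+1} \ge -b_t,\\[4pt]
&u'(c_t) = \beta\,\mathbb{E}_t\!\big[(1+r_{t+1})\,u'(c_{t+1})\big] + \mu_t,
 \qquad \mu_t \ge 0 \text{ the multiplier on the credit limit.}
\end{aligned}
```

In this setting, either a relaxation of the credit limit $b_t$ or a perceived increase in future income growth (the “Thatcher miracle” explanation quoted further down) raises consumption today relative to what current income alone would suggest.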
So the episode which was presented as showing the need for microfoundations is now presented as showing the cost of insisting on microfoundations. I agree with Wren-Lewis’s recent post. But one of his two examples of what microfoundations have done for us has become an example of how they “blinded macroeconomists to the importance of credit markets.”
In the older post, Wren-Lewis considers an alternative explanation of the consumption anomaly:
3) There was also much talk at the time of the ‘Thatcher miracle’, whereby supply side changes (like reducing union power) had led to a permanent increase in the UK’s growth rate. If that perception had been common among consumers, an increase in borrowing today to enjoy these future gains would have been the natural response given an intertemporal perspective. Furthermore, as long as the perception of higher growth continued, increased consumption would be quite persistent.
Somewhere he notes that subsequent data showed there wasn’t a Thatcher miracle. But this doesn’t show that consumers didn’t believe there was one. Here the assumption of rational expectations sneaks in. I don’t think we can make sense of the economy without assuming irrational exuberance followed by panic. I am alarmed that both Romer and Wren-Lewis respond politely to such a view but then take it for granted that they should assume rational expectations.
The idea that what happened in 1990-91, 2001, and from 2006 to now was that a bubble inflated and then burst is widely accepted by non-economists, and by economists when they are not doing research, but it is not allowed even within the broadened academic research program proposed by Wren-Lewis.
Why not?
I think the recent debate over “what went wrong with macro” misses an important point. While the academic economics profession may have largely walked away from large models, not everyone did. First, I would note that Ray Fair at Yale continued to pursue the modeling strategy laid out by the Cowles Commission. His strategy was driven by the view that practical modeling of the economy had to be tied to both the data and theoretical foundations. Ray talks about that approach at the link below.
http://fairmodel.econ.yale.edu/rayfair/blog1.htm
I would also point out that the practical world never gave up on this approach. The Federal Reserve and other central banks have continued to use large macro models. In addition, economists working in the private sector have continued to pursue this modeling strategy.
I would note that those using this modeling strategy have taken account of the “Lucas critique” by trying to incorporate “micro foundations” in how they specify such models. Reasonable people can disagree on whether or not the Lucas critique has been addressed adequately in these efforts.
But I don’t think it’s accurate to suggest that large models were wholly abandoned, even by the academic economics profession.