Internal Consistency, Rational Optimizing Agents, and the Lucas Critique

I don’t want to be rude to anyone in particular, so I will just note that I have the impression that two people who should know better are willing to suggest that logically consistent models populated by rational optimizing agents have policy-invariant implications, that is, are not vulnerable to the Lucas critique.

This is obviously false.

I will concede two points. First, if the objectives of the agents in the model are the same as the objectives of all agents in the real world, and the constraints are the same as real-world constraints, then the implications will be optimal predictions no matter what policy is implemented. That is, the models would work fine if they weren’t models but were rather the truth, the whole truth, and nothing but the truth. No one claims this, so this concession amounts to nothing at all.

Second, models can be useful approximations to reality. A model is a useful approximation to reality if it yields approximately accurate predictions and conditional predictions. This is to say that models which are not vulnerable to the Lucas critique are not vulnerable to the Lucas critique. Noting this tautology, I have conceded nothing at all.

Beyond these two non-concessions I will not go. Internally consistent models with optimizing agents are not, in general, presumptively useful for policy analysis. If one has two models, neither of which fits the facts, one of which is an ad hoc reduced-form model and one of which is a consistent model with optimizing agents, there is no reason to use the internally consistent model for policy analysis. If one has two models, both of which fit the facts, one of which is an ad hoc reduced-form model and one of which is a consistent model with optimizing agents, there is still no reason to prefer the internally consistent model for policy analysis. Fitting the facts is no more reliable a guide to usefulness in policy analysis for an internally consistent model with optimizing agents than it is for a reduced-form model.

The reason is that a fundamentally invalid model (meaning one which is not a useful tool for policy analysis) can fit the data well for one policy regime and poorly for another. This is just as true of a model with optimizing agents but fundamentally false assumptions about objectives and constraints as it is for an ad hoc reduced form model. I will attempt to argue that it is just exactly precisely as true and not one iota less true, but I am not sure I can measure truth down to one iota.
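The point can be illustrated with a toy simulation (the numbers, symbols, and functional forms here are entirely my own invention, not anything from the literature): a reduced-form regression estimated under one policy regime fits beautifully, then mispredicts badly once the regime, and hence expectations, change.

```python
# Hypothetical sketch of regime-dependent fit. A naive "Phillips curve"
# u = b0 + b1 * pi is estimated while expected inflation is constant,
# then applied to data from a regime where expectations track inflation.
import numpy as np

rng = np.random.default_rng(0)
u_star, alpha = 6.0, 0.5  # made-up natural rate and slope

# Regime A: expected inflation pinned at 2%, actual inflation varies,
# so unemployment responds to inflation surprises.
pi_a = rng.normal(2.0, 1.0, 500)
u_a = u_star - alpha * (pi_a - 2.0) + rng.normal(0.0, 0.1, 500)

# Least-squares fit of the reduced form on regime-A data.
b1, b0 = np.polyfit(pi_a, u_a, 1)  # slope near -0.5: looks like a stable curve

# Regime B: policy accommodates, expectations catch up with inflation,
# so there is no inflation surprise and unemployment is just u* plus noise.
pi_b = rng.normal(8.0, 1.0, 500)
u_b = u_star + rng.normal(0.0, 0.1, 500)

# The regime-A regression confidently predicts low unemployment at
# high inflation; the structural model says unemployment stays at u*.
pred_b = b0 + b1 * pi_b
print(round(b1, 2))                     # slope near -alpha
print(round(np.mean(u_b - pred_b), 2))  # large systematic forecast error
```

The same exercise works with the roles reversed: nothing about internal consistency or optimization protects the regression, and nothing about being reduced-form condemns it; what matters is whether the fitted relation is invariant to the policy change.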

Some examples. It is not clear from the data whether people don’t want to do something or are prevented from doing it.

Let’s say it is 1990. I could write down a model in which people of different types choose to marry if and only if this increases each spouse’s utility. Looking at the data, I note that same-gender couples don’t appear in marriage registers. I conclude that gay people are not the marrying kind. I predict this will remain true no matter how policy changes. I am a total fool. But the model had optimizing agents and was internally consistent.

I can write down a model in which the labor market clears. I can estimate labor supply parameters using micro data. I note that men’s labor supply is extremely inelastic compared to women’s. Based on this perfectly fine micro model with optimizing agents, I predict that employment of men will fluctuate much less from 1987 on than employment of women. I am a fool. The assumption that the labor market clears can be consistent with my micro data (I can assume the self-declared unemployed are lazy and lying). But I can’t explain the recent fluctuations.

If I had used US macro data from 1984 through 2007 to check my model, it would have looked OK.

OK, enough joking around. Let’s consider a famous case of a bad model: the old, expectations-unaugmented Phillips curve. This is, as noted by Krugman among others, the one and only example used for decades to justify the quest for consistent micro-founded macroeconomics. Here the current view is roughly that the true relationship between unemployment and inflation is that unemployment is related to inflation minus the best forecast of inflation conditional on information. In the original, now long abandoned, Lucas model the information was all information available at the time unemployment and inflation are measured. In New Keynesian models the relevant conditioning information must be older (in fact, typically the statement is only approximately true, and a better and better approximation the older the conditioning information is). The view from 1959 through 1974 (that is, the whole Keynesian era) was that there is a stable relationship between inflation and unemployment. The interpretation is that expected inflation was roughly constant in the times and places analysed by old Keynesians, and it was implicitly assumed that something which happens to be roughly constant is constant. This led to terrible predictions in the 1970s.
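The contrast can be written out explicitly. In notation of my own choosing (not anything from the post or any particular paper), the old curve posits a stable relation between unemployment and inflation alone, while the expectations-augmented version says only surprise inflation matters:

```latex
% Old (unaugmented) Phillips curve: a stable relation
u_t = f(\pi_t)

% Expectations-augmented version, where I_{t-k} is information
% dated k periods back and u^* is the natural rate
u_t = u^{*} - \alpha \left( \pi_t - E\!\left[ \pi_t \mid I_{t-k} \right] \right) + \varepsilon_t
```

If expected inflation $E[\pi_t \mid I_{t-k}]$ happens to be constant, as it roughly was in the data the old Keynesians studied, the two forms are observationally equivalent; the 1970s broke that coincidence.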

Then the profession shifted to belief in a natural rate of unemployment. The expected value of unemployment conditional on long-ago information is constant (this is implied by the expectations-augmented Phillips curve described above; otherwise it has no content). There was a huge debate over whether the long term was a month (as required by Lucas), roughly a year or so (as interpreted by Barro, who made Sargent lose his cool that way), or a few years.

Guided by this insight, European policy makers decided that it would be a good idea to fight inflation at the cost of a few years of high unemployment. Unemployment remained very high for a decade and a half. It was decided that there was something wrong with European economies (basically trade unions and labor regulations) and that, if they were what economies are supposed to be, the models would have worked. When the model didn’t fit reality, the conclusion was that the scientific thing to do was to reject reality.

But note this means that the first confrontation with reality should have demolished the natural rate hypothesis. The idea that the long-term conditional average of unemployment is constant is similar to the idea that the long-ago subjectively forecast rate of inflation is constant. It might be true in general (“might” makes right), but it might also be that it just happened that policy had been such that it was true over the time period studied. Anchored inflation expectations might exist only if policy makers do not tolerate persistent inflation. A constant medium-term average unemployment rate might exist only if policy makers do not tolerate long-lasting fluctuations in unemployment. The story about subjectively expected inflation was more appealing, because no one had good models of unemployment (while the rational expectations hypothesis is elegant and simple). But the data were equally cruel to each assumption.

The result is that macroeconomics is based on models in which there isn’t a stable Phillips curve and there is a natural rate of unemployment. Evidently some data are more equal than others.

The really shocking thing is that the same economist (O.J. Blanchard) can switch from a model explaining why there is no natural rate of unemployment to an estimation strategy which uses the assumption of a natural rate of unemployment for identification, within two years, without blushing. The rules of what may and may not be assumed are respected, but they have nothing to do with evidence.

Back to history.

In any case, it was argued that reflation would cause intolerably high inflation (that means inflation), so there was nothing to be done but reform labor markets. Then the stock market crashed in 1987. This caused Margaret Thatcher to panic and force the Bank of England (not then independent) to reflate. For a couple of years, the UK unemployed were criticized more harshly than ever, because there were many vacant jobs and unemployment didn’t fall quickly (evidently top economists were not familiar with the subtle concepts of a “stock” and a “flow”, or with the relative plausibility of a stable Beveridge curve and a stable matching function; well, I’ve had to explain that to them again recently, so I shouldn’t be surprised).

Then the UK unemployment rate fell sharply. There was a temporary increase in inflation. The unemployment rate remained far below the Continental average. The inflation rate converged back to very low levels. The view that fluctuations in inflation are permanent and fluctuations in unemployment temporary was not damaged at all by the episode. Instead, the UK became one of the good Anglophone economies instead of being (as it had been) a key example of Eurosclerosis.

Notably, which countries are good and which are bad changes every decade, but the free-market system can never fail; it can only be failed. Following shocks, a country can switch from being a key example of free markets working to a key example of government intervention failing, and back, within a few years. That country is called South Korea (while on the peninsula, I note that North Korea shows that some forms of government intervention are catastrophic).
