What is a Good Approximation?
What is a good approximation?
The word good is important but very general. Once (long, long ago) I pulled a hair out of my scalp and told Elisabetta Addis (we’re married) “look, my first white hair.” She replied “you have a very good wife and a very bad mirror.” Oddly, the same event, the failure to point out my many older white hairs, can be both good and bad.
The same thing goes for approximations.
Click the link for my latest anti-economic-theory diatribe.
Rajiv Sethi: The Invincible Markets Hypothesis:
There has been a lot of impassioned debate over the efficient markets hypothesis recently, but some of the disagreement has been semantic rather than substantive, based on a failure to distinguish clearly between informational efficiency and allocative efficiency. Roughly speaking, informational efficiency states that active management strategies that seek to identify mispriced securities cannot succeed systematically…. Allocative efficiency requires more… is satisfied when the price of an asset accurately reflects the (appropriately discounted) stream of earnings…. If markets fail to satisfy this latter condition, then resource allocation decisions (such as residential construction or even career choices) that are based on price signals can result in significant economic inefficiencies.
[skip]
The critics concede that informational efficiency is a reasonable approximation, at least with respect to short-term price forecasts, but deny that prices consistently provide “accurate signals for resource allocation.”
This post is long so I will try to put the punch-line up here. Economists make terrible errors when they say a statement is “a reasonable approximation” of reality. Two very different meanings are conflated. One is that direct evaluation of the statement shows that rejection is subtle, so the statement is approximately true. The other is that all claims which would be true if the statement were exactly true are approximately true.
There is a general reliance on a sort of smoothness assumption, so that a model based on approximately true assumptions must have approximately true implications.
It is absolutely known and positively proven that this idea is totally false. The result that it is totally false is old and very well established (google “epsilon equilibrium”). People who study the implications of the assumptions must know this. Yet the idea that there is some general smoothness property in the mapping from assumptions to conclusions absolutely dominates economic theory.
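A toy illustration of the epsilon-equilibrium point (the payoff functions here are hypothetical, not drawn from any specific model): when each agent’s payoff is nearly flat in her own action, every action is approximately optimal for her individually, yet the aggregate outcome can be arbitrarily far from the exact-equilibrium outcome.

```python
# Sketch (hypothetical numbers): why epsilon-equilibrium outcomes need not be
# near exact-equilibrium outcomes. The agent's payoff is nearly flat in her
# own action (slope eps), so ANY action is an eps-best response; but aggregate
# welfare depends steeply on which action is chosen.

eps = 0.01          # an agent forgoes at most this much by not optimizing

def private_payoff(x):
    # individual payoff: best at x = 0, but the loss from choosing any
    # x in [0, 1] is at most eps -- so every x is an eps-best response
    return -eps * x

def welfare(x):
    # aggregate welfare is steep in x: it is NOT within eps of its optimum
    return 100.0 * (1.0 - x)

# x = 0 is the exact optimum; x = 1 is also an eps-best response
for x in (0.0, 1.0):
    print(f"x={x}: private loss={-private_payoff(x):.3f}, welfare={welfare(x):.1f}")

# private losses differ by only eps = 0.01, yet welfare drops from 100 to 0
```

The smoothness that fails is exactly this: “within epsilon of individually optimal” puts no bound at all on how far the outcome is from the optimum outcome.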
Yes, I just said that economic theory is based on assuming something which is known, as a matter of mathematics, to be false. The reason is simple: without the demonstrably false assumption that approximately true assumptions imply approximately true conclusions, economic theory would lead to no conclusions. The results of theories would be mere hypotheses, to be rejected if they didn’t fit the data.
Economists absolutely claim that this is how they use theory. This is absolutely false. Standard models have implications which are rejected by the data and yet they remain standard models.
OK, the one and only efficient markets hypothesis.
I’d say that, in a standard general equilibrium model, informational efficiency does imply allocational efficiency. So to the extent that one accepts such models as guides, one should believe that, for practical purposes, informational efficiency implies allocational efficiency.
The problem is that approximate informational efficiency does not imply approximate allocational efficiency.
This is a well known theoretical result. When the assumption of informational efficiency is used to draw conclusions (e.g. of allocational efficiency) it is necessary to assume that you can never beat the market. Predictions about the present absolutely require the assumption that the market can’t be beaten at 10:00 AM EST on March 12, AD 1 million. This hypothesis obviously can’t be tested now.
Anomalies in risk-adjusted returns on the order of 1% per year can’t be detected; we can’t be sure of exactly how to adjust for risk. However, they can make the difference between allocative efficiency and gross inefficiency.
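A back-of-envelope calculation (illustrative numbers only) of why an undetectable anomaly of roughly this size can still matter for allocation: it compounds over the horizons relevant to real resource decisions.

```python
# Back-of-envelope (illustrative numbers): a 1%/year return anomaly --
# too small to detect against noisy risk adjustment -- compounds into a
# large cumulative error over the horizons that matter for allocation.

anomaly = 0.01   # 1% per year, roughly the undetectable size discussed above

for years in (10, 20, 30):
    error = (1 + anomaly) ** years - 1
    print(f"after {years} years the cumulative mispricing is {error:.0%}")

# e.g. after 30 years: about 35% -- plenty to misdirect construction or careers
```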
For policy makers there is a huge difference between “markets are approximately informationally efficient” and “markets are informationally efficient.” The second claim (plus standard false assumptions) implies that markets are allocatively efficient. The first implies nothing about allocative efficiency.
What does the phrase “a reasonable approximation” mean? Does it mean A) that rejection of the claim is subtle and controversial, or B) that rejection of every implication of the claim is subtle?
I think standard usage would be B): a reasonable approximation is one that doesn’t lead us astray. In economics the phrase is used in sense A; that is, only some implications of the claim can be used to test the claim because … because we said so.
“There is a general reliance on a sort of smoothness assumption, so that a model based on approximately true assumptions must have approximately true implications.
“It is absolutely known and positively proven that this idea is totally false. The result that it is totally false is old and very well established (google “epsilon equilibrium”). People who study the implications of the assumptions must know this. Yet the idea that there is some general smoothness property in the mapping from assumptions to conclusions absolutely dominates economic theory.”
Sigh!
“Economists absolutely claim that this is how they use theory. This is absolutely false. Standard models have implications which are rejected by the data and yet they remain standard models.”
Well, let me defend that. (On philosophical, not economic, grounds.) It is all well and good to be a falsificationist. However, to be atheoretical, even if that is attractive, is not an option. Theories compete. Data are equivocal.
As a practical matter, I have made money betting against the EMH. But if I were an economist, well, . . . Faute de mieux, on couche avec sa femme. (For lack of anything better, one sleeps with one’s wife.)
My old econ professor, bless his heart, when backed into a corner would often end up saying, “Well, it’s complex.” After a while, I began to feel I should just get him a T-shirt with that phrase emblazoned thereon.
I tend to see economics like currents within a river, and of course they are complex. But quite a lot of the movements are amenable to kitchen-table sense, in my experience. This is complex, but it ain’t quantum mechanics.
When employment taxes and benefits are mandated by law, of course business will try to do an end run by redefining their staff as consultants, by hiring under the table, or even by flat-out cheating employees too poor to have the leverage to complain. Just as flood waters push to infiltrate and undermine the sandbags, so some elements of the business world will try to alter or circumvent the features of the riverbed they find inconvenient.
Of course, one person’s kitchen-table sense is the other’s teabag rhetoric, I guess the difference being the willingness to hold the rough predictions up to comparison with the real world and revise or scrap them as needed.
Noni
Four approximately true assumptions, each say 85% true, combine to 0.85^4 ≈ 0.52. Errors do not remain constant; they get magnified when combined.
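The arithmetic above, spelled out (assuming the four errors are independent and combine multiplicatively):

```python
# Four independent assumptions, each "85% true", combine multiplicatively
# into a model that is right only about half the time.

per_assumption = 0.85
combined = per_assumption ** 4
print(f"{combined:.2f}")   # ~0.52, as stated in the comment
```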
This is among the most intellectually engaging posts here in months. Just lovely.
Travis,
Errors may compound, and will when they are not random; random errors will tend to cancel. But I don’t think the point here mostly has to do with multiplying. It’s more subtle and more fun than that. The nature of the divergence of an approximation in a model from reality can make the implications of the model simply untrue, rather than poor approximations. That’s a really good thing to keep in mind. It is also a great thing for kids looking for dissertation material. Having, for any given model, a catalog of implications that are simply and strictly not true, a catalog of implications that are at best ambiguous, and a catalog of implications that are reasonable approximations would be a nice heuristic to have. The world mostly runs on heuristics, after all.
Kharris,
Here is a thought experiment. Start with a tower foundation 15 degrees off level. Now go up 16 feet and let the angle adjust randomly (by up to 15 degrees); iterate 5 times. Do you really think that at 80 feet randomness cancels out its own effects? Do you think that once the randomness was evenly distributed around 360 degrees at the nth iteration my structure would be stable?
The kind of randomness you are talking about is the law of large numbers. So if I have a one-foot ruler that is randomly up to 15% too short or too long, then the closer the length of the board I want to cut approaches infinity, the closer my cut will be to the board I want. That is fine as long as either the accuracy I need is low or the board length I want is infinite, as the two are inversely related.
My point was that most economic models involve several inference steps like “approximately informationally efficient implies approximately allocatively efficient.” Put four of those in one model and they are more like the tower thought experiment above.
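The two halves of this thought experiment can be simulated; the following is a rough Monte Carlo sketch with parameters taken loosely from the comment (the exact geometry is a hypothetical simplification).

```python
# Monte Carlo sketch (hypothetical parameters) of the two cases in the
# comment: errors that COMPOUND (the tilting tower) versus errors that
# AVERAGE OUT (the mismeasuring ruler).

import math
import random

random.seed(0)

# Tower: start 15 degrees off vertical, then at each of 5 sixteen-foot
# stages the tilt drifts by a random amount in [-15, +15] degrees.
# Randomness does not cancel the initial error; horizontal displacement
# keeps accumulating.
def tower_offset(stages=5, section=16.0):
    tilt = 15.0
    x = 0.0
    for _ in range(stages):
        x += section * math.sin(math.radians(tilt))
        tilt += random.uniform(-15.0, 15.0)
    return x

# Ruler: each one-foot measurement is off by a uniform error in [-15%, +15%].
# Over many cuts the per-foot errors average toward zero (law of large numbers).
def ruler_error_per_foot(feet):
    total = sum(1.0 + random.uniform(-0.15, 0.15) for _ in range(feet))
    return abs(total - feet) / feet

print(f"tower: about {tower_offset():.1f} ft off center at 80 ft")
print(f"ruler, 10 ft:     {ruler_error_per_foot(10):.3f} ft/ft average error")
print(f"ruler, 10,000 ft: {ruler_error_per_foot(10_000):.4f} ft/ft average error")
```

The ruler’s average error shrinks as the board gets longer; the tower’s displacement does not shrink with more stages, because each stage inherits the accumulated tilt.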
Fascinating stuff.
So am I right in concluding that the final step in a typical economist’s use of a formal model is nothing more than an argument by analogy, a decidedly invalid form of inference?
“There is a general reliance on a sort of smoothness assumption, so that a model based on approximately true assumptions must have approximately true implications. “
Engineering would be very easy if this were true.
A common practice is that if you are using approximately true assumptions (for reasons of tractability) you do not indulge in long chains of reasoning. Such long chains are almost guaranteed to be false. Instead one moves step by step, at each step testing the veracity of the model’s implications against the data.
Could you expand this post? Add some examples.
Another place where I feel economics is lacking is in its obsession with efficiency.
What about robustness? There is an obvious tradeoff between efficiency and robustness. Robustness requires redundancy and degeneracy, which efficiency abhors. The internet protocols, for example, are highly robust but they hardly epitomize efficiency.
Travis,
First, I don’t see that “most economic models” necessarily pile up misapproximations in the way you suggest. I’m willing to see a demonstration over a number of models, but there is no reason to think your architectural analogy is apt. I am aware that the point I made had to do with the whole large numbers thingee, but that was in direct answer to your first effort at making your point. I’m pretty sure it was a reasonable answer to your first point.
All of which misses a good bit of the point of the initial post. My impression is that the fallacy behind arguing that the implications of a hypothesis must be approximately true if the hypothesis is approximately true is not just a question of the data gradually wandering away from where you want them over a series of iterations. The problem is that in some cases, the implications drawn from a hypothesis that is approximately true may be strictly false. There are cases in which the premise must be strictly true in order for the implication to be even approximately true. If the premise isn’t strictly true, the implication will be utterly wrong. It will be far worse than something that is merely outside an arbitrarily chosen level of statistical significance, more than wrong by an order of magnitude.
Arguments about regulation come to mind: the notion that market solutions will tend to produce the best outcome in terms of resource use and welfare. The conditions necessary to allow a logical demonstration of that conclusion are quite strict. They tend not to hold. If, as the initial post argues, approximate truth in the initial hypothesis does not imply approximate truth in the implications, we may have put things in the wrong order. The “second best” solution may be the first best. Market solutions may be suboptimal.
I think it’s becoming more and more absurd not to give Eugene Fama the Nobel prize. Thus, I think he’ll win it next year or the year after. And I’m willing to bet $1 that he will win.
On the efficient markets hypothesis: the market only becomes efficient because of the actions of people looking for inefficiency and the opportunity to set up arbitrages to exploit it. But individuals would not bother looking for inefficiency if they never found any; they could do better flipping burgers at McDonalds. Since markets only get made more efficient by arbitrage-seeking activity, and people will only seek arbitrage if there is some inefficiency to exploit, markets need to be at least a little bit inefficient for this mechanism to work. So I think you can pretty much rule out the strong form of the efficient markets hypothesis.
Of course the real funny part is to think that government could come anywhere close to making markets as efficient as the mechanisms that exist in the free market. In the market those seeking arbitrage are rewarded with money, in government after 40 years of service they might give you a going away party and a gold plated pen.
Kharris,
“I am aware that the point I made had to do with the whole large numbers thingee, but that was in direct answer to your first effort at making your point. I’m pretty sure it was a reasonable answer to your first point.”
I thought I clarified why it was not. Ok forget that. Take the model demonstrating the superiority of free trade. It has any number of initial premises which are totally false in terms of reality: total autarky, full employment equilibrium, perfect information, zero transportation costs; and some premises that are approximately true: capital is mobile although not perfectly so, labour not really; etc., etc. These deviations or errors in approximations do not cancel each other out. They may in fact be compounding in the real world of policy.
Yet liberal economists support free trade as a policy because they believe the conclusion, although derived from a melange of better and worse premises (as measured against the real world), to be generally true. Apparently comparative advantage is the law, and everything else is turbulent gases that may affect the actual trajectory but not the direction.
You further write:
“The conditions necessary to allow a logical demonstration of that conclusion are quite strict. They tend not to hold. If, as the initial post argues, approximate truth in the initial hypothesis does not imply approximate truth in the implications, we may have put things in the wrong order.”
The two substantive sentences in the above quote are not the same, and I think that indicates where we have been cross-talking. “Conditions being so strict as to allow a logical demonstration of the conclusions derived therefrom” is not the same as stating “approximate truth in the initial hypothesis does not imply approximate truth in the implications.” The initial conditions simply lead to the conclusion. Where the error comes in is in insisting that the initial conditions are in fact a good (approximately true) description of the initial conditions in the real world. As long as it stays on paper as a model it is just fine. It is when it gets transposed onto the real world, where the strictness of the conditions is at best approximately true, that errors get compounded.
The second sentence is a different matter. It says approximately true conditions may not (often do not) lead to approximately true implications. Or: from approximately true initial conditions, approximately true implications cannot be deduced. This would be further compounded should said model make it off the page into policy.
Robert,
Isn’t this why economists rely upon the concept of equilibrium? It’s based on faith in the idea that approximately true propositions will converge towards truth. It isn’t necessary that the economy ever actually achieve equilibrium, only that it tend towards it. In the world of econometrics you could think of random walks tied together by some abstract cointegrating vector.
Could it be that the market is entirely inefficient in the short term (that is, prices are set almost entirely by emotion) and becomes efficient gradually over time?
That would explain why P/E10 levels predict long-term returns while not voiding the idea that the market must set prices properly at some point.
Is there any reason to believe that this is not so (other than that it hurts our pride to believe that millions of us are acting from emotion rather than reason when we make investing decisions)?
Rob