Zombie Rationality

In this excellent post, John Quiggin somewhat understates his claim.

The key passage:

“the same fundamental confusion I pointed out last time, trying to claim that rationality assumptions are both important and unfalsifiable.”

and later, at greater length:

“Williamson wants to have his cake and eat it. Most of the time he wants to help himself to the strong implications of rationality as represented in standard micro texts, and to demand that macro be built on this basis. But, when this model is challenged on empirical grounds, he retreats to a concept of rationality that is tautologically true. This is a classic example of John H’s “two-step of terrific triviality”.”

My problem is that, while the concept of rationality is indeed tautologically true, Quiggin is reluctant to stick to that claim. Instead he flirts with qualifications involving consistency or regularity:

“a brief, purely mathematical point. Any consistent pattern of choice among objects (of any kind) that we can observe, can be represented as optimization, that is, as the maximization of a function.”

So far the truth, but not the whole truth, since the claim is true without the qualifier “consistent.” The reason is simple: if there is no limit on the class of functions to be maximized, then calendar time can be an argument of the function. In other words, if I would otherwise have to assume that the function of X being maximized changes over time, I can just interpret it as one function of time and X.
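A minimal sketch of this construction (the menu and choice sequence are hypothetical, chosen to be deliberately inconsistent): define U(x, t) to pay one util when x is the choice actually made at date t and zero otherwise. The observed sequence then maximizes U at every date, no matter how erratic it looks.

```python
# Rationalize ANY observed sequence of choices with a time-indexed
# utility function U(x, t): at each date t, the function simply rewards
# exactly the choice that was in fact made then.

observed = ["apple", "banana", "apple", "cherry"]  # arbitrary, inconsistent choices
menu = {"apple", "banana", "cherry"}

def utility(x, t):
    """One util for the choice actually made at time t, zero for anything else."""
    return 1 if x == observed[t] else 0

# At every date, the observed choice attains the maximum of U(., t) on the menu.
rationalized = all(
    utility(observed[t], t) == max(utility(x, t) for x in menu)
    for t in range(len(observed))
)
print(rationalized)  # True
```

The point is not that such a utility function is interesting, only that it always exists once time is allowed as an argument.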

I beat a dead Buridan’s ass after the jump.

Quiggin knows this. He writes, “Even choices that are inconsistent in various ways can be represented by more general notions of optimization.” I’d go a bit further. That claim says that *some* apparently inconsistent choices can be represented as optimization; in fact, all choices can be.

The proof by contradiction is elementary. Assume that there is a non-empty set S of series of actions that is inconsistent with maximizing any utility function at all. Consider a hypothetical agent whose sole desire in life is to make things difficult for rational-choice modelers. The agent has a utility function which yields one util if the agent chooses a series of actions in S and zero otherwise. It is optimal for this agent to choose a series of actions in S. But S was defined as the set of series of actions which are not optimal for any utility function. This is a contradiction, so S is empty.
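The same construction in code (a sketch; the contrarian agent and the example series are hypothetical): take any series of actions whatsoever, build a utility function that pays one util for exactly that series, and the series is optimal for that function.

```python
# Proof-by-construction in miniature: take ANY series of actions and
# build a utility function for which that series is an optimum.

def make_contrarian_utility(target_series):
    """Utility over whole series of actions: one util for the target, zero otherwise."""
    def utility(series):
        return 1 if series == target_series else 0
    return utility

# A supposedly "irrational" series: preferences cycle a > b > c > a.
weird_series = ("choose a over b", "choose b over c", "choose c over a")
u = make_contrarian_utility(weird_series)

# Among any set of alternatives containing it, the weird series maximizes u.
alternatives = [
    weird_series,
    ("choose a over b", "choose a over c", "choose a over b"),
    ("do nothing",) * 3,
]
best = max(alternatives, key=u)
print(best == weird_series)  # True
```

So no series of actions can sit in the set S of the proof: a utility function rationalizing it can always be manufactured on demand.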

There is no series of actions which cannot be reconciled with utility maximization.

There can be no doubt that this claim is true. You have just read a rigorous proof.