# Diagnostic Expectations, Anchoring, and Actual Expectations

First, there is excessive reliance on diagnostic characteristics (what economists call diagnostic expectations). A classic example is the room with 90 lawyers and 10 engineers. Jim is quiet and hardworking and likes model trains. It is human nature to conclude he is an engineer (he fits the stereotype perfectly). However, the description does not in fact outweigh the 9-to-1 ratio. Another example: a man has red hair. How likely is it that he is Irish? (Not very, in the actual human population.) In each case there is some diagnostic information, and in each case we put too much weight on it compared to the ratio of the sizes of the two groups.
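To see how little the description should move the odds, here is the Bayes' rule arithmetic for the lawyers-and-engineers example. The two likelihoods (0.9 and 0.3) are illustrative assumptions, not figures from any study:

```python
# Bayes' rule for the 90-lawyers, 10-engineers room.
def posterior(prior, p_signal_given_h, p_signal_given_not_h):
    """P(hypothesis | signal) via Bayes' rule."""
    p_signal = prior * p_signal_given_h + (1 - prior) * p_signal_given_not_h
    return prior * p_signal_given_h / p_signal

prior_engineer = 10 / 100     # base rate: 10 engineers among 100 people
p_desc_if_engineer = 0.9      # assumed: the description fits most engineers
p_desc_if_lawyer = 0.3        # assumed: it still fits a fair number of lawyers

p = posterior(prior_engineer, p_desc_if_engineer, p_desc_if_lawyer)
print(round(p, 2))  # 0.25
```

Even with a description that is three times as likely for an engineer as for a lawyer, the 9-to-1 base rate keeps "lawyer" the better bet at 3-to-1; concluding "engineer" from the stereotype is exactly the over-weighting described above.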

One way to interpret this is that people put too much weight on new information and think a symptom is more diagnostic (useful) than it is. In the economics literature “diagnostic expectations” is used to mean over-reaction.

A different explanation (favored by Kahneman and Tversky) is that the problem with base frequencies is that they are boring, not that they are old. Frequencies are numbers, and numbers do not interest people or fit well in our minds. Diagnostic signals are words, which evoke images and conform to stereotypes. Here there is a simple ranking: numbers < words < stories < images (a picture is worth a thousand words) < video.

There is another, apparently opposite, Kahneman-Tversky discovery: anchoring. If people make an initial forecast and are then given new information, they under-react, so the revised forecast lies somewhere between the unbiased forecast and the old one. The amazing thing is that this occurs even when the first number, which is to be interpreted as a probability in percent, is evoked by saying "pick a random number between 0 and 100." At first this seemed to contradict the diagnostic heuristic (and by "at first" I mean a long, painful period for me). However, there is no contradiction if we use Kahneman and Tversky's story about the diagnostic: in the case of anchoring it is numbers versus numbers, with no stories, stereotypes, or images.
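The under-reaction described above is often modeled as a convex combination of the anchor and the unbiased estimate. The weight `w` below is a hypothetical under-reaction parameter, chosen purely for illustration:

```python
# Anchoring as partial adjustment: the reported forecast lands between
# the anchor (the old or random first number) and the unbiased estimate.
def anchored_forecast(anchor, unbiased, w=0.4):
    # w = 0 means full adjustment (no anchoring); w = 1 means no update at all.
    return w * anchor + (1 - w) * unbiased

# Anchor of 50 (say, a "random number between 0 and 100"), true answer 20:
print(anchored_forecast(50, 20))  # 32.0, pulled toward the anchor
```

Any `w` strictly between 0 and 1 reproduces the qualitative finding: the forecast is biased toward the anchor, even when the anchor carries no information at all.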

How does this happen? The forecasts used in the studies are typically forecasts of numbers (next quarter's average 91-day interest rate, next month's CPI, the percent of the vote won by Kamala Harris), and they are made using other numbers (polls, past values). By the story above, numbers against numbers should produce under-reaction. Why is there over-reaction?

Today, after some decades, I thought of something. The signals may start out as boring numbers, but they are rapidly joined by stories. Another human trait is coming up with causal explanations (instantly, automatically, and extremely unreliably). The new number is immediately joined by many people's explanations of why it shifted from the old number (partly human nature, and partly because producing such explanations is many people's job).

My current guess is that the over-reaction is an over-reaction to the many vivid interesting causal explanations and not to the initial boring number.

My current forecast is that I am demonstrably wrong and will soon understand why my instant, intuitive, and unreliable causal explanation is nonsense.