Wang and Silver on electoral projections
Sam Wang explains why he reports a 99% probability of an Obama win while fivethirtyeight.com reports only a 62.4% probability.
I learned a lot from his post due to my incredible ignorance. I go to www.fivethirtyeight.com often enough that Firefox proposes it first when I type www, but I had never bothered to read the description of the method used to calculate the probabilities.
In case others are as lazy as I am (unlikely) or have lives (likely), I will describe my ignorance below, after discussing issues of interest to the non-pathetically ignorant.
Today I’d like to outline the basic contrasts between this calculation and a popular resource, FiveThirtyEight.com. That site, run by Nate Silver, a sabermetrician, is a good compendium of information and commentary. However, both our goals and methods differ on several key points. The biggest difference is that this site provides a current snapshot of where polls are today, while he attempts a prediction. His approach also has a conceptual problem…
I think the conceptual problem is that Silver calculates probabilities from 10,000 simulations while Wang uses an analytic formula.
Silver’s approach is to carry out thousands of simulations, then tally the simulations. That method reflects the fantasy baseball tradition, in which individual outcomes are often of great interest. However, such an approach is intrinsically imprecise because it draws a finite number of times from the distribution of possible outcomes. The Meta-Analysis on this site calculates the probability distribution of all 2.3 quadrillion possible outcomes. This can be done rapidly by calculating the polynomial probability distribution, known to students as Pascal’s Triangle.
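The exact calculation Wang describes can be sketched in a few lines: treat each state as a coin flip with its own win probability, and convolve the states one at a time to get the full distribution of electoral-vote totals. The three-state race below is invented for illustration; the numbers are not from either site.

```python
def ev_distribution(states):
    """Exact distribution of a candidate's electoral-vote total,
    built by convolving states one at a time (the polynomial method)."""
    dist = {0: 1.0}  # dist[v] = probability of winning exactly v votes
    for ev, p_win in states:
        new = {}
        for v, prob in dist.items():
            new[v] = new.get(v, 0.0) + prob * (1.0 - p_win)    # lose the state
            new[v + ev] = new.get(v + ev, 0.0) + prob * p_win  # win the state
        dist = new
    return dist

# Hypothetical three-state race: (electoral votes, win probability).
toy = [(10, 0.9), (20, 0.5), (5, 0.3)]
dist = ev_distribution(toy)
p_majority = sum(p for v, p in dist.items() if v >= 18)  # majority of 35
```

With 50 states plus DC this loop enumerates the full probability distribution over every possible outcome in a fraction of a second, with no simulation noise at all.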
Wang claims that Poblano (AKA Nate Silver) should have obtained a normal distribution for electoral college votes. I don’t agree. This is only true if there is no correlation between shifts in support for Obama and McCain across different states. As usual, I argue using an extreme example. Assume no sampling error (each poll samples the whole population) and perfect correlation of changes in support across states. If this were true, then the ranking of states by Obama minus McCain would never change, and there would be only 50 different possible outcomes in the electoral college. That’s not a normal distribution. I think my argument is valid unless changes in support in different states are independent, and independence is a very implausible assumption. (Note that young Ezra, who is neither a statistician nor a political scientist, made this argument before I did.)
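The extreme case is easy to see in code. In this toy sketch (the margins and electoral-vote counts are invented), a perfectly correlated national swing moves every state's margin by the same amount, so the electoral-vote total is a step function of the swing and can take at most one more value than there are states — nothing like a normal distribution.

```python
import random

# Invented pre-swing margins (Obama minus McCain, in points) and electoral votes.
states = [(-5.0, 10), (-1.0, 20), (2.0, 15), (4.0, 8), (8.0, 12)]

def obama_ev(swing):
    # Perfect correlation: every state's margin shifts by the same swing.
    return sum(ev for margin, ev in states if margin + swing > 0)

# The ranking of states never changes, so at most len(states)+1 totals occur.
outcomes = {obama_ev(random.gauss(0.0, 3.0)) for _ in range(100_000)}
```

With five states, no matter how the national swing is distributed, only six electoral-vote totals are ever possible.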
Now Wang also argues that 10,000 simulations aren’t enough. I agree. I recently calculated something using 1,000,000 simulations for each of several different parameters (actually just 2 sample sizes). This was a distribution which I think I derived analytically. The millions of simulations were to check my reasoning, my algebra and, especially, my typing when writing the program which calculates the analytically exact distribution (the fact that I fail to reject the null that it is accurate with 1,000,000 simulations convinces me that I typed write for wunce).
The convention that simulations are repeated 10,000 times is a historical artifact of the slow-PC age. I would like to ask Silver how long his computer takes to run a simulation. I would guess that his simulations are quicker than some of mine, for which waiting for 1,000,000 repetitions was barely a nuisance.
Just to go back to my other obsession: I blame Microsoft. I don’t think people fully realise just how much faster cheap PCs have become, because Microsoft software is designed to run intolerably slowly on anything but the latest generation of computers. So computers take as long as ever to boot up, open a Word file, open Excel, or do all that stuff, even though they now take as long to do 1,000,000 simulations as they used to take to do 10,000.
I’d guess that 1,000,000 simulations wouldn’t change Poblano’s calculated probability much, and I would bet that he will run them and report the result.
OK, what I should have known already.
I knew that Poblano (AKA Nate Silver) uses old polls as well as the latest polls. His success during the primaries shows that true shifts in opinion were of limited importance. I did not know that he uses a weighted average with weights that decay exponentially, falling by half every 30 days (weight = 0.5^(age/30 days) * (other stuff)), and that in past elections this calculation predicts better than the others he tried. I see that just as I finally read the old FAQ (the link above), Silver wrote a new one (still doing 10,000 simulations).
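The half-life weighting is simple enough to sketch. Only the age term appears here; Silver's "other stuff" multipliers (pollster quality, sample size, and so on) are left out.

```python
def poll_weight(age_days, half_life_days=30.0):
    """Exponential-decay weight that halves every half_life_days."""
    return 0.5 ** (age_days / half_life_days)

# A fresh poll gets full weight; a month-old poll counts half as much.
weights = [poll_weight(d) for d in (0, 30, 60)]  # 1.0, 0.5, 0.25
```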
Worse, I didn’t even know that he considers the correlation of future changes in support for different candidates in different states.
It can reasonably be argued that I’m essentially double-counting the amount of variance by accounting for both state-specific and national movement. That is, some of the error in state-by-state polls is because of national movement, rather than anything specific within that state. However, I have chosen to account fully for both sources of error, because (i) this is the more conservative assumption, and (ii) I suspect that 2004, where voters divided into Bush and Kerry camps early, was inherently a more stable sort of election than 2008 is likely to be.
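Silver's point can be illustrated with a toy variance decomposition (the numbers are invented): if state-specific error and national movement were truly independent, their variances would simply add, so counting both in full when some state-poll error already reflects national movement overstates the total — which is the conservative direction.

```python
import math

# Invented standard deviations of the two error sources, in points.
sigma_state, sigma_national = 3.0, 2.0

# For independent sources of error, variances add.
sigma_total = math.sqrt(sigma_state**2 + sigma_national**2)
```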
I had assumed that he did something like what Wang did, so that the 67% was a snapshot, not a forecast. I am pleased and reassured.