Thursday, November 5, 2020

Polling Errors and the Rationality of Voting

All of the major models of the rationality of voting--Barnett; Edlin, Gelman, and Kaplan; and Brennan and Lomasky's version of the binomial model--say that the expected value of your vote (in terms of its propensity to affect outcomes, not its consumption value) depends on the probability of being decisive. In turn, all of them say that the probability of being decisive depends in part on the expected margin of victory or the degree of support the candidates have. Even on Barnett's view and on Edlin, Gelman, and Kaplan's, voting in California was a waste of time, because there was effectively no chance a vote would make a difference.
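To see how sharply decisiveness depends on expected support, here is a minimal numerical sketch of the binomial idea--my own illustration, not code from any of these papers. It computes the probability of an exact tie when each voter independently breaks for a candidate with probability p, using log-space arithmetic to avoid underflow:

```python
import math

def tie_probability(n_voters: int, p: float) -> float:
    """Probability of an exact tie when n_voters (assumed even) each vote
    for candidate A independently with probability p (binomial model).
    Computed via lgamma in log space so the result doesn't underflow."""
    n = n_voters // 2
    log_p = (math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1)
             + n * math.log(p) + n * math.log(1 - p))
    return math.exp(log_p)

# With 1,000,000 voters: a dead-even electorate gives roughly 1-in-1,250
# odds of a tie; a mere 51/49 electorate gives odds so small they are
# effectively zero -- which is why the expected margin matters so much.
print(tie_probability(1_000_000, 0.50))   # ~8e-4
print(tie_probability(1_000_000, 0.51))   # ~1e-90
```

The point of the toy numbers is just the cliff: shift expected support from 50/50 to 51/49 in a million-voter electorate and the chance of decisiveness drops by nearly ninety orders of magnitude. Everything therefore turns on how well the polls pin down that expected split.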

But it's becoming clear that something is really wrong with our polling information. People were worried after 2016, but then, just a week ago, we were told on FiveThirtyEight and elsewhere that the errors of the past would not be repeated, that shy Trump voters were a myth, and so on. Today, we see the election is far closer than most of the major polling agencies and forecasters predicted. Gelman says, "Don't kid yourself; the polls messed up."

So, here's the problem for all three models. To know whether it's rational to vote requires, on these models, knowing the odds. Knowing the odds requires applying the models, which in turn requires knowing the split, the margins, and/or the chances a particular candidate will win in a given state. If our polls are misleading, wrong, or systematically unreliable, then you don't know those things.

If Barnett's paper is right, then on the day of the election, given the prevailing polling data, it would have been rational for highly altruistic, well-informed people in maybe six or seven places to vote (and even then only if they voted the right way). But now, after the fact, it looks like the data were wrong, so maybe the number of places where voting was rational was higher. That doesn't vindicate voters in those states, though: if you buy a lottery ticket thinking the odds are 1 in 100 billion, but later discover you were wrong and the odds were more like 1 in 1 billion, you still made a bad choice given what you knew.
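The lottery point can be made concrete with hypothetical numbers--the ticket price and prize below are my own, chosen purely for illustration:

```python
# Hypothetical lottery: $2 ticket, $100 million prize (illustrative numbers).
ticket_price = 2.00
prize = 100_000_000

ev_at_believed_odds = prize * (1 / 100_000_000_000)  # odds of 1 in 100 billion
ev_at_actual_odds   = prize * (1 / 1_000_000_000)    # odds of 1 in 1 billion

# At the believed odds the expected payoff is a tenth of a cent; at the
# corrected odds it is a dime. Both are far below the $2 ticket price,
# and the quality of the decision is judged by the odds you believed
# when you bought the ticket, not the odds you learned afterward.
print(ev_at_believed_odds)  # 0.001
print(ev_at_actual_odds)    # 0.1
```

A hundredfold correction in the odds still leaves the purchase a losing bet, and in any case the buyer chose under the worse belief. The same goes for voters whose polling-based odds estimates turn out to have been off.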

This brings up an even deeper problem with these arguments, which I will get to tomorrow: Does the typical person have any idea what the expected utility is, what the odds are, or how to determine the odds? Can they read and understand the papers assessing the odds, or determine which expert to defer to in assessing them? Nope.