Your humble blogger is headed to the United Kingdom this week to give a few talks and generally escape the election and post-election frenzy. Blogging will be light. However, before departing for the land of scones and Devonshire cream, there's one last election-related issue that's worth some words.
As I briefly discussed a few weeks ago, there's a brewing conflict about how to read the polls for the U.S. presidential election. This has crystallized into some latent, not-so-latent, and pretty damn blatant hostility toward FiveThirtyEight's Nate Silver. Now, some interpret this as simply a part of a larger War on Numbers. As Brendan Nyhan notes, Silver's analysis lines up with all of the other analytic forecasters.
But let's try to be fair here. I think there are a couple of different criticisms going on here from different quarters of the public sphere, and it's worth evaluating them on their own terms.
The first and simplest one is Matt K. Lewis, who points out why conservatives aren't keen on Silver's analysis:
Silver comes out of the baseball statistics world, and his defenders like to cite sports and gambling analogies when defending him. But there is a key difference. If Silver says the Giants have only a 5 percent chance of winning the World Series again next year, it is highly unlikely that would impact the outcome of games. Umpires won’t begin making bad calls, the fans won’t stop attending games, etc.
But when the public sees that a prominent New York Times writer gives Barack Obama a 70 percent chance of winning, that can become a sort of self-fulfilling prophecy. It has consequences. It drives media coverage. It dries up donations. Whether Silver likes it or not, people do interpret his numbers as a “prediction.” They see this as election forecasting.
This sounds about right, in two ways. First, it does highlight the ways in which forecasters can actually affect the outcome. Second, it's actually a compliment to Silver and his peers, because it reflects the belief that their assessments carry weight with the money. One does wonder whether it would have been liberal operatives pushing back if the forecasters were unanimous that Romney was the favorite at this point.
The point is, however, that part of the criticism is simply raw politics. That's fine, and can therefore be dismissed pretty quickly.
The second critique is more substantive, and rests on the notion that the assumptions that pollsters and forecasters are using when they crunch their numbers are flawed. Dan McLaughlin at Red State offers up a decent version of this critique:
Nate Silver’s much-celebrated model is, like other poll averages, based simply on analyzing the toplines of public polls. This, more than any other factor, is where he and I part company....
My thesis, and that of a good many conservative skeptics of the 538 model, is that these internals are telling an entirely different story than some of the toplines: that Obama is getting clobbered with independent voters, traditionally the largest variable in any election and especially in a presidential election, where both sides will usually have sophisticated, well-funded turnout operations in the field. He’s on track to lose independents by double digits nationally, and the last three candidates to do that were Dukakis, Mondale and Carter in 1980. And he’s not balancing that with any particular crossover advantage (i.e., drawing more crossover Republican voters than Romney is drawing crossover Democratic voters). Similar trends are apparent throughout the state-by-state polls, not in every single poll but in enough of them to show a clear trend all over the battleground states.
If you averaged Obama’s standing in all the internals, you’d capture a profile of a candidate that looks an awful lot like a whole lot of people who have gone down to defeat in the past, and nearly nobody who has won. Under such circumstances, Obama can only win if the electorate features a historically decisive turnout advantage for Democrats – an advantage that none of the historically predictive turnout metrics are seeing, with the sole exception of the poll samples used by some (but not all) pollsters. Thus, Obama’s position in the toplines depends entirely on whether those pollsters are correctly sampling the partisan turnout....
Let me use an analogy from baseball statistics, which I think is appropriate here because it’s where both I and Nate Silver first learned to read statistics critically and first got an audience on the internet; in terms of their predictive power, poll toplines are like pitcher win-loss records or batter RBI.
Oh, snap. I've read enough sabermetrics to know a diss when I see it.
Now I don't think Silver and his ilk would agree with McLaughlin's reasoning -- see Nick Gourevitch for a useful counter. But I do think Silver agrees with McLaughlin on the source of their disagreement. As Silver's latest post title suggests: "For Romney to Win, State Polls Must Be Statistically Biased":
The pollsters are making a leap of faith that the 10 percent of voters they can get on the phone and get to agree to participate are representative of the entire population. The polling was largely quite accurate in 2004, 2008 and 2010, but there is no guarantee that this streak will continue. Most of the "house effects" that you see introduced in the polls — the tendency of certain polling firms to show results that are consistently more favorable for either the Democrat or the Republican — reflect the different assumptions that pollsters make about how to get a truly representative sample and how to separate out the people who will really vote from ones who say they will, but won’t.
But many of the pollsters are likely to make similar assumptions about how to measure the voter universe accurately. This introduces the possibility that most of the pollsters could err on one or another side — whether in Mr. Obama’s direction, or Mr. Romney’s. In a statistical sense, we would call this bias: that the polls are not taking an accurate sample of the voter population. If there is such a bias, furthermore, it is likely to be correlated across different states, especially if they are demographically similar. If either of the candidates beats his polls in Wisconsin, he is also likely to do so in Minnesota....
My argument... is this: we’ve about reached the point where if Mr. Romney wins, it can only be because the polls have been biased against him. Almost all of the chance that Mr. Romney has in the FiveThirtyEight forecast, about 16 percent to win the Electoral College, reflects this possibility.
Here we have a pretty simple and honest disagreement. Silver thinks the pollsters' models for what the electorate and turnout will look like are pretty accurate; McLaughlin doesn't. They agree that if Romney wins, it will be because practically all of the state polls are biased against him.
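For readers who want to see why correlated bias matters so much more than ordinary polling noise, here is a toy simulation -- emphatically not Silver's actual model. The state names, poll leads, and error sizes are all made-up illustrative assumptions; the only point is the mechanism Silver describes: if every pollster makes similar sampling assumptions, one shared error can shift every state at once.

```python
import random

# Hypothetical battleground states and the leader's poll margin (in points).
# These numbers are invented for illustration, not taken from any poll.
STATE_LEADS = {"Ohio": 2.0, "Wisconsin": 4.0, "Iowa": 2.0}
NOISE_SD = 3.0  # independent per-state polling error (std dev, points)
BIAS_SD = 3.0   # shared systematic bias hitting all states alike (std dev, points)

def upset_probability(correlated: bool, trials: int = 20000, seed: int = 1) -> float:
    """Estimate the chance the trailing candidate sweeps all three states."""
    rng = random.Random(seed)
    upsets = 0
    for _ in range(trials):
        # If errors are correlated, a single shared draw moves every state.
        bias = rng.gauss(0.0, BIAS_SD) if correlated else 0.0
        margins = [lead + bias + rng.gauss(0.0, NOISE_SD)
                   for lead in STATE_LEADS.values()]
        # An "upset" means the polls' leader loses everywhere at once.
        if all(m < 0 for m in margins):
            upsets += 1
    return upsets / trials

print(f"independent errors only: {upset_probability(correlated=False):.3f}")
print(f"with shared bias:        {upset_probability(correlated=True):.3f}")
```

With independent errors only, a sweep by the trailing candidate requires three separate unlucky draws and is rare; adding a shared bias term makes it an order of magnitude more likely, which is roughly the shape of the disagreement: McLaughlin believes such a shared bias exists, Silver thinks it is possible but improbable.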
The final critique is the one that fascinates me -- the notion that traditional pundits can look beyond the polls at more ineffable factors like "momentum" and "crowd sizes" and "closing arguments" and "energy" and "early voting" and other kinds of secret sauce to determine who will win. These guys rely on numbers but also the political instincts they've honed for decades as pundits. This is basically what Michael Barone has done, for example, in his prediction of a Romney blowout.
In some ways this mirrors the "scouts vs. stats" divide that ostensibly existed in baseball as Silver was developing PECOTA and Michael Lewis was writing Moneyball. And a lot of commentators are setting it up that way.
I'd tend to agree that this is the most bogus line of criticism... but a few things prevent me from rejecting this analysis entirely. First, there is the crazy possibility that pundits really do possess "local knowledge," as Hayek would put it, that forecasters lack. I'm not sure I really buy this hypothesis, but it's possible.
Second, as Silver himself observed in The Signal and the Noise, scouts get a bum rap. Over time, the evidence suggests that the scouts who worked at Baseball America actually outperformed the sabermetricians at Baseball Prospectus. As Silver acknowledges, just because something can't be quantified doesn't mean it's unimportant. Maybe pundits like Barone have picked up on these "intangibles." Or maybe they have an implicit theory of the election that turns out to be superior to what is, at this point, a strictly poll-driven model. To put it another way: polls at this point are merely the intervening variable between the causal factors that the pundits like to talk about (the economy, the candidate's narrative) and the outcome (the election).
To be honest, I doubt that any of this is true. But the great thing is that come Wednesday, we'll know which group is more right. And then let the taunting commence!!
Daniel W. Drezner is professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University.