Why polls don't work: after decades of gradual improvement, the science of predicting election outcomes has hit an accuracy crisis.

By Stephanie Slade

ON OCTOBER 7, 2015, the most famous brand in public opinion polling announced it was getting out of the horse-race survey business. Henceforth, Gallup would no longer poll Americans on whom they would vote for if the next election were held today.

"We believe to put our time and money and brainpower into understanding the issues and priorities is where we can most have an impact," Gallup Editor in Chief Frank Newport told Politico. Let other operations focus on predicting voter behavior, the implication went, we're going to dig deeper into what the public thinks about current events.

Still, Gallup's move, which followed an embarrassingly inaccurate performance by the company in the 2012 elections, reinforces the perception that something has gone badly wrong in polling and that even the most experienced players are at a loss about how to fix it. Heading into the 2016 primary season, news consumers are facing an onslaught of polls paired with a nagging suspicion that their findings can't be trusted. Over the last four years, pollsters' ability to make good predictions about Election Day has seemingly deteriorated before our eyes.

The day before the 2014 midterms, all the major forecasts declared Republicans likely to take back the Senate. The Princeton Election Consortium put the odds at 64 percent; The Washington Post, most bullish of all, put them at 98 percent. But the Cook Political Report considered all nine "competitive" seats to be tossups--too close to call. And very few thought it likely that Republicans would win in a landslide.

Conventional wisdom had it that the party would end up with 53 seats at most, and some commentators floated the possibility that even those numbers were biased in favor of the GOP. The week before the election, for example, HuffPollster noted that "polling in the 2006 and 2010 midterm elections and the 2012 presidential election all understated Democratic candidates. A similar systematic misfire in 2014 could reverse Republican leads in a small handful of states."

We soon learned that the polls were actually overstating Democratic support. The GOP ended up with 54 Senate seats. States that were expected to be extremely close calls, such as Kansas and Iowa, turned into runaways for the GOP. A couple of states that many were sure would stay blue--North Carolina, Louisiana--flipped to red. The pre-election surveys consistently underestimated how Republicans in competitive races would perform.

The following March, something similar happened in Israel. Both pre-election and exit polls called for a tight race, with the Likud Party, headed by Prime Minister Benjamin Netanyahu, and the Zionist Union Party, led by Isaac Herzog, in a virtual tie. Instead, Likud easily captured a plurality of the vote and picked up 12 seats in the Knesset.

The pattern repeated itself over the summer, this time in the United Kingdom, where the 2015 parliamentary election was roundly expected to produce a stalemate. A few polls gave the Conservative Party a slight lead, but not nearly enough of one to guarantee it would be part of the eventual governing coalition. You can imagine the surprise, then, when the Tories managed to grab 330 of the 650 seats--not just a plurality but an outright majority. The Labour and Liberal Democrat parties meanwhile lost constituencies the polls had predicted they would hold on to or take over.

And then there was Kentucky. This past November, the Republican gubernatorial candidate was Matt Bevin, a venture capitalist whom Mitch McConnell had trounced a year earlier in a Senate primary contest. As of mid-October, Bevin trailed his Democratic opponent, Jack Conway, by 7 points. By Halloween he'd narrowed the gap somewhat but was still expected to lose. At no point was Bevin ahead in The Huffington Post's polling average, and the site said the probability that Conway would beat him was 88 percent. Yet Bevin not only won, he won by a shocking 9-point margin. Pollsters once again had flubbed the call.
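How does a polling lead become a headline figure like that 88 percent? A minimal sketch, assuming (this is not HuffPost Pollster's published model) that the true Election Day margin is normally distributed around the polling average, with an error standard deviation that is purely an illustrative guess:

```python
# A minimal sketch of how a polling lead can be turned into a win
# probability. This is NOT HuffPost Pollster's actual model; the
# 6-point error standard deviation below is an assumed value chosen
# only for illustration.
from statistics import NormalDist

poll_lead_pts = 7.0  # Conway's mid-October lead over Bevin, per the article
error_sd_pts = 6.0   # assumed std. dev. of polling error, in points

# Model the true margin as Normal(poll_lead, error_sd) and ask how
# often it lands above zero (i.e., Conway still ahead on Election Day).
margin = NormalDist(mu=poll_lead_pts, sigma=error_sd_pts)
win_prob = 1 - margin.cdf(0)

print(f"P(Conway wins) = {win_prob:.0%}")  # about 88% under these assumptions
```

The exercise makes the weak link visible: a 7-point lead only translates into near-certainty if you trust the assumed error distribution, and the misses of 2014 and 2015 suggest that the errors were larger, and more systematically one-sided, than such models allowed.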

Why does this suddenly keep happening? The morning after the U.K. miss, the president of the British online polling outfit YouGov was asked just that. "What seems to have gone wrong," he answered less than satisfactorily, "is that people have said one thing and they did something else in the ballot box."

'To Lose in a Gallup and Win in a Walk'

Until recently, the story of polling seemed to be a tale of continual improvement over time. As technology advanced and our grasp of probability theory matured, the ability to predict the outcome of an election seemed destined to become ever more reliable. In 2012, the poll analyst Nate Silver correctly called the eventual presidential winner in all 50 states and the District of Columbia, besting his performance from four years earlier, when he got 49 states and D.C. right but mispredicted Indiana.

There have been major polling blunders, including the one that led to the ignominious "DEWEY DEFEATS TRUMAN" headline in 1948. But whenever the survey research community has gotten an election wrong, it has responded with redoubled efforts to figure out why and how to do better in the future. Historically, those efforts have been successful.

Until Gallup burst onto the scene, The Literary Digest was America's surveyor of record. The weekly newsmagazine had correctly predicted the outcome of the previous four presidential races using a wholly unscientific method: mailing out postcards asking people whom they planned to vote for. The exercise served double duty as both a subscription drive and an opinion poll.

In 1936, some 10 million such postcards were...
