The 2012 presidential election witnessed Democratic incumbent Barack Obama triumph over Republican challenger Mitt Romney by 332 to 206 in the Electoral College and by 51% to 47% in the popular vote. Turnout among eligible voters was 58.2%, a slight but noticeable decline from the 61.6% participation rate in 2008 (McDonald 2013). Obama captured a second term with fewer electoral votes and fewer popular votes than he had accrued in the historic 2008 election cycle but became only the third Democrat to win a majority of the popular vote more than once. Most polls conducted prior to the November 6 election predicted such a result, showing a lead for Obama.
Overall, polls showed support for Obama remained fairly stable (relative to support for Romney) over spring 2012 and into the summer and began, in mid-August, to climb steadily for about a month before dipping until the final 10 days or so of the campaign, when support strengthened anew (Panagopoulos 2013). Some analyses suggest events, including the conventions, presidential debates, and Hurricane Sandy, likely left lasting imprints on voter preferences, helping to explain the pattern of campaign dynamics observed over the course of the election cycle (Panagopoulos 2013). As Election Day approached, most reputable national polls showed a tight race between the two contenders, although most projected an Obama victory. Only a handful of polls suggested Romney would ultimately win, despite postmortem reports that internal polling for the Romney campaign had been much more optimistic even at this late stage (Silver 2012a).
In this report we help to assess accuracy and bias in the preelection polls conducted during the 2012 general election cycle. We summarize the accuracy of the final presidential, gubernatorial, and U.S. Senate preelection polls, conducted on both the state and national levels. We also place these findings in historical context.
National Presidential Preelection Polls in 2012
Polls were conducted nearly daily in the 2012 election cycle. A variety of different organizations conducted polls, ranging from newspapers and other media groups to think tanks, universities, and advocacy groups, as well as dedicated polling houses. Other polls were undoubtedly conducted by the parties and candidates themselves, but these were not made public. The poll aggregation website Pollster.com tracked 589 polls conducted between January 1, 2012, and Election Day (November 6). This made for an average of nearly two polls per day over this period, though most of these were concentrated toward the end of the cycle. In addition, thousands of state-level polls were conducted over the same period, asking not only about presidential preferences in each state, but also about gubernatorial and U.S. Senate contests.
We begin by assessing the accuracy of 21 final, national preelection polls for president conducted in the last week of the election cycle. We judge accuracy by three metrics: Mosteller et al.'s (1949) M3 and M5, and the A measure proposed by Martin, Traugott, and Kennedy (2005). M3 is the average absolute difference between the poll estimate and the final election result for each candidate. So, if a poll gave Romney 45% and Obama 50%, and the actual election result was 47% to 51%, then M3 would be 1.5 for this poll. M5 compares the polled margin between the two leading candidates to the eventual outcome margin between the same candidates and returns the absolute value of the difference between these margins. In the example above, M5 would be 1.
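The two Mosteller measures reduce to simple arithmetic on the candidate shares. A minimal sketch of the worked example above (the function names are ours, not from Mosteller et al. 1949):

```python
def mosteller_m3(poll, actual):
    """Average absolute difference between the poll estimate and the
    final result, taken across the candidates."""
    return sum(abs(p, ) if False else abs(p - a) for p, a in zip(poll, actual)) / len(poll)

def mosteller_m5(poll, actual):
    """Absolute difference between the polled margin and the actual
    margin between the two leading candidates."""
    return abs((poll[0] - poll[1]) - (actual[0] - actual[1]))

# Example from the text: a poll giving Romney 45% and Obama 50%,
# against an actual result of 47% to 51%.
poll, actual = (45, 50), (47, 51)
print(mosteller_m3(poll, actual))  # 1.5  -> average of |45-47| and |50-51|
print(mosteller_m5(poll, actual))  # 1    -> |(50-45) - (51-47)|, margins of 5 vs. 4
```

Note that M3 and M5 are unsigned, so they capture the size of polling error but not its partisan direction; that is the gap the A measure discussed next is designed to fill.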
An alternative method for assessing poll accuracy was developed by Martin, Traugott, and Kennedy (2005). Their measure of predictive accuracy (A) is based on the natural logarithm of the odds ratio of the outcome in a poll and the actual election outcome (see Martin, Traugott, and Kennedy 2005 for a complete description). Among several advantages associated with this measure is the ability to compare accuracy across elections and polling firms and to detect the direction of bias, because a signed statistic is produced (not an absolute value). A positive sign indicates a pro-Republican bias, while a negative sign indicates a pro-Democratic bias (Traugott 2005). (1) In the example above, A would be approximately -0.02.
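Under this definition, A is the natural log of the ratio of the poll's Republican-to-Democratic odds to the same odds in the official returns. A minimal sketch, applied to the worked example (the function name is ours; by this formula the example works out to roughly -0.02):

```python
from math import log

def predictive_accuracy(poll_rep, poll_dem, actual_rep, actual_dem):
    """A = ln[(poll R/D odds) / (actual R/D odds)].
    Positive values indicate a pro-Republican bias in the poll,
    negative values a pro-Democratic bias."""
    return log((poll_rep / poll_dem) / (actual_rep / actual_dem))

# Example from the text: Romney 45% / Obama 50% polled, 47% / 51% actual.
a = predictive_accuracy(45, 50, 47, 51)
print(round(a, 3))  # -0.024, i.e., a slight pro-Democratic tilt
```

Because A is a log odds ratio, a value of 0 denotes a perfectly calibrated poll, and values for different polls or election years can be averaged and compared directly, which is what makes the historical comparisons below possible.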
Table 1 presents the values of the three measures discussed above for each of the final, national 2012 polls we evaluate. The average value of Mosteller's Measure 3 is 1.72 for the 21 polls included in the analysis, while the average value of Mosteller's Measure 5 is 2.72. Table 2 helps to situate poll accuracy in 2012 in historical context by presenting summaries of Mosteller's Measures 3 and 5 for elections since 1956 (see Panagopoulos 2009; Traugott 2005). The evidence reveals that 2012 polls overall performed better than average on both Mosteller's Measure 3 (the average error for the 1956-2008 period was 1.9) and Mosteller's Measure 5 (the average error for the 1956-2008 period was 3.2). Still, both indicators imply 2012 polls were somewhat less accurate than polls conducted in the final days of the previous (2008) campaign.
Although there is no comparable time series of values for the measure of predictive accuracy developed by Martin, Traugott, and Kennedy (2005), the authors computed the average values of the statistic for 1948, 1996, 2000, and 2004. Martin, Traugott, and Kennedy (2005) report that the average value of A for final preelection polls conducted in 1996 was -0.0838, suggesting a slight Democratic bias that overestimated Clinton's margin over Dole. In 2000, polls overestimated Bush support and the average value of A was +0.0630. For the 2004 election, the...