Polls Underestimated Trump Again. Here's What We Know So Far.

Polling on the 2020 presidential election once again appears to have substantially underestimated President Donald Trump and other Republicans’ performance in key states, delivering another body blow to public trust in an industry still bearing the bruises of Trump’s unexpected 2016 victory.

It’s still too early to know the exact size of the error. Once all the votes are counted, national polls may prove to be off by only a few points, which would hardly be unusual: Between 1972 and 2016, national polls of the presidential race missed by an average of about 4 percentage points. But it’s clear that polling in multiple states this year again overestimated presidential rival Joe Biden’s lead, as well as Democrats’ chances in House and Senate races.

Those state errors appear to run in the same direction as they did in 2016, when polls suggested Democratic presidential candidate Hillary Clinton had an edge in Michigan, Pennsylvania and Wisconsin. That’s especially concerning given that several of the accepted contributing causes of the 2016 error ― a large number of late-deciding voters, a lack of late state polling to catch that movement and the failure of some surveys to balance the educational level of their samples ― don’t appear to be at play to the same extent this time. Additionally, the error this year appears to have occurred broadly (though not universally) across different categories of surveys, including polls conducted by phone and online, and both public polls and many private ones.

In Wisconsin, for instance, where Biden eked out a tight win, final polls showed Biden up by an average of roughly 7 or 8 points. In Texas, averages showed a tight race with Trump less than 2 points ahead, well below his actual margin of victory.
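
One way to see the size of such misses is to compute the signed error as the polled margin minus the certified margin. The short Python sketch below does exactly that, using illustrative placeholder figures rather than official results; under that convention, both errors come out positive, the same-direction pattern described above.

```python
# Minimal sketch: signed polling error, computed as polled margin minus actual margin.
# Margins are (Democratic share - Republican share) in percentage points.
# All figures below are illustrative placeholders, not certified results.

def signed_error(poll_margin: float, actual_margin: float) -> float:
    """Positive values mean polls overstated the Democratic candidate's margin."""
    return poll_margin - actual_margin

examples = {
    "Wisconsin (illustrative)": (7.5, 0.5),  # polls showed Biden up ~7-8; he won narrowly
    "Texas (illustrative)": (-1.7, -5.5),    # polls showed Trump up by <2; he won by more
}

for state, (poll, actual) in examples.items():
    print(f"{state}: error = {signed_error(poll, actual):+.1f} points")
```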

That’s been enough for critics to declare election polls useless or catastrophically broken, even as industry practitioners urge patience.

“The issue is how the polls collectively performed in describing the official results of the 2020 election,” the American Association for Public Opinion Research, an organization for survey researchers, said in a statement Thursday. “It will take weeks for election officials to carefully count all early, absentee, in-person and provisional ballots. As such, it is premature to make sweeping judgments on the polls’ overall performance before all the ballots are counted. Patience is necessary.” (Full disclosure: The author of this story is a member of AAPOR and is serving on a task force to review the performance of this year’s polling.)

It’s also still far too early to pinpoint causes of the error, or even to say definitively how much of it stems from underlying problems left unfixed after 2016 versus a different set of causes. The outcome is likely to stoke the theory that polls are missing a subset of “shy Trump voters.” The most specific iteration of that theory ― that a vote for Trump is considered socially undesirable, so some of his supporters lie to pollsters about it ― still doesn’t square entirely with what we know about the results. Trump was hardly alone in outperforming his poll numbers: Other GOP politicians did so too, including Sen. Susan Collins in Maine, whom few analysts would consider an unusually controversial candidate.

But a related idea stands out: nonresponse bias, in which certain groups simply don’t answer the phone in a way that isn’t corrected by weighting on demographic characteristics such as age and educational level. Under one theory, polls are specifically missing a group of “low social trust” voters who have little faith in the country’s civic institutions and who break heavily toward the GOP.

This would be a serious cause for concern. Despite public perceptions to the contrary, the accuracy of both election polling and non-election telephone polling has remained relatively stable over the past few decades, even as response rates to those polls have plunged. Once responses were weighted on key demographic categories, those who did answer resembled those who didn’t closely enough to compensate (with a few exceptions, perhaps most notably levels of civic and social engagement). If that fragile balance has been disrupted, pollsters will have a lot of work to do to find out whether there’s any way to compensate.
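
As a rough illustration of the demographic weighting described above, the sketch below post-stratifies a sample to a census-style education benchmark. The shares are invented for illustration, and real pollsters typically rake across several variables at once.

```python
# Minimal sketch of post-stratification weighting on a single variable (education).
# All shares below are invented, illustrative numbers.

sample_share = {"college": 0.55, "non_college": 0.45}       # who actually answered
population_target = {"college": 0.40, "non_college": 0.60}  # e.g., a census benchmark

# Every respondent in a group gets weight = target share / sample share.
weights = {g: population_target[g] / sample_share[g] for g in sample_share}
print(weights)  # non-college answers count for more, college answers for less

# The catch described above: weighting only helps if respondents within each
# group resemble the non-respondents in that group. If low-trust voters in a
# group never pick up the phone at all, no demographic weight can recover them.
```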

“The continuing and growing problem of nonresponse is something that we have to look at quite closely,” Don Levy, the director of the Siena College Research Institute, told Politico.

There are other possibilities, too. This year’s coronavirus pandemic reshaped the way the nation voted, with unprecedented numbers voting early or by absentee mail. It also introduced a newfound partisan divide in voting methods, with Democrats far likelier to vote by mail. These shifts may have made it more difficult for pollsters to correctly gauge turnout ― a problem that could resemble a reversal of 2012, when turnout models underestimated Barack Obama’s support.

“Outside of a few historically vote-by-mail states, pollsters do not have any experience modeling the vote in this type of election,” Courtney Kennedy, Pew Research Center’s director of survey research, told USA Today. “The number of unknowns this year are greater than in any election in recent memory.”

The results of this election cycle are likely to raise fresh questions about how the media should report on horserace polling and the predictions based on those polls. In 2018, for instance, The Associated Press announced new guidelines stating that “poll results that seek to preview the outcome of an election must never be the lead, headline or single subject of any story.” In announcing the decision, the news service cited the results of the 2016 election as a reminder that polls are “unquestionably a piece of the story, but never the whole story.”

Beyond the issue of data quality, the aftermath of the 2020 election also points to an ongoing problem of communicating the uncertainty inherent in election polls, something many pollsters, polling analysts and forecasters have tried to stress over the past four years.

“Could this happen again?” Mark Blumenthal, a pollster and former HuffPost editor who helped to analyze the 2016 polling miss, said a year later. “Hopefully, if it happens again, we will all do a better job warning everyone that it’s possible.”

Not all polls, though, are election polls. One immediate question about this year’s miss is its implication for other kinds of public opinion polls, such as those designed to suss out Americans’ beliefs and behavior on issues from anti-racism protests to the coronavirus pandemic. As we wrote in 2017: “While horse-race surveys may command the bulk of attention, polls that gauge the national mood on issues of policy serve at least as important a role in the democratic process. Writing off their results as intrinsically unreliable would potentially leave much of the nation voiceless in the years between elections.”

These types of polls have a few significant things working to their advantage. Unlike election polls, they don’t have to worry about turnout modeling: Instead of trying to define a population that doesn’t yet exist (the group of people who’ll end up voting in a specific election), they can rely on benchmarks like census data. And polling errors that would loom large in the context of a closely fought election may be less consequential when it comes to interpreting public opinion. A 2-percentage-point difference in an election survey can change the projected outcome, but a 2-point difference in opinion on an issue usually matters far less, especially if the results are correctly characterized as estimates.
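
To make the “estimates” framing concrete, here is a small sketch of the textbook margin-of-error calculation for a simple random sample. Real polls carry additional design effects from weighting, so this understates their true uncertainty.

```python
import math

# Minimal sketch: 95% margin of error for a proportion from a simple random sample.

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(0.5, 1000):.1f} points")  # about 3.1 points for n = 1,000
# A 2-point gap between candidates sits well inside that band; a 2-point shift
# in opinion on an issue rarely changes the substantive conclusion.
```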

Still, if some or all of this year’s election error is caused by missing or misinterpreting the voices of specific blocs of the public ― and if the relative accuracy of the national-level results comes in part from averaging out biases in opposing directions ― that presents a troubling sign for non-election polls’ ability to fully capture public sentiment.
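
A toy calculation with invented numbers shows how opposite-signed errors can cancel in a topline average even though each underlying group is badly mis-measured.

```python
# Toy illustration of offsetting biases: two subgroups are each mis-measured by
# 6 points in opposite directions, yet the simple average looks spot-on.
# All numbers are invented for illustration.

subgroup_errors = {"group_a": +6.0, "group_b": -6.0}  # bias in percentage points

net_error = sum(subgroup_errors.values()) / len(subgroup_errors)
print(net_error)  # 0.0 -- an accurate-looking topline that heard neither group correctly
```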

“Stop treating pollsters as oracles and treat polls as imperfect estimates of public attitudes,” political scientist Rich Clark, who has worked as a pollster in Vermont, tweeted on Wednesday. “In a democracy, measuring public attitudes is important, and, unfortunately, polls are still the best means for doing so.”
