How to read polls like a pro

Illustration of a hand holding a magnifying glass up to a polling chart, showing detail of arrows pointing up and down.

Before the 1936 U.S. presidential election, an infamous Literary Digest poll predicted Republican Alf Landon would trounce incumbent Democratic President Franklin Delano Roosevelt. A market researcher named George Gallup realized that the poll, taken via a mail-in survey by the magazine's disproportionately wealthy readership, would be wildly off. Gallup's own surveys showed that Roosevelt would win. These insights not only made him the country's foremost soothsayer, they also launched Gallup, Inc. and the entire field of modern polling and survey research. "His most important innovation," said G. Elliott Morris in his history of polling, "was to 'sample' a smaller number of poll respondents" who would be representative of the country's demographic makeup as a whole.

Today, a dizzying array of survey research firms use this basic framework to ask people all over the world how they feel about hot-button political and social issues, as well as who they plan to vote for in upcoming elections. That can be overwhelming for ordinary people struggling to make sense of often-conflicting signals, and there is widespread skepticism about the accuracy of polls after they missed the mark in 2016 and 2020. But with a little knowledge about how to interpret polls, what to expect from them, and especially how not to over-interpret any individual survey, polling can illuminate rather than confuse.

Navigating controversies

In recent years, pollsters have struggled with issues that can affect the accuracy of their work. For starters, people are much less likely to answer the phone (any kind of phone) for a survey researcher than they once were: Pew Research Center data show that telephone response rates fell from 36% in 1997 to 6% in 2019, forcing researchers to rely on multiple methods, including online survey panels, to get a representative sample. Some organizations, like The New York Times, have begun identifying respondents who are unlikely to answer a poll and making deliberate attempts to reach them, to correct for "non-response bias" that could disproportionately affect one candidate's supporters.

When they don't get a sample representative of the population they are trying to survey, researchers give more "weight" to respondents from under-represented demographic subgroups, especially where those groups have significant partisan splits. For example, college degree holders are more likely to answer the phone or open a text message from a pollster. Americans with a college degree are not only a minority of the population, they also now vote for the Democratic Party in disproportionate numbers. So researchers "weight" by education so that non-college-educated respondents count for as much as their actual share of the population. Other organizations adjust for different variables. "There is immense pressure to get it right," said former YouGov president Peter Kellner. He also warned that too much adjustment can turn an accurate poll into an inaccurate one, as happened with polls of the U.K.'s 2017 general election.
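To see how this works in practice, here is a minimal sketch of cell-based weighting in Python. The respondents, the two-group setup, and the target population shares are all hypothetical simplifications; real pollsters weight on many variables at once, often with techniques like raking.

```python
# Minimal sketch of demographic weighting (illustrative; not any firm's
# actual method). Each respondent gets weight = population share / sample
# share for their group, so over-represented groups count for less.

respondents = [
    {"education": "college", "candidate": "A"},
    {"education": "college", "candidate": "A"},
    {"education": "college", "candidate": "B"},
    {"education": "no_college", "candidate": "B"},
]  # college grads are 75% of this tiny sample

# Hypothetical population targets (most U.S. adults lack a degree).
population_share = {"college": 0.38, "no_college": 0.62}

sample_share = {}
for r in respondents:
    group = r["education"]
    sample_share[group] = sample_share.get(group, 0.0) + 1 / len(respondents)

for r in respondents:
    r["weight"] = population_share[r["education"]] / sample_share[r["education"]]

def weighted_support(candidate):
    total = sum(r["weight"] for r in respondents)
    return sum(r["weight"] for r in respondents if r["candidate"] == candidate) / total

print(f"Candidate A: raw 50%, weighted {weighted_support('A'):.0%}")  # ~25%
```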

Many, though not all, survey researchers publish their "cross-tabs" along with the main results of the survey. Cross-tabs drill down to see how, for example, 18-29-year-old Latino respondents are planning to vote in a particular election. Polls whose cross-tabs seem unusual or unlikely often draw skepticism from critics, whom survey researchers deride as "cross-tab truthers." "By design, breaking out survey results into subgroups results in smaller sample sizes, and thus larger margins of sampling error," wrote Adam Carlson for Split Ticket. The cross-tabs on any individual poll can be all over the place. Be wary of polls whose cross-tabs show enormous shifts in subgroup support within a single election cycle, he suggested, but don't throw them out either.
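Carlson's point about sampling error can be made concrete with the standard margin-of-error formula for a proportion, roughly 1.96 times the square root of p(1 − p)/n at 95% confidence. A quick sketch (the subgroup size of 80 is a hypothetical illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error for a proportion p among n respondents,
    assuming simple random sampling (real polls' design effects make it larger)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person topline vs. a hypothetical cross-tab of 80 respondents.
for n in (1000, 80):
    print(f"n={n:>4}: +/- {margin_of_error(n):.1%}")
# n=1000: +/- 3.1%
# n=  80: +/- 11.0%
```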

Informed poll consumption

Being an informed consumer of polls hasn't changed much in recent years. Polling experts recommend not overreacting to any single survey, because it could be an outlier, an approach popularized by former FiveThirtyEight data journalist Nate Silver. One good illustration of this principle is the October 2020 Washington Post poll that gave Joe Biden a 17-point lead over then-President Donald Trump in Wisconsin, a state Biden went on to win by less than one percentage point. Many pollsters, especially those affiliated with major media organizations like CNN, breathlessly report their own surveys without noting the context of competing research. "Horse race numbers should not be taken as gospel by pundits or politicians," said Vanderbilt University political scientist Joshua Clinton.

Instead of trusting any one survey, pay attention to averages of recent polls, such as those compiled by RealClearPolitics or FiveThirtyEight (whose average also adjusts for variables, like pollster quality, that many researchers believe are important). Some of these aggregators exclude surveys produced by "partisan pollsters," survey research firms that work for a specific political party or appear to have a clear agenda. At minimum, be skeptical of research produced by or for one political party, as well as of "internal polling" leaked by campaigns.
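As a rough illustration of why averaging helps, here is a minimal sketch of a simple and a recency-weighted polling average. The poll numbers and dates are made up, and real aggregators use more sophisticated adjustments:

```python
from datetime import date

# Hypothetical recent polls: (field-period end date, candidate's support in %).
polls = [
    (date(2024, 10, 1), 48.0),
    (date(2024, 10, 3), 51.0),
    (date(2024, 10, 6), 46.0),  # a possible outlier
    (date(2024, 10, 8), 50.0),
]

# Simple unweighted mean of recent polls.
simple_avg = sum(v for _, v in polls) / len(polls)

# Recency-weighted mean: newer polls count more. (Real aggregators also
# adjust for sample size, pollster quality, and house effects.)
latest = max(d for d, _ in polls)
weights = [1 / (1 + (latest - d).days) for d, _ in polls]
weighted_avg = sum(w * v for w, (_, v) in zip(weights, polls)) / sum(weights)

print(f"simple: {simple_avg:.1f}%, recency-weighted: {weighted_avg:.1f}%")
# simple: 48.8%, recency-weighted: 49.1%
```

Note how the single 46-point outlier moves the average by only about a point, rather than defining the story on its own.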

Historically, polls have tended to become more accurate as Election Day approaches. Don't put too much stock in the polls until "after both parties have had their conventions and we're headed into September," said National Journal's Natalie Jackson. That doesn't mean surveys taken further out are useless: in U.S. presidential elections this century, they have been fairly close to the final result.

Political scientists also insist that many events journalists treat as consequential don't actually have a major impact on election outcomes, including the televised debates between presidential candidates and both parties' summer nominating conventions. Candidates have typically received a modest "bounce," an increase in their polling numbers, in the days following their official nomination. And the candidate deemed the winner of a debate by viewers and journalists may see an improvement in their poll standing, but it tends to be ephemeral. "You can accurately predict where the race will stand after the debates," said political scientist John Sides in 2012, "by knowing the state of the race before the debates."

Be careful, as well, to distinguish between polling averages and election forecasts. The latter combine polls with non-polling variables to make an informed prediction about election results, typically expressed as a probability well in advance of Election Day. Don't assume that a forecast giving one candidate a 75% chance of winning is the last word on the matter: anyone who has played poker knows that an event with a 25% probability happens all the time. Forecasts can also skew behavior. "Quite possibly," said historian Jill Lepore, some voters looked at forecasts confidently predicting a Clinton victory in 2016 "and decided not to vote."
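A quick simulation makes the poker point concrete: across many hypothetical races with the same odds, the side given a 25% chance still wins about a quarter of the time.

```python
import random

random.seed(42)

# A 75% forecast means the underdog should still win about 1 race in 4
# across many hypothetical elections with identical odds.
trials = 100_000
upsets = sum(random.random() > 0.75 for _ in range(trials))
print(f"underdog wins in {upsets / trials:.1%} of simulations")  # ~25%
```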

Despite these controversies, polls remain a reasonably accurate predictor of election outcomes: FiveThirtyEight's Nathaniel Rakich found that polls correctly called 78% of races between 1998 and 2022. That's about as much certainty as anyone can hope for in politics.