Americans' increasing reliance on electronic sources of information and methods of communication led some pollsters and media outlets to embrace online polling this year, and its fairly accurate performance in predicting the results of the election is reigniting a longstanding feud within the survey-research field.
To be sure, the methodology remains controversial — National Journal is among the news organizations that do not normally report on the results of Internet polls — but many Web pollsters compiled strong records this year. The performance of Internet polls, along with the profound and rapid changes in the way Americans communicate, is forcing pollsters and news organizations to confront anew the viability of surveying voters via their computers.
At a recent event organized by the Washington chapter of the American Association for Public Opinion Research, Doyle McManus, a columnist for the Los Angeles Times and that paper's former D.C. bureau chief, neatly summed up the mainstream political media's view of Internet polling.
"I was brought up in a very strict Irish-Catholic household, where we never looked at online polls," McManus quipped.
But all that may be changing. In the days after the presidential election, Nate Silver of The New York Times wrote that "some of the most accurate firms were those that conducted their polls online," singling out Internet pollsters like Google Consumer Surveys, Ipsos Public Affairs, Angus Reid Public Opinion, and YouGov for producing surveys whose results jibed with election returns. Online polls appeared to outperform traditional phone polling modestly, and they were also better than automated telephone polling, which performed poorly on the whole.
The majority of media outlets continue to spurn Internet polling, however. Despite Silver's findings, The New York Times' polling standards do not permit reporting of online polls because participants for these surveys are not randomly selected, and roughly one in five Americans lack Internet access. "In order to be worthy of publication in The Times, a survey must be representative, that is, based on a random sample of respondents," according to the Times' policy, which was shared with National Journal. "Any survey that relies on the ability and/or availability of respondents to access the Web and choose whether to participate is not representative and therefore not reliable," the policy reads.
Distrust of Web polling is not limited to legacy media outlets. At the recent postelection AAPOR gathering, a confrontational subtext was on clear display when a questioner from the Internet polling website SurveyMonkey asked a biting question of the panel of traditional pollsters.
Online polls differ from telephone surveys in some basic ways. As the nomenclature suggests, respondents to Internet polls complete the surveys using their computer, tablet, or smartphone. But within the field of online polling, there are important differences in the way respondents are selected, known as sampling.
Traditional phone polling holds as one of its founding principles the idea of probability sampling: every member of the universe being surveyed has a known, nonzero chance of being selected to participate. When nearly every American lived in a household with a landline phone, it was easy to design a sampling frame for a basic political survey. Now, a dual-frame sample, combining landline and cell phones, is considered by phone pollsters to be closest to a true probability sample.
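The distinction is easiest to see in miniature. In the sketch below (the frame and its size are invented for illustration), every number in the frame has the same known chance of ending up in the sample — the defining property that opt-in panels lack:

```python
import random

# Hypothetical sampling frame: 10,000 phone numbers (illustration only)
frame = [f"number_{i}" for i in range(10_000)]

n = 500  # target sample size
sample = random.sample(frame, n)  # simple random sample, without replacement

# Under probability sampling, each unit's inclusion probability is known
# in advance — here it is identical for every number in the frame.
inclusion_prob = n / len(frame)
print(f"each unit's chance of selection: {inclusion_prob:.2%}")
```

Because that probability is known, a pollster can quantify sampling error; in an opt-in panel, no comparable probability exists, which is the core of the methodological objection.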
Internet polls, in most cases, use nonprobability sampling. They exclude households without Internet access; these tend to be older and lower-income Americans. Most online polls are also completed by people who opt to participate. Some participants sign up to complete online polls on websites that offer prizes such as gift cards to chain restaurants and movie theaters. Others are responding to ads placed on other websites that may or may not be related to the poll's subject, a technique known as river sampling.
Some online polls do use more-traditional sampling methods. GfK Knowledge Networks assembles its panels using address-based sampling, and those potential respondents who lack Internet access but want to participate are provided with the hardware and Internet connection necessary for them to do so. This adds considerable expense for the pollster, but it helps to correct for what is known as noncoverage bias — the effect of not including certain segments of the population in the poll.
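Pollsters who cannot afford to close coverage gaps directly typically correct for them after the fact by weighting: respondents from underrepresented groups count more heavily so that the weighted sample matches known population benchmarks. A minimal sketch of that idea, with invented shares and support figures:

```python
# Hypothetical illustration: an online sample skews young relative to benchmarks.
sample_share = {"18-44": 0.70, "45+": 0.30}      # shares observed in the sample (invented)
population_share = {"18-44": 0.45, "45+": 0.55}  # known population benchmarks (invented)

# Each respondent in a group gets weight = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Estimating candidate support before and after weighting (invented group figures)
support = {"18-44": 0.60, "45+": 0.48}
raw = sum(sample_share[g] * support[g] for g in support)
weighted = sum(population_share[g] * support[g] for g in support)
print(f"raw estimate: {raw:.3f}, weighted estimate: {weighted:.3f}")
```

Weighting can repair a skewed sample only along the variables the pollster thinks to adjust for — which is why, as YouGov's Rivers notes later in this piece, choosing "the correct set of variables" is the crux of the method.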
Breaks in the Dam
Some news organizations did begin to experiment with Internet polling during the 2012 cycle. CBS News dipped a toe in the water, partnering with YouGov for the CBS News/YouGov Electoral Vote Tracker, which in September showed President Obama with 332 electoral votes, identical to the number of votes he is likely to receive when the Electoral College convenes next month. But CBS also conducted national live-caller telephone polls, and it partnered with Quinnipiac University on battleground-state polls (most of which were cosponsored by The New York Times).
Reuters more enthusiastically embraced the new technology. It joined forces with Ipsos to launch national and state-level surveys for Reuters' American Mosaic project, which comprised tens of thousands of interviews this year, in addition to tracking polls leading up to Election Day. Previously, Ipsos had conducted more traditional, live-caller phone polls for the wire service, but as phone polling has become more expensive, Web-based surveys stand out as a more affordable alternative.
"We had a client in Reuters come to us at this time last year and say we want to do everything online because we want more data," said Julia Clark, vice president of Ipsos.
"One of the things we're seeing in the marketplace is that because of this new world of information, there's this expectation that you'll always have data," added Clifford Young, Ipsos' senior vice president and managing director. "There's this expectation of continuous data streams" by clients like Reuters that can more effectively be met by Internet surveys, he said.
Young told National Journal that he thinks his firm's surveys "did very well overall."
"I would say that it was a learning experience over the course of the year," said Young. "I think we got better and more confident in our estimate as we neared the election. I would feel comfortable now doing another electoral cycle online."
YouGov's Doug Rivers also says his organization "did very well" this election cycle. The organization's work consisted of Web surveys in more than half the states, testing election matchups for presidential, Senate, and gubernatorial races. Responding to the charge that online polls can't represent the overall electorate because respondents opt in, rather than being randomly selected, Rivers told National Journal: "We've been doing this for a number of years, and the record's pretty good."
"It does depend on having the correct set of variables to select people from," continued Rivers. "We've changed that over the years, and we've improved it."
Rivers added that YouGov's opt-in surveys are not necessarily more self-selecting than telephone polls, considering the sharp drop in response rates over the past 15 years. "The difference is, do you hide behind the claim that it's a probability-based sample" with response rates that low, Rivers asked.
Many in more traditional survey research remain skeptical of online polling. Democratic pollster Mark Mellman said at the AAPOR event that despite the industry's strong overall performance, "there were also online polls in the particular states that were far off."
"Has it proven itself in my mind?" Mellman asked rhetorically. "Not yet."
Gary Langer, who produces polls for ABC News, is a leading evangelist for traditional, live-caller polling methods. In a phone interview with National Journal about 2012 election polling, Langer pilloried online polling, describing its sampling frame as "a club of people who signed up to take point-and-click surveys for points redeemable for cash and gifts."
Noting that the final ABC News/Washington Post tracking poll of the presidential campaign accurately predicted the election result — the poll showed Obama leading Romney, 50 percent to 47 percent — Langer called it "a silly place to claim credit" for pollsters of all stripes, including those he sees as using less rigorous methodologies. "Horse-race accuracy in preelection polls is probably one of the worst ways to measure accuracy in polling overall," Langer said.
Langer's work, he said, focuses more on how Americans feel about the issues and the candidates on which the election is focused. "To boil that all down to the silly horse-race," he said, "it really devalues our enterprise and dumbs down our coverage in it."
A 'Silver' Bullet
Online pollsters hope 2012 proves to be a watershed election for their methods, and that the performance of Internet polls, along with other changes in political coverage, will lead to greater acceptance of their work.
"The New York Times, and the AP, and The Washington Post, and people like that: Will they report these polls?" asked YouGov's Rivers. "I don't know. I think it's silly [that they don't]. They should report the data that's out there and properly characterize it."
Ipsos' Young said that he is optimistic that the influence of predictive models on political news coverage — from the Times' Silver, to the HuffPost Pollster model, to the more simplistic Real Clear Politics Average — will also lead pollsters, media, and consumers to be more accepting of Web polling. He and Clark referred National Journal to an Ipsos white paper comparing their efforts to account for the nonprobability sampling of online polls through Bayesian modeling with the decisions Silver and others make in assembling their election-prediction models.
"I think our profession has moved a lot. I think that we've shifted a lot. [But] I think ultimately, there's still a lot of hesitancy," said Young. "There is a distinct link between testing new approaches and the perspective that Nate Silver and other models are bringing."