Exit polls: Can you trust them in 2012?

With Election Day here, the airwaves and blogosphere are fraught with last-minute guesswork and freestyle analysis: Is Mitt Romney's push to attract the female vote paying off in Ohio? Are Catholic voters departing from their 2008 patterns? Did President Barack Obama's Lena Dunham ad make a difference to 18- to 24-year-olds in the Midwest?

In these last moments of an exceedingly tight race—not to mention the first presidential race with Twitter in full swing—likely no stone will go unturned.

A good deal of this frenzy is created by exit polling, a staple of election coverage since the 1980s. But in recent years, the authority of exit polls has come into question after several high-profile failed predictions. This year, improved methodology and a narrower focus on battleground states may reverse that trend and revitalize exit polls as a humbler and more reliable resource.

The basic groundwork has always remained the same: As voters spill out of the booths, they meet pollsters from the National Election Pool (NEP), a consortium made up of the TV networks, the Associated Press, and Edison Media Research (the designers of the survey), dedicated to making projections for each state and to breaking down the voting patterns of individual demographics. The NEP's interviewers briefly quiz fresh voters on their choice of candidates, their demographic information, and their opinions on a range of politically sensitive issues. Later on, the data is handed over to the networks to enrich their election coverage.

"The biggest value of exit polls is the analytic value," Joe Lenski, the executive vice president of Edison, told Yahoo News. "This year we're going to have even more data to supply that in the national questionnaire."

Ideally, the returns on this questionnaire add a lot to the discussion: On election night, exit polls can provide a way to navigate the subtler trends in voting as Americans decide in real time. And if done right, they provide a peek into the likely margin of victory of a given state.

When they first appeared on the scene, this predictive capacity appeared to be a kind of clairvoyance: NBC scored a major victory on Nov. 4, 1980, when the magic of exit polling allowed it to broadcast Reagan's victory hours before any of the final votes were in. And John Edwards shuffled out of the Democratic primaries in 2004 not after certain electoral defeat—the polls had closed in only three states when he called John Kerry in congratulations—but merely after consulting some discouraging exit polling data.

Do exit polls really warrant this much deference? Though NBC's Reagan declaration cemented the polls' presence in election coverage, they haven't duplicated that kind of predictive feat in recent years. In fact, their reputation has run the other way: The NEP itself was formed as a successor to the collapsed Voter News Service, which, due to a computer crash, failed to offer up any exit polling for the 2002 midterms after it had already offered a bit too much information in 2000, flipping back and forth between Gore and Bush.

These days, the networks no longer report a state's exit poll data before the polls there have closed: not only may the predictions be off the mark (as in the 2004 general election, when they wrongly indicated that the key states of Ohio and Florida would go for Kerry), but an early call could also send would-be voters home on the belief that their state has already been decided.

Jack Shafer, the media columnist for Slate, thinks that the question isn't whether one can trust exit polls, but what we trust them to do: Rather than expect them to offer up unwavering forecasts, he says it's best to treat them as one tool in a larger kit. "If you have reliable precinct data from past elections, conduct a rigorous exit poll and build a good model, you can accurately predict winners," Shafer says. "But there are so many provisos that it doesn't take a lot to make a prediction go south now and again."

Perhaps this is why Edison Research has retooled its survey this year, drastically reducing data gathering in 19 "non-battleground" states so that more resources can be poured into the remaining 31, providing sturdier figures from the more contested states.

"We'll be increasing sample sizes from 300 voting precincts to 350," Lenski says. To account for the growing pool of early voters, the company is not only increasing telephone sampling from 20 percent to 30 percent but is also calling cellphones for the first time, as well as less frequently used landlines.

All of this costs more, and so while those 19 "safe" states won't be getting the standard diagnostic breakdown, Edison hopes that the increased sample size and representation in the rest of the country will ensure that a more robust set of data ends up on the desks of the networks. (The states that will be left out are Alaska, Arkansas, Delaware, Georgia, Hawaii, Idaho, Kentucky, Louisiana, Nebraska, North Dakota, Oklahoma, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, West Virginia and Wyoming, and the District of Columbia.)

Terry Madonna, the director of the College Poll at Franklin and Marshall's Center for Politics and Public Affairs, is happy to see a more representative sample coming in. Having conducted exit polls himself on a smaller scale in Lancaster, Pa., where the college is located, he's aware of the incredibly sensitive nature of some of the demographics and issue categories that can be blown up in the countdown to the election.

"College students, 18 to 24, [they've been] difficult to discuss," Madonna notes. "They're so small—maybe 30 or 40 people in a given precinct sample—but significant. And I get very nervous when we make real extrapolations about those types of groups on TV."

Another example group would be undecided voters: Their lack of commitment is significant for the election night narrative, but wringing an across-the-board explanation from the data is a tenuous task.

"You know, when 3 to 5 percent of people sampled turn out undecided, we have to be careful about that," Madonna said. "The motivations behind declaring oneself 'undecided' are so endless that while one reason in particular may be true for those 40 or 50 people who've been sampled, it may not be so representative—the variables are huge."

The gap between Edison's raw data and the multiple narratives on TV begins to widen after around 4 p.m. on Election Day, when the researchers from Edison present the data to the grand council of network representatives—from ABC, CBS, CNN, Fox News and NBC—briefing them on the nuances of the statistics.

What happens next is up to the broadcasters, and how they choose to weight, interpret and contextualize the data (the AP wire reports only the actual counted votes). This explains why some of the networks predict winners independently of one another.

Once again, this year it won't only be the broadcasters who apply what Shafer calls "the special sauce" to Edison's raw ingredients; when bloggers and the Twitterati get a hold of the original numbers, expect the diversification of 11th-hour prophecies to intensify.