On Super Tuesday, the Signal got 9 out of 10 election predictions correct, following up on an Oscar night where we got 3 of 4 award picks right. Not bad, but we really wish we didn't have those two annoying blemishes, right?

In fact, we're delighted! When the markets are functioning properly, we *expect* to get a certain percentage wrong every time. (The exact amount depends on how certain the probabilities are for each election on a given day; on Oscar and election nights we actually expected to get about one wrong each.) If we had swept all 14 predictions, it would have meant our confidence was too low, or what statisticians call "miscalibrated." It sounds strange, but it's actually possible to get too many predictions right.

Think of it like rolling a six-sided Monopoly die and predicting that you'll roll a one, two, three, four or five. On any given roll, you can say with high confidence—83 percent—that you will succeed. But do it 100 times, and you would expect to roll a six about 17 times. If you got your one-through-five all 100 times, there's probably something wrong with your die.
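The arithmetic behind the die example is easy to check directly. A minimal Python sketch, using only the numbers from the example above:

```python
# Probability of rolling anything but a six on a fair die
p_success = 5 / 6  # about 0.833, the "83 percent" confidence above

rolls = 100
expected_sixes = rolls * (1 - p_success)   # about 17 sixes expected
prob_perfect_sweep = p_success ** rolls    # chance of 100 straight successes

print(f"Expected sixes in {rolls} rolls: {expected_sixes:.1f}")
print(f"Chance of succeeding all {rolls} times: {prob_perfect_sweep:.2e}")
```

The chance of a perfect 100-for-100 sweep works out to roughly one in eighty million—which is exactly why a sweep is evidence of a loaded die, not of skill.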

At the Signal, we don't call outcomes flat-out. Like a weatherperson, we assign each outcome a likelihood, or percentage chance of happening. Our predictions range from high confidence—we felt Mitt Romney had a 98.7 percent chance to win the Virginia primary, for example—to more up-in-the-air estimates—we thought Viola Davis had a 58.1 percent chance of taking home the Best Actress Oscar. (Meryl Streep, whom we had given a 39.9 percent chance, won instead.)

Over time, we should get about half of our 50-percent predictions right, three-quarters of our 75-percent predictions right, and so forth. This is an important difference between what we're trying to do at the Signal and the usual practice of picking outcomes outright. The only way to judge us is over time, measured over the course of many predictions. For any single prediction by itself, you can't actually say that it was right or wrong.
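That calibration test can be written down concretely. Here's a sketch in Python—the prediction records below are invented for illustration, not the Signal's actual data—that buckets predictions by their stated probability and compares each bucket's stated confidence to its empirical hit rate:

```python
from collections import defaultdict

# Hypothetical records: (stated probability, whether the prediction came true)
predictions = [
    (0.75, True), (0.75, True), (0.75, True), (0.75, False),
    (0.50, True), (0.50, False),
    (0.90, True), (0.90, True),
]

# Group outcomes by the probability we stated at prediction time
buckets = defaultdict(list)
for stated_p, outcome in predictions:
    buckets[stated_p].append(outcome)

# A well-calibrated forecaster's hit rate in each bucket should sit
# close to the stated probability, given enough predictions.
for stated_p, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {stated_p:.0%}: got {hit_rate:.0%} right "
          f"({len(outcomes)} predictions)")
```

With only a handful of predictions per bucket the hit rates will be noisy; the comparison only becomes meaningful as the track record grows, which is the point of the paragraph above.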

Perhaps this sounds defensive—"if we're wrong, it's because of the system!"—but that's the way probability works. At the same time, you absolutely *should* judge us over the course of weeks and months as our record of predictions grows, and we'll be sure to publish that track record for all to see.

As an example, let's look at our Super Tuesday predictions. This table shows what we were projecting as of 2:52 p.m. ET on Monday, March 5.

| State | Predicted winner | Probability |
| --- | --- | --- |
| Virginia | Romney | 98.7% |
| Massachusetts | Romney | 99.6% |
| Vermont | Romney | 97.5% |
| Idaho | Romney | 95.7% |
| Georgia | Gingrich | 98.0% |
| Ohio | Romney | 82.6% |
| Oklahoma | Santorum | 88.6% |
| North Dakota | Romney | 72.1% |
| Alaska | Romney | 82.6% |
| Tennessee | Santorum | 57.1% |
| **Total** | | **872.5%** |
| **Expected number correct** | | **8.725** |

*Sources*: Betfair and Intrade

To compute how many predictions we expected to get right, just add up all the percentages and then divide by 100 to convert from a percentage to a number. In this case, we expected to get 8.725 out of 10 correct.
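That computation, spelled out in Python with the probabilities from the Super Tuesday table above:

```python
# Probabilities from the Super Tuesday table, in percent
probs = [98.7, 99.6, 97.5, 95.7, 98.0, 82.6, 88.6, 72.1, 82.6, 57.1]

# Add them up, then divide by 100 to convert percentages to a count
expected_correct = sum(probs) / 100

print(f"Expected number correct: {expected_correct} out of {len(probs)}")
```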

David Pennock is a Principal Research Scientist at Yahoo! Research. Follow him on twitter @pennockd.
