If today’s tech headlines follow the pattern of the rest of this month’s news, we’ll be able to celebrate the sixth anniversary of Data Privacy Day with a report that yet another company has seen its customers’ information exposed through some massive, preventable data breach.
Fortunately, strong federal laws ensure that we know about these incidents in time to protect ourselves—and ensure that retailers, banks and other organizations can share secrets about threats and vulnerabilities.
Oh, wait, that last sentence is from the 2024 version of this post. This, however, is the 2014 edition, and so it must report that no such nationwide legal umbrella covers you and your various digits. You often have to hope that companies’ own self-interests lead them to do the right thing.
Most of the time, nothing too bad happens if they don’t. Your credit-card firm refunds phony charges and sends you a replacement, the free credit monitoring offered to make up for the breach doesn’t reveal subsequent mischief, and life goes on.
But for an unlucky few, identity theft becomes an expensive and prolonged problem. Third parties can suffer too: A community theater in Redlands, Calif., saw its site used to test stolen credit-card numbers from across the country, then ate almost $30,000 in fees levied by its payment processor after it refunded the bogus transactions.
And we all wind up paying a little extra when poor security in credit-card terminals, subscriber databases and Web servers—none of which you have any power to fix on your own—increases the cost of doing business everywhere.
Washington’s rules on the subject largely consist of privacy laws governing the health-care and finance industries. That leaves out a mall’s worth of companies—Target, Neiman Marcus and Michaels, to name the last few big cases of retailers that had their networks hacked.
Firms that do an especially bad job safeguarding people’s data risk an investigation and fine by the Federal Trade Commission. But one of the highest-profile FTC targets, Wyndham Hotels, is challenging the commission’s authority in court—and judging by the last legal challenge to a regulator acting on less-than-clear authority, it could win.
In effect, Washington has outsourced this work to the states. “Most kind of run-of-the-mill data-breach reporting obligations are driven by state regulations,” said Jim McCullagh, a partner with Perkins Coie in Seattle and co-chair of its privacy and security practice group.
But the problem with state laws is that there are so many of them. Forty-six states have passed laws with varying definitions of “personal information” and requirements for disclosure (the holdouts being Alabama, Kentucky, New Mexico and credit-card hub South Dakota), and companies doing business in more than one must figure out how to comply with all of them.
That’s not an easy exercise. The usual course, McCullagh explained, is that “the state that has the most stringent standard is the one that controls”—which leads to distant firms having to familiarize themselves with California or Massachusetts laws.
In theory, it shouldn’t take the threat of legal action to get companies to prevent breaches and notify customers promptly if they happen. Breaches are an expensive habit, at an estimated average cost of $204 per customer back in 2009, and customers can flee if they think a company’s careless with their data.
But not all companies are so easy to fire—try canning your cable company if no alternative runs to your house.
And companies can drag their feet on disclosing a breach in the first place. Sometimes that’s for good reasons, such as not hindering a law-enforcement investigation. Sometimes you have a case like WellPoint.
How to keep customer safety at bay
In 2011, that Indianapolis-based insurance company waited five months to notify customers that poor security at its site had exposed their information—then let Indiana’s attorney general find out about the breach from a newspaper report. This sloppiness earned the company a fine from the state… of $100,000.
The Target debacle has renewed congressional interest in the topic, in the form of bills such as the Personal Data Privacy and Security Act of 2014, introduced by Sen. Patrick Leahy (D-VT), and the Data Security Act of 2014, put forth by Sens. Roy Blunt (R-MO) and Thomas Carper (D-DE).
It shouldn’t be that hard to adopt the best practices of the states and set a national standard—a rare opportunity for Washington to lighten the regulatory burden on many companies without cutting customers loose in the process.
But don’t count on Congress switching into high gear and quickly resolving its differences. Wrote National Consumers League public-policy vice president John Breyault, an advocate for a nationwide disclosure law: “I hesitate to say that we’re any closer to resolving those disagreements today than we were before the Target breach.”
Last year showed how thoroughly Congress could screw this up. After months of effort to craft a cyber-security bill that would encourage companies to share confidential security details with each other and with the government, the House passed a bill called the Cyber Intelligence Sharing and Protection Act.
CISPA had a number of issues, but the biggest one was the in-retrospect laughable provision that gave companies blanket immunity to share information about threats and vulnerabilities with the National Security Agency—as in, the agency that was actively subverting security standards at the time.
(Such prominent members as Sen. Dianne Feinstein [D-CA] still see the NSA’s overreach as not a bug but a feature, but that’s an issue for another column.)
CISPA stalled out in the Senate, but the problem of companies not comparing notes about vulnerabilities remains. As my former Washington Post colleague Brian Krebs, the foremost reporter on this subject, observed in an e-mail two weeks ago about Target’s troubles, “It’s a month out from the breach, and we still don’t have official details on what happened. That’s inexcusable in my mind, and very short-sighted.”
But it does fit in with the political history of this issue.