Questions about what social networks mean for personal privacy and security have been brought to a head by research at Carnegie Mellon University showing that Facebook has essentially become a worldwide photo identification database. Paired with related research, this work raises the prospect that good, bad and ugly actors will be able to identify a face in a crowd and know sensitive personal information about that person. These developments mean that we no longer have to worry just about what Facebook, Google+, LinkedIn and other social sites do with our data; we have to worry about what they enable others to do, too. And it now seems that others will be able to do a lot.

As reported in privacy and security outlets such as Kashmir Hill's Forbes blog and Paul Roberts's ThreatPost, and demonstrated at last week's Black Hat conference, the CMU researchers relied on just Facebook's public profile information and off-the-shelf facial recognition software. Yet they were able to match Facebook users with their pictures on otherwise anonymous Match.com accounts. The researchers also had significant success taking pictures of experimental subjects and matching them to their Facebook profiles. Drawing upon previous research, they were also relatively successful at guessing individuals' Social Security numbers. From there, of course, it is just an automated click to your Google profile, LinkedIn work history, credit report, and many other slices of private information. (See the FAQ to the research here.)

(Note that this research is independent of the controversy around Facebook's own facial recognition technology, which it recently unveiled to automatically tag users in pictures, and which authorities in Germany think might violate that country's privacy laws. The CMU researchers didn't even have to log into Facebook to get to the photos there; they accessed profile information through Facebook's search engine APIs.)
The researchers have declined to make their matching system widely available. But, now that they've shown that it is possible, the capabilities will no doubt be replicated. And you don't have to stretch too far to imagine intrusive and unacceptable scenarios in retail settings, advertising venues, secured environments, social spots, protest rallies, dimly lit streets, and so on. There'll be an app for that.
There has been, of course, much debate about privacy. Facebook, to my mind, has tarnished its brand through its insensitivity, as evidenced by its repeated expansion of what information is public by default. (It also made the auto-tagging feature in question in Germany on by default.) Google hasn't won many accolades, either. Eric Schmidt, when he was CEO of Google, famously said that "Google policy is to get right up to the creepy line and not cross it." He was talking about chip implants, but his statement has been widely interpreted as describing Google's general approach to balancing its interests against users' privacy. Google has scored points recently, however, with the privacy controls in Google+.

Now, the CMU research raises the stakes. It demonstrates that privacy issues go beyond what the social networking giants themselves do; the question now includes what they've enabled others to do. The research also neuters the conventional retort, "You can opt out," because the results are based on public information. As of now, the only way to opt out is not to participate (or to stay home).

The problem will surely get worse. The technology will get better, and the information that feeds it will grow. Should Facebook, Google and others just pursue their own purposes and let the chips fall where they may? Will they step up to address the larger risks to which their customers are being exposed? Can they?
* * *
I'd love to hear what you think. Please share your comments below. Follow me on Twitter @ChunkaMui.