Facebook Admits It Gave Netflix, Spotify Special Access to User Messages

In a cold twist, a push by Silicon Valley’s internet giants to curb undesirable speech could potentially undercut their legal immunity. Legislation introduced last month would revamp Section 230 of the Communications Decency Act — the legal shield that safeguards tech stalwarts such as Facebook, Twitter, Google and YouTube against being sued for what their users post. Sen. Josh Hawley, R-Missouri, one of tech’s loudest critics in Washington, D.C., proposed an amendment that would essentially strip these companies of their protection unless their content moderation is deemed “politically neutral.” The consequences of curtailing Section 230 would be dire for Facebook, Twitter and the like. Hawley did not respond to TheWrap’s request for comment.

“It’s a very simple explanation: it opens them up to liability for information that’s published on their platform,” Matt Bilinsky, a tech and media-focused attorney in Los Angeles, told TheWrap.

Today, if a user defames another on Facebook, erroneously claiming that person gave them a venereal disease, for instance, the social network is under no legal threat. As Bilinsky explained, the law differentiates between publishers and platforms.

“But if you remove Section 230 protection, it opens up Pandora’s box. [Facebook and other tech giants] are going to start getting those lawsuits,” Bilinsky continued. “And where they land, who knows? The expense for Facebook and Twitter to find out would be costly.”

The new legislation would task the Federal Trade Commission with certifying that tech companies approach moderation in a neutral fashion. It would apply to any company with more than 30 million monthly active users in the U.S., 300 million monthly active users globally or $500 million in global revenue. What’s more, certification would require a supermajority vote every two years; a company that failed to secure it could lose its immunity and be subject to legal liability.

How’d we get here?

Section 230 dates back to 1996 — the Dark Age of the internet. Prior to its enactment, “there was an interpretation of the rules that, if you exercised any editorial control over content, you incurred more liability for it,” Jeff Kosseff, a law professor at the U.S. Naval Academy and author of “The Twenty-Six Words That Created the Internet,” a new book on Section 230, told TheWrap.

Legislators, according to Kosseff, adopted Section 230 to give sites the “breathing room” to moderate content, rather than shy away due to fears of being slapped with a lawsuit.

“Congress passed it because they did not want the platforms to be hands-off,” Kosseff continued. “They gave platforms the tremendous flexibility to moderate objectionable content. That was a policy choice that Congress made.”

But rather than encourage heavy regulation, the act was intended to foster growth, Bilinsky said. And for two decades, it did just that, allowing companies like Facebook and Google-owned YouTube to flourish behind user-generated content, without fear they’d be sued for something a user said. It was taken as “gospel,” Bilinsky said, “that more was better: more sharing, more speech, more information.”

Fast forward 23 years, and that gospel isn’t as readily accepted by the tech faithful. Now, moderation and fighting “hate speech” — a term the major platforms can’t agree on — have become a way to build trust with an increasingly wary audience.

In May, Facebook banned a number of far-right commentators, including Alex Jones and Milo Yiannopoulos, as well as Nation of Islam leader Louis Farrakhan, for violating its policies on “dangerous individuals and organizations.” The company’s policy says it “does not allow any organizations or individuals that proclaim a violent mission or are engaged in violence” to have Facebook accounts. But Facebook did not share the specific infractions that led to the bans — making the decision appear capricious.

Twitter also has spent much of the last year purging its platform, after chief Jack Dorsey said the company needed to improve the “health” of public conversation. But the company’s arbitrary enforcement of its arcane rules has led to confusion on both the left and the right. When pressed by Joe Rogan on his podcast earlier this year, Dorsey, on several occasions, said he didn’t know the “exact specifics” behind a number of bans.

The result is that no one is satisfied with the current arrangement.

“On one side you have the Ted Cruz, Josh Hawley argument that there’s too much moderation of particular points of view,” Kosseff said. “Then you also have a whole other criticism, which is that the platforms don’t moderate enough.”

He continued: “The platforms really have two different, almost polar opposite criticisms of them, and they’re caught in the middle.”

This was especially clear last month, when YouTube decided to demonetize, but not ban, conservative commentator Steven Crowder, after Vox reporter Carlos Maza shared a video compilation of Crowder calling him a “lispy queer” and “gay Latino,” among other slights, while critiquing his political opinions. (Maza said many of Crowder’s fans had also harassed him and texted him demanding he debate the commentator.)

Crowder, who hosts a right-leaning comedy show, had his entire channel demonetized for a “pattern of egregious actions” that “harmed the broader community,” according to the Google-owned video service, even though YouTube said he hadn’t violated its policies. That decision came a day after YouTube had initially said it wouldn’t take action against Crowder because he hadn’t broken its rules. Ultimately, YouTube’s move to demonetize Crowder did little to placate Maza, who tweeted that the company “made this problem worse than it already was” by not banning Crowder altogether.

Republicans — including President Trump on several occasions — have ridiculed Silicon Valley’s attempts to moderate speech. And now, with Sen. Hawley’s Ending Support for Internet Censorship Act, lawmakers are at least considering yanking tech’s ability to lean on Section 230 when making content decisions.

“With Section 230,” Sen. Hawley said in a statement last month, “tech companies get a sweetheart deal that no other industry enjoys — complete exemption from traditional publisher liability in exchange for providing a forum free of political censorship. Unfortunately, and unsurprisingly, big tech has failed to hold up its end of the bargain.”

This is where it gets hairy for Silicon Valley. Operating with “commonly understood and reasonable community guidelines” is completely kosher under Section 230, Bilinsky explained, but moderation in moderation is key.

“The more extensive you make the guidelines, the more selectively you enforce them, the more discretion you start to exercise over your platform, you start to move away from what seems like a platform and more like what looks like a publisher,” Bilinsky said.

That move is what has put Section 230 up for debate — and has put companies like Facebook, Twitter and Google in increased danger. Without their current protections, these companies are looking at bloated legal fees and the potential abandonment of the user-generated content that helped build them up in the first place.

Read original story How Facebook and Twitter’s Content Moderation Could Open a Legal ‘Pandora’s Box’ At TheWrap

Facebook admitted in a blog post late Tuesday that it gave major tech companies like Spotify and Netflix access to millions of users’ private messages, confirming key details of an illuminating report from The New York Times.

The Times reported Tuesday evening that Facebook gave privileged access to user messages, allowing companies like Spotify and Netflix to read, write or delete conversations on the social network. Facebook, in its response, said it only allowed partner companies to view private messages once users had signed into Facebook through the partner’s app — granting that partner permission to access their messages in the process.
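
Neither the Times report nor Facebook’s post spells out the mechanics of that sign-in step, but Facebook Login is built on a standard OAuth 2.0 flow. The sketch below shows roughly what the partner-side sign-in could have looked like; the app credentials, redirect URL and the `read_mailbox` permission scope are illustrative assumptions rather than confirmed details of the partner deals.

```python
# Rough sketch of an OAuth 2.0-style Facebook Login flow, seen from a partner app.
# The endpoints mirror Facebook's public Login documentation of that era; the
# credentials, redirect URI and messaging scope below are illustrative assumptions.
import secrets
from urllib.parse import urlencode

import requests

APP_ID = "PARTNER_APP_ID"          # hypothetical partner app credentials
APP_SECRET = "PARTNER_APP_SECRET"
REDIRECT_URI = "https://partner.example.com/facebook/callback"


def build_login_url() -> str:
    """Step 1: send the user to Facebook to sign in and approve the requested permissions."""
    params = {
        "client_id": APP_ID,
        "redirect_uri": REDIRECT_URI,
        "state": secrets.token_urlsafe(16),  # CSRF token the partner verifies on return
        "scope": "read_mailbox",             # illustrative messaging permission
    }
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)


def exchange_code_for_token(code: str) -> str:
    """Step 2: trade the code returned to the redirect URI for a user access token."""
    resp = requests.get(
        "https://graph.facebook.com/oauth/access_token",
        params={
            "client_id": APP_ID,
            "client_secret": APP_SECRET,
            "redirect_uri": REDIRECT_URI,
            "code": code,
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Newer Graph API versions return JSON here; older ones returned a query string.
    return resp.json()["access_token"]
```

Only after this exchange succeeds does the partner hold a token tied to the user’s consent, which is the distinction Facebook leaned on in its response.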

“Did partners get access to messages? Yes. But people had to explicitly sign in to Facebook first to use a partner’s messaging feature,” Konstantinos Papamiltiadis, director of developer platforms and programs at Facebook, wrote in the blog post.

“Take Spotify for example. After signing in to your Facebook account in Spotify’s desktop app, you could then send and receive messages without ever leaving the app. Our API provided partners with access to the person’s messages in order to power this type of feature.”
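
Facebook never publicly documented the partner-specific messaging APIs, so the exact calls an app like Spotify’s desktop client made are unknown. As a hedged illustration only, reading and replying to a conversation with the token from the sign-in step above might have looked something like this; the `/me/inbox` path mirrors an old public Graph API endpoint, while the reply call and field names are assumptions.

```python
# Illustrative only: not Facebook's actual partner messaging API, which was
# never publicly documented. Paths and fields are modeled on the old Graph API.
import requests

GRAPH = "https://graph.facebook.com"


def fetch_threads(access_token: str) -> list[dict]:
    """Read the signed-in user's message threads (modeled on the legacy /me/inbox edge)."""
    resp = requests.get(
        f"{GRAPH}/me/inbox",
        params={
            "access_token": access_token,
            "fields": "id,comments{from,message,created_time}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])


def send_reply(access_token: str, thread_id: str, text: str) -> dict:
    """Post a reply into a thread -- a hypothetical write endpoint for partners."""
    resp = requests.post(
        f"{GRAPH}/{thread_id}/comments",
        data={"access_token": access_token, "message": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```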

Special access wasn’t reserved for a select few companies: Facebook shared private information with more than 100 companies, according to the NYT. Amazon, for instance, was able to pull user contact information, while Chinese tech giant Huawei — scrutinized by the U.S. intelligence community for its cozy relationship with the Chinese government — was able to gain “deeper insights” into user relationships, according to the report. Most of the clandestine deals had been discontinued by 2017, although some remain active.

Spotify, in a statement to TheWrap, said the company had “no evidence that Spotify ever accessed” private Facebook messages.

“Spotify’s integration with Facebook has always been about sharing and discovering music and podcasts. Spotify cannot read users’ private Facebook inbox messages across any of our current integrations,” a Spotify spokesperson told TheWrap. “Previously, when users shared music from Spotify, they could add on text that was visible to Spotify. This has since been discontinued.”

Papamiltiadis wrote that “none” of Facebook’s partnerships violated its 2012 settlement with the Federal Trade Commission, which had alleged the company misled users into sharing more information than they realized. The special access, according to Papamiltiadis, was granted merely to help connect users.

“To put it simply, this work was about helping people do two things,” Papamiltiadis said. “First, people could access their Facebook accounts or specific Facebook features on devices and platforms built by other companies like Apple, Amazon, Blackberry and Yahoo. These are known as integration partners.”

He continued: “Second, people could have more social experiences – like seeing recommendations from their Facebook friends – on other popular apps and websites, like Netflix, The New York Times, Pandora and Spotify.”

In a statement provided to TheWrap, Netflix said it discontinued this particular Facebook partnership three years ago, and that it never accessed users’ private messages in any way.

“Over the years we have tried various ways to make Netflix more social. One example of this was a feature we launched in 2014 that enabled members to recommend TV shows and movies to their Facebook friends via Messenger,” the statement said. “It was never that popular so we shut the feature down in 2015. At no time did we access people’s private messages on Facebook or ask for the ability to do so.”

Amazon, in a statement to TheWrap, said it used its access to let customers “sync” their Facebook contacts with its products.

“Amazon uses APIs [application programming interfaces] provided by Facebook in order to enable Facebook experiences for our products,” an Amazon spokesperson told TheWrap in a statement. “For example, giving customers the option to sync Facebook contacts on an Amazon Tablet. We use information only in accordance with our privacy policy.”
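
Amazon didn’t say which Facebook APIs the tablet feature relied on. Below is a minimal sketch of what a contact-sync step could look like against the public Graph API’s friends edge; the fields requested and the sync_contacts helper are hypothetical, and the data actually available to Amazon under its agreement was never made public.

```python
# Minimal contact-sync sketch against the public Graph API friends edge.
# The fields and merge logic are assumptions, not Amazon's actual integration.
import requests

GRAPH = "https://graph.facebook.com"


def fetch_facebook_friends(access_token: str) -> list[dict]:
    """Page through the signed-in user's friends list."""
    friends: list[dict] = []
    url = f"{GRAPH}/me/friends"
    params = {"access_token": access_token, "fields": "id,name", "limit": 100}
    while url:
        resp = requests.get(url, params=params, timeout=10)
        resp.raise_for_status()
        payload = resp.json()
        friends.extend(payload.get("data", []))
        url = payload.get("paging", {}).get("next")  # follow pagination, if any
        params = {}  # the "next" URL already carries its own query parameters
    return friends


def sync_contacts(access_token: str, device_contacts: dict[str, dict]) -> dict[str, dict]:
    """Merge Facebook friends into a device's local contact store (hypothetical helper)."""
    for friend in fetch_facebook_friends(access_token):
        entry = device_contacts.setdefault(friend["id"], {})
        entry["facebook_name"] = friend["name"]
    return device_contacts
```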

Facebook shares dropped about 1.9 percent in early trading on Wednesday, hitting $141 per share.

Read original story Facebook Admits It Gave Netflix, Spotify Special Access to User Messages At TheWrap