Will Google Be the Next Tech Giant to Face Off with Washington?

YouTube’s extremist content is a red flag for advertisers and viewers alike, but Google’s real problem might be how much data it shares with advertisers.

As Facebook does its best to evade impending regulations hastened by the Cambridge Analytica scandal, Google has flown largely under the radar. Yet by all accounts, Google may be an equally likely target for regulation when it comes to user privacy. As News Media Alliance president David Chavern told Bloomberg this week, “Google, in every respect, collects more data [than Facebook]. Google, in every respect, has a much bigger advertising business.” And the larger conversation about privacy in Silicon Valley, fueled by Mark Zuckerberg’s testimony before Congress last week, may be turning toward Facebook’s biggest competitor.

Google, after all, was in the business of mining user data long before Facebook came on the scene. The company’s DoubleClick system allowed advertisers to target and measure ads using a combination of web-tracking data and myriad pieces of information available about users from Google—things like credit-card information and rich location history (though a Google spokeswoman told Bloomberg that consumer card data isn’t used for ad personalization). Including YouTube, Gmail, and other Alphabet properties, the many-headed Google leviathan controls nearly 40 percent of the U.S. digital advertising market, compared to Facebook’s 20 percent.

Facebook has taken the brunt of the criticism for the increasingly creepy ways that it spies on users to turn their likes, dislikes, photos, relationships, and deeply held personal and political beliefs into grist for advertisers, as well as for its well-documented struggle to tamp down on “fake news.” But YouTube—the largest revenue driver in Google’s multi-billion-dollar profit machine, outside its core search business—has its own problems, which may come to overshadow Facebook’s once the psychic trauma of the 2016 election begins to fade into the background. As I reported earlier this year, eliminating conspiratorial videos on YouTube has become a game of content-moderation whack-a-mole for the video platform, with the most inflammatory and scandalous content frequently rising straight to the top of the site’s Trending tab. And though those at the site have acknowledged the problem (“We’re working to improve the applications of [our] policies to ensure that videos containing hoaxes both in the video and in the title and description do not appear in the Trending tab again,” a person familiar with the tool’s functionality told me), the fundamental fact remains: an alarming quantity of conspiracy theories, misinformation, and plain, old-fashioned hate lurks just under the veneer of makeup tutorials and music videos on YouTube—a phenomenon enhanced by the site’s algorithm, which is designed to suck viewers down the rabbit hole.

YouTube’s teeming underbelly is a long-standing issue for advertisers; on Thursday, CNN reported that, despite settings meant to protect advertisers, ads for more than 300 organizations and companies ran on YouTube channels that promoted Nazism, conspiracy theories, and pedophilia. This is not the first time YouTube’s safeguards have failed, raising questions about whether advertising on the site will always carry a certain amount of risk. So far, Google has avoided Facebook’s woes. But through YouTube, it appears ill-equipped to deal with the consequences of inflammatory content on its platform, as well as missteps inherent to its advertising model. As an inter-generational platform, Facebook was an easy first target for lawmakers, many of whom use it themselves, or have children or relatives who do. Google’s complexity and YouTube’s relative youthfulness may shield both from scrutiny for now. But in light of Washington’s push to bring down the hammer, their reprieve may be temporary.