YouTube thought a giant American flag wasn’t 'advertiser friendly'

A 15-second YouTube clip titled “The Flag at FedEx 9/11/2011” isn’t exactly video for the ages. This shot of a giant American flag being unfurled across FedEx Field on the 10th anniversary of the Sept. 11 terrorist attacks shakes a bit, and the audio is too muddy to make out the announcer’s words.

YouTube’s content screeners, however, had a different problem with that video of a patriotic pregame ritual: They judged it not “advertiser friendly” and therefore disqualified it from featuring ads that might make its author some spare change.

That Alphabet Inc. (GOOG, GOOGL) subsidiary fixed the mistake after I inquired about it. But the underlying problem remains: It’s hard to screen the stuff random people upload to YouTube.

Advertiser anxiety

Historically, YouTube has been a money-making machine for its corporate parent. Google doesn’t break out YouTube’s share of its total ad revenues, but it has long touted the video-sharing site as a strong contributor to them.

In February, however, the Times of London reported that YouTube was pairing mainstream ads with videos uploaded by jihadists, neo-Nazis and other extremists. That represented a massive failure of YouTube’s “programmatic” ad-matching software, which is supposed to fit ads to the interests of the expected audience of a clip.
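To see how that kind of matching can misfire, consider a minimal, hypothetical sketch of topic-overlap scoring. The function, topic sets and data here are all invented for illustration; real programmatic systems weigh far richer signals, from viewer profiles to auction bids.

```python
# Hypothetical sketch of topic-based ad matching; not YouTube's system.
# match_score, the topic sets, and the sample data are invented here.

def match_score(ad_topics: set, video_topics: set) -> float:
    """Jaccard overlap between an ad's target topics and a video's inferred topics."""
    if not ad_topics or not video_topics:
        return 0.0
    return len(ad_topics & video_topics) / len(ad_topics | video_topics)

car_ad_topics = {"news", "sports", "autos"}
# The failure mode: if topic inference labels an extremist clip as plain
# "news" commentary, a mainstream ad still scores a match.
video_topics = {"news", "politics"}
print(match_score(car_ad_topics, video_topics))  # 0.25 -> ad gets placed
```

The point isn’t the formula; it’s that a placement decision built on inferred topics is only as good as the inference.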

More than 250 brands quickly responded by pulling their ads from YouTube. The ad-analytics firm MediaRadar estimated that over the course of April, 5% of YouTube’s U.S. and Canadian clients fled the service.

Google apologized and said it would implement stronger safeguards against ads showing up next to videos that would embarrass or horrify ad clients.

That’s what David Heyman ran into with his video of a giant American flag. Heyman is a D.C. sports fan whose most-viewed video is a 2008 clip of President George W. Bush throwing out the first pitch at Nationals Park. He was surprised to get a “Your video can’t be monetized” email from YouTube.

YouTube’s rules

That message explained that Heyman’s video “may not be advertiser friendly.” A Google support document says that term covers “sexually suggestive content,” “violence,” “inappropriate language,” “promotion of drugs and regulated substances,” and “controversial or sensitive subjects and events.”

Heyman was annoyed more by the principle of the thing than the potential lost revenue, since he says he’s “never received a dime” from YouTube ads. He requested a review of that decision and got the same answer.

“My guess is since my video has ‘9/11’ in the title, that is it,” he wrote in an email. “There is no reference or voiceover to the attack.”
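His guess is easy to model. If any layer of a screening pipeline runs a crude blocklist against video titles, a pregame flag ceremony gets swept in with genuinely graphic material. Here’s a hypothetical sketch assuming a naive substring filter; the term list is invented, and YouTube hasn’t published its actual logic:

```python
# Hypothetical sketch: a substring blocklist applied to video titles.
# SENSITIVE_TERMS is an invented example; YouTube's real screening
# logic is not public and surely involves more than title matching.

SENSITIVE_TERMS = {"9/11", "terror", "shooting"}

def advertiser_friendly(title: str) -> bool:
    """Flag a title if it contains any blocklisted term, ignoring context."""
    lowered = title.lower()
    return not any(term in lowered for term in SENSITIVE_TERMS)

print(advertiser_friendly("The Flag at FedEx 9/11/2011"))        # False: demonetized
print(advertiser_friendly("Giant flag unfurled at FedEx Field")) # True
```

A filter like that can’t tell a tribute from an attack video; it only sees the string.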

I emailed Google to ask what had happened and got a vague mistakes-were-made response: “sometimes we make the wrong call on content.” The YouTube spokesperson who sent that said Heyman’s video now had ads enabled.

So we can still only guess whether mentioning 9/11, a widely discussed topic seared into many Americans’ memories, will continue to upset YouTube’s sensibilities.

(I can’t rule out Washington’s hapless, apparently hopeless NFL franchise being the sore subject here.)

A hard job for humans or machines

But there’s no mystery about how difficult reviewing this content is. With some 400 hours of video uploaded to YouTube every minute, the site has to make snap judgments on a flood of new content that never stops.
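The arithmetic makes the case for automation plain. Taking the 400-hours-per-minute figure at face value, and assuming, purely for illustration, that human reviewers work eight-hour shifts:

```python
# Back-of-envelope math on YouTube's review workload. The upload rate
# is the widely reported figure; the shift length is an assumption.

hours_per_minute = 400
hours_per_day = hours_per_minute * 60 * 24   # 576,000 hours of new video per day
shift_hours = 8                               # assumed full-time reviewer shift

reviewers_needed = hours_per_day / shift_hours
print(f"{hours_per_day:,} hours/day needs {reviewers_needed:,.0f} reviewers "
      "watching nonstop just to see everything once")
# -> 576,000 hours/day needs 72,000 reviewers
```

No one staffs at that scale, which is why the first pass has to be software.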

This is the same basic problem that Facebook (FB) faces when it tries to ensure that Live streaming doesn’t get abused to broadcast crimes.

Both companies have spent heavily on human screeners and on artificial-intelligence filters. Google’s apologetic post cited plans to hire “significant numbers of people” and develop “new tools powered by our latest advancements in AI and machine learning.” But the people and the software can still make mistakes.

“This story does not surprise me,” emailed Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia.

“My sense is that Google hopes the algorithms get more sensitive and precise as they gather more data and learn the difference between a patriotic expression and a paranoid expression,” he said. “But video content remains the hardest to read for a computer.”

In this case, the system failed in about the least damaging way possible. The video had no risk of being taken down, the income at stake was theoretical at best, and a single query from a journalist got things squared away. But more and uglier errors are inevitable as politicians invoke the American flag and 9/11 in their speeches.

Email Rob at rob@robpegoraro.com; follow him on Twitter at @robpegoraro.
