Can Facebook Prevent Violent Videos Like The Cleveland Murder From Being Published Or Shared?

Following the publishing of a video of a murder on Facebook, questions arise about the company's ability to monitor content.

In the wake of the murder of Robert Godwin, Sr., video of which was published on Facebook by his killer, questions have arisen about how the company can prevent such graphic and disturbing events from being broadcast on its platform.

In a timeline of the events published by Facebook, the company noted that there were no user reports of the content for more than an hour and a half after it was posted. The first video from the user was uploaded at 11:09 a.m. PDT, followed by the video of the shooting at 11:11 a.m. According to Facebook, that video was not reported until 12:59 p.m. and was removed 23 minutes after being flagged.

“This is a horrific crime and we do not allow this kind of content on Facebook,” a spokesperson for Facebook told International Business Times. “We work hard to keep a safe environment on Facebook, and are in touch with law enforcement in emergencies when there are direct threats to physical safety.”

And while Facebook emphasizes the delay in user reports of the content, and its relatively quick reaction once those reports were filed, it remains unclear why the company's artificial intelligence was unable to catch the troubling video before it went viral.

Last December, the company first began talking about an algorithmic system it was developing that would be able to flag offensive live videos.

The project aimed to tap the company’s machine learning division, which developed algorithms that could detect nudity, violence and other content that doesn’t comply with the company’s policies. It was implemented after a number of violent videos, including broadcasts of user suicides, appeared on Facebook Live.

That system, which had reportedly been in development since at least June 2016, has been used to spot and remove extremist content uploaded on the platform.

While the man who killed Godwin, Sr. did use Facebook Live to broadcast a confession of his crime, the killing itself was not broadcast live—it was recorded and uploaded by the user, meaning it theoretically should have been spotted by the company’s algorithms.

For Facebook and other video content hosts, this process isn’t easy. Even with the wealth of video content on Facebook that can be fed to its algorithms as training data, teaching a computer to interpret a video the way a human does is a challenge.

In a blog post responding to the tragedy in Cleveland, Facebook said it relies on its automated systems to process the millions of reports of potentially offensive content it receives each week, but still sends flagged videos to a team of thousands of reviewers around the world who make the subjective call on what violates the company’s policies.

“We prioritize reports with serious safety implications for our community, and are working on making that review process go even faster,” the company said.
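Facebook has not described how this prioritization works internally, but the workflow it outlines, automated intake of reports, ranking by safety severity, and hand-off to human reviewers, can be sketched as a simple triage queue. The sketch below is purely illustrative; the severity categories, report fields and queue structure are assumptions, not details of Facebook's actual system.

```python
import heapq
import itertools
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical severity levels; the real categories and weights are not public.
SEVERITY = {"imminent_harm": 0, "graphic_violence": 1, "nudity": 2, "spam": 3}

@dataclass(order=True)
class Report:
    priority: int                          # lower number = more urgent
    seq: int                               # tie-breaker: earlier reports first
    content_id: str = field(compare=False)
    reason: str = field(compare=False)

class TriageQueue:
    """Min-heap that surfaces the most safety-critical reports first."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, content_id: str, reason: str) -> None:
        priority = SEVERITY.get(reason, len(SEVERITY))  # unknown reasons go last
        heapq.heappush(self._heap, Report(priority, next(self._counter), content_id, reason))

    def next_for_review(self) -> Optional[Report]:
        # A human reviewer pulls the highest-priority report, if any remain.
        return heapq.heappop(self._heap) if self._heap else None

if __name__ == "__main__":
    queue = TriageQueue()
    queue.submit("video_123", "spam")
    queue.submit("video_456", "imminent_harm")
    queue.submit("video_789", "graphic_violence")
    while (report := queue.next_for_review()) is not None:
        print(f"review {report.content_id} ({report.reason})")
```

In a setup like this, a report flagged for imminent harm jumps ahead of routine spam complaints regardless of when it arrived, which is the behavior Facebook's statement describes.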

But the video of the murder shouldn’t have required much review; there is little subjective judgment involved in deciding whether it contained content that violated Facebook’s rules and presented potential safety concerns for others.

It’s especially surprising given Facebook’s occasionally overzealous removal of content, including the iconic image of a young girl fleeing a napalm attack during the Vietnam War.

A number of research projects outside Facebook have attempted to build algorithmic methods for identifying violent content, including an effort from the School of Information Security Engineering at Shanghai Jiao Tong University in Shanghai to sniff out violent content by its audio signature, and work by researchers at the Universidad de Castilla-La Mancha to detect violent actions in videos using computer vision techniques.
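As a rough illustration of the computer vision approach, and not a description of the Shanghai Jiao Tong or Castilla-La Mancha systems themselves, a video can be sampled into frames and each frame scored by an image classifier, with the whole video flagged if any frame crosses a violence threshold. In the sketch below, frame extraction uses OpenCV; the classifier is a placeholder callable, and the threshold and sampling rate are arbitrary assumptions.

```python
from typing import Callable
import cv2  # OpenCV, used here only to read frames from a video file
import numpy as np

def flag_violent_video(
    path: str,
    score_frame: Callable[[np.ndarray], float],  # placeholder: returns a violence score in [0, 1]
    threshold: float = 0.8,          # arbitrary cutoff for flagging
    frames_per_second: float = 1.0,  # sample roughly one frame per second
) -> bool:
    """Return True if any sampled frame scores above the violence threshold."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"could not open video: {path}")
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(fps / frames_per_second), 1)
    frame_index = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # end of video
            if frame_index % step == 0 and score_frame(frame) >= threshold:
                return True  # flag the video for human review
            frame_index += 1
    finally:
        cap.release()
    return False

# Usage with a dummy scorer; a real system would plug in a trained model here.
if __name__ == "__main__":
    dummy_scorer = lambda frame: 0.0  # stands in for a trained violence classifier
    print(flag_violent_video("example.mp4", dummy_scorer))
```

The hard part, as the research cited above suggests, is not this plumbing but the classifier itself: distinguishing real violence from sports, action movies or news footage is where such systems tend to fail.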

These types of solutions may present alternative options to Facebook’s own system, about which the company has shared little information. But even as the company and outside researchers work to create ways for computer systems to stop the spread of violent and offensive content, new types will continue to crop up. Perhaps the next attempted murder published on Facebook will be caught because of the footage from Cleveland, but another horrific attack may slip through the cracks.

What Facebook seems to tacitly admit is this: it still relies on user reports to perform quality control on some of its content, and its users may not be the most effective system for spotting these issues, since they tend to share and spread offensive content more often than they report it.
