How Social Media Algorithms Hurt Black Girls

After seeing how viral images of police killings affected her own mental health, Dr. Tiera Tanksley decided to research how they were affecting Black girls. The results were shocking.

Capuski/Getty Images

In my early twenties, I was deeply involved in the #BlackLivesMatter movement online, and I regularly turned to social media to stand up against racial injustice and get involved in grassroots activism. However, my experience with social media drastically changed when images of Black people dead and dying started to go viral.

First, it was Eric Garner. Then Sandra Bland. Then Philando Castile.

Like many other social media users, I encountered these gruesome videos accidentally, because social media platforms auto-played them without censorship or content warnings. For weeks, my social media timelines were rife with these images, and to make matters worse, the responses often included racial slurs, insensitive jokes, and accusations that the victims deserved to die.

As more and more videos went viral, I could feel something in me starting to change. I was having a hard time focusing on my graduate coursework; I was reluctant to drive to class for fear of encountering police; and I started having panic attacks whenever a class discussion referenced the killings.

While research from Common Sense Media recently highlighted how social media affects girls’ mental health in general, I started to wonder how viral police killings were affecting Black girls in particular. After all, Black girls have some of the highest rates of social media use and are more likely than other groups to encounter race-related content online. After I conducted a study with nearly 20 Black girls (ages 18-24) across the US and Canada, my initial fears were confirmed: Black girls were reporting unprecedented levels of anxiety, depression, fear, and chronic stress from encountering Black death online.

Many of the girls struggled with insomnia, migraines, nausea, and long stretches of “numbness” and dissociation following high-profile killings.

For Natasha, “seeing Black people get killed by police, that does something to you psychologically.” Evelyn feels similarly, noting, “When people die or when Black males are shot by the police and people post the video, that’s traumatizing.” Danielle captures the collective harm of police killings on Black girls, noting, “We all have PTSD…because of social media, because of all this constant coverage…emotionally I can say it has taken a toll on me. It’s taken a toll on all of us.” Not surprisingly, these mental health concerns had a direct effect on the girls’ schooling experiences, and many of them felt too overwhelmed, emotionally triggered, or physically exhausted to engage in school.

Though viral death videos were one of the most salient causes of digitally mediated trauma, they weren’t the only source of race-related stress for Black girls online. The girls in my study were navigating a multifaceted web of anti-black racism that posed constant threats to their physical and mental wellness.

After posting about #BLM, Danielle received “hundreds of thousands people in my DM’s calling me [racial slurs]... That just takes a toll on you.” Although she tried to report the digital attacks through Twitter’s automated content moderation program, her reports were systematically denied. The same was true for other participants, whose posts about racialized grief, grassroots organizing, and the need to protect Black lives were regularly flagged and deleted as “hate speech.”

These racially disparate experiences with content moderation left me puzzled. How could algorithms designed to protect against racism and hate speech identify calls to end anti-black violence as a threat to community safety, but select videos of Black people dead and dying to be hyper-circulated for days, even weeks, on end without censorship or content warnings?

[Image 1.1 & 1.2]

The answer to this seemingly complex query is actually pretty simple: digital technologies are designed to replicate the racial logic that produces, fetishizes, and monetarily benefits from Black death and dying offline. In a society where Black death has always been monumentally lucrative, racial justice activism poses a direct threat to the profitability and viability of white supremacy. Algorithms that identify Black death as “clickbait” and systemic critique as a “threat to collective safety” just make sense.

Research into the inner workings of virality and content moderation makes this reality increasingly clear. According to Google Trends, the state-sanctioned killing of Black Americans is among the most popular search queries in Google’s history. Whether it’s George Floyd, Philando Castile, or Eric Garner, the most popular keyword searches for victims of police brutality are always the same: death video, chokehold, shooting video, dead body [Image 1.1 & 1.2].

[Image 1.3]

Here we can see the monetary incentives for viral Black death begin to emerge: when images of Black people being killed by police garner over 2.4 million clicks in 24 hours, and the average cost per click for related content can range from $1 to $6 [Image 1.3], the virality of Black death is not only incentivized but nearly guaranteed.

Racially biased content moderation systems help cement this racially violent necropolitics. During the time of my study, social media algorithms were programmed to protect white men from hate speech, but not Black children. As ProPublica reported, “Facebook deletes curses, slurs, calls for violence and several other types of attacks only when they are directed at ‘protected categories’—based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation, and serious disability/disease.”

White men are considered a protected group because both of the traits they possess are protected. But subsets such as “female drivers” and “Black children,” like “radicalized” Muslims, are not protected, because one of their characteristics is not protected. By these rules, a post admonishing white men for murdering Black people would immediately be flagged as hate speech, while slurs against Black youth would be upheld as “legitimate political expression.”

With this racial calculus at the forefront, the source of racially biased content moderation becomes increasingly clear: Black girls’ posts are silenced not because they threaten the community, but because they threaten the white supremacist patriarchy.

And Black death goes viral not because it will bring about racial justice or an end to police brutality, but because Black death is Big Business.
