
Musical.ly stumbled in moderating self-harm content

Content moderation is especially important for a service primarily used by teens.

One issue that many tech startups must face is how to deal with harmful content, especially once their services start amassing a loyal following. Lip-syncing app Musical.ly is facing just this challenge. Writer Anastasia Basil was screening the app to see if it would be appropriate for her 10-year-old daughter and found that the platform is rife with content tagged with self-harm keywords such as #cutting and #selfhate.

BuzzFeed News took note of Basil's Medium post on the topic and contacted Musical.ly. At that point, the service took steps to ban searches for the keywords mentioned in the article. Musical.ly told BuzzFeed that "its process for banning terms from search is always evolving."

But the question is whether that's enough. Clearly, it took a news organization reaching out before Musical.ly addressed the issue. As a service aimed primarily at and used by teens, the company should have already worked out its approach to moderating sensitive issues like self-harm.

Back in 2016, Instagram rolled out suicide prevention tools that allowed users to report posts from people who might need help, and it offered support options for specific hashtag searches. The company worked with the National Eating Disorders Association and the National Suicide Prevention Lifeline to craft the language. Now, when a person searches for self-harm and eating disorder-focused hashtags on the service, a pop-up lets users get support with one click. The examples are there for Musical.ly to follow; let's hope the company addresses these issues as proactively as possible.