Companies are investing in virtual spaces. Experts are already seeing signs of trouble.

Internet safety experts are raising alarms about harassment and safety in the metaverse as companies invest heavily in and promote the benefits of virtual spaces.

The metaverse, generally thought of as virtual worlds in which people can interact, has become one of the hottest topics in tech in recent months, boosted by Facebook co-founder Mark Zuckerberg’s vision for the metaverse as a place where people can do “almost anything you can imagine.” In October, Facebook rebranded as Meta (though Facebook is still the name of the company’s core social network).

A variety of virtual spaces already exist, and it’s in those worlds that experts are already seeing signs of trouble.

Researchers with the Center for Countering Digital Hate (CCDH), a nonprofit that analyzes and seeks to disrupt online hate and misinformation, spent nearly 12 hours recording activity on VRChat, a virtual world platform accessed on Meta’s Oculus headset. The group logged an average of one infringement every seven minutes, including instances of sexual content, racism, abuse, hate, homophobia and misogyny, often with minors present.

The organization shared its data logs and some of the recordings with NBC News, depicting more than 100 incidents in total.

Meta and many other companies are seeking to capitalize on these new worlds, specifically around creativity, community and commerce, using immersive technology (often a headset worn to simulate a first-person field of view). But CCDH and other critics are worried that, similar to the company’s past, Meta is prioritizing growth over the safety of its users.

“I’m afraid that it’s incredibly dangerous to have kids in that environment,” Imran Ahmed, the CEO of CCDH, said. “Honestly speaking, I’d be very nervous as a parent about having Mark Zuckerberg’s algorithms babysit my kids.”

Ahmed specifically highlighted problems with reporting, noting that CCDH was able to flag only about half of its logged incidents to Meta. He criticized the company for the lack of traceability, the difficulty of identifying a user in order to report them, and the absence of consequences for wrongdoing.

Meta did not respond to questions about these reporting issues.

Meta’s current safety features include the ability to mute and block other people, or to transfer to a Safe Zone that gives a user a break from their surroundings. When a report is submitted, it includes a recording of the last few minutes of the user’s experience as evidence. CCDH researchers pointed out that this makes filing a report quickly a tedious process.

These features also appear to overlook the possibility that a user experiencing harassment may not be able to activate the safety precautions quickly and easily in the moment.

That was the case for Nina Jane Patel, who described being virtually groped and harassed in Meta’s Horizon Venues space recently.

“Less than 30 seconds into it, I’m suddenly surrounded by three avatars with male voices, who were saying sexual innuendos to me,” Patel said. “Before I knew it, they were groping my avatar. They were touching the upper and middle portion of my avatar and then a fourth male avatar was taking selfie photos of what was happening.”

Patel is familiar with these virtual spaces. She’s the vice president of metaverse research for Kabuni, an immersive technology company based in the U.K., and even noted that she wants to celebrate the positive potential of virtual worlds.

Despite her familiarity, she was shocked at the behavior and became flustered. She said she wasn’t able to enact blocking features quickly enough, fumbling with her controller and ultimately leaving Venues.

“They said things like, ‘Don’t pretend you don’t love it. This is why you came to this place,’” Patel said. “It was quite shocking that people are using this space to sexually harass and verbally harass people and act out their violent tendencies.” She noted that when she has been in these spaces before, she has heard children’s voices.

Patel wrote a blog post about her experience, and it went viral — inspiring other women to describe similar interactions they had faced in the metaverse.

As a result, Meta introduced a new default “personal boundary” setting in its spaces, requiring avatars to stay nearly 4 feet apart from each other. Nkechi Nneji, a Meta spokesperson, told NBC News that this “makes it easier to avoid unwanted interactions like this one, and we are sorry this happened.”

“Horizon Venues should be safe,” the spokesperson said in an email, “and we are committed to building it that way. We will continue to make improvements as we learn more about how people interact in these spaces, especially when it comes to helping people report things easily and reliably.”

Ahmed and Patel acknowledged that platforms cannot be expected to be perfect immediately at launch, but they also noted the broader implications for the metaverse if safety precautions are not given top priority.

“As virtual reality becomes more and more realistic,” Ahmed said, “so it becomes more damaging, because the psychological impacts of that environment feel more real and are more difficult to discern from our offline reality. And as such, they can do enormous damage to young people.”

Patel said it is important not to repeat previous errors made by tech companies.

“We have to have a zero tolerance approach to this, otherwise we will be repeating our mistakes of the internet in which anonymity has been the priority, and accountability and safety has fallen by the wayside,” Patel added.