Zoom Is Using You to Train AI. So Will Everyone Else

Zoom AI - Credit: Getty Images

Zoom, the virtual communications platform that millions use for remote work, is facing backlash over an updated policy allowing it to train its AI products on customer data pulled from meetings. But experts say this is not an isolated case — it’s a sign of how big tech plans to harvest and leverage your personal information going forward.

The change to Zoom’s terms and conditions — made quietly in March — grants the company a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights” to your content, for the purposes of “machine learning, artificial intelligence, training, testing,” and other projects. After a report in StackDiary drew attention to the modified conditions this week, healthcare, education, and entertainment professionals called the expanded policy an invasion of privacy.

In response, Zoom CEO Eric Yuan on Tuesday admitted that the language of the new policy was the result of a “process failure.” The day before, the company had published a blog post that acknowledged the updated terms and tried to assure customers they would have the means to opt out of such data collection. But, realistically, if you’re a participant in a meeting where the organizer has enabled the Zoom IQ feature (an “AI smart companion” also introduced in March), your choices are either to hang up or accept that your information from the call will be saved by the company.

Another update to its terms and conditions, which Zoom added Monday under public pressure, claims that the company “will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent” — though it also says that Zoom reserves the right to do just that. This is not terribly comforting given Zoom’s previous privacy and security failures.

But whatever the fallout for Zoom, it won’t be the last tech giant to train new AI products on user behavior — with or without the users’ knowledge. According to AI researchers, this is just the next phase of a process that is already widespread across the internet.

“The concept of technology platforms and devices collecting and analyzing your data is nothing new,” says Azadeh Williams, who sits on the executive board of the Global AI Ethics Institute. “Social media platforms and apps have been doing this for years.” Just as Facebook allowed third-party developers to scrape personal info, and Google illegally tracked the location of Android users, the major players in Silicon Valley are now primed to seize your content in order to speed development of artificial intelligence.

Nate Sharadin, a fellow at the research nonprofit Center for AI Safety, says that the controversy over Zoom’s terms is a “bellwether” for such technology. If the new terms have met resistance, it’s in part because creative industries like film and TV are increasingly wary of how competing firms may leverage their intellectual property to train, for example, “generative machine learning models that can outcompete them in particular spaces,” he says. As these models advance, it becomes harder to detect stolen or repurposed material, which is also easier than ever to produce at scale.

Nevertheless, Zoom and other tech companies are scrambling to suck up as much available information as they can, because AI developers “see a shortage of high-quality data” in the decades ahead, says Sharadin — a sort of informational bottleneck that will slow the progress of machine learning models. “Companies used to collect data but not know what to do with it,” he adds. “Now, there’s something (profitable) they can do: sell it on to model developers, or to data curators.”

Ari Lightman, a digital media professor at Carnegie Mellon University’s Heinz College, says the decline of remote work has left Zoom and its rivals seeking ways to make online meetings “more productive and efficient,” which is why AI solutions appeal to them. While he also notes that overreaching data collection by tech companies is a familiar problem, triggering a number of lawsuits in the past, AI has prompted “more skepticism on how that data will be used in a manner that does not infringe on users’ privacy.” Suddenly, people who have always given software permission to track behavior are conscious of the kind of legalese they sign off on to use a program. Of course, even more invasive practices can be normalized over time.

“This is an example of the market causing AI to proliferate against many people’s will,” says Dan Hendrycks, executive director of the Center for AI Safety. He sees it as part of the creeping integration of machine surveillance into our ordinary routines. “It’s through many small steps like this that AI will eventually pervade all aspects of life, and humans will be subject to their decisions and constant oversight.”

Vincent Conitzer, head of technical AI engagement at the University of Oxford’s Institute for Ethics in AI and director of the Foundations of Cooperative AI Lab at Carnegie Mellon University, observes that tech giants including Amazon and Microsoft have already warned employees not to share sensitive internal info with AI tools such as ChatGPT, for fear that confidential material could be leaked. If they’re concerned about a competitor’s AI getting a glimpse at their corporate secrets, it stands to reason that ordinary end users are vulnerable to the same exposure.

“Should we now be just as worried about sharing sensitive information with our colleagues in Zoom meetings?” Conitzer asks. “At this point, whether something is spoken during a meeting or written into a chat window makes little difference, given how good AI transcription has become. If anything, there’s more information in the spoken sentence, in the form of intonation and, if the camera is on, facial expressions.”

For Zoom, he says, the potential applications would seem harmless. He theorizes that they could “use this data to teach AI to predict who will be the next person to speak in an ongoing meeting, so that that person can be highlighted on the screen,” which “doesn’t sound so bad.” Except Zoom — and everyone else in the market — is now incentivized to hoard as much information as they can, “and once you have all that data, it’s always tempting to try to do more things with it,” Conitzer says. “Meanwhile, today’s AI systems are very much capable of remembering, and then leaking, their training data.”

Lightman agrees that Zoom will likely be focused on how AI can improve efficiency, customization and automation in the workplace — methods of combating “Zoom fatigue.” But by pursuing these improvements through machine learning, tech companies may leave your communications vulnerable to a breach. “The question is: does the value outweigh the risk, and do we really know what the risk entails?” he asks.

Williams echoes this idea of a double-edged sword, saying that tech firms with an eye toward “enhancing the product” and delivering “streamlined service” end up gathering sensitive data that “can also be hacked and turned over to the wrong hands.” In the end, Williams says, the onus of security continues to fall on users rather than the corporations analyzing our inputs and behaviors. “As individual consumers and professionals become more aware of the value of their data, and how it is being used when they ‘exchange it,’” she says, “it’s important to educate yourself on how to ‘say no,’ ‘disagree,’ or ‘opt out’ when you don’t want to share your data.”

Of course, as the furor over Zoom’s current conditions demonstrates, the companies behind the digital products you rely on in daily life know it can be hard for individuals to say no. This is particularly true at work, when management policy may favor opting in to AI training on digital platforms for access to cutting-edge features. If you don’t like it, there’s no guarantee you can just pull the plug. Worst case, you’ll have to find another job.