Security experts alarmed at ‘incredibly dangerous’ new Google feature

Google Tech Showcase (Copyright 2024 The Associated Press. All rights reserved)

A new Google feature aimed at alerting people to scams has raised fears among privacy campaigners.

The tool uses artificial intelligence to listen in on people’s phone calls and try to spot whether they sound like a scam. If they do, a pop-up will appear alerting the user to a “likely scam”.

The feature was announced at Google’s I/O event this week, during which the company unveiled a host of new AI tools. As with many of those features, Google did not say when it would actually arrive.

It also gave little information on how the feature would actually work, such as what kinds of conversation would prompt the AI to suggest that a call could be a scam. But it said the feature relies on Gemini Nano, a recently released, much smaller version of its AI model that is built to run on phones.

Google stressed that all the listening and analysis of phone calls would happen on the phone itself, so that private conversations would not be sent to its servers. “This protection all happens on-device so your conversation stays private to you,” it said in its announcement.
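In broad strokes, the design Google describes amounts to a local classifier scoring a call and triggering an alert without any network round trip. The sketch below illustrates that idea only; Google has not published how the feature is implemented, so every name here is hypothetical and a simple keyword heuristic stands in for the actual Gemini Nano model.

```python
# Conceptual sketch of on-device scam screening. All names are hypothetical;
# Google has not described its implementation. A keyword heuristic stands in
# for the real on-device model, and nothing leaves the device in this design.

from dataclasses import dataclass


@dataclass
class ScreenResult:
    likely_scam: bool
    score: float


class OnDeviceScamScreener:
    """Stand-in for a local model such as Gemini Nano (assumption, not Google's API)."""

    SCAM_CUES = ("gift card", "wire transfer", "verify your account", "act now")
    THRESHOLD = 0.5

    def score(self, transcript: str) -> float:
        # Crude proxy for model confidence: fraction of cue phrases present.
        text = transcript.lower()
        hits = sum(cue in text for cue in self.SCAM_CUES)
        return min(1.0, hits / 2)

    def screen(self, transcript: str) -> ScreenResult:
        s = self.score(transcript)
        return ScreenResult(likely_scam=s >= self.THRESHOLD, score=s)


if __name__ == "__main__":
    screener = OnDeviceScamScreener()
    result = screener.screen(
        "Your account is compromised. Verify your account and pay with a gift card."
    )
    if result.likely_scam:
        # In the announced feature, this is where the on-device pop-up would fire.
        print(f"Likely scam (score={result.score:.2f}) - show on-device alert")
```

The point of the on-device arrangement, as Google frames it, is that the transcript in a sketch like this never needs to be serialised or sent anywhere; the critics quoted below argue the risk lies in the monitoring itself, not in where the analysis runs.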

Nonetheless, security experts suggested that listening to phone calls in this way at all was “incredibly dangerous” and “terrifying”. They noted that even if the calls stay on the device, allowing AI to listen in on them could lead to other problems.

“The phone calls we make on our devices can be one of the most private things we do,” Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, told NBC News. “It’s very easy for advertisers to scrape every search we make, every URL we click, but what we actually say on our devices, into the microphone, historically hasn’t been monitored.”

“This is incredibly dangerous,” said Meredith Whittaker, president of the messaging app Signal. “It lays the path for centralized, device-level client side scanning.”

Ms Whittaker, who worked at Google for 13 years and helped organise internal protests against its policies, said that the use of the technology could quickly expand.

“From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w[ith] seeking reproductive care’ or ‘commonly associated w[ith] providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing’,” she said.