A new feature Google showcased at its I/O conference, which uses its generative AI technology to scan voice calls in real time for patterns indicative of financial scams, has raised significant concerns among privacy and security experts. They caution that building client-side scanning into mobile infrastructure could herald an era of centralized censorship.
The scam-detection feature Google demonstrated, which it plans to integrate into an upcoming version of the Android operating system (currently powering approximately three-quarters of the world’s smartphones), relies on Gemini Nano, the smallest model in its current generation of AI models and the one designed to run entirely on-device.
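For readers unfamiliar with the term, "client-side scanning" simply means the content check runs on the user's own device rather than on a remote server. The Kotlin sketch below is a minimal, hypothetical illustration of that pattern and nothing more: it is not Google's implementation, and the `ScamClassifier` class, its keyword heuristic (a crude stand-in for an on-device model like Gemini Nano), and all other names are invented for illustration.

```kotlin
// Hypothetical sketch of the client-side scanning pattern.
// Nothing here is Google's API: the classifier below is a trivial
// keyword heuristic standing in for an on-device AI model.

// Stand-in for an on-device model: scores a transcript snippet for
// scam-like patterns without sending anything off the device.
class ScamClassifier(private val scamPhrases: List<String>) {
    fun score(transcript: String): Double {
        val lower = transcript.lowercase()
        val hits = scamPhrases.count { lower.contains(it) }
        return hits.toDouble() / scamPhrases.size // crude 0.0..1.0 risk score
    }
}

fun main() {
    val classifier = ScamClassifier(
        listOf("wire the money", "gift cards", "act now", "your account is locked")
    )

    // In a real system the snippet would come from live, on-device call
    // transcription; here we feed a canned example.
    val snippet = "Your account is locked. To fix it, buy gift cards and act now."

    val risk = classifier.score(snippet)
    if (risk >= 0.5) {
        // The defining property of client-side scanning: the content is
        // inspected and the decision is made locally, on the device.
        println("Warning: this call shows patterns common in scams (risk=%.2f)".format(risk))
    } else {
        println("No scam patterns detected (risk=%.2f)".format(risk))
    }
}
```

The concern critics raise is not this benign path but that the same on-device inspection hook, once it exists, could be pointed at whatever content category a platform or government later chooses.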
This technology represents a form of client-side scanning, a concept that has proven controversial in recent years, particularly in the context of detecting child sexual abuse material (CSAM) or grooming activity on messaging platforms. Apple famously abandoned a similar plan in 2021 following a massive backlash from privacy advocates. Nevertheless, policymakers continue to press the tech industry to devise methods to identify illegal activities on their platforms. Should on-device scanning infrastructure become widespread, it could lead to various forms of content scanning by default, whether government-mandated or commercially driven.
Meredith Whittaker, president of the encrypted messaging app Signal, responded to Google’s demonstration with a highly critical post on X, describing the technology as “incredibly dangerous” and a step toward centralized, device-level client-side scanning. She warned that it could soon be repurposed to detect sensitive, potentially stigmatizing activities, such as seeking reproductive care or providing LGBTQ resources.
Matthew Green, a cryptography expert and professor at Johns Hopkins, also voiced his concerns on X. He warned of a foreseeable future in which AI models scan text messages and voice calls to detect and report illegal activity, with service providers requiring a zero-knowledge proof that content has been scanned before allowing it to pass through, effectively blocking noncompliant clients. Green estimated that such technology is only a few years from being efficient enough to realize, a decade at most.
European privacy and security experts were also quick to echo these apprehensions. Lukasz Olejnik, an independent researcher and consultant, acknowledged the usefulness of Google’s anti-scam feature but cautioned that the same technical capability could be repurposed for broad social surveillance. He envisioned a future in which such technology not only detects but also warns against or blocks content deemed undesirable, posing significant threats to privacy and fundamental freedoms.
In a comment to TechCrunch, Olejnik elaborated on these risks, emphasizing that while on-device detection is better for user privacy, the broader implications of using AI models built into software to monitor or control human activity are concerning. He highlighted the urgency of governing these capabilities to prevent potential misuse.
Michael Veale, an associate professor in technology law at University College London, further amplified these concerns. He warned in an X post that Google’s conversation-scanning AI could establish infrastructure for client-side scanning that regulators and legislators might exploit for purposes beyond its original intent.
Unease is particularly acute among European privacy experts owing to a contentious European Union legislative proposal that has been under consideration since 2022. Critics argue that the proposal, which would mandate that platforms scan private messages by default, could fundamentally undermine democratic rights in the region. Such legislation would likely compel platforms to adopt client-side scanning in order to comply with “detection orders” covering both known and unknown CSAM, as well as grooming activity.
Recently, an open letter from hundreds of privacy and security experts warned that these client-side scanning technologies are unproven, highly flawed, and susceptible to attacks—potentially generating millions of false positives daily.
Google did not respond to requests for comment on these privacy concerns by the time of publication.