Google says the accelerating wave of AI-enabled fraud in India—from manipulated audio used in digital arrest scams to increasingly targeted phishing attempts—reflects how quickly bad actors are adapting to new technology. The company is urging a system-wide safety upgrade, arguing that watermarking, detection tools, real-time user alerts and cross-platform coordination must work together to contain the threat.
Google has warned that India is facing an escalating threat from AI-generated deepfakes and synthetic media, calling them one of the biggest risks emerging from the rapid adoption of artificial intelligence across the country.
Evan Kotsovinos, Vice President for Privacy, Safety & Security at Google, said the rise of deepfake-driven scams and cyberattacks demands an urgent, coordinated response from both industry and government.
Speaking to CNBC-TV18, Kotsovinos said AI holds massive economic potential, but that its misuse is growing just as quickly. “Like any big technology change, AI is used for good and for bad,” he said. “We’re seeing several areas where there is increased risk and increased threats with the use of AI. The most obvious one is probably deepfakes.”
Digital arrest scams, phishing and cyberattacks rising
Kotsovinos pointed to the now-common “digital arrest” scams in India — where victims are coerced into transferring money after receiving convincingly doctored audio or video from fraudsters — as a clear example of how AI is supercharging crime.
“That’s clearly powered by AI,” he explained, adding that Google is also tracking a sharp rise in hyper-personalised phishing attempts and AI-enabled cyberattacks globally.
“These are multiple dimensions where we are doing some really committed, dedicated work to stay ahead of the bad actors,” he said.
Government alarmed over deepfake impact on elections, public trust
India’s government has in recent weeks raised serious concerns over the surge in synthetic content and its influence on public debate, sentiment and electoral processes. It has proposed new rules requiring AI-generated content to be watermarked so that citizens can identify manipulated material.
Google says it is already moving in that direction. “Anything you generate with Gemini will be watermarked, and we’d like to see the rest of the ecosystem do the same,” Kotsovinos said.
However, he cautioned that watermarking alone will not be enough.
‘Watermarking is only part of the answer’
While supporting the Centre’s push for provenance markers, Kotsovinos said the industry must accept that watermarking has limits, including uneven adoption across global AI models and the ease with which watermarks can be removed or altered.
“It will take many years for watermarking to be adopted across all models — if it ever happens,” he noted.
Instead, he argued for a multi-layered approach: AI-detection technologies that identify synthetic media; on-device alerts that warn users in real time, particularly during scams such as digital arrest calls; and industry-wide coordination across platforms, manufacturers and regulators.
“All these things need to come together for the problem to actually be addressed,” he said. “It’s not something we can do alone. We need the whole ecosystem to stand behind it.”
Growing pressure on tech platforms to protect users
Kotsovinos also made a clear distinction between content moderation and user safety, urging platforms to take greater responsibility for shielding users from AI-driven harm.
“Platforms have to take accountability for the safety of their users,” he said. “We coalesce on watermarking and provenance. We coalesce on keeping users safe.”
Watch the accompanying video for the entire conversation.