A new security report from the company states that threat actors haven't been able to use AI to develop novel capabilities to accelerate and amplify attacks.

Google Threat Intelligence Group (GITG) recently published a report analysing various attempts to misuse Google’s AI assistant Gemini.
The report explored threats posed by individual and state-sponsored attackers. These attackers sought to exploit Gemini in two ways: to accelerate their malicious campaigns or instruct a model or AI agent to take a malicious action. The majority of the activity falls under the first category.
State-sponsored cyber attacks were associated with threat actors from countries like China, North Korea, Iran, and Russia. These actors used Gemini for reconnaissance, vulnerability research, phishing campaigns, and defence-related intelligence. North Korean threat actors used AI to place covert IT workers in Western firms by creating fake CVs.
However, Google concluded the report with positive findings.
“While AI can be a useful tool for threat actors, it is not yet the game-changer it is sometimes portrayed to be,” read the report. Google further said that it did not see any indications of the threat actors developing any novel capabilities.
Moreover, the company added that threat actors unsuccessfully attempted to use Gemini to abuse Google’s products, involving activities like phishing, data theft, and bypassing Google accounts in products like Chrome and Gmail.
Google also observed a handful of unsuccessful attempts to use publicly available jailbreak prompts to bypass Gemini’s safety controls. In one such attempt, a threat actor tried to get Gemini to perform coding tasks, including wiring Python code for a distributed denial-of-service (DDoS) tool. In the end, Google provided the code but with a safety-filtered response stating that it could not assist.
Kent Walker, president of global affairs at Alphabet (Google), said, “In other words, the defenders are still ahead, for now.”
Safety in the Age of AI Agents
Beyond using a chat-focused AI model to accelerate malicious campaigns, an even greater threat lies in the direct exploitation of AI agents. Google highlighted this as the second kind of attack.
Google’s Secure AI Framework (SAIF) map outlines all the AI risks associated with the model creator, consumer, or both. “We did not observe any original or persistent attempts by threat actors to use prompt attacks or other machine learning-focused threats as outlined in the SAIF risk taxonomy,” the report said.
“Rather than engineering tailored prompts, threat actors used more basic measures or publicly available jailbreak prompts in unsuccessful attempts to bypass Gemini’s safety controls,” the report added.
However, this alone should not create a sense of complacency. The capabilities of these AI agents are tempting startups, big organisations, and individual users alike. It is the need of the hour to safeguard the same.
AIM spoke to Omer Yoachimik, a senior product manager at Cloudflare, one of the world’s leading cybersecurity companies. Yoachimik particularly emphasised the criticality of DDoS protection, given that these systems increasingly depend upon real-time access to external services and data.
“With the growing adoption of AI agents across industries, they become attractive targets for attackers aiming to create widespread disruption,” Yoachimik said.
He added that the approach towards AI and cybersecurity should be different from the traditional ones. “While traditional products often focus on static defenses, AI-driven systems demand adaptive, real-time security measures that evolve with emerging attack patterns to ensure resilience in a highly dynamic threat landscape,” he said.
A research study from the University of California, Davis, states that data inside an AI agent system faces risks similar to those concerning confidentiality.
“Malicious applications might manipulate the system by injecting misleading prompts as part of the instruction or manual, altering data inappropriately,” the study added.
It isn’t all about high-stakes cybersecurity threats. For instance, the research quotes an example of an AI agent booking a flight, where it could be misled to favour a less efficient option through false information.
The research also offered a few defence mechanisms against these attacks. It proposes using techniques like sandboxing to restrict an AI agent’s capabilities by limiting its consumption of CPU resources and access to file systems.
Earlier, we covered a detailed story on reports of how prompt injection in Anthropic Claude’s experimental autonomous Computer Use feature compromised its security. In an experiment conducted by Hidden Layer, Computer Use was exposed to prompt injection to delete all the system files via a command in the Unix/Linux environment.
Another study from UC Berkeley introduced methods to mitigate prompt injection. In these methods, the LLM is trained to follow only instructions from the original prompts and ignore any other instructions.
AIM also spoke to Sudipta Biswas, co-founder of Floworks, which has built an AI sales agent called Alisha. He outlined three aspects of focus for security in an AI agent: data held by the organisation building the agent, data accessed by the agent itself, and access authentication.
However, Biswas admitted that providing an AI agent with privileges such as access to a password-protected email account, critical permissions, and access is an open problem and a big opportunity for companies, and developers in cybersecurity.
“We are approaching this with a two-step process,” he added.
“When certain data needs to be entered into a system of records, we ask the users for another round of approval – ‘Hey, is this what you really meant?’,” he added, indicating that this process builds a sense of confidence among the users.
Supreeth Koundinya
Supreeth is an engineering graduate who is curious about the world of artificial intelligence and loves to write stories on how it is solving problems and shaping the future of humanity.
Subscribe to The Belamy: Our Weekly Newsletter
Biggest AI stories, delivered to your inbox every week.
February 5 – 7, 2025 | Nimhans Convention Center, Bangalore
Rising 2025 | DE&I in Tech & AI
Mar 20 and 21, 2025 | 📍 J N Tata Auditorium, Bengaluru
Data Engineering Summit 2025
15-16 May, 2025 | 📍 Taj Yeshwantpur, Bengaluru, India
AI Startups Conference.
April 25 /
Hotel Radisson Blu /
Bangalore, India
17-19 September, 2025 | 📍KTPO, Whitefield, Bangalore, India
MachineCon GCC Summit 2025
19-20th June 2025 | Bangalore
Our Discord Community for AI Ecosystem.