
For years, cybersecurity experts have warned about a future where artificial intelligence doesn’t just help us write emails but also helps hackers break into secure systems. According to a landmark report released this Monday, that future has officially arrived. Researchers at the Google Threat Intelligence Group (GTIG) have identified the first known case of a “zero-day” exploit—a security flaw previously unknown to the software’s creators—that appears to have been developed using an AI model.
Google uncovers the first AI-generated zero-day exploit
The exploit in question targets two-factor authentication (2FA) on a popular web-based administration tool. Google has withheld the name of the affected vendor to give organizations time to apply the patch widely, but the technical details that have been shared are revealing.
The hackers used a Python script to automate the attack. GTIG researchers noted that the code had "all the hallmarks" of AI-generated output: textbook formatting, detailed help menus, and even "hallucinated" data, the fabricated details AI models sometimes produce when trying to be helpful. This suggests that instead of manually hunting for bugs, the attackers used a large language model (LLM) to spot a logic flaw in the code and then write a script to exploit it.
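GTIG has not published the script itself, but the traits it describes are recognizable to anyone who has reviewed LLM-written code. The sketch below is purely hypothetical (every name, endpoint, and field in it is invented, not taken from the report) and illustrates what those hallmarks tend to look like: tidy docstrings on everything, an over-documented help menu, and a plausible-sounding but fabricated API field of the kind models hallucinate.

```python
"""Hypothetical illustration of the LLM-style code traits GTIG described.

This is NOT the actual exploit. All names and fields here are invented.
"""
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Build the command-line interface with a detailed help menu.

    "Textbook" formatting like this docstring, plus verbose help text
    for every flag, is one of the hallmarks researchers pointed to.
    """
    parser = argparse.ArgumentParser(
        description="Check a session token against an admin endpoint.",
        epilog="Example: python check.py --token ABC123 --verbose",
    )
    parser.add_argument("--token", required=True,
                        help="Session token to test against the endpoint.")
    parser.add_argument("--verbose", action="store_true",
                        help="Print each step of the check as it runs.")
    return parser


def parse_response(payload: dict) -> bool:
    """Decide whether 2FA was skipped, trusting a field that may not exist.

    'mfa_bypass_granted' is the kind of conveniently named key an LLM
    hallucinates: real APIs rarely label a security flaw this helpfully.
    """
    return bool(payload.get("mfa_bypass_granted", False))


# Usage sketch: parse invented arguments and inspect a fake response.
args = build_parser().parse_args(["--token", "EXAMPLE"])
bypassed = parse_response({"mfa_bypass_granted": False})
```

None of this is malicious on its own; the point is the *style*. Human-written attack tooling is usually terse and undocumented, so polished structure like the above stands out as a fingerprint.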
A global acceleration
This isn’t an isolated incident; the race to weaponize AI is happening globally. Google’s report details several groups experimenting with these tools, including ones linked to Russia, China, and North Korea. The North Korean group APT45, for instance, has been observed issuing thousands of prompts to analyze known vulnerabilities and refine its attacks.
Meanwhile, a sophisticated piece of Android malware dubbed PromptSpy emerged earlier this year. It uses autonomous AI to monitor user activity and even “replay” screen-unlock credentials such as PINs or patterns.
Defending the perimeter
The rapid rollout of ultra-powerful models, such as Anthropic’s Claude Mythos and OpenAI’s GPT-5.5-Cyber, has created a sense of urgency in both the tech industry and the U.S. government. These models are so capable of finding bugs that their creators initially restricted access to a small group of trusted “defenders.”
As John Hultquist, chief analyst at Google’s threat intelligence arm, pointed out, for every AI-driven attack we catch, there are likely many more already in the wild. The advantage of AI for criminals is simple: speed. It allows them to find, test, and launch attacks at a scale that was previously impossible for human teams.
The silver lining
Despite the risks, there is a glimmer of hope. The same technology that can be used to find bugs can also be used to fix them. In the long term, experts think AI will help “harden” the trillions of lines of code that run our world, making us safer over time. However, we are in a “transitional period” in which the risks are high.
For now, the best defense is to keep your software updated. As this latest discovery shows, the window for defenders to stay ahead is getting smaller every day.
The post Hackers Are Using AI to Build Exploits, Google Security Researchers Find appeared first on Android Headlines.