
We’ve all heard that AI could one day take over our jobs, and we’re already seeing it step into the roles of writers and graphic designers. But what if AI could replace bug hunters? Turns out it can. According to Google, its AI bug hunter Big Sleep has reported its first-ever vulnerabilities.
Google’s AI bug hunter finds vulnerabilities
This is according to Heather Adkins, Google’s vice president of security, who shared the news in a post on X. For those unfamiliar, Big Sleep is Google’s LLM-based AI bug hunter. It was developed by Google DeepMind together with the hackers at Project Zero, another Google team dedicated to finding security flaws and vulnerabilities.
That being said, we don’t know how severe these vulnerabilities are or how big an impact they could have, because they haven’t been fixed yet. It’s standard practice, even at Project Zero, for Google to give companies a grace period to fix issues before disclosing them publicly.
However, the takeaway here is the fact that AI managed to find these vulnerabilities. Speaking to TechCrunch, Google spokesperson Kimberly Samra said, “To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention.”
Taking the good with the bad
Having an AI capable of discovering security flaws and vulnerabilities is obviously a good thing. Unlike humans, AI doesn’t sleep, which means it can constantly hunt for flaws before they become a problem. That being said, Google’s Big Sleep isn’t the first of its kind; other AI tools like RunSybil and XBOW already exist.
In fact, XBOW made headlines when it reached the top of a US leaderboard on the HackerOne bug bounty platform. But before we get too excited, there are still plenty of kinks to work out. There have been several reports of AI tools filing bug reports that turned out to be hallucinations.
Hallucinations are a known AI issue in which the AI simply makes something up, pulling “information” completely out of thin air. As Vlad Ionescu, co-founder and chief technology officer at RunSybil, told TechCrunch, “That’s the problem people are running into, is we’re getting a lot of stuff that looks like gold, but it’s actually just crap.”