Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images
- Lovable’s recent data mishap is a reminder of the security risks of vibe coding.
- A security flaw in Lovable’s system allowed access to users’ data and sparked online backlash.
- Lovable competes with other startups and large frontier labs that are building AI coding tools.
Lovable’s recent security fumble just gave pro software engineers one more reason to be wary of vibe coding.
On Monday, an X user called out Lovable, claiming the Swedish AI-coding startup had suffered a mass data breach “affecting every project created before November 2025.”
The individual, who goes by the username “Impulsive” on X, said that they were able to access another user’s code, AI chat histories, and customer data through their free Lovable account.
“Nvidia, Microsoft, Uber, and Spotify employees all have accounts. The bug was reported 48 days ago. It’s not fixed. They marked it as duplicate and left it open,” they wrote.
In response, Lovable denied there was a data breach and said that seeing public projects’ code was a deliberate decision.
After backlash on X over the statement’s clarity and questions about how users should secure their data going forward, Lovable shared a second statement.
The company explained that it allowed others to view “public” projects “to make it easy to explore what others were building.” It added that since December, it has switched off public visibility by default across all subscription tiers.
In the second statement, Lovable also acknowledged the security error that the original X post first flagged.
“Unfortunately, in February, while unifying permissions in our backend, we accidentally re-enabled access to chats on public projects,” Lovable wrote. “Upon learning this, we immediately reverted the change to make all public projects’ chats private again. We appreciate the researchers who uncovered this.”
Some users said they appreciated Lovable’s transparency, while others said the company’s first message was akin to “gaslighting.”
Tom Van de Wiele, founder of security firm Hacker Minded, told Business Insider the incident is “another unfortunate example of lacking secure defaults and a failure to threat model for the automated and AI age.”
He added that relying on users to understand what’s public and what’s not “always falls flat eventually.”
Jake Moore, global cybersecurity advisor at ESET, said the debate over whether the incident qualifies as a breach risks missing the bigger issue.
“It isn’t really a traditional breach but it’s also not harmless either,” he told Business Insider. “It’s essentially more of a design flaw, seeing as data was exposed rather than hacked.”
“When a company argues semantics instead of impact, it usually means security wasn’t baked in from day one, which is the reality of what caused this,” he added.
A trade-off
In general, professional developers discourage overreliance on AI because it can produce messy, untested code. They say vibe coding also comes with information security concerns, including the risk of exposing company data.
Van de Wiele said companies building these tools often face a trade-off between making products easy to use and keeping them secure — but that doesn’t excuse weak protections.
“Companies are often caught between a rock and a hard place, wanting to lower friction for new users while trying to protect against data scrapers,” he said, adding that there are real consequences for users whose information may be scraped and resold.
Moore said vibe-coding tools can make these risks worse if users don’t fully understand what’s being exposed.
“Vibe coding continues to accelerate bad defaults and users need to be explicitly aware of this and have fail-safes and backups in place,” he said.
That dynamic could make incidents like this more common, he suggested.
“If users can accidentally expose sensitive data through AI coding defaults, attackers don’t need to hack anything at all,” Moore said.
String of security mishaps
The Lovable error comes after two other major data leaks from AI companies in the last few weeks.
In late March, Anthropic mistakenly leaked an archive of nearly 2,000 files and 500,000 lines of code. Anthropic said at the time that “no sensitive customer data or credentials were involved or exposed.”
Earlier this week, website hosting platform Vercel said it had identified an incident that gave unauthorized users access to certain internal Vercel systems.
Vercel said that the incident started with a compromise of Context.ai, a third-party tool used by a Vercel employee. The attacker used that access to take over the employee’s Google Workspace account, which also gave them access to some Vercel environments.
“We are actively investigating, and we have engaged incident response experts to help investigate and remediate. We have notified law enforcement and will update this page as the investigation progresses,” Vercel said in a statement on Monday.
On a February podcast, Andreessen Horowitz general partner Anish Acharya said that companies shouldn’t use AI-assisted coding for every part of their business because relying on AI to write code carries risks that aren’t always worth taking.