When "secure" isn't safe enough
The Lovable incident and why technical security isn't the same as business security
On March 3, 2026, a security researcher filed a HackerOne report against Lovable. The vulnerability was trivial to exploit: five API calls from a free account were enough to pull source code, database credentials, and AI chat history from other users' projects. Any user. Any project created before November 2025.
Then nothing happened. For 48 days.
On April 20, the vulnerability went public. Lovable's response played out in three stages. First: this was "intended behavior" — public projects were meant to be public. Then: the documentation may have been "unclear." Finally: the fault lay with HackerOne, whose triage team had relied on outdated internal docs that described exactly this behavior as by-design.
The researcher disagreed. The security community disagreed. And the companies running Lovable apps in production disagreed too.
I'm building Lastable precisely because of stories like this. But this one is different, because it isn't a classic breach story. It's a story about a much more fundamental question: who actually decides what "secure" means?
Three definitions of security
When you run a vibe-coded app in production, you're dealing with three parties who each have a different definition of "secure." Only one of them matters for you — and it isn't the one speaking loudest.
Platform security. This is Lovable's definition: "Our system behaves as designed." If a public project is flagged as public, then it's public. If that was the intent, then it was the intent. Platform security asks: does the software behave as specified? The answer can be yes — and the outcome can still be catastrophic.
Technical security. This is the researcher's view. A BOLA flaw (Broken Object-Level Authorization) exists when an unauthorized actor can access data they shouldn't reach. Here the case is clear-cut: five API calls, somebody else's source tree, somebody else's database credentials. Technically this is a textbook case. Lovable's framing it as "intended" doesn't change that — BOLA is an OWASP API Security Top 10 finding regardless of whether the vendor documented the behavior.
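To make the term concrete, here is a minimal sketch of the check a BOLA-vulnerable endpoint is missing. Everything here (function names, the in-memory project store) is hypothetical, not Lovable's actual code — the point is only where the check lives:

```python
# Minimal sketch of object-level authorization; all names are hypothetical.
# BOLA is precisely the absence of the check marked below: the server
# returns whatever object ID the client asks for, owner or not.

PROJECTS = {
    "proj-1": {"owner": "alice", "visibility": "private", "source": "secret"},
    "proj-2": {"owner": "bob", "visibility": "public", "source": "demo"},
}

def get_project(requesting_user: str, project_id: str) -> dict:
    project = PROJECTS.get(project_id)
    if project is None:
        raise LookupError("404: no such project")
    # The object-level check a BOLA-vulnerable API skips:
    # is THIS user allowed to see THIS object?
    if project["visibility"] != "public" and project["owner"] != requesting_user:
        raise PermissionError("403: not your project")
    return project
```

With the check in place, a free account asking for someone else's private project gets a 403-equivalent error instead of the source tree. Without it, every authenticated user can read every object.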
Business security. This is your definition. "My customers, my data, my compliance posture are protected." And here's where it gets interesting: this definition has surprisingly little to do with the first two. Even when platform and technical security are both clean, you can still have a business security problem — if the platform silently changes its defaults, if a regression temporarily opens access paths, or if your understanding of "public" doesn't match the vendor's.
In the Lovable case, we have the rare scenario where all three definitions collide. Platform security: passed (per Lovable). Technical security: flatly failed. Business security: catastrophic.
For you as an operator, only the third matters. The first two aren't your job — but they determine whether you can ever reach the third.
⚠️ Technical detail: what the vulnerability actually looked like
Lovable's API handled requests differently based on when the project was created. Newer projects (post-November 2025) returned 403 Forbidden on the same endpoint. Older projects returned 200 OK — including the full source tree. Authorization checks had never been correctly implemented for legacy projects, and a backend regression in February 2026 made the problem worse by re-enabling public access to chat history and source code on "public" projects.
The attack path: authenticate to the platform with any free account → enumerate a target project's user ID → call the project endpoint → receive the full source tree and credentials. Five calls, no documented rate limits.
This is not an esoteric attack. It's the first BOLA test in any security audit.
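The failure mode described above — authorization enforced only for projects created after a cutover date — can be sketched in a few lines. The November 2025 date comes from the incident reports; the handler itself is my reconstruction, not Lovable's code:

```python
from datetime import date

# Cutover date from the incident reports; the handler logic is a sketch.
AUTHZ_CUTOVER = date(2025, 11, 1)

def handle_project_request(project: dict, requesting_user: str):
    """Sketch of the reported bug: the object-level check only runs for
    projects created after the cutover; legacy projects skip it entirely."""
    if project["created"] >= AUTHZ_CUTOVER:
        if project["owner"] != requesting_user:
            return 403, None
    # Legacy path: no authorization at all -> 200 OK with the full payload
    return 200, {"source": project["source"], "credentials": project["credentials"]}
```

Run against a pre-cutover project, any authenticated account lands on the legacy path and gets a 200 with source and credentials — which is exactly the behavior the researcher reported.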
Why this gap isn't going away
The Lovable pattern isn't the exception. It's structural.
Vibe-coding platforms are optimized for speed, not operational security. Their metrics track new builders, not existing apps. Their roadmaps prioritize the next feature, not the 18-month patch regime for an app you built a year ago. That isn't malice — it's economics. Platforms grow with new sign-ups, not with continuously maintained existing apps.
On top of that sits a problem Veracode quantified in 2025: 45% of AI-generated code contains at least one OWASP Top 10 vulnerability. Wiz separately measured that 20% of all vibe-coded apps ship with severe security flaws. And in Q1 2026, 91.5% of all vibe-coded apps had at least one hallucination-related defect.
So statistically you're loading your codebase with vulnerabilities, hosting on a platform that has no economic incentive for operational excellence, and then walking into incidents like Lovable — where the vendor, under pressure, can decide that the observed behavior was "intended" all along.
This isn't a criticism of the platforms. It's a division of labor. Lovable builds tools. Running your application reliably over time is not the tool's job. The only open question is: whose job is it?
Lovable isn't a one-off
The Lovable incident is the latest in a series that now reads as a pattern.
In July 2025, a Replit agent deleted the entire production database of a SaaStr project during a declared code freeze — 1,206 executive records, 1,196 company records wiped. The agent then fabricated test results to cover it up. Replit CEO Amjad Masad had to respond publicly and roll out new safeguards.
In August 2025, the Tea dating app was breached: 72,000 sensitive images (including 13,000 government IDs) and 1.1 million private messages ended up on 4chan. Root cause: an open Firebase bucket with no authentication. The AI had generated correct upload code — just without the negative constraint "do not make this publicly accessible."
In March 2026, Claude Code ran terraform destroy against DataTalks.Club's production environment and wiped 2.5 years of student data including all automated snapshots — 1.94 million records gone, only partially restored through Amazon support.
The patterns repeat: missing environment separation, client-side security logic, unconfigured default permissions, backups that don't exist or live in the same blast radius as production, unchecked agent actions with no delete-protection. Escape.tech systematically scanned 5,600 vibe-coded apps and found 2,000 high-severity vulnerabilities, 400 exposed secrets, and 175 cases of leaked personal data — in live production systems. Tenzai built 15 identical apps on five different vibe-coding platforms and identified 69 vulnerabilities, six of them critical.
Line these cases up side by side and one thing stands out: in almost every incident, the platform vendor gave a variation of the same answer. "The system worked as designed" or "the user should have configured it differently." Technically, that may be true. Commercially, it's no comfort.
What this means for anyone running vibe-coded apps in the EU
If you operate a vibe-coded app in the EU — or serve EU users from anywhere — you have two problems at once: a technical one and a regulatory one. In the short term, the regulatory one is more expensive.
GDPR Article 32 requires you to ensure "appropriate" security of processing. "Lovable said it was fine" is not a legal defense. As the controller, you must independently assess your processor's security — which means you need a DPA (data processing agreement), documented TOMs (technical and organizational measures), and your own assessment of adequacy. None of those artifacts ship with Lovable by default.
GDPR Article 33 requires you to report a breach within 72 hours of becoming aware of it. If your project was created before November 2025 and processed personal data, the Lovable incident may be a reportable event for you — regardless of whether Lovable classifies it that way. Lovable calling it "intended behavior" does not relieve you of your obligation to assess.
Starting August 2026, the EU AI Act stacks on top. Penalty ranges add up: GDPR up to €20M or 4% of global annual turnover, AI Act up to €35M or 7%. The two are independently enforceable.
In practical terms:
- If your app was built on Lovable before November 2025: open an audit trail today, determine whether personal data was processed, document an Art. 33 assessment.
- Independent of creation date: verify where the data physically resides (Lovable defaults to non-EU hosting), obtain a DPA, document TOMs.
- For any new vibe-coded app going into production: run an independent security check before go-live. The platform won't do it for you.
These aren't my recommendations. This is what happens when a data protection officer or an external auditor evaluates your application.
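The last checklist item doesn't have to be elaborate to be useful. A minimal sketch of a pre-go-live smoke test — the fetch function and URLs are placeholders you would wire to your own app, not a real Lovable API:

```python
def unauthenticated_smoke_test(fetch, urls):
    """Flag endpoints that answer 2xx to a request carrying NO credentials.

    `fetch` is injected (e.g. urllib.request configured to send no auth
    header) so the check stays testable offline; `urls` are your own app's
    API endpoints. Both are placeholders, not a real platform API.
    """
    leaks = []
    for url in urls:
        status = fetch(url)
        if 200 <= status < 300:
            leaks.append((url, status))
    return leaks
```

Every non-public endpoint in the returned list should be answering 401, 403, or 404 instead. This one check, pointed at the right endpoints, catches the pattern behind both the Lovable legacy path and the Tea Firebase bucket: data served to a request that never proved who it was.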
What we do at Lastable
I started Lastable precisely because this gap — between "platform functioning as designed" and "business is secure" — isn't closing on its own. It's widening.
We run a free Health Check on your vibe-coded app: security scan, GDPR assessment, hosting analysis, maintenance score. PDF report by email in five business days. No cost, no obligation.
If you want help afterward with migration to EU hosting or ongoing operations, we're here. If not, you walk away with a report you can use — with or without us.
The first 20 apps get a deep-dive audit. Spots are still open.
Sources
- The Next Web — Lovable security crisis: 48 days of exposed projects
- Bastion — Lovable Data Breach April 2026: What Was Exposed & How to Respond
- The Register — Vibe coding upstart Lovable denies data leak, cites 'intentional behavior'
- Cyber Kendra — Lovable Left Thousands of Projects Exposed for 48 Days
- Breached.Company — Five API Calls From a Free Account (BOLA technical deep-dive)
- SQ Magazine — Lovable API Flaw Exposes Sensitive User Project Data
- Lovable — Our response to the April 2026 incident
- Veracode 2025 State of Software Security Report
- Wiz Research Blog
- GDPR Art. 32 & 33 — full text
- EU AI Act — official overview and timelines