The 5 security mistakes almost every vibe-coded app shares
Vibe coding has produced a set of recurring security patterns so persistent they almost qualify as platform features by now. I've spent the last few months cataloging publicly documented incidents — from Moltbook through Tea, Enrichlead, and DataTalks all the way to the Lovable incident in April — and in almost every case, the causes trace back to a very small list.
These five mistakes aren't exotic. They aren't new. They aren't even specific to AI-generated code. But they show up in vibe-coded apps so reliably that they deserve a genre of their own.
If you're running a vibe-coded app in production — or you're about to put one there — go through this list. Each item includes a 60-second self-test. If even one comes back red, you have work ahead of you.
Mistake 1 — Default-public databases
Most vibe-coded apps use Supabase, Firebase, or MongoDB Atlas as their backend. That's a reasonable choice — these platforms are well documented and ready to go in minutes. Lovable, Replit, and Bolt recommend them by default.
The problem: their default configurations are designed to make starting easy, not to protect your data. In Supabase, Row-Level Security (RLS) is disabled on new tables by default. In Firebase, default development security rules typically allow any authenticated access — meaning anyone who creates a free Firebase account can read and write. On MongoDB Atlas Free-Tier clusters, "no auth required" was the default for years.
None of these defaults is a documentation failure — they're described explicitly. But when you're vibe coding, you inherit the default because your AI tool inherited it, and neither of you is in the mood, right then, to design a permission strategy. That's the vibe-coding trap in miniature.
Moltbook died from this exact thing: RLS was never enabled, the anon key sat in the frontend, anyone could do anything. Three days live, then over.
✓ 60-second self-test — database defaults
- Open your Supabase / Firebase / MongoDB dashboard. Is RLS or are Security Rules active on every production table?
- If yes: have you written policies that check real conditions — not `USING (true)` or `allow read: if true`?
- Open your app's public API in a private browser window and try to access user data without logging in. If it works, you've got the bug.
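If the first check comes back red, the shape of the fix in Supabase looks roughly like this — a sketch, not a drop-in migration, assuming a `notes` table whose rows belong to users via a `user_id` column:

```sql
-- Sketch: table and column names are assumptions; adapt to your schema.
alter table notes enable row level security;

-- A policy that checks a real condition (the row's owner),
-- not USING (true):
create policy "owners read their own notes"
  on notes for select
  using (auth.uid() = user_id);
```

The same idea applies to `insert`, `update`, and `delete` policies; `auth.uid()` is Supabase's helper for the authenticated user's ID.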
Mistake 2 — Security logic in the frontend
AI tools generate code that works. They rarely generate code that's protected against tampering.
The classic pattern: your app has a premium section. The frontend checks "does the user have a premium subscription?" and if yes, shows the section. The backend never repeats the check — the server delivers the data to anyone who asks for it.
The same goes for authorization: if your "is this user allowed to do that?" logic only lives in the JavaScript bundle and isn't enforced server-side on every endpoint, I can call your API without proper permission. Open the browser console, write a `fetch()`, done.
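What "enforced server-side" means in practice — a minimal sketch in plain Node, where `assertPremium` and `user.plan` are illustrative names, not your schema:

```javascript
// Sketch: the premium check runs on the server for EVERY request,
// regardless of what the frontend showed or hid.
// `user` is assumed to come from a verified session token.
function assertPremium(user) {
  if (!user || user.plan !== "premium") {
    const err = new Error("Forbidden");
    err.status = 403;
    throw err; // the server refuses before any data is read
  }
}

function premiumReportHandler(user) {
  assertPremium(user); // the backend repeats the frontend's check
  return { report: "premium-only data" };
}

// A free user calling the API directly (curl, fetch from the console)
// gets a 403 - hiding the button client-side was never the control.
try {
  premiumReportHandler({ plan: "free" });
} catch (e) {
  console.log(e.status); // 403
}
```

The point isn't the three lines of logic — it's *where* they run. The frontend check is UX; only the server-side check is security.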
Enrichlead was three days live before users bypassed the paywall through the browser console. The paywall logic lived entirely in the frontend. The Lovable incident in April followed the same pattern at platform level: the authorization check wasn't enforced server-side for legacy projects. Five API calls were enough to access another account's source tree.
The genre is rounded out by API keys in the frontend. If your live Stripe key, your OpenAI credentials, or your service role key end up in the frontend bundle (and that happens almost by default in vibe-coded apps), then anyone willing to look at the bundle has them too.
✓ 60-second self-test — frontend logic
- Open your app in a private browser window, hit F12 → Network tab.
- Click a protected feature and look at the API request. Copy the URL and run it without an authentication header (e.g., with `curl`). If the data comes back, you've got the bug.
- Search the JS bundle for `sk_live_`, `AKIA`, `AIza`, `ghp_`, or `service_role`. Hits = problem.
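The bundle search from the last step, as a small script — the demo string is a stand-in; in practice you'd point it at your real build output (e.g., `dist/assets/index.js`):

```javascript
// Sketch: scan a built JS bundle for secret-looking prefixes.
// The regex mirrors the patterns above; extend it for your providers.
const SECRET_PATTERN = /sk_live_|AKIA|AIza|ghp_|service_role/;

function findLeakedSecrets(bundleSource) {
  return bundleSource
    .split("\n")
    .map((text, i) => ({ line: i + 1, text }))
    .filter(({ text }) => SECRET_PATTERN.test(text));
}

// Demo on an inline stand-in for a real bundle file
// (in practice: fs.readFileSync("dist/assets/index.js", "utf8")).
const demoBundle = 'const a = 1;\nconst stripeKey = "sk_live_abc123";';
console.log(findLeakedSecrets(demoBundle).length); // 1
```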
Mistake 3 — No separation between test and production
Vibe-coding platforms give you a single environment by default. There's no Dev, Staging, and Production — there's just "the Lovable app you're building." When you make it live, that exact app is live. When you change it, you're changing the live version.
The problem shows up in two ways.
First: data. When you create test users while building, insert test content, run test payments — all of it lands in the same database as your live users. When you launch, the test data needs to be cleaned out without breaking the live data. This rarely happens cleanly in practice.
Second: AI agents with write access. When you give Claude Code, Cursor, or Replit agents access to your repository and infrastructure, the agent often has access to production resources. At DataTalks.Club in March 2026, Claude Code ran a `terraform destroy` that wiped 2.5 years of production data including all automated snapshots — 1.94 million records, only partially recovered through Amazon support. At SaaStr, a Replit agent wiped the entire database during a declared code freeze, then fabricated test results to "cover up" the incident. Agents like to please.
What both cases have in common: no real separation between agent playground and production, no delete protection, backups in the same cloud region and therefore the same blast radius.
✓ 60-second self-test — environments
- Does your app have separate databases for test and production?
- If not: can you accidentally delete the live DB by clicking "Reset DB" in the editor?
- Do you have backups that are NOT in the same cloud region and not under the same credentials as production?
- Does your AI agent have write permission to production — and if so, is there delete protection?
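One cheap piece of delete protection you can add yourself: a guard that every destructive script calls before touching anything. A sketch — the `APP_ENV` variable name is an assumption; use whatever your deployment sets:

```javascript
// Sketch: refuse to run destructive operations against production.
// Wire this into migration / reset scripts an AI agent might run.
function assertNotProduction(env = process.env.APP_ENV) {
  if (env === "production") {
    throw new Error(
      "Refusing destructive operation - APP_ENV is production. " +
      "Point the script at a staging environment instead."
    );
  }
}

// What an agent holding production credentials would hit:
try {
  assertNotProduction("production");
} catch (e) {
  console.log(e.message.split(" - ")[0]); // "Refusing destructive operation"
}
```

It's not a substitute for separate environments and off-region backups, but it turns "one wrong command" into "one wrong command plus a deliberately bypassed guard."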
Mistake 4 — Storage buckets without auth
If your app stores images, documents, or files, it's hooked into a storage bucket — Firebase Storage, Supabase Storage, Cloudflare R2, or Amazon S3. These buckets have their own permission layer, separate from database auth. The AI almost always forgets about it.
When the AI agent writes your upload code, it writes the file to the bucket and configures the bucket permissions to "public read." Because that works during testing. Because no one during testing asks: "do all users need to see all files?"
Tea, a dating app specifically for women, ended up with 72,000 sensitive images — including 13,000 government IDs — and 1.1 million private messages on 4chan, because the Firebase bucket was public. The AI had written the upload code correctly — it just hadn't been given the negative constraint "don't make it public" in the prompt. The incident has become the textbook example for exactly this pattern.
On top of that: storage URLs are often enumerable. If your file lives at `https://your-bucket.firebase.app/uploads/user-1234/profile.jpg`, the URLs for user-1235, user-1236, and so on are guessable. Even if the bucket doesn't openly list its contents, I can derive them.
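How cheap that enumeration is — a sketch that derives neighboring URLs from a sequential-ID pattern like the illustrative one above:

```javascript
// Sketch: sequential IDs make "private" URLs guessable.
// The user-<number> path scheme is the illustrative one from this section.
function neighborUrls(url, count = 3) {
  return Array.from({ length: count }, (_, i) =>
    url.replace(/user-(\d+)/, (match, id) => `user-${Number(id) + i + 1}`)
  );
}

const mine = "https://your-bucket.firebase.app/uploads/user-1234/profile.jpg";
console.log(neighborUrls(mine));
// user-1235, user-1236, user-1237 - other people's files, one GET away
```

The fixes are the usual ones: require auth on every read, and use unguessable object keys (long random IDs or short-lived signed URLs) instead of sequential paths.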
✓ 60-second self-test — storage
- Upload a file to your app. Copy the file's URL.
- Open the URL in a private browser window without logging in. If the file loads: bug.
- Try changing a character in the URL (file ID or user-ID path). If other files load: bigger bug.
Mistake 5 — Frozen dependencies
Vibe-coding platforms generate your code against a snapshot of libraries taken at build time. Lovable, Replit, and Bolt freeze `package.json` / `requirements.txt` at whatever versions the AI agent thought were right in that moment.
No one updates those dependencies afterward. There's no platform-side mechanism that runs `npm audit` or `pip-audit` on your code monthly and warns you when one of your libraries gets a known CVE. There's no automatic pull request suggesting a security update. There isn't even an email.
So your vibe-coded app starts aging the moment it's online. The Veracode 2025 State of Software Security Report found that 45% of AI-generated code contains at least one OWASP Top 10 vulnerability. A Q1 2026 study showed that 91.5% of all vibe-coded apps contain at least one hallucination-related defect. Both are findings about the code at creation time. And code libraries don't age well — every version bump you skip is one more chance that the vibe-coded app is exposed to a published exploit.
Georgia Tech's Vibe Security Radar registered 35 new CVEs caused directly by AI-generated code in March 2026 alone. In January, it was six.
✓ 60-second self-test — dependencies
- When were dependencies last updated in your repository? (Check the git log or the mtime on `package-lock.json` / `requirements.txt`.)
- If the answer is "at build time": you're betting that no critical CVE has hit any of your libraries in the months since.
- List your five most-important direct dependencies and Google each one plus "CVE 2026". If anything comes back, at least you know where you stand today.
What these five have in common
At first glance, the mistakes have little to do with each other: a database toggle, a frontend pattern, an environment concept, a bucket default, and a patching regime. But they all follow the same structural logic.
The platform gives you tools optimized for "easy start" by default. The AI agent generates code optimized for "works right now" by default. You're the only layer left between "it works" and "it's safe."
That's a position no one walks into voluntarily. You vibe coded because you didn't want to spend two weeks learning database security, cloud permissions, and dependency management before testing your idea. That exact desire is now the risk.
The simple truth: none of the five mistakes is hard to fix. But each requires that someone actively check for it. The platforms don't. The AI agents don't. The default toolset for vibe-coded apps has no answer to this specific problem.
What we do at Lastable
We're building exactly that layer — the one between "AI-generated code" and "safe production operation." Our Full Technical Health Check inspects all five mistakes from this list and gives you a clear report on what you can fix yourself and where we'd help. The free Quickscan does a non-invasive look from the outside first to give you an initial read.
No account needed, no hidden costs, no automatic sales call.
If the Quickscan flags red, you have two options: fix it yourself, or talk to us. Both are fine — and either way, vibe coding stays fun, even down the road.
Sources
- Lastable Blog — Three Days to Shutdown: what Moltbook teaches about RLS
- Lastable Blog — When 'secure' isn't safe enough: the Lovable incident
- The Next Web — Lovable security crisis: 48 days of exposed projects
- Fanatical Futurist — Moltbook vibe coded security breach (February 2026)
- NPR — Tea encouraged its users to spill. Then the app's data got leaked.
- prodmoh.com — The $10M Mistake: Deconstructing the Tea App & Enrichlead Disasters
- Tom's Hardware — Claude Code deletes developers' production setup (DataTalks.Club)
- Fortune — AI-powered coding tool wiped out a software company's database (SaaStr/Replit)
- Veracode 2025 State of Software Security Report — 45% of AI-generated code
- Towards Data Science — The Reality of Vibe Coding: AI Agents and the Security Debt Crisis
- Supabase Docs — Row Level Security
- Firebase Docs — Storage Security Rules