
There is a version of this conversation that is purely theoretical. A discussion about potential risks, hypothetical breaches and abstract concerns about AI-generated code.
That version is no longer available. The incidents are real, they are documented and they are happening right now to products built exactly the way many of us are building.
This is not a piece designed to frighten you out of vibe coding. I use these tools myself and I think the opportunity is genuine. But I'd be doing you a disservice if I didn't lay out what's actually happening - because some of it is serious, and founders who don't know about it are building on ground that's less stable than they think.
Security researchers have been studying vibe-coded applications at scale, and the findings are consistent across multiple independent studies. According to CSO Online, a December 2025 assessment of five major vibe coding tools - including Claude Code, Cursor and Replit - found a total of 69 vulnerabilities across 15 test applications, with several rated critical. Across the broader research landscape, depending on the study, between 40% and 62% of AI-generated code contains security vulnerabilities. And AI-written code produces flaws at 2.74 times the rate of human-written code.
Those are not comfortable numbers. And they matter because the apps being built with these tools are no longer just weekend experiments. They're handling real customer data, real payments and real personal information.
In February 2026, a social networking platform called Moltbook launched. The founder publicly stated he had not written a single line of code himself - the entire thing was vibe-coded. Three days after launch, security firm Wiz discovered that the database had been configured with full public read and write access. Not because anyone decided that was acceptable. Because the AI scaffolded it that way during development and nobody checked before deployment.
The exposure included 1.5 million authentication tokens and 35,000 user email addresses. The root cause was not a sophisticated attack. It was a misconfigured database that the founder didn't know to look for, built by an AI that didn't flag it as a problem.
The most extensively documented case involves Lovable - a vibe coding platform valued at $6.6 billion with eight million users. What happened there is worth understanding in detail because it illustrates several layers of how things can go wrong.
In February 2026, a backend update at Lovable accidentally re-enabled public access to project chat histories and source code - undoing security protections the company had deliberately built throughout 2025. A security researcher named Matt Palmer discovered the flaw and reported it to Lovable's bug bounty programme on March 3rd. The bug bounty partner classified it as intended behaviour and closed the report without escalating it. The vulnerability remained live.
On April 20th, 76 days after the regression was introduced, Palmer went public. As reported by Computing.co.uk, any free Lovable account could access another user's full source code, database credentials, AI chat histories and customer data - across every project created before November 2025. The researcher demonstrated the severity by accessing a live admin panel for Connected Women in AI, a Danish nonprofit, and pulling real names, job titles, LinkedIn profiles and Stripe customer IDs from its database. Five API calls from a free account.
Lovable's initial public response denied that a breach had occurred and described the exposed data as "intentional behaviour." They then blamed their own documentation, then their bug bounty partner. Only after significant public backlash did a genuine apology follow - and the company's own incident report eventually acknowledged the full chain of failures.
The platform is not uniquely irresponsible. It was building fast, at scale, under enormous commercial pressure. But the incident illustrates a structural problem that goes beyond any single company.
The same week as the Lovable disclosure, two other AI security incidents made headlines.
Vercel, a widely used platform for hosting and deploying web applications, confirmed on April 19th that its systems had been breached. The attack originated with Context.ai, a third-party AI tool used by a Vercel employee - the employee had connected it to their Vercel Google Workspace account, which gave attackers a path into Vercel's internal systems. TechCrunch reported that a threat actor subsequently claimed to be selling the stolen data. Vercel confirmed a limited subset of customers was affected and is working with Google Mandiant and law enforcement on the ongoing investigation.
Separately, CSO Online reported that Bitwarden's CLI tool - used by developers to manage passwords programmatically - was hijacked for approximately 90 minutes on April 22nd. The malicious version specifically targeted credentials for AI coding tools including Claude Code, Cursor and Codex CLI. Bitwarden contained the incident quickly and confirmed that only 334 developers downloaded the compromised version during that window. Vault data was not breached.
Three separate incidents. Three different attack vectors. All within the same seven days. All connected to the AI development ecosystem.
The CTO of Wiz, one of the leading cloud security firms, put it plainly at the RSAC security conference in March 2026: "When someone who is non-technical creates this amazing application, many times they don't think about security and they don't even know what's inside the application because they didn't even create it on their own."
That's the problem in one sentence.
If you are building applications with AI tools - especially anything that handles customer data, user accounts or payments - there are some things worth doing now rather than later.
Understand what your app is actually storing. AI tools build databases quickly and quietly. Do you know what data your application is collecting? Where it's stored? Who can access it by default? These are not rhetorical questions.
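If you're not sure, ask the database directly. Here's a minimal sketch - assuming a Postgres backend (Supabase's default), a DATABASE_URL environment variable and the pg npm package, all illustrative rather than prescriptive - that prints every table and column your app is actually keeping:

```ts
// list-schema.ts - inventory what your database is actually storing.
// Assumes a Postgres backend (Supabase's default), a DATABASE_URL
// environment variable and the `pg` npm package - all illustrative.
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Every column in the public schema: this is the data your app keeps.
  const { rows } = await client.query(`
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position
  `);

  for (const row of rows) {
    console.log(`${row.table_name}.${row.column_name} (${row.data_type})`);
  }

  await client.end();
}

main().catch(console.error);
```

Five minutes spent reading that output against what you tell users you collect is one of the highest-value checks available.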
Row-level security is not optional. If you're using Supabase - which is the default backend for many vibe coding platforms - row-level security needs to be enabled and configured correctly. The Moltbook breach and a significant portion of the Lovable incidents trace directly back to this being skipped. If you don't know what row-level security is, ask your AI tool to explain it and then ask it to check whether it's correctly configured in your project.
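One way to sanity-check it yourself: probe your own project using only the public anon key, exactly as a stranger would. A rough sketch, assuming the @supabase/supabase-js client - "profiles" is a placeholder for whatever table your app actually uses:

```ts
// rls-probe.ts - first-pass check that row-level security is doing its job.
// Uses only the public anon key, exactly as an outsider would. Assumes
// @supabase/supabase-js; "profiles" is a placeholder for your own table.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY! // the key already shipped to every browser
);

async function probe(table: string) {
  // An unauthenticated select. With RLS enabled and sane policies this
  // should error or return zero rows - never other users' data.
  const { data, error } = await supabase.from(table).select("*").limit(5);

  if (error) {
    console.log(`${table}: blocked (${error.message}) - good sign`);
  } else if (!data || data.length === 0) {
    console.log(`${table}: no rows visible anonymously - good sign`);
  } else {
    console.log(`${table}: EXPOSED - anonymous users can read rows`);
  }
}

probe("profiles").catch(console.error);
```

If that script ever prints other users' rows, fix your policies before doing anything else.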
Never put API keys in your code. AI tools frequently embed credentials - database passwords, payment keys, third-party API tokens - directly into the code they generate. If that code is ever visible to anyone else, those credentials are exposed. Use environment variables instead and ask your AI to check your codebase for any hardcoded secrets before you deploy.
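A dedicated scanner such as gitleaks is the proper tool here, but even a crude sweep catches the obvious cases. The sketch below - with a deliberately small, illustrative pattern list - walks your project and flags likely hardcoded credentials before you deploy:

```ts
// secret-scan.ts - a rough pre-deploy sweep for hardcoded credentials.
// The pattern list is deliberately small and illustrative; a dedicated
// scanner such as gitleaks is far more thorough.
import { readdirSync, readFileSync, statSync } from "fs";
import { join } from "path";

const PATTERNS: [string, RegExp][] = [
  ["Stripe live key", /sk_live_[A-Za-z0-9]+/],
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["generic secret", /(api[_-]?key|secret|password)\s*[:=]\s*["'][^"']{8,}["']/i],
];

function walk(dir: string) {
  for (const name of readdirSync(dir)) {
    // Skip dependencies and dotfiles (.env is where secrets SHOULD live).
    if (name === "node_modules" || name.startsWith(".")) continue;
    const full = join(dir, name);
    if (statSync(full).isDirectory()) {
      walk(full);
    } else if (/\.(ts|tsx|js|jsx|json)$/.test(name)) {
      const text = readFileSync(full, "utf8");
      for (const [label, pattern] of PATTERNS) {
        if (pattern.test(text)) console.log(`${full}: possible ${label}`);
      }
    }
  }
}

walk(process.cwd());
```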
Treat "public" settings with suspicion. The Lovable incident happened partly because users didn't fully understand what "public" meant in practice. Before you deploy anything, understand exactly what is and isn't visible to the outside world.
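In Supabase, for instance, storage buckets carry an explicit public flag you can list programmatically. A small sketch, assuming the @supabase/supabase-js client and a server-side service-role key (never expose that key to a browser) - bucket visibility is one example, but the same scrutiny applies to any "public" toggle:

```ts
// bucket-check.ts - list Supabase storage buckets and flag public ones.
// Assumes @supabase/supabase-js and a server-side service-role key;
// never expose that key to a browser.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

async function main() {
  const { data: buckets, error } = await supabase.storage.listBuckets();
  if (error) throw error;

  for (const bucket of buckets ?? []) {
    // A public bucket serves every file in it to anyone with the URL.
    console.log(`${bucket.name}: ${bucket.public ? "PUBLIC" : "private"}`);
  }
}

main().catch(console.error);
```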
Ask your AI to review its own code for security issues. After building a feature, follow up in the same conversation: "Now act as a security engineer. Review the code you just wrote and identify any vulnerabilities." It's worth doing - but be aware that prompting alone is not a reliable security solution. A Carnegie Mellon University study found that even giving AI agents explicit security instructions didn't consistently reduce flaws. Think of it as a first pass, not a guarantee. The only truly reliable approach is independent security testing of your running application.
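For a crude first version of that independent testing, hit your deployed app from the outside with no credentials at all - the attacker's view. In the sketch below the base URL and route list are placeholders; substitute the endpoints your app actually exposes (requires Node 18+ for the built-in fetch):

```ts
// endpoint-probe.ts - a crude external check against your deployed app.
// BASE_URL and the route list are placeholders; substitute the endpoints
// your app actually exposes. Requires Node 18+ for built-in fetch.
const BASE_URL = "https://your-app.example.com";
const ROUTES = ["/api/users", "/api/admin", "/api/orders"];

async function main() {
  for (const route of ROUTES) {
    // Deliberately send no cookies or tokens - the attacker's view.
    const res = await fetch(BASE_URL + route);
    const body = await res.text();
    console.log(`${route}: HTTP ${res.status}, ${body.length} bytes`);
    if (res.ok && body.length > 0) {
      console.log("  ^ returned content to an unauthenticated request - investigate");
    }
  }
}

main().catch(console.error);
```

Anything that returns real content to an unauthenticated request deserves an explanation before launch.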
Be careful what third-party tools you connect to your accounts. The Vercel breach happened because an employee connected a third-party AI tool to their work Google Workspace account with broad permissions. Before you connect any AI tool to your business accounts, check what access you're granting and whether it's necessary.
Vibe coding is a genuine leap forward in what founders can build. The speed, the accessibility, the ability to turn an idea into a working product without a development team - these things are real.
But "it works" and "it's safe" are two very different standards. The tools are optimised to make things work. Security is something you have to bring to the process yourself - or at least know to ask for.
The incidents documented here are not fringe cases. They happened on major platforms, to real users, with real consequences. The good news is that most of the mitigations are not complicated. They just require knowing to look.
Building something and not sure if it's secure? Bring it to Vibe Coding Lab - a free community of founders who are figuring this out together.