I Built an App with GPT-4 — Here's What VibeSafe Found
By Justin Mendez on 5/7/2025
The AI Coding Revolution: Dream to Deployed in Minutes?
We're living in an incredible era for developers. AI tools like GPT-4, GitHub Copilot, and Cursor are transforming our workflows, allowing us to go from idea to functional code at speeds previously unimaginable. It's like having a super-powered coding assistant available 24/7.
To put this to the test, I decided on a classic experiment: Could GPT-4 build a complete, albeit simple, application for me with minimal intervention? I tasked it with creating a To-Do List application – a common enough project, but one with a frontend, a backend API, and data persistence (even if just in-memory for this test).
GPT-4 rose to the challenge, churning out Node.js/Express code for the API (CRUD operations for to-do items) and React components for the frontend. With a few minor tweaks to connect the pieces, I had a working app in what felt like no time!
But as the initial excitement settled, a crucial question emerged: Is this AI-generated code actually secure? It functions, yes, but what about hidden vulnerabilities? That's where VibeSafe entered the picture. I decided to use our own tool to audit the code GPT-4 had so diligently prepared.
The GPT-4 App-Building Blitz (A Quick Recap)
My interaction with GPT-4 was straightforward. I provided prompts like:
- "Generate a Node.js Express backend for a To-Do list. It should have API endpoints for creating, reading, updating, and deleting to-do items. Store items in an in-memory array for now."
- "Create React components for a To-Do list app: one to display items, one to add items, and a main app component to manage state."
- "Show me how to fetch and display to-do items from the backend API in the React app."
GPT-4 delivered. The code was largely coherent, and the app performed its basic functions. The speed was undeniable.
Showtime: vibesafe scan on AI-Generated Code
With the GPT-4-generated project (gpt4-todo-app/) ready, it was time for the moment of truth. I navigated to the project root in my terminal and ran the command that's becoming second nature:
cd path/to/gpt4-todo-app
vibesafe scan
I'll admit, I was curious. Would VibeSafe find anything? Or had GPT-4 learned enough from its vast training data to produce secure-by-default code for such a simple application?
VibeSafe's Verdict: The Good, The Bad, and The Fixable
The scan completed in seconds. And yes, VibeSafe had some findings. Here are the highlights:
Finding 1: The Forgotten API Key
- What VibeSafe Found: Deep within a (hypothetical) configuration file for connecting to a future, more advanced version of the to-do service, VibeSafe flagged a hardcoded placeholder API key.
- The Code Snippet (Illustrative config.js generated by AI):

// TODO: Replace with a real analytics service in V2
const ANALYTICS_API_KEY = "YOUR_API_KEY_HERE_DO_NOT_COMMIT";
// ... rest of the config
- Why It's a Risk: Even placeholder keys, if they follow a known format or if a developer later replaces it and forgets to remove the placeholder from version control, can become a security risk. Real keys committed to repositories are a primary target for attackers.
- VibeSafe Alert (Conceptual):
CRITICAL: Hardcoded placeholder API Key found in config.js:2
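A safer pattern here, and the one the remediation notes point toward, is reading the key from an environment variable and failing fast if it's missing. Here's a minimal sketch of what the config could look like instead; loadAnalyticsKey is an illustrative name of mine, not something GPT-4 or VibeSafe produced:

```javascript
// Sketch: read the key from the environment instead of hardcoding it.
// loadAnalyticsKey is a hypothetical helper, not from the generated app.
function loadAnalyticsKey(env = process.env) {
  const key = env.ANALYTICS_API_KEY;
  if (!key) {
    // Fail fast at startup so a missing key can't slip into production silently.
    throw new Error('ANALYTICS_API_KEY is not set');
  }
  return key;
}
```

The key now lives outside version control (in the deployment environment or a gitignored .env file), so there's no placeholder to forget.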
Finding 2: The Overly Trusting API Endpoint
- What VibeSafe Found: The POST /api/todos endpoint, which GPT-4 created for adding new to-do items, lacked any input validation for the text field of the to-do item.
- The Code Snippet (Illustrative Express route):

// GPT-4 generated snippet from routes/todos.js
app.post('/api/todos', (req, res) => {
  const { text } = req.body; // No validation or sanitization!
  const newTodo = { id: Date.now(), text, completed: false };
  todos.push(newTodo);
  res.status(201).json(newTodo);
});

- Why It's a Risk: Without validating or sanitizing text, an attacker could send an empty string, an excessively long string (potentially causing performance issues or crashes), or even malicious scripts if this data were ever rendered insecurely on the frontend (see next point!).
- VibeSafe Alert (Conceptual):
MEDIUM: Missing input validation for 'req.body.text' in POST /api/todos (routes/todos.js:3)
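To make the missing check concrete, here's a minimal hand-rolled sketch of the validation the route should run before accepting the item (the remediation section uses express-validator instead; validateTodoText and the 500-character limit are my assumptions, not part of the generated app):

```javascript
// Sketch: validate the incoming to-do text before accepting it.
// The length limit is an assumed value, not something GPT-4 generated.
const MAX_TODO_LENGTH = 500;

function validateTodoText(text) {
  const errors = [];
  if (typeof text !== 'string' || text.trim().length === 0) {
    errors.push('text must be a non-empty string');
  } else if (text.length > MAX_TODO_LENGTH) {
    errors.push(`text must be at most ${MAX_TODO_LENGTH} characters`);
  }
  return errors; // an empty array means the input is acceptable
}
```

The route would call this first and respond with a 400 and the error list whenever the array is non-empty, instead of blindly pushing whatever arrived in req.body.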
Finding 3: The Frontend XSS Sneak
- What VibeSafe Found: In the React component responsible for displaying to-do items, GPT-4 used dangerouslySetInnerHTML to render the to-do item's text. While sometimes necessary, it's a red flag if the content isn't strictly controlled or sanitized.
- The Code Snippet (Illustrative React component):

// GPT-4 generated snippet from components/TodoItem.jsx
function TodoItem({ todo }) {
  return (
    <li style={{ textDecoration: todo.completed ? 'line-through' : 'none' }}>
      <span dangerouslySetInnerHTML={{ __html: todo.text }} /> {/* Uh oh! */}
      {/* ... buttons to toggle/delete ... */}
    </li>
  );
}

- Why It's a Risk: Combined with the lack of input validation on the backend, if a malicious script were submitted as a to-do item's text, it would execute in the browser of any user viewing that to-do list – a classic Cross-Site Scripting (XSS) vulnerability.
- VibeSafe Alert (Conceptual):
HIGH: Use of dangerouslySetInnerHTML with unsanitized data in components/TodoItem.jsx:4
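To see what dangerouslySetInnerHTML actually bypasses, here's a rough sketch of the kind of HTML escaping React applies automatically when you render {todo.text} as a plain text child (this is an illustration of the idea, not React's actual implementation):

```javascript
// Sketch of the escaping React performs on text children automatically.
// dangerouslySetInnerHTML skips this step entirely, which is the whole risk.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Escaped this way, a submitted `<script>` tag renders as harmless visible text instead of executing in the viewer's browser.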
Finding 4: The Outdated Package Suggestion (Hypothetical)
- What VibeSafe Found (Illustrative): Let's imagine GPT-4, in an attempt to add a feature like markdown rendering for to-do items, suggested an older, less common markdown library with a known, medium-severity ReDoS (Regular Expression Denial of Service) vulnerability.
- Why It's a Risk: Using dependencies with known CVEs directly exposes your application to those vulnerabilities.
- VibeSafe Alert (Conceptual):
MEDIUM: Dependency 'old-markdown-parser@0.2.1' (CVE-2022-XXXXX) found in package.json
Lessons Learned: AI is a Powerful Intern, Not a Seasoned Architect
This experiment was illuminating. GPT-4 and similar AI tools are phenomenal for scaffolding, boilerplate, and even complex logic generation.
However, security, especially nuanced security, isn't always its strongest suit out-of-the-box. AI models learn from vast datasets, but they might not always have the latest CVE information, the full context of your application's trust boundaries, or the adversarial mindset needed to anticipate all potential attack vectors.
This is where a specialized tool like VibeSafe shines. It acts as that crucial, automated security review layer, catching things that both humans and AI might overlook in the rush to build.
Remediation: VibeSafe Insights + AI Assistance = Secure Code
Fixing the issues was straightforward, thanks to VibeSafe's clear pointers:
- API Key: Removed the placeholder and made a note to use environment variables for real keys.
- Input Validation: Added a simple validation library (like express-validator) to the backend API endpoint.
- XSS Risk: Changed dangerouslySetInnerHTML to simply render todo.text as text content in the React component. For more complex scenarios, I'd use a sanitization library like DOMPurify.
- Outdated Package: Searched for a more up-to-date and secure markdown library.
Interestingly, for some of these, I fed the VibeSafe output back into an AI assistant (like Cursor's Cmd+I feature):
"VibeSafe found this issue: [pasted VibeSafe alert for input validation]. Can you help me add input validation to this Express route using express-validator?"
The AI was then able to quickly generate the corrected, more secure code snippet!
Conclusion: Code with AI, Secure with VibeSafe
AI-assisted development isn't just the future; it's the present. It empowers us to build faster and smarter. But as we embrace these tools, integrating automated security scanning into our workflow is no longer a luxury—it's a necessity.
VibeSafe proved to be an excellent companion in this AI-driven development experiment. It provided the quick, actionable security feedback needed to turn AI-generated code into more robust and trustworthy code.
So, go ahead and leverage the power of AI in your projects. And when you do, let VibeSafe help you keep that development vibe secure!
Technical Deep Dive: The dangerouslySetInnerHTML Pitfall
Let's zoom in on the XSS issue VibeSafe might find with AI-generated React code. dangerouslySetInnerHTML is a React prop that directly sets the HTML content of an element. Its name is a deliberate warning from the React team: use it with extreme caution!
Why AI Might Suggest It:
AI models might suggest dangerouslySetInnerHTML if a prompt implies needing to render HTML content that comes from a variable. For example, if you ask it to display user-generated rich text or markdown that's already been converted to HTML.
The Risk:
If the HTML string passed to dangerouslySetInnerHTML comes from an untrusted source (like user input that hasn't been rigorously sanitized), it can contain malicious JavaScript (<script>alert('XSS')</script>) or HTML that breaks your layout or steals information.
Secure Alternatives & Mitigation:
- Avoid It If Possible: If you only need to render plain text, always prefer directly rendering the text variable (e.g., {todo.text}). React automatically escapes text content, preventing XSS.
- Server-Side Sanitization: If you absolutely must render user-supplied HTML, sanitize it on the server before it's even sent to the client or stored. Use a robust HTML sanitization library (like DOMPurify configured for your specific needs) that removes all potentially malicious tags and attributes, allowing only a safe subset of HTML.
- Client-Side Sanitization (Less Ideal for this case, but an option): If server-side sanitization isn't feasible for some reason, you could sanitize on the client before passing to dangerouslySetInnerHTML, but this is generally less secure, as client-side controls can sometimes be bypassed.
- Markdown-to-Safe-HTML Libraries: If dealing with user-supplied Markdown, use a library that converts Markdown to HTML and sanitizes the output by default, or allows easy integration with a sanitizer.
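To illustrate the allowlist idea behind these mitigations, here's a deliberately tiny toy sanitizer: it escapes everything, then restores only a few attribute-free formatting tags. This is a sketch of the concept only; production code should use a battle-tested library like DOMPurify, and ALLOWED_TAGS is my own arbitrary choice:

```javascript
// Toy allowlist sanitizer: escape everything, then restore a few safe,
// attribute-free tags. Use DOMPurify or similar in real code.
const ALLOWED_TAGS = ['b', 'i', 'em', 'strong'];

function sanitizeHtml(html) {
  let out = String(html)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
  for (const tag of ALLOWED_TAGS) {
    // Restore only bare <tag> and </tag> pairs; anything with attributes
    // (onclick, style, ...) stays escaped and therefore inert.
    out = out
      .replace(new RegExp(`&lt;${tag}&gt;`, 'g'), `<${tag}>`)
      .replace(new RegExp(`&lt;/${tag}&gt;`, 'g'), `</${tag}>`);
  }
  return out;
}
```

Note how the escape-first approach means anything not explicitly allowed, including a `<script>` tag or a `<b onclick=...>` with attributes, is rendered as harmless text rather than interpreted as markup.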
When VibeSafe flags dangerouslySetInnerHTML, it's prompting you to critically assess: "Is the HTML being inserted here guaranteed to be safe?" If not, you need to implement one of the mitigations above.