7 Security Holes Every Backend Dev Must Close Before Launch
SQL injection, no rate limiting, mass assignment – 7 security holes I still see in production APIs. Each takes under an hour to fix. Fix them before launch, not after the breach. Real code examples. Real checklist.

You are not being hacked because hackers are smart. You are being hacked because you left the door open.
Let me tell you something that keeps me up at night.
I have seen the post-mortems. I have read the breach disclosures. I have sat in the room where a CTO had to explain to the board why customer data was leaked.
And in almost every case, the root cause was not a sophisticated zero-day exploit. It was not a nation-state actor with unlimited resources.
It was a basic security hole. The kind that gets covered in every "backend security 101" blog post. The kind that takes 15 minutes to fix.
The developers who got hacked were not stupid. They were rushed. They were under pressure to ship. They assumed "someone else" would handle security. Or they just did not know.
This article is for you if you are launching a backend service in the next 30 days. It does not matter if it is a side project, a startup MVP, or an enterprise API. Hackers do not care about your excuses. They scan for open doors constantly.
Here are the seven security holes I see most often. Close them before launch. Not next sprint. Not "after we get funding." Before launch.
What Makes a Security Hole Dangerous?
Not every vulnerability is equal. A hole is dangerous if it meets at least two of these criteria:
| Criterion | What It Means |
|---|---|
| Remotely exploitable | Attacker does not need access to your servers or internal network |
| No authentication required | Attacker does not need a valid user account |
| Leads to data exposure | Customer data, credentials, or internal secrets are accessible |
| Leads to code execution | Attacker can run arbitrary commands on your server |
| Common in automated scans | Bots are actively looking for this exact hole right now |
If a hole hits three or more of these, fix it today. Not tomorrow.
Here are the seven that hit the hardest.
Hole #1: No Rate Limiting on Authentication Endpoints
What it looks like:
Your login endpoint at `POST /api/login` accepts unlimited attempts. No captcha. No cooldown. No account lockout.
Why developers leave this open:
They forget. Or they assume "we will add it later." Or they think "we are too small to be targeted."
Why it is deadly:
This is how credential stuffing works. Attackers take username and password pairs from previous data breaches (millions of them) and try them against your login endpoint. They automate it. They run thousands of attempts per hour.
If even 0.1% of your users reuse passwords (and they do), the attacker gets in.
The real-world impact:
I have seen a startup lose $40,000 in one night because an attacker stuffed credentials, got into an admin account, and issued refunds to their own cards. The startup had no rate limiting. The attacker ran 50,000 login attempts in 8 hours. The startup never saw it coming.
What actually works:
| Protection | Implementation | Why it works |
|---|---|---|
| Rate limit by IP | 5 attempts per 15 minutes | Stops automated scripts cold |
| Rate limit by username | 5 attempts per username per hour | Prevents attackers from switching IPs |
| Account lockout | Lock account for 15 minutes after 10 failures | Frustrates credential stuffing |
| Captcha (reCAPTCHA or hCaptcha) | After 3 failures, require captcha | Blocks bots entirely |
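Account lockout (third row above) needs a little state of your own. A minimal in-memory sketch, with hypothetical names, assuming a single process — in production, back this with Redis or your database so it survives restarts and scales across instances:

```javascript
// In-memory account lockout sketch (single-process only).
// failures maps username -> { count, lockedUntil (ms epoch) }
const failures = new Map();
const MAX_FAILURES = 10;
const LOCK_MS = 15 * 60 * 1000; // 15 minutes

function isLocked(username, now = Date.now()) {
  const entry = failures.get(username);
  return !!entry && entry.lockedUntil > now;
}

function recordFailure(username, now = Date.now()) {
  const entry = failures.get(username) || { count: 0, lockedUntil: 0 };
  entry.count += 1;
  if (entry.count >= MAX_FAILURES) entry.lockedUntil = now + LOCK_MS;
  failures.set(username, entry);
}

function recordSuccess(username) {
  failures.delete(username); // reset the counter on a successful login
}
```

Call `isLocked` before checking the password, `recordFailure` on a bad password, and `recordSuccess` on a good one.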
How to implement (quick version):
If you use Express + Node.js:
```javascript
const rateLimit = require('express-rate-limit');

const loginLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // 5 attempts
  skipSuccessfulRequests: true, // Don't count successful logins
  keyGenerator: (req) => {
    // Rate limit by IP AND username
    // (mount express.json() before this limiter so req.body is populated)
    return `${req.ip}:${req.body.username}`;
  }
});

app.post('/api/login', loginLimiter, handleLogin);
```
For other stacks, similar libraries exist for every language. Do not write your own rate limiting. Use the battle-tested library.
Check before launch:
- [ ] Login endpoint has rate limiting
- [ ] Password reset endpoint also has rate limiting (hackers target this too)
- [ ] Registration endpoint has rate limiting (prevents account creation spam)
- [ ] Failed attempts trigger a cooldown (not just "try again immediately")
Hole #2: SQL Injection (Still Happening in 2026)
What it looks like:
```javascript
// DO NOT DO THIS – user input concatenated straight into SQL
const query = `SELECT * FROM users WHERE email = '${userEmail}' AND password = '${userPassword}'`;
db.execute(query);
```
Why developers leave this open:
They learned SQL before they learned about parameterized queries. Or they are using an ORM incorrectly. Or they think "input sanitization is enough" (it is not).
Why it is deadly:
SQL injection is the oldest trick in the book. It has been documented for over 20 years. And it still works because developers keep making the same mistake.
An attacker enters `' OR '1'='1' --` into your email field. Suddenly your query becomes:

```sql
SELECT * FROM users WHERE email = '' OR '1'='1' -- ' AND password = 'anything'
```

The `--` comments out the password check. The `OR '1'='1'` is always true. The attacker logs in as your first user – often an admin.
The real-world impact:
In 2024, a major healthcare provider leaked 2 million patient records through a SQL injection vulnerability in a public-facing API endpoint. The API was supposed to return only public data. The attacker used UNION SELECT to extract private columns.
The fix? One parameterized query. Twenty minutes of work.
What actually works:
Always use parameterized queries (prepared statements). Never concatenate user input into SQL strings.
```javascript
// DO THIS INSTEAD – parameterized query; input is data, never code
const query = 'SELECT * FROM users WHERE email = ? AND password = ?';
db.execute(query, [userEmail, userPassword]);
// (In real code, look the user up by email and compare a password hash –
// never store or compare plaintext passwords.)
```
The database treats the parameters as data, not code. Even if the user enters ' OR '1'='1, it is treated as a literal string to compare against, not as SQL logic.
ORM users: Most ORMs (Prisma, TypeORM, Sequelize, SQLAlchemy, etc.) use parameterized queries automatically if you use their safe methods. The danger is when you drop down to raw queries. Do not write raw queries unless you absolutely must – and if you do, use parameterization.
Check before launch:
- [ ] Every database query uses parameterization (no string concatenation)
- [ ] No raw SQL queries without parameterization
- [ ] ORM is configured to escape inputs automatically (most do by default)
- [ ] Dynamic table/column names (which cannot be parameterized) are validated against an allowlist
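The last checklist item deserves a concrete shape. A sketch of allowlisting a dynamic sort column — the column names and query here are hypothetical, adapt them to your schema:

```javascript
// Dynamic identifiers (table/column names) cannot be parameterized,
// so validate them against a hard-coded allowlist instead.
const SORTABLE_COLUMNS = new Set(['created_at', 'title', 'author']);

function buildPostsQuery(sortBy) {
  if (!SORTABLE_COLUMNS.has(sortBy)) {
    throw new Error(`Invalid sort column: ${sortBy}`);
  }
  // Safe: sortBy is one of three known-good literals, never raw user input.
  // The LIMIT value is still a regular bound parameter.
  return `SELECT * FROM posts ORDER BY ${sortBy} LIMIT ?`;
}
```

Anything not on the list is rejected outright, so `?sort=id;DROP TABLE posts` never reaches the database.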
Hole #3: Secrets in Environment Variables (But Actually Still Leaking)
What it looks like:
You learned that hardcoding secrets is bad. So you put them in .env files. Good.
But then you:
- Commit `.env` to git anyway
- Log environment variables on startup (visible in logs)
- Expose a debug endpoint that dumps `process.env`
- Send error reports to Sentry that include the full environment
- Use a hosting platform that accidentally logs build arguments
Why developers leave this open:
They think "I put it in .env" is the end of the story. It is not. That is the beginning.
Why it is deadly:
Once a secret is exposed, it is exposed forever. You can rotate it, but damage may already be done. An attacker with your database password, API key, or JWT secret can:
- Read or delete your entire database
- Make API calls on your behalf (costing you money)
- Forge authentication tokens (log in as any user)
The real-world impact:
A developer at a fintech startup committed their .env file to a public GitHub repository. It was there for 4 hours before someone noticed. In those 4 hours, a bot scraped the file, extracted the AWS keys, and spun up $50,000 worth of crypto miners on the company's AWS account.
The `.env` file was listed in `.gitignore`. But the developer force-added it with `git add -f .env` during a late-night debugging session. They forgot to remove it before pushing.
What actually works:
| Secret type | Where to store it (not .env for production) |
|---|---|
| Database passwords | Secrets manager (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault) |
| API keys for external services | Same as above |
| JWT signing keys | Same as above |
| Environment-specific config | Environment variables (but never logged) |
For local development: `.env` is fine. Just never commit it. Use `.env.example` in your repo with placeholder values.
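As a sketch, a `.env.example` might look like this — every key and value here is a placeholder, adapt them to your stack:

```shell
# .env.example – safe to commit; copy to .env locally and fill in real values
DATABASE_URL=postgres://app_user:CHANGE_ME@localhost:5432/app
JWT_SECRET=CHANGE_ME
STRIPE_API_KEY=sk_test_CHANGE_ME
```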
For production: Use your platform's secrets management. Most hosting platforms (Vercel, Railway, Heroku, AWS, GCP, Azure) have built-in secret stores. Use them.
Defense in depth:
- [ ] Rotate all secrets before launch (anything used in development gets a new value)
- [ ] No secrets in logs (strip them from error reports, LogRocket, Sentry, DataDog)
- [ ] No `/debug/env` endpoint (remove it if you added it for testing)
- [ ] `.env` is in `.gitignore` (and you have verified it is not tracked)
- [ ] Secrets are scoped to minimum permissions (database user has only what it needs)
Hole #4: No Input Validation on ANY User Input
What it looks like:
```javascript
const userId = req.params.id;
const user = await db.findUserById(userId);
```
No validation. No type checking. No bounds checking.
Why developers leave this open:
They trust the frontend. Or they think "the database will reject invalid data anyway." Or they just forgot.
Why it is deadly:
Injection is the most obvious risk, but not the only one.
Without input validation, attackers can:
- Send an array where you expect a string (causing your code to crash)
- Send a massive string (1MB) to your `POST /comment` endpoint (filling your database and driving up costs)
- Send weird Unicode characters that break your string processing
- Send negative numbers for IDs or pagination limits (bypassing your logic)
The real-world impact:
A social media startup had an API endpoint: `GET /posts?limit=100`. No validation on `limit`. An attacker sent `limit=1000000000`. The database tried to return 1 billion rows. The database crashed. The entire platform was offline for 6 hours.
The fix? `Math.min(limit, 100)`.
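That one-liner is worth spelling out. A small clamp helper (names are hypothetical) that also handles missing, non-numeric, and negative values:

```javascript
// Clamp a client-supplied pagination limit to sane bounds before
// it ever reaches the database.
function clampLimit(raw, { def = 20, max = 100 } = {}) {
  const n = Number.parseInt(raw, 10);
  if (!Number.isFinite(n) || n < 1) return def; // reject NaN, zero, negatives
  return Math.min(n, max);                      // cap runaway values
}
```

Used as `const limit = clampLimit(req.query.limit);`, the billion-row request above quietly becomes a request for 100 rows.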
What actually works:
Validate every input from the client:
- Query parameters
- URL parameters (route params)
- Request body (JSON)
- Headers (especially `Content-Type`, `Authorization` format)
- File uploads (size, type, content)
Use a validation library. Do not hand-roll this.
JavaScript/Node.js (Zod – highly recommended):
```javascript
const { z } = require('zod');

const createPostSchema = z.object({
  title: z.string().min(1).max(200),
  content: z.string().min(1).max(10000),
  tags: z.array(z.string().max(20)).max(10),
  publishDate: z.string().datetime().optional()
});

app.post('/api/posts', (req, res) => {
  const result = createPostSchema.safeParse(req.body);
  if (!result.success) {
    return res.status(400).json({ errors: result.error.errors });
  }
  // result.data is now validated and safe to use
});
```
Python (Pydantic):
```python
from pydantic import BaseModel, Field
from typing import List, Optional

class CreatePostRequest(BaseModel):
    title: str = Field(..., min_length=1, max_length=200)
    content: str = Field(..., min_length=1, max_length=10000)
    tags: List[str] = Field(default=[], max_items=10)  # Pydantic v2: use max_length=10
    publish_date: Optional[str] = None
```
Go (go-playground/validator):
```go
type CreatePostRequest struct {
    Title       string   `json:"title" validate:"required,min=1,max=200"`
    Content     string   `json:"content" validate:"required,min=1,max=10000"`
    Tags        []string `json:"tags" validate:"max=10,dive,max=20"`
    PublishDate *string  `json:"publish_date" validate:"omitempty,datetime=2006-01-02T15:04:05Z07:00"`
}
```
Check before launch:
- [ ] Every endpoint validates input (no blind trust)
- [ ] Validation includes types, lengths, ranges, formats
- [ ] File uploads have size and type limits
- [ ] Array/object fields have depth and size limits (prevent billion-laughs attacks)
- [ ] Validation errors return 400, never 500
Hole #5: Mass Assignment (Over-Posting)
What it looks like:
```javascript
// Vulnerable: whatever fields the client sends get written to the database
app.put('/api/users/:id', async (req, res) => {
  const updates = req.body;
  await db.updateUser(req.params.id, updates);
});
```
The client sends a JSON body. The server updates the user with whatever fields are in that body.
Why developers leave this open:
Convenience. Writing explicit field-by-field updates is tedious. So they use ORM methods like update() with the whole request body.
Why it is deadly:
An attacker can send fields they should not be allowed to modify.
Example: Your user object has an `isAdmin` field. The attacker sends:
```json
{
  "email": "attacker@example.com",
  "isAdmin": true
}
```
Your code updates `email` (allowed) AND `isAdmin` (NOT allowed). The attacker becomes an admin.
The real-world impact:
A well-known project management SaaS had this exact vulnerability. An attacker sent a {"role": "admin"} field to their profile update endpoint. The server processed it. The user became an admin. They exported all projects from their company's workspace.
The fix? Explicit whitelist of updatable fields.
What actually works:
Explicit whitelist. Define exactly which fields can be updated. Ignore everything else.
```javascript
// Good: explicit whitelist
const allowedUpdates = ['email', 'name', 'avatarUrl'];
const updates = {};
for (const field of allowedUpdates) {
  if (req.body[field] !== undefined) {
    updates[field] = req.body[field];
  }
}
await db.updateUser(req.params.id, updates);
```
Even better: Use a library that requires explicit definition.
```typescript
// Using class-transformer + class-validator (NestJS style)
import { IsOptional, IsEmail, IsString } from 'class-validator';

class UpdateUserDto {
  @IsOptional() @IsEmail()
  email?: string;

  @IsOptional() @IsString()
  name?: string;

  // isAdmin is NOT in this DTO – cannot be updated
}
```
Check before launch:
- [ ] Every PATCH/PUT endpoint has a whitelist of allowed fields
- [ ] No `req.body` is passed directly to database update methods
- [ ] Hidden/internal fields (`isAdmin`, `accountBalance`, `permissions`) are excluded from client updates
- [ ] If using an ORM, you are using explicit field lists, not `req.body` directly
Hole #6: No Rate Limiting on API Endpoints (Beyond Login)
What it looks like:
Rate limiting on login. Nothing else.
Why developers leave this open:
They think "only login needs protection." Or they think rate limiting is "too hard" to configure for all endpoints.
Why it is deadly:
Without rate limiting on other endpoints, attackers can:
- Scrape all your data – Call `GET /users` or `GET /posts` thousands of times to download everything
- Burn your API budget – If you pay per API call (OpenAI, Twilio, etc.), attackers can run up huge bills
- DoS your database – Slow queries on large result sets can be called repeatedly, taking down your DB
- Abuse free trials – Hit your `POST /signup` endpoint thousands of times to create fake accounts
The real-world impact:
A startup using OpenAI's API forgot to rate limit the endpoint that generates summaries. An attacker found the endpoint and called it 500,000 times in 12 hours. The startup received a $12,000 OpenAI bill. The attacker made $0. They just wanted to cause damage.
The fix? A simple rate limiter on that endpoint: 10 requests per user per minute.
What actually works:
Apply rate limiting to all endpoints by default. Then increase limits for specific endpoints if needed.
| Endpoint Type | Suggested Rate Limit (per user per minute) | Why |
|---|---|---|
| Login / Password reset | 3-5 | Prevent credential stuffing |
| Registration | 3-5 per IP per hour | Prevent fake account spam |
| POST/PUT (writes) | 10-30 | Prevent abuse of create/update operations |
| GET (reads) | 60-120 | Allow normal browsing, prevent scraping |
| Expensive operations (AI calls, email send, etc.) | 5-10 | Prevent cost-based attacks |
| Public endpoints (no auth) | 30-60 per IP | Prevent anonymous scraping |
Implementation example (global rate limiter in Express):
```javascript
const rateLimit = require('express-rate-limit');

// Default: 100 requests per minute per user (or IP if no user)
const globalLimiter = rateLimit({
  windowMs: 60 * 1000,
  max: 100,
  keyGenerator: (req) => req.user?.id || req.ip,
  standardHeaders: true,
  legacyHeaders: false,
});

app.use(globalLimiter); // Apply to EVERY endpoint

// Then override for specific sensitive endpoints if needed
// (loginSpecificLimiter / registrationLimiter: stricter limiters defined elsewhere)
app.post('/api/login', loginSpecificLimiter, handleLogin);
app.post('/api/users', registrationLimiter, handleRegistration);
```
Check before launch:
- [ ] Every endpoint has rate limiting (not just login)
- [ ] Limits are appropriate for cost (AI endpoints have stricter limits)
- [ ] Unauthenticated endpoints have rate limits by IP
- [ ] Rate limit headers are exposed (so good clients can self-throttle)
- [ ] Limits are high enough for legitimate use (monitor your 99th percentile)
Hole #7: No Security Headers (CSP, HSTS, X-Frame-Options, etc.)
What it looks like:
Your API returns only the data. No security-related HTTP headers.
Why developers leave this open:
They think security headers are "frontend problems." Or they simply do not know they exist.
Why it is deadly:
Even if your backend API is secure, missing headers can expose your users to:
- Clickjacking – Your site embedded in a malicious iframe where users click hidden buttons
- XSS amplification – Without a Content Security Policy (CSP), an XSS bug becomes a complete compromise
- Protocol downgrade – Without HSTS, attackers can force users to HTTP and intercept traffic
- MIME type sniffing – Attackers can upload malicious files disguised as images
The real-world impact:
A fintech dashboard had no CSP header. An attacker found a small reflected XSS vulnerability in a search parameter (otherwise low severity). Because there was no CSP, the attacker turned that minor bug into a full session hijacker. They stole user tokens and drained accounts.
With a strict CSP, even if the XSS existed, the attacker's script would have been blocked by the browser.
What actually works:
Add these headers to every HTTP response (including error responses):
| Header | Value | What it does |
|---|---|---|
| `Content-Security-Policy` | `default-src 'self'` | Blocks inline scripts, external scripts, etc. |
| `Strict-Transport-Security` | `max-age=31536000; includeSubDomains; preload` | Enforces HTTPS for one year |
| `X-Frame-Options` | `DENY` | Prevents your site being iframed |
| `X-Content-Type-Options` | `nosniff` | Prevents MIME type sniffing |
| `Referrer-Policy` | `strict-origin-when-cross-origin` | Controls what referrer info is sent |
| `Permissions-Policy` | `geolocation=(), microphone=(), camera=()` | Disables unused browser features |
Implementation (Express/Node.js with helmet package):
```javascript
const helmet = require('helmet');
app.use(helmet());
```
Yes, it is that simple. `helmet` sets all recommended security headers with sensible defaults.
Customizing CSP (most important header, also hardest):
Helmet's default CSP is strict (blocks inline scripts). For SPAs or apps that use inline scripts, you will need to configure it:
```javascript
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'", "'unsafe-inline'"], // Only if you must
      styleSrc: ["'self'", "'unsafe-inline'"],
      imgSrc: ["'self'", "data:", "https:"],
      connectSrc: ["'self'", "https://api.yourdomain.com"],
      fontSrc: ["'self'", "https://fonts.gstatic.com"],
      objectSrc: ["'none'"],
      mediaSrc: ["'self'"],
      frameSrc: ["'none'"],
    },
  },
}));
```
Other frameworks:
- Python (Django): `django-csp` package + the built-in `SecurityMiddleware`
- Python (FastAPI): the `secure` package, or a small custom middleware that sets the headers
- Go (Gin): `gin-contrib/secure` middleware
- Ruby on Rails: `secure_headers` gem
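If your framework has no helmet equivalent, the headers from the table above can also be set by hand. A framework-agnostic sketch — the middleware signature follows Express conventions, and the values are the article's suggested baseline:

```javascript
// Set the baseline security headers from the table above on every response.
const SECURITY_HEADERS = {
  'Content-Security-Policy': "default-src 'self'",
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload',
  'X-Frame-Options': 'DENY',
  'X-Content-Type-Options': 'nosniff',
  'Referrer-Policy': 'strict-origin-when-cross-origin',
  'Permissions-Policy': 'geolocation=(), microphone=(), camera=()',
};

function securityHeaders(req, res, next) {
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    res.setHeader(name, value);
  }
  next();
}
```

Register it early (`app.use(securityHeaders)`) so error responses get the headers too.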
Check before launch:
- [ ] Security headers are present on all responses (run `curl -I https://yourdomain.com` to check)
- [ ] CSP does not use `unsafe-inline` or `unsafe-eval` unless absolutely required
- [ ] HSTS is enabled with `includeSubDomains` and `preload`
- [ ] `X-Frame-Options` is `DENY` (unless you need to embed elsewhere)
- [ ] No `X-Powered-By: Express` header (reveals your stack). Helmet removes it.
Bonus Hole (Because Seven Is Never Enough): Logging Sensitive Data
What it looks like:
```javascript
// DO NOT DO THIS – the full body (passwords included) ends up in your logs
console.log('Login attempt:', req.body);
// or
logger.info('User created', { user: req.body });
```
Why it is deadly:
Your logs go to log aggregators (DataDog, Splunk, LogRocket, etc.). Those services are high-value targets for attackers. If you log passwords, tokens, or personal data, you are one breached log aggregator away from a disaster.
What actually works:
- Never log the request body entirely. Log specific, safe fields.
- Redact passwords, credit cards, tokens, and API keys before logging.
- Use structured logging with explicit safe fields.
- Treat logs as sensitive data. Encrypt them at rest. Restrict access.
```javascript
// Safe logging: destructure sensitive fields out, log only the rest
const { password, creditCard, ...safeBody } = req.body;
logger.info('Create user request', { body: safeBody, userId: result.id });
```
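Destructuring only strips top-level fields. If your payloads nest, a recursive redactor is safer. A sketch — the key pattern is illustrative, extend it for your own data:

```javascript
// Recursively replace values of sensitive-looking keys before logging.
const SENSITIVE_KEYS = /password|secret|token|api[_-]?key|credit/i;

function redact(value) {
  if (Array.isArray(value)) return value.map((item) => redact(item));
  if (value && typeof value === 'object') {
    const out = {};
    for (const [k, v] of Object.entries(value)) {
      out[k] = SENSITIVE_KEYS.test(k) ? '[REDACTED]' : redact(v);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}
```

Then log `redact(req.body)` instead of `req.body` everywhere.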
Check before launch:
- [ ] No passwords, tokens, or secrets in logs
- [ ] No full request/response bodies logged
- [ ] Log aggregator access is restricted
- [ ] Logs have retention policies (not stored forever)
How to Find These Holes Before Launch (Self-Audit Checklist)
You do not need an expensive security firm to find the basics. Run through this checklist yourself.
Automated Tools (Free)
| Tool | What it finds | Command |
|---|---|---|
| ZAP (OWASP Zed Attack Proxy) | SQLi, XSS, missing headers, many others | Download GUI, run automated scan against your local server |
| Nuclei | Known vulnerability patterns | nuclei -u http://localhost:3000 |
| Snyk | Dependency vulnerabilities | snyk test |
| npm audit | Vulnerable Node packages | npm audit (built-in) |
Manual Checks (15 minutes)
| Check | How to test |
|---|---|
| Rate limiting | Send 20 rapid requests to /api/login. Do any succeed after 5? |
| SQL injection | Enter ' OR '1'='1 into every text field. Do you get unexpected results? |
| Mass assignment | Add "isAdmin": true to a PUT/PATCH request. Does it work? |
| Missing headers | Run curl -I https://yourdomain.com. Is CSP present? HSTS? |
| Secrets in logs | Trigger an error. Does it contain process.env or database credentials? |
Realistic Timeline: Closing These Holes
If your launch is in 7 days, do not panic. Prioritize.
Day 1 (Highest priority, 2 hours):
- Add rate limiting to login and registration endpoints
- Check all database queries for SQL injection (search for string concatenation with `+` or `${}`)
- Run `npm audit` or equivalent and fix critical vulnerabilities
Day 2-3 (Medium priority, 4 hours):
- Add global rate limiting to all endpoints
- Whitelist allowed fields on all PATCH/PUT endpoints
- Add security headers (install `helmet` or equivalent)
Day 4 (Lower priority but still do it, 2 hours):
- Validate input on every endpoint (start with the ones that write data)
- Search logs for secrets (look for `password`, `secret`, `key`, `token`)
- Rotate all production secrets (use fresh values, not dev secrets)
Day 5 (Final check, 1 hour):
- Run ZAP automated scan against your staging environment
- Review the checklist above. Mark every item complete.
Day 6-7:
- Launch. Sleep better knowing you are not one of the 7 common holes.
Frequently Asked Questions
Do I really need all of this for an MVP?
Yes. The size of your company does not matter to attackers. Automated bots do not know you are a startup. They scan every IP on the internet constantly. If your hole is there, they will find it within 24 hours.
What about using a BaaS or PaaS? Do they handle this?
Partly. Platforms like Firebase, Supabase, and AWS Amplify handle some things (SQL injection, headers). They do not handle business logic issues (mass assignment, missing input validation, insufficient rate limiting). You are still responsible for those.
How do I test rate limiting before launch?
Write a simple script:
```bash
#!/bin/bash
for i in {1..20}; do
  code=$(curl -s -o /dev/null -w "%{http_code}" \
    -X POST http://localhost:3000/api/login \
    -H "Content-Type: application/json" \
    -d '{"email":"test@example.com","password":"wrong"}')
  echo "Request $i: $code"
done
```
Count how many return 401 (or whatever your API returns for a bad password) versus 429 (too many requests). With a limit of 5, expect five 401s, then 429 for everything after.
What is the most common security hole you actually see?
Rate limiting on login. By far. I audit a backend, and 60% of the time, there is no rate limiting on the login endpoint. It is the easiest fix with the biggest impact.
Is HTTPS enough?
No. HTTPS encrypts traffic between client and server. It does nothing for injection, mass assignment, missing headers, or rate limiting. HTTPS is table stakes. It is the bare minimum. Not the finish line.
How do I stay updated on new security holes?
Follow OWASP Top 10 (updated every 2-3 years). Subscribe to the newsletter of your language's security team (e.g., Node.js Security WG). Do not rely on "common sense." Attackers are creative.
The Bottom Line
Here is the honest truth.
I have shipped code with some of these holes. Every developer has. Security is not about being perfect. It is about reducing the most common, most dangerous holes before they become headlines.
The seven holes above are not sophisticated. They are basic. And that is exactly why they are so common. Developers ignore the basics because they are chasing the next feature.
But the basics are what keep you in business.
Rate limiting stops credential stuffing. Parameterized queries stop SQL injection. Secrets management stops accidental exposure. Input validation stops malformed attacks. Mass assignment whitelists stop privilege escalation. Global rate limiting stops scraping and DoS. Security headers stop amplification attacks.
None of these takes more than an hour to implement. Together, they reduce your attack surface by 80%.
Before you launch – not next week, not after your first paying customer, before launch – run the checklist.
Close the seven holes.
Your users will never thank you for security. They will never know you did it.
But they will never have to find out what happens when you do not.
– Written by Fredsazy

Iria Fredrick Victor
Iria Fredrick Victor(aka Fredsazy) is a software developer, DevOps engineer, and entrepreneur. He writes about technology and business—drawing from his experience building systems, managing infrastructure, and shipping products. His work is guided by one question: "What actually works?" Instead of recycling news, Fredsazy tests tools, analyzes research, runs experiments, and shares the results—including the failures. His readers get actionable frameworks backed by real engineering experience, not theory.