Beyond the OWASP Top 10: How to identify security vulnerabilities in your code
We have all shipped code that passed every check. The PR gets approved, the build goes to production, and we move on. The problem is that the most serious bugs rarely show up in those checks. They are security vulnerabilities that come from our own business logic, from interactions between services, and from the wrong assumptions we make about how our system will be used. These are the kinds of problems no off-the-shelf tool can find, because they are unique to what we build.
The limits of checklists and scanners
A list of known vulnerabilities is a great starting point. It shows common mistakes other developers have made, like not sanitizing SQL inputs or leaving an S3 bucket public. Automated scanners are good at finding these known patterns. They can spot a string being concatenated directly into a database query and flag it as a possible SQL injection.
What these tools miss is context. A scanner does not understand your application’s multi-tenancy model. It does not know that a user with role: "viewer" in tenant A should never even be able to see a resource from tenant B, even if they guess the ID. The scanner only sees a database query that looks correct from a syntax point of view. It checks if the code is written the right way, but it cannot tell if the logic makes sense from a security perspective.
This is how many vulnerabilities slip through. Our code reviews usually focus on correctness, asking “Does this feature work as specified?”. We rarely have a structured way to ask the more important question: “How could this feature be abused?”. The result is a system that works well for the expected user, but stays exposed to anyone who steps outside that flow.
When security vulnerabilities are unique to your system
The most serious vulnerabilities almost never come from a single isolated flaw. They show up in the connections between components. Problems appear when one part of the system makes assumptions about another.
Think about a microservices architecture where an OrderService needs to check a user’s permissions. It calls the AuthService to validate a token. The AuthService confirms the token is valid and returns a userId. The OrderService then fully trusts that userId and uses it to fetch orders.
```javascript
// OrderService
async function getOrder(orderId, authToken) {
  // Calls AuthService to validate the token
  const { userId, isValid } = await authService.validateToken(authToken);
  if (!isValid) {
    throw new Error("Unauthorized");
  }
  // The vulnerability is here. We have a valid user, but not necessarily the *right* user.
  const order = await db.orders.find({ where: { id: orderId } });
  // We should check if this order belongs to this userId:
  // if (order.ownerId !== userId) { throw new Error("Forbidden"); }
  return order;
}
```
The bug is not in the AuthService or the OrderService individually. Both do their job correctly when viewed in isolation. The vulnerability exists at the integration point, in the unstated assumption that a valid token from any user is enough to fetch any order. This is an Insecure Direct Object Reference (IDOR) created by business logic, not by a misconfiguration in the framework.
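A minimal sketch of the fix, keeping the same hypothetical authService and db objects but passing them in explicitly: scope the query by owner, so a valid token can only ever reach that user’s own orders.

```javascript
// Sketch of an ownership-scoped lookup. `authService` and `db` are
// stand-ins for whatever auth client and data layer you actually use.
async function getOrder(orderId, authToken, { authService, db }) {
  const { userId, isValid } = await authService.validateToken(authToken);
  if (!isValid) {
    throw new Error("Unauthorized");
  }
  // Scope the query by owner: the ownership check is part of the
  // query itself, so it cannot be forgotten in a later code path.
  const order = await db.orders.find({
    where: { id: orderId, ownerId: userId },
  });
  if (!order) {
    // Same error for "does not exist" and "not yours", so an attacker
    // cannot probe which order IDs are real.
    throw new Error("Not found");
  }
  return order;
}
```

Returning the same error for both failure cases is a deliberate choice: distinguishing them would let an attacker enumerate valid order IDs.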
Your application’s specific logic is what creates these blind spots. Another common pattern is state manipulation. Imagine a checkout process with multiple steps:
1. Add a product to the cart (a standard $10 subscription).
2. Go to a separate page to apply a 50% discount coupon. The backend confirms the coupon is valid for the items in the cart and stores the discount in the session.
3. Move to the final payment page, where the total is calculated as $5.
A logical flaw appears if the user can go back to step 1 after applying the discount and switch the standard subscription for a premium one that costs $100. If the system does not revalidate the coupon based on the new cart contents, the user proceeds to checkout and pays $50 for a $100 product. Each step in the flow was secure on its own. The vulnerability was created by the sequence of operations and the lack of a state revalidation.
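The defense is to recompute the discount from the current cart at payment time instead of trusting a total stored earlier in the session. A minimal sketch, with a hypothetical cart shape and coupon rule:

```javascript
// Sketch: the total is always derived from the *current* cart contents.
// The cart shape and coupon eligibility rule here are hypothetical.
function computeTotal(cart, coupon) {
  const subtotal = cart.items.reduce((sum, item) => sum + item.price, 0);
  if (!coupon) return subtotal;
  // Re-check eligibility on every call: if the cart changed after the
  // coupon was applied, an ineligible coupon is silently dropped.
  const eligible = cart.items.every(item =>
    coupon.eligibleSkus.includes(item.sku)
  );
  return eligible ? subtotal * (1 - coupon.percentOff / 100) : subtotal;
}
```

Because the function is pure and called at the final step, going back and swapping the cart contents cannot desynchronize the discount from the items it was approved for.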
How to find these problems in practice
Finding these vulnerabilities requires going beyond automated tools and checklists. You need to look at the system the way an attacker would, trying to understand how it could be exploited and questioning the assumptions left in your design.
Following the flow of data through your application
Every time a user provides input, you should treat it as untrusted. This is a basic principle that we often forget as data moves through our systems. A good practice during code review is to pick a single piece of user-provided data and trace its entire lifecycle.
- Where does the data enter and exit? Follow a parameter like organization_id from the initial API request. Where is it first read? Where is it last used?
- At any point, is it written to a log file where it could expose information?
- How does it change along the way? Is it transformed or modified? Does its type change from string to integer, creating a possible issue in strict equality checks ("123" vs. 123)?
- When this organization_id is passed from a frontend gateway to a backend service, does the backend service validate again that the authenticated user actually belongs to that organization, or does it blindly trust the gateway?
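The type-change pitfall in the list above is easy to demonstrate. An ID read from a URL is always a string, while the ID stored in the database is usually a number:

```javascript
// An ID from req.params or a query string is always a string;
// the same ID loaded from the database is typically a number.
const requestedOrgId = "123"; // from the request
const usersOrgId = 123;       // from the database

console.log(requestedOrgId == usersOrgId);  // true  — loose equality coerces
console.log(requestedOrgId === usersOrgId); // false — strict equality does not

// A membership check written with === will wrongly deny access (or, if the
// comparison is inverted, wrongly grant it) unless the value is normalized:
const normalized = Number.parseInt(requestedOrgId, 10);
console.log(normalized === usersOrgId);     // true
```

Normalizing types at the boundary, as early as possible, keeps every later comparison honest.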
This kind of manual tracing is tedious, and that is exactly why it is a good candidate for assistance. You can use AI-powered tools to analyze a pull request and automatically map these data flows. Ask it to “trace all usage of the parameter request.body.documentId and list every function and database query it reaches.” This will not find the vulnerability for you, but it will give you a map of the highest-risk areas to inspect manually.
Identifying trust assumptions at integration points
Vulnerabilities tend to appear at the edges of the system: between services, in integrations with external APIs, and in the use of libraries. In each of these places, some assumption is being made. The work is to identify what is being assumed and check whether it actually holds.
When Service A calls Service B, it is common to pass identity or permission information in headers or in the request body, like X-User-ID: 123 or X-User-Roles: admin. The question is: does Service B validate this at any point? Or could any service inside the internal network call Service B directly, send X-User-Roles: admin, and gain full access?
Propagating identity inside the system requires a secure foundation, like service-to-service authentication and signed tokens.
A classic example with external APIs is a payment webhook. Your service receives a request from a payment provider saying an order was paid. The payload contains order_id: "xyz" and status: "SUCCESS". Most developers will verify the cryptographic signature of the request to confirm it actually came from the provider. But many forget the next step: validating state within the system itself. Does order “xyz” actually exist? Is it in pending_payment state? An attacker can replay a valid webhook from an old order to get a new product for free if you do not verify the internal state of the application.
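The second validation step can be as small as this sketch. It assumes the cryptographic signature check has already passed, and the order store and status names are hypothetical:

```javascript
// Sketch of internal-state validation for a payment webhook.
// Signature verification of the incoming request is assumed done.
async function handlePaymentWebhook(payload, orderStore) {
  const order = await orderStore.get(payload.order_id);
  if (!order) {
    throw new Error("Unknown order");
  }
  // Reject replays: only an order still awaiting payment may transition.
  if (order.status !== "pending_payment") {
    throw new Error("Order is not awaiting payment");
  }
  order.status = "paid";
  await orderStore.save(order);
  return order;
}
```

The state check doubles as replay protection: the first valid webhook moves the order out of pending_payment, so replaying the same payload later fails.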
Reviewing how your code uses its dependencies
Dependency scanning checks for known CVEs in the libraries you use. It will not tell you if you are using a perfectly safe library in an unsafe way. For example, a JWT library will correctly verify a token signature, but validating the claims inside it is your responsibility. Your code needs to check whether the aud (audience) claim matches your service and whether the iss (issuer) is what you expect. The library did its job. The security of the implementation depends entirely on how you use it.
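The claim checks this paragraph describes can be very small. This sketch assumes the signature has already been verified by your JWT library and only validates the decoded payload; the expected audience and issuer values are placeholders for your own:

```javascript
// Sketch of claim validation on an already-signature-verified payload.
function validateClaims(payload, { audience, issuer, now = Date.now() / 1000 }) {
  // Per RFC 7519, "aud" may be a single string or an array of strings.
  const audList = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  if (!audList.includes(audience)) {
    throw new Error("Wrong audience");
  }
  if (payload.iss !== issuer) {
    throw new Error("Wrong issuer");
  }
  if (typeof payload.exp === "number" && payload.exp <= now) {
    throw new Error("Token expired");
  }
  return payload;
}
```

In practice you can often push this into the library itself: jsonwebtoken’s verify, for example, accepts audience and issuer options and enforces them alongside the signature check. The point stands either way: the checks only happen if your code asks for them.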
This is another place where an AI assistant can help during development. You can ask a model that has context about your codebase to “review my use of the jsonwebtoken library in this file and check whether I am validating the audience and issuer claims.” This goes beyond looking for known CVE patterns and starts analyzing the logic of your implementation. The suggestions still need to be verified by a developer, but this can surface common omissions that would otherwise go unnoticed and help you focus on the most important security decisions.
A quick check before shipping code to production
You cannot reduce this kind of deeper review to a simple checklist, but you can adopt a small shift in mindset. Before approving a pull request, pause for 30 seconds and ask yourself one question:
“If I were a malicious actor, how would I abuse this change?”
Forget the expected flow and think about what can happen outside of it. What happens if I send a negative number? A UUID from another tenant? What if I execute these two API calls in the wrong order? This quick adversarial thinking exercise often reveals exactly the kinds of business logic flaws and broken assumptions that automated tools will always miss. It forces you to look at your code like someone trying to break it.