News

Are There IDORs Lurking in Your Code? LLMs Are Finding Critical Business Logic Vulns—and They’re Everywhere

  • Erik Buchanan – securityboulevard.com
  • published date: 2026-01-09 00:00:00 UTC


<p>Security teams have always known that insecure direct object references (IDORs) and broken authorization vulnerabilities exist in their codebases. Ask any AppSec leader if they have IDOR issues, and most would readily admit they do. But here’s the uncomfortable truth: they’ve been dramatically underestimating the scope of the problem.</p><p>Recent bug bounty data tells a stark story. <a href="https://global.ptsecurity.com/en/research/analytics/standoff-bug-bounty-in-review-november-2024/">Roughly half</a> of all high and critical severity findings now involve broken access control vulnerabilities – IDORs, authorization bypasses, and similar business logic flaws. These aren’t theoretical concerns. Each IDOR reported through a bug bounty program typically signals several more lurking undiscovered in the same codebase. Security teams know they’re there, but finding them has always been time-intensive, manual work that gets deprioritized against other pressing demands.</p><p>Now, large language models (LLMs) are changing that equation – and revealing just how pervasive these vulnerabilities actually are.</p><h3><strong>Why Traditional Tools Miss Business Logic Flaws</strong></h3><p>Traditional static analysis tools <a href="https://www.paloaltonetworks.com/cyberpedia/what-is-sast-static-application-security-testing#:~:text=SAST%20can%20identify%20a%20variety,of%20the%20application%20%E2%80%94%20before%20deployment.">excel at finding certain classes of vulnerabilities</a>. They’re effective at catching SQL injection, cross-site scripting, and other issues that follow predictable patterns of data flow. These tools work by tracing how user input moves through code – mechanically following the path from source to sink.</p><p>IDORs and authorization flaws are fundamentally different. They’re not about contaminated data flowing to dangerous functions. Rather, they’re about missing context and misunderstood intent. 
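A minimal sketch of that kind of missing check, in plain Python standing in for a web handler (the functions, field names, and in-memory “database” here are hypothetical, invented only for illustration):

```python
# Toy in-memory "database" of user profiles (illustrative only).
PROFILES = {
    1: {"owner_id": 1, "email": "alice@example.com"},
    2: {"owner_id": 2, "email": "bob@example.com"},
}

def get_profile_vulnerable(requesting_user_id, profile_id):
    # IDOR: returns whatever record the caller asks for by ID.
    # Structurally nothing is wrong -- no tainted data reaches a
    # dangerous sink -- but there is no ownership check at all.
    return PROFILES.get(profile_id)

def get_profile_fixed(requesting_user_id, profile_id):
    # The missing semantic step: verify the requester owns the
    # record before returning it.
    profile = PROFILES.get(profile_id)
    if profile is None or profile["owner_id"] != requesting_user_id:
        return None  # a real handler would return 403/404 here
    return profile
```

Nothing in the vulnerable version’s data flow looks wrong to a pattern-matching scanner; the flaw is the ownership check that isn’t there.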
Consider a typical IDOR scenario: an API endpoint accepts a user ID parameter and returns that user’s profile data. The code fetches the data correctly. It returns it properly formatted. From a structural standpoint, everything looks fine. The vulnerability exists not in what the code does, but in what it doesn’t do. It fails to verify that the requesting user has permission to access that particular profile.</p><p>Traditional static analyzers struggle in this scenario because the vulnerability is semantic, not structural. If the data returned were intended to be public, such as a list of published articles, authorization might be unnecessary. Distinguishing between these requires understanding what the developer intended, what the business rules should be, and what security controls are missing. That’s exactly where LLMs are useful.</p><h3><strong>Understanding Context and Intent</strong></h3><p>LLMs read code differently than rule-based analyzers. They understand variable names, function purposes, code comments, and broader application context. When an LLM sees a function called “getUserInvoice(invoiceId)” that returns sensitive financial data based solely on an ID parameter, it can reason that the function requires an authorization check.</p><p>This contextual understanding extends beyond individual functions. LLMs can assess whether the data being returned is sensitive, whether the endpoint appears to be public or private, and whether appropriate safeguards exist elsewhere in the call chain. They can infer developer intent and compare it against what the code actually implements.</p><p>Security teams that have begun incorporating AI-powered analysis into their scanning workflows report finding previously unknown authorization vulnerabilities across their codebases, often multiple instances of similar flaws that had gone undetected for extended periods. 
For many teams, this represents their first comprehensive view of how extensively these business logic vulnerabilities permeate their applications, revealing a problem far larger than what periodic penetration tests or bug bounty programs had suggested.</p><h3><strong>The Limitations of Pure LLM Approaches</strong></h3><p>Before we get carried away with their ability, note that LLMs still have significant limitations that make them unsuitable as standalone security tools.</p><ul> <li>First, they’re not deterministic. Run the same LLM against the same code twice, and you’ll likely get different results. Independent security researchers have documented this extensively. <a href="https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/">In a study led by Sean Heelan,</a> an LLM found a critical kernel vulnerability in only 8 of 100 runs against the same benchmark. The other 92 runs missed it entirely, and many produced false positives.</li> <li>Second, LLMs are expensive at scale. Running comprehensive LLM analysis across a large codebase costs 2-3 orders of magnitude more than traditional static analysis. For organizations scanning millions of lines of code regularly, pure LLM approaches become economically impractical.</li> <li>Third, LLMs perform poorly on the vulnerability classes where traditional SAST excels. <a href="https://semgrep.dev/blog/2025/finding-vulnerabilities-in-modern-web-apps-using-claude-code-and-openai-codex/">When tested on SQL injection detection</a>, LLM-based approaches showed false positive rates between 95% and 100%. They struggle with complex data flow tracing across many files and miss sanitization performed in framework layers they don’t fully understand.</li> </ul><p>This isn’t a failure of LLMs. It’s simply the wrong tool for that job. 
LLMs excel at semantic reasoning about business logic, not mechanical tracing of data flows through complex application layers.</p><h3><strong>The Case for Hybrid Detection</strong></h3><p>The answer isn’t choosing between traditional static analysis and LLMs. It’s combining both approaches strategically.</p><p>Static analysis does what it does best: comprehensive, fast, deterministic scanning. It can enumerate every API endpoint in an application, trace every user input parameter, and identify every database query reliably and repeatedly.</p><p>LLMs then apply contextual reasoning to those outputs. Given a list of 500 API endpoints that accept user-controlled identifiers, an LLM can systematically evaluate whether each endpoint implements appropriate authorization checks. It can distinguish between intentionally public data and sensitive information that requires protection. It can assess whether the authorization logic makes sense given the apparent business context.</p><p>This hybrid approach delivers something neither technique achieves alone: comprehensive coverage of both traditional vulnerabilities and business logic flaws, with practical false positive rates that security teams can actually manage.</p><h3><strong>The Attacker Advantage</strong></h3><p>Here’s what should keep security leaders awake at night: attackers <a href="https://www.anthropic.com/news/disrupting-AI-espionage">also have access to LLMs</a>. While defenders build out security programs and experiment with new strategies for detecting logic vulnerabilities, attackers are gearing up to scan for and exploit them with the same LLMs.</p><p>This creates an urgent asymmetry. Offensive use of AI is fast, widely scalable, and easily replicated. A single attacker with access to commercial LLMs can scan for IDORs across numerous endpoints, automating what previously required manual expertise. 
Defensive security, by contrast, requires careful integration into existing development workflows, prioritization systems, and remediation processes.</p><p>Organizations that dismiss this as hype or defer investment until “later” are making a dangerous bet. The window to get ahead of AI-enabled attacks is narrowing.</p><h3><strong>A Practical Roadmap</strong></h3><p>For security teams already stretched thin, the right approach depends on organizational maturity. If you’re just establishing an application security program, focus on building the fundamentals. Deploy scanning tools that catch both traditional vulnerabilities and business logic flaws. Start with critical, high-impact issues and build the habit of regular remediation.</p><p>For security-mature organizations drowning in alert volume, the priorities are different. You need detection systems that genuinely prioritize and reduce noise. The most advanced teams are moving beyond basic vulnerability scanners toward platforms that understand their specific business context and adapt to their unique applications.</p><p>The economic reality is straightforward: security teams need automated detection for business logic vulnerabilities. The alternative (i.e., manually finding and fixing IDORs through pen tests and bug bounties) doesn’t scale. By the time external researchers find these issues, they’ve likely already been exposed for months or years.</p><p>Over the next several years, I expect the relationship between traditional SAST, LLM-based detection, and human security expertise to evolve significantly. Humans will remain in control but progressively move out of the tactical weeds. AI will increasingly handle tasks that previously required human security engineers: triaging findings, applying business context, designing remediations, etc. But AI will not replace the deterministic, reliable static analysis engines that form the foundation of modern application security. 
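That division of labor – a deterministic engine enumerates, contextual reasoning triages the results – can be sketched as a toy pipeline. The endpoint inventory and the stubbed review function below are invented for illustration; in a real system the review stage would be an LLM call rather than a fixed rule:

```python
# Deterministic stage: pretend a static analyzer emitted this inventory
# of endpoints, noting which take a user-controlled object ID and
# whether an authorization check was observed on the code path.
ENDPOINTS = [
    {"route": "/articles",            "takes_object_id": False, "has_authz_check": False},
    {"route": "/users/{id}/profile",  "takes_object_id": True,  "has_authz_check": False},
    {"route": "/users/{id}/invoices", "takes_object_id": True,  "has_authz_check": True},
]

def contextual_review(endpoint):
    # Stub for the LLM stage: in practice a model would reason about
    # sensitivity and intent here. A fixed rule stands in: flag any
    # endpoint that accepts an object ID but shows no authz check.
    return endpoint["takes_object_id"] and not endpoint["has_authz_check"]

def find_idor_candidates(endpoints):
    # Hybrid pass: deterministic inventory in, contextual triage out.
    return [e["route"] for e in endpoints if contextual_review(e)]
```

The public `/articles` route is correctly left alone, and only the unprotected object-ID endpoint surfaces for review, which is the noise reduction the hybrid approach is after.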
Agents are assisting with, and increasingly taking over, simple human tasks. But they’re too unreliable and too expensive to replace the fast, deterministic code analysis that humans have already handed over to computers.</p><p>The future belongs to platforms that thoughtfully blend both: powerful deterministic engines for comprehensive coverage and structural analysis, orchestrated by increasingly sophisticated AI that understands context, personalizes findings, and adapts to each organization’s unique environment.</p><p>The IDORs are already in your code. The only question is whether you’ll find them before someone else does.</p>