If AI Becomes the User, What Happens to the SIEM?
<figure class="wp-block-image size-large"><a href="https://raffy.ch/blog/wp-content/uploads/2026/04/ChatGPT-Image-Apr-1-2026-11_26_10-AM.png"><img fetchpriority="high" decoding="async" width="1024" height="683" src="https://raffy.ch/blog/wp-content/uploads/2026/04/ChatGPT-Image-Apr-1-2026-11_26_10-AM-1024x683.png" alt="" class="wp-image-1661" srcset="https://raffy.ch/blog/wp-content/uploads/2026/04/ChatGPT-Image-Apr-1-2026-11_26_10-AM-1024x683.png 1024w, https://raffy.ch/blog/wp-content/uploads/2026/04/ChatGPT-Image-Apr-1-2026-11_26_10-AM-300x200.png 300w, https://raffy.ch/blog/wp-content/uploads/2026/04/ChatGPT-Image-Apr-1-2026-11_26_10-AM-768x512.png 768w, https://raffy.ch/blog/wp-content/uploads/2026/04/ChatGPT-Image-Apr-1-2026-11_26_10-AM.png 1536w" sizes="(max-width: 1024px) 100vw, 1024px"></a></figure><p>RSAC 2026 made one thing very clear to me: the market is moving fast, but it is still <strong>deeply confused</strong>. The big announcements from <a href="https://cloud.google.com/solutions/security/agentic-soc">Google</a>, <a href="https://www.splunk.com/en_us/blog/security/from-reactive-to-agentic-with-enterprise-security-at-rsac-2026.html">Splunk</a>, and <a href="https://www.databricks.com/blog/databricks-announces-lakewatch-new-open-agentic-siem">Databricks</a> all point in the same direction. Security operations are becoming more agentic, more API-driven, and more automated. But most of the category still looks crowded, early, and only lightly differentiated.</p><p>The interesting part is not that everybody now has an AI story. 
It is where the pressure is landing: attack speed, active response, and the possibility that AI itself becomes the primary user of the security stack.</p><h2 class="wp-block-heading">TL;DR</h2><ul class="wp-block-list"> <li><strong>Attacks are now fast</strong> enough that human-speed response is no longer a sufficient default.</li> <li>That will push the market toward active response, which is useful but also <strong>dangerous if the control logic</strong> is not deterministic enough.</li> <li>Most AI SOC vendors still sound similar because many of them sit on top of existing SIEMs and alert streams <strong>rather than changing the underlying detection</strong> or data architecture.</li> <li>The <strong>big SIEM vendors are moving</strong>, and one major EDR/SIEM vendor is expanding AI security into on-prem and sovereign environments.</li> <li>If <strong>AI becomes the user</strong> of security products, the UI matters less, the API matters more, and the economics of expensive SIEM platforms get harder to defend.</li> </ul><h2 class="wp-block-heading">Attacks are getting faster</h2><p>This is the part of the market I think people are still underestimating. CrowdStrike’s 2026 threat report says the average eCrime breakout time dropped to 29 minutes in 2025, and the fastest case it observed was 27 seconds. Databricks used its <a href="https://www.databricks.com/blog/databricks-announces-lakewatch-new-open-agentic-siem">Lakewatch announcement</a> to make a related point from the vulnerability side, citing research that mean time to exploit has fallen from 23.2 days in 2025 to 1.6 days in 2026.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"> <p>That changes what matters in the SOC. A lot of SIEM workflows still assume there is time to search, enrich, discuss, and decide. That model was already strained. It gets worse when attacks speed up and when the adversary is using AI to compress its own loop. 
Search still matters, but a search-centric operating model is not enough if the environment can be compromised end to end in under an hour.</p> </blockquote><p>The obvious answer is more <strong>active response</strong>. The problem is that this is where things get dangerous. If teams start handing more containment and remediation decisions to AI before the systems are ready, we are going to see more self-inflicted outages. The market is moving there anyway, because the alternative is to keep defending at human speed against machine-speed attacks. SOAR was supposed to close part of that gap and clearly did not.</p><h2 class="wp-block-heading">AI SOC is still confusing and mostly sounds the same</h2><p>That was probably my main emotional reaction leaving RSAC: confusion. There were simply too many vendors with very similar messaging. RSAC says the conference had more than 600 exhibitors this year. I could not independently validate an exact count of 36 AI SOC vendors from public RSAC data, but “roughly three dozen” felt directionally right from the floor, and many of them sounded remarkably similar.</p><p>The common pitch was familiar: reduce alerts, triage faster, investigate faster, give the analyst a copilot, automate parts of response. Some of that is clearly useful. But a lot of it still feels like a layer on top of the existing SIEM rather than a rethink of the detection stack itself. If the AI mostly sits on top of alert streams coming out of a legacy backend, then it may improve analyst productivity without materially fixing false negatives, brittle detections, or poor data design upstream.</p><p>That is also why I do not think most of this market is really using LLMs in a deep way yet. In most cases, the models are being used for triage, recommendations, summarization, and analyst assistance. 
That is very different from using LLMs for real detection, broader SOC operations, or meaningful changes to the underlying architecture.</p><blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"> <p>For a more complete framework of where <a href="https://raffy.ch/blog/2026/02/03/the-gaps-that-created-the-new-wave-of-siem-and-ai-soc-vendors/">AI SOC and SIEM</a> should be heading, see <a href="https://raffy.ch/SIEM">raffy.ch/SIEM</a>.</p> </blockquote><p>That is why so much of the category feels undifferentiated. The interfaces are different, the branding is different, and the demo flows are different, but the center of gravity often looks the same. The latest platform announcements only reinforce that point. If the platform owner adds the agentic layer too, the vendors sitting on top of Chronicle, Splunk, or similar platforms have a much harder moat to defend.</p><h2 class="wp-block-heading">The architecture is shifting</h2><p>By this point, the vendor movement is established. The more interesting question now is what it does to architecture. <a href="https://intelligencecommunitynews.com/sentinelone-expands-on-premises-offerings/">SentinelOne</a> adds another signal here by pushing more AI security capability into on-prem, sovereign, and air-gapped environments.</p><p>Put together, that points to a broader market shift. Storage matters more. Data routing matters more. Sovereignty and local control matter more. Cheap data lakes, strong analytics layers, and flexible orchestration matter more. Traditional SIEM UI matters less than it used to, and that matters not just for SIEM vendors but also for MDRs that differentiated by putting an AI layer on top of someone else’s backend.</p><p>That is also why Splunk’s cost model keeps coming back into the conversation. 
Splunk is powerful and mature, but if the agent becomes the main consumer of the system, customers start asking a different question: am I paying for the analytics engine, or am I paying for UI, workflow, and operating complexity that an agent increasingly does not care about?</p><h2 class="wp-block-heading">If AI becomes the user, the stack changes</h2><p>The most important implication may be economic, not just operational. Security products were built for human analysts. The value lived in the UI, the workflow, the search language, the dashboard, and the services needed to make all of that usable. But what happens if the real user becomes Claude Code, Codex, Gemini, or some internal agent instrumented across the entire security stack? <a href="https://danielmiessler.com/blog/the-great-transition">Daniel Miessler</a> has been arguing that companies and products increasingly become APIs. Security looks like one of the clearest versions of that shift.</p><p>In that world, every product starts to look more like an API than an application. That is exactly where the recent announcements are heading. <a href="https://github.com/refractionPOINT/lc-ai/tree/master/lc-soc">LimaCharlie’s new <code>lc-soc</code> release</a> is a concrete implementation of the same idea: an open-source “agentic SOC as code” where AI agents are coordinated through the cases system and D&R rules, then deployed and versioned like infrastructure.</p><p>If AI becomes the primary user, the UI does not disappear, but it stops being the center of gravity. The agent does not care about your console. It cares about whether the data is accessible, whether the schema is consistent, whether the analytics layer is fast, whether the permissions model is clean, and whether the actions are safe to orchestrate.</p><p>That creates real pressure on expensive SIEM economics. If the agent can query multiple tools directly, the premium attached to a deeply monetized UI gets harder to justify. 
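</p>

<p>To make that concrete, here is a minimal sketch of what "the agent as user" actually needs from the stack: consistent schemas and queryable data, not a console. All names here are invented for illustration (<code>edr</code>, <code>lake</code>, <code>event_time</code>, and so on are not any vendor's real schema):</p>

```python
# Hypothetical sketch: an agent merging events from two backends into one
# consistent, time-ordered view. Tool names and field names are invented.

def normalize_event(raw: dict, source: str) -> dict:
    """Map a tool-specific event onto one consistent schema."""
    # Each backend names its timestamp/host fields differently (invented here);
    # the agent only works if someone pins down a single shared shape.
    field_map = {
        "edr":  {"ts": "event_time", "host": "device_name"},
        "lake": {"ts": "timestamp",  "host": "hostname"},
    }
    m = field_map[source]
    return {"ts": raw[m["ts"]], "host": raw[m["host"]], "source": source}

def merge_timeline(edr_events: list[dict], lake_events: list[dict]) -> list[dict]:
    """The agent's view: one time-ordered stream across tools, no UI involved."""
    events = [normalize_event(e, "edr") for e in edr_events]
    events += [normalize_event(e, "lake") for e in lake_events]
    # ISO-8601 timestamps sort correctly as strings
    return sorted(events, key=lambda e: e["ts"])
```

<p>Everything of value in that sketch is schema and access, not presentation: the moment two backends agree on a queryable shape, the console stops being the integration point.</p><p>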
The market may move toward something simpler: cheap storage, a strong analytics layer, and an orchestration layer on top. That does not mean incumbents disappear. It means their value proposition changes. If AI becomes the user, the winners may be the vendors with the best APIs, control points, and data access model.</p><h2 class="wp-block-heading">Evals become part of the control layer</h2><p>The next problem is trust and determinism. Once you push AI beyond triage and recommendations and let it make or recommend more consequential changes, you need a way to keep the system reliable. That is where eval loops come in.</p><p>I heard Josh Saxe make this point at RSAC in the context of AI-first infrastructure management: if agents are going to make changes in live systems, you need strong evaluation around them to keep behavior bounded and repeatable enough to trust. I think the same logic applies directly to security operations. The market is moving toward active response, but the models themselves were not built around strict determinism.</p><p>That means the answer is not blind autonomy. It is more likely a layered system where adaptive AI sits inside clearer control boundaries, with evals, policy, and deterministic automation around it. 
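</p>

<p>A minimal sketch of what that layering could look like, with all names invented for illustration (this is not any vendor's implementation):</p>

```python
# Hypothetical sketch: deterministic policy checks wrapped around an
# AI-proposed response action. ProposedAction and POLICY are illustrative.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    action: str        # e.g. "isolate_host"
    target: str        # e.g. "host-42"
    confidence: float  # model's self-reported confidence, 0..1

# Deterministic control boundaries: which actions are allowed at all,
# what confidence each one requires, and which targets are off limits.
POLICY = {
    "allowed_actions": {"isolate_host", "disable_account"},
    "min_confidence": {"isolate_host": 0.9, "disable_account": 0.8},
    "protected_targets": {"domain-controller-1"},
}

def evaluate(action: ProposedAction, policy: dict = POLICY) -> tuple[bool, str]:
    """Return (approved, reason). The agent proposes; this layer decides."""
    if action.action not in policy["allowed_actions"]:
        return False, f"action {action.action!r} not in allowed set"
    if action.target in policy["protected_targets"]:
        return False, f"target {action.target!r} is protected"
    if action.confidence < policy["min_confidence"][action.action]:
        return False, "confidence below the deterministic threshold"
    return True, "approved"
```

<p>The point of the sketch is the division of labor: the adaptive model proposes, a deterministic layer disposes, and the eval loop runs exactly these kinds of checks against the agent's proposals before anyone grants it more autonomy.</p><p>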
Evals stop being an AI engineering detail and become part of the security control layer itself.</p><p>The post <a href="https://raffy.ch/blog/2026/04/02/if-ai-becomes-the-user-what-happens-to-the-siem/">If AI Becomes the User, What Happens to the SIEM?</a> first appeared on <a href="https://raffy.ch/blog">Future of Tech and Security: Strategy & Innovation with Raffy</a>.</p><p class="syndicated-attribution">*** This is a Security Bloggers Network syndicated blog from <a href="https://raffy.ch/blog">Future of Tech and Security: Strategy &amp; Innovation with Raffy</a> authored by <a href="https://securityboulevard.com/author/0/" title="Read other posts by Raffael Marty">Raffael Marty</a>. Read the original post at: <a href="https://raffy.ch/blog/2026/04/02/if-ai-becomes-the-user-what-happens-to-the-siem/">https://raffy.ch/blog/2026/04/02/if-ai-becomes-the-user-what-happens-to-the-siem/</a> </p>