
Autonomy > Alerts: A New Stack for Application Security

Andrew Van Nest

May 12, 2025

Google’s ~$32 billion acquisition of Wiz was more than just a headline: it was a bookend to a chapter of cybersecurity innovation.

Wiz redefined what cloud security could be: deeply integrated, developer-native, and product-led. It proved that security could scale not as a checkbox but as software infrastructure. Yet as Wiz ascends into hyperscaler territory, it leaves behind a new frontier, one that’s arguably even more urgent: the application layer.

This is where modern complexity lives. It’s where business logic runs, where APIs expose sensitive systems, and where CI/CD pipelines move faster than security teams can respond. And yet, this layer remains largely underserved. Application security today is reactive, fragmented, and human-dependent. It’s a noisy loop of alerts, triage queues, and backlogged tickets, completely out of sync with the pace of modern development. In a world of ephemeral infrastructure, AI-generated code, and distributed identity, that model is not just inefficient, it's becoming obsolete.

What we’re seeing is a new kind of security platform. One that doesn’t just alert engineers, but acts alongside them. Not a dashboard. Not a plugin. But an intelligent, embedded system that lives in the codebase, understands infrastructure, and takes secure action. The future of application security belongs to what we call an Autonomous Application Security Engineer: a system that reasons about risk, fixes what’s broken, and continuously improves alongside your software.

This vision has long felt like an aspirational roadmap. But recent advances in AI infrastructure, particularly in Model Context Protocols (MCPs), are making it possible to build today.

MCPs enable long-lived memory and multi-step reasoning for LLMs operating in dynamic environments. Instead of treating each request as an isolated task, MCPs allow AI agents to retain persistent context over time. That includes architectural knowledge, security posture evolution, prior decisions, and project-specific nuance. In the context of application security, this unlocks an entirely new design space. These systems don’t just respond to patterns, they understand how risk changes over time, how it relates to business priorities, and how to intervene intelligently.
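To make the idea of persistent context concrete, here is a minimal sketch of the kind of security memory such an agent could accumulate. The `SecurityMemory` class and its fields are hypothetical illustrations, not part of the MCP specification; a real MCP server would expose this state as resources and tools over the protocol rather than an in-process store.

```python
from dataclasses import dataclass, field


@dataclass
class SecurityMemory:
    """Hypothetical persistent context store for a security agent."""
    events: list = field(default_factory=list)

    def record(self, kind: str, detail: str) -> None:
        # Retain architectural knowledge, prior decisions, posture changes.
        self.events.append({"kind": kind, "detail": detail})

    def context_for(self, kind: str) -> list:
        # Surface only the history relevant to the current decision, so
        # each model call starts with accumulated, scoped context instead
        # of treating the request as an isolated task.
        return [e["detail"] for e in self.events if e["kind"] == kind]


memory = SecurityMemory()
memory.record("decision", "2024-11: allowed public read on assets bucket")
memory.record("posture", "IAM roles tightened after audit")
memory.record("decision", "2025-01: rotated leaked API key")

# An agent reviewing a new bucket policy sees prior, related decisions.
print(memory.context_for("decision"))
```

The point is the shape, not the storage: decisions made months apart become inputs to the next decision, which is what lets an agent track policy drift rather than rediscover it.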

A recent study, Generative AI in Cybersecurity: A Survey of Methods, Applications, and Challenges, reinforces this point. The authors highlight continuity and context retention as critical barriers to deploying LLMs safely in security-critical workflows. Without memory, agents become stateless, repetitive, and incapable of making higher-order security decisions. But with persistent context (delivered through protocols like MCP) agents can track policy drift, understand historical changes, and reason about impact over time.

This persistence, when combined with structured agent coordination, enables systems to operate with planning and foresight. In Autonomous Agents in Cyber Defense, researchers propose multi-agent systems capable of triaging, remediating, and defending against complex threats autonomously. These agents don’t act in isolation, they collaborate. One scans for potential misconfigurations, another simulates adversarial access, a third proposes and tests remediations. Together, they emulate the workflows of a real security team, with the added benefits of scale, memory, and speed.
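The scan, simulate, remediate pipeline above can be sketched as three cooperating functions. In a real system each stage would be an LLM-backed agent; here the checks, the config keys, and the `s3:GetObject` replacement grant are all simplified, hypothetical stand-ins.

```python
def scan(config: dict) -> list:
    """Scanner agent: flag potential misconfigurations."""
    findings = []
    if config.get("public_read"):
        findings.append("bucket is publicly readable")
    if "*" in config.get("allowed_actions", []):
        findings.append("wildcard IAM action grant")
    return findings


def simulate_access(findings: list) -> list:
    """Adversary-simulation agent: keep only findings an attacker
    could plausibly exploit."""
    return [f for f in findings if "public" in f or "wildcard" in f]


def remediate(config: dict, confirmed: list) -> dict:
    """Remediation agent: propose a tightened configuration."""
    fixed = dict(config)
    if any("public" in f for f in confirmed):
        fixed["public_read"] = False
    if any("wildcard" in f for f in confirmed):
        # Replace the wildcard with a narrow, illustrative grant.
        fixed["allowed_actions"] = ["s3:GetObject"]
    return fixed


config = {"public_read": True, "allowed_actions": ["*"]}
confirmed = simulate_access(scan(config))
print(remediate(config, confirmed))
```

Each stage consumes the previous stage's output, which is what distinguishes a coordinated agent team from three independent scanners emitting disjoint alerts.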

Critically, these systems are only as good as the structure they reason over. In Secure by Design with LLMs, the authors emphasize that model performance improves dramatically when inputs are modular, scoped, and declarative. In other words, clean infrastructure-as-code, testable permission boundaries, and declarative policies aren’t just good hygiene, they’re enabling infrastructure for secure AI reasoning.
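To illustrate why declarative inputs matter, here is a minimal least-privilege check over a simplified, hypothetical IAM-style policy document. The schema (`statements`, `effect`, `actions`, `resource`) is an illustrative assumption, but the point holds: because the policy is structured data rather than buried imperative logic, both this function and an LLM can reason over it statement by statement.

```python
def violates_least_privilege(policy: dict) -> list:
    """Flag over-broad grants in a declarative, IAM-style policy."""
    violations = []
    for stmt in policy.get("statements", []):
        if stmt.get("effect") != "Allow":
            continue  # only Allow statements can over-grant
        if "*" in stmt.get("actions", []):
            violations.append("wildcard action")
        if stmt.get("resource") == "*":
            violations.append("unscoped resource")
    return violations


policy = {
    "statements": [
        {"effect": "Allow", "actions": ["*"], "resource": "*"},
        {"effect": "Allow", "actions": ["db:read"], "resource": "orders"},
    ]
}
print(violates_least_privilege(policy))
```

The same scoping that makes this trivially testable is what makes it tractable for a model: a bounded, declarative artifact with explicit boundaries to check against.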

This insight brings us back to fundamentals. Engineering principles we’ve long respected (least privilege, modularity, testability) are now prerequisites for safe autonomy. An LLM can’t reason about a monolith. But it can reason about a composable service with explicit permissions. It can’t validate behavior in a black box. But it can test behavior against clearly defined policies. And perhaps most powerfully, it can explain what it’s doing, why it’s doing it, and how it aligns with the organization’s security posture.

This is what an Autonomous AppSec Engineer could (and should) do. Start with simple, high-impact fixes: CVEs, exposed secrets, misconfigured IAM roles, unsafe default permissions, overly permissive APIs. Then evolve. Over time, such a system could maintain a living threat model of your application. It could continuously scan for regression risk, enforce compliance, and integrate with internal developer agents. It could respond to real-time drift in posture. It could help govern LLMs used inside the company, monitor vector stores, secure internal agent networks, and regulate memory sharing. And it could do all of this without waiting on a human to click “next.”
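The "simple, high-impact fixes" above start with detection. As a sketch, here is the core of an exposed-secret scan: the two regex patterns are illustrative examples only (production scanners combine many more formats with entropy-based detection), and the sample source string is fabricated for the demo.

```python
import re

# Illustrative patterns only; real scanners cover far more credential
# formats and pair regexes with entropy heuristics.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]", re.I
    ),
}


def find_secrets(source: str) -> list:
    """Return (kind, match) pairs for likely hard-coded credentials."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(source):
            hits.append((kind, m.group(0)))
    return hits


code = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\napi_key = "0123456789abcdef0123"'
print(find_secrets(code))
```

An autonomous system would close the loop from here: open the rotation workflow, scrub the history, and record the decision in its memory rather than filing a ticket and waiting.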

What MCP enables is a form of security memory (an institutional knowledge graph) accessible by autonomous agents, to ensure decisions are not just reactive but proactively strategic. It’s the difference between alerting on symptoms and understanding the system. And it’s exactly what modern application security has been missing.

We believe this is the next great security category. Not a better scanner. Not another SIEM. But a control plane for secure development that is embedded, intelligent, and always on. One that doesn’t look like a tool, but like infrastructure. One that replaces alert fatigue with trust, and replaces triage with action.

The primitives now exist. The research is accelerating. The architecture is viable. And the demand is only growing. Companies like Keycard are building the identity layer for this new era of software: dynamic, ephemeral credentials designed for AI agents and modern developer workflows. Their approach aligns perfectly with the shift toward embedded, autonomous security systems.

These are companies that deserve to be built. And when they are, it will reshape how we think about security, not as an audit function, but as a dynamic, autonomous layer in the modern software stack.


References

Arora, S., Haran, S., Gupta, K., & Tiwari, H. (2024). Generative AI in cybersecurity: A survey of methods, applications, and challenges (arXiv:2405.12750). arXiv. https://doi.org/10.48550/arXiv.2405.12750

Fang, Y., Hu, W., Shah, H., Korkmaz, O., & Williams, A. (2024). Autonomous agents in cyber defense (arXiv:2401.0286). arXiv. https://doi.org/10.48550/arXiv.2401.0286

Kwiatkowska, M., Bordbar, B., & Milani Alfredo, L. (2023). Security challenges in autonomous systems design (arXiv:2312.00018). arXiv. https://doi.org/10.48550/arXiv.2312.00018

Li, C., Guo, Z., Fu, W., Jin, Y., Zhang, K., & Liu, J. (2024). Secure by design with LLMs: Towards software development with generative AI (arXiv:2312.00018v2). arXiv. https://doi.org/10.48550/arXiv.2312.00018v2

Sambasivan, R., Joseph, A. D., & Zheng, H. (2024). The path to autonomous cyber defense (arXiv:2404.10788). arXiv. https://doi.org/10.48550/arXiv.2404.10788

Zhang, J., Li, Z., & Yin, H. (2024). VulnBot: Autonomous penetration testing for a multi-agent system (arXiv:2501.13411). arXiv. https://doi.org/10.48550/arXiv.2501.13411

Exceptional Capital © 2025 - Exceptional Capital and the Exceptional Capital logo are trademarks of Exceptional Capital. All Rights Reserved.

The information on this website is not a solicitation of an offer to sell or purchase an interest in any investment fund or vehicle, nor of any provision of investment management or advisory services.
