Advancing the Science of AI Security
The HiddenLayer AI Security Research team uncovers vulnerabilities, develops defenses, and shapes global standards to ensure AI remains secure, trustworthy, and resilient.



Turning Discovery Into Defense
Our mission is to identify and neutralize emerging AI threats before they impact the world. The HiddenLayer AI Security Research team investigates adversarial techniques, supply chain compromises, and agentic AI risks, transforming findings into actionable security advancements that power the HiddenLayer AI Security Platform and inform global policy.
Our AI Security Research Team
HiddenLayer’s research team combines offensive security experience, academic rigor, and a deep understanding of machine learning systems.

Kenneth Yeung
Senior AI Security Researcher

Conor McCauley
Adversarial Machine Learning Researcher

Jim Simpson
Principal Intel Analyst

Jason Martin
Director, Adversarial Research

Andrew Davis
Chief Data Scientist

Marta Janus
Principal Security Researcher
Eoin Wickens
Director of Threat Intelligence

Kieran Evans
Principal Security Researcher

Ryan Tracey
Principal Security Researcher
Kasimir Schulz
Director, Security Research
Our Impact by the Numbers
Quantifying the reach and influence of HiddenLayer’s AI Security Research.
Reduction in exposure to AI exploits
Disclosed through our security research
Issued patents
Latest Discoveries
Explore HiddenLayer’s latest vulnerability disclosures, advisories, and technical insights advancing the science of AI security.

Machine Learning is the New Launchpad for Ransomware
Researchers at HiddenLayer’s SAI Team have developed a proof-of-concept attack for surreptitiously deploying malware, such as ransomware or Cobalt Strike Beacon, via machine learning models. The attack uses a technique currently undetected by many cybersecurity vendors and can serve as a launchpad for lateral movement, deployment of additional malware, or the theft of highly sensitive data. Read more in our latest blog, Weaponizing Machine Learning Models with Ransomware.
Attack Surface
According to CompTIA, over 86% of surveyed CEOs reported that machine learning was a mainstream technology within their companies as of 2021. Open-source model-sharing repositories have been born out of inherent data science complexity, practitioner shortage, and the limitless potential and value they provide to organizations – dramatically reducing the time and effort required for ML/AI adoption. However, such repositories often lack comprehensive security controls, which ultimately passes the risk on to the end user – and attackers are counting on it. It is commonplace within data science to download and repurpose pre-trained machine learning models from online model repositories such as HuggingFace or TensorFlow Hub, amongst many others of a far less reputable and security-conscious nature. The general scarcity of security around ML models, coupled with the increasingly sensitive data that ML models are exposed to, means that model hijacking attacks, including AI data poisoning, can evade traditional security solutions and have a high propensity for damage.
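To appreciate how low the barrier is, consider that pulling a third-party pre-trained model into an environment is typically a one-liner. A minimal sketch using the Hugging Face transformers library (the model name is just an example):

```python
# Downloading and loading a third-party pre-trained model takes two lines;
# any malicious payload hidden inside the artifact comes along for the ride.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
```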
Business Implication
The implications of loading a hijacked model can be severe, especially given the sensitive data an ML model is often privy to, specifically:
- Initial compromise of an environment and lateral movement
- Deployment of malware (such as ransomware, spyware, backdoors, etc.)
- Supply chain attacks
- Theft of Intellectual Property
- Leaking of Personally Identifiable Information
- Denial/Degradation of service
- Reputational harm
How Does This Attack Work?
By combining several attack techniques, including steganography for hiding malicious payloads and data de-serialization flaws that can be leveraged to execute arbitrary code, our researchers demonstrate how to attack a popular computer vision model and embed malware within it. The resulting weaponized model evades current detection from anti-virus and EDR solutions while suffering only a negligible loss in efficacy. Currently, most popular anti-malware solutions provide little or no support in scanning for ML-based threats.
The researchers focused on the PyTorch framework and considered how the attack could be broadened to target other popular ML libraries, such as TensorFlow, scikit-learn, and Keras. In the demonstration, a 64-bit sample of the infamous Quantum ransomware is deployed on a Windows 10 system. However, any bespoke payload can be distributed in this way and tailored to target different operating systems, such as Windows, Linux, and Mac, and different architectures, such as x86/64.
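To illustrate the deserialization flaw this attack builds on, the sketch below shows how a pickle payload executes code the moment it is loaded. This is a benign illustration of the underlying weakness only, not HiddenLayer’s actual proof-of-concept, which additionally hides its payload steganographically within the model’s weights:

```python
# Benign demonstration of pickle's arbitrary-code-execution-on-load behavior.
import os
import pickle


class MaliciousPayload:
    def __reduce__(self):
        # Whatever callable/arguments this returns are invoked during unpickling.
        return (os.system, ("echo code executed on model load",))


blob = pickle.dumps(MaliciousPayload())

# Simply "loading the model" is enough to trigger execution on the victim's machine.
pickle.loads(blob)
```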
Hidden Ransomware Executed from an ML Model
Mitigations & Recommendations
- Proactive Threat Discovery: Don’t wait until it’s too late. Pre-trained models should be investigated ahead of deployment for evidence of tampering, hijacking, or abuse. HiddenLayer provides a Model Scanning service that can help with identifying malicious tampering. In this blog, we also share a specialized YARA rule for finding evidence of executable code stored within models serialized to the pickle format (a common machine learning file type).
- Securely Evaluate Model Behaviour: At the end of the day, models are software: if you don’t know where it came from, don’t run it within your enterprise environment. Untrusted pre-trained models should be carefully inspected inside a secure virtual machine prior to being considered for deployment.
- Cryptographic Hashing & Model Signing: Not just for integrity, cryptographic hashing provides verification that your models have not been tampered with (a minimal hashing sketch follows this list). If you want to take this a step further, signing your models with certificates ensures a particular level of trust which can be verified by users downstream.
- External Security Assessment: Understand your level of risk, address blind spots, and see what you could improve upon. With the level of sensitive data that ML models are privy to, an external security assessment of your ML pipeline may be worth your time. HiddenLayer’s SAI Team and Professional Services can help your organization evaluate the risk and security of your AI assets.
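As a minimal illustration of the hashing recommendation above (the file name and known-good digest are placeholders; in practice the digest would come from the model publisher):

```python
# Verify a downloaded model file against a publisher-supplied SHA-256 digest.
import hashlib

KNOWN_GOOD_SHA256 = "replace-with-the-publisher-supplied-digest"


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


digest = sha256_of("model.pkl")
if digest != KNOWN_GOOD_SHA256:
    raise RuntimeError(f"model hash mismatch: {digest}")
```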
About HiddenLayer
HiddenLayer helps enterprises safeguard the machine learning models behind their most important products with a comprehensive security platform. Only HiddenLayer offers turnkey AI/ML security that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded in March of 2022 by experienced security and ML professionals, HiddenLayer is based in Austin, Texas, and is backed by cybersecurity investment specialist firm Ten Eleven Ventures. For more information, visit www.hiddenlayer.com and follow us on LinkedIn or Twitter.

Unpacking the AI Adversarial Toolkit
More often than not, it’s the creation of a new class of tool, or weapon, that acts as the catalyst of change and herald of a new age. Be it the sword, gun, first piece of computer malware, or offensive security frameworks like Metasploit, they all changed the paradigm and required us to adapt to face our new reality or ignore it at our peril.
Much in the same way, the field of adversarial machine learning is beginning to find its inflection points, with scores of tools and frameworks being released into the public sphere that bring the more advanced methods of attack into the hands of the many. These tools are often used with defensive evaluation in mind, but how they are used often depends on the hands of those who wield them.
The question remains, what are these tools, and how are they being used? The first step in defending yourself is knowing what’s out there.
Let’s begin!
Offensive Security Frameworks
Ask a security practitioner if they know of any offensive security frameworks, and the answer will almost always be a resounding ‘yes.’ The concept has been around for a long time, but frameworks such as Metasploit, Cobalt Strike, and Empire popularized the idea to an entirely new level. At their core, these frameworks amalgamate a set of often-complex attacks for various parts of a kill chain in one place (or one tool), enabling an adversary to perform attacks with ease, while only requiring an abstract understanding of how the attack works under the hood.
While they’re often referred to as ‘offensive’ security frameworks or ‘attack’ frameworks, they can also be used for defensive purposes. Security teams and penetration testers use such frameworks to evaluate security posture with greater ease and reproducibility. But, on the other side of the same coin, they also help to facilitate attackers in conducting malicious attacks. This concept holds true with adversarial machine learning. Currently, adversarial ML attacks have not yet become as commonplace as attacks on systems that support them but, with greater access to tooling, there is no doubt we will see them rise.
Here are some adversarial ML frameworks we’re acquainted with.
Adversarial Robustness Toolbox – IBM / LFAI

In 2018, IBM released the Adversarial Robustness Toolbox, or ART for short. ART is a framework/library used to evaluate the security of machine learning models through various means and has been part of the Linux Foundation since early 2020. Models can be created, attacked, and evaluated all in one tool. ART boasts a multitude of attacks, defenses, and metrics that can help security practitioners shore up model defenses and aid offensive researchers in finding vulnerabilities. ART supports all input data types and even includes tutorial examples in the form of Jupyter notebooks for getting started attacking image models, fooling audio classifiers, and much more.
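As a minimal sketch of what working with ART can look like, the following runs its black-box HopSkipJump evasion attack against a scikit-learn classifier (module paths and defaults may differ between ART versions):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the trained model so ART can query it during the attack.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# HopSkipJump is decision-based: it only needs the model's predictions, no gradients.
attack = HopSkipJump(classifier=classifier, targeted=False, max_iter=10)
x_adv = attack.generate(x=X[:5].astype(np.float32))

print("original predictions:   ", model.predict(X[:5]))
print("adversarial predictions:", model.predict(x_adv))
```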
Counterfit – Microsoft
Counterfit, released by Microsoft in May of 2021, is a command-line automation tool used to orchestrate attacks and testing against ML models. Counterfit is environment-agnostic, model-agnostic and supports most general types of input data (text, audio, image, etc.). It does not provide the attacks themselves and instead interfaces with existing attacks and frameworks such as Adversarial Robustness Toolbox, TextAttack, and Augly. Users of Counterfit will no doubt pick up on its uncanny resemblance to Metasploit in terms of its commands and navigation.
CleverHans – CleverHans-Lab

CleverHans, created by CleverHans-Lab – an academic research group attached to the University of Toronto – is a library that supports the creation of adversarial attacks and defenses and the benchmarking thereof. Carefully maintained tutorial examples are present within the GitHub repository to help users get started with the library. Attacks such as CarliniWagner and HopSkipJump, amongst others, can be used, with varying implementations for the different supported ML libraries – Jax, PyTorch, and TensorFlow 2. For seamless deployment, the tool can be spun up within a Docker container, à la its bundled Dockerfile. CleverHans-Lab regularly publishes research on adversarial attacks on their blog, with associated proof-of-concept (POC) code available from their GitHub profile.
Armory – TwoSixLabs

Armory, developed by TwoSixLabs, is an open-source containerized testbed for evaluating adversarial defenses. Armory can be deployed via container either locally or in cloud instances, which enables scalable model evaluation. Armory interfaces with the Adversarial Robustness Toolbox to enable interchangeable attacks and defenses. Armory’s ‘scenarios’ are worth mentioning, allowing for testing and evaluating entire machine learning threat models. When building an Armory scenario, considerations such as adversaries’ objective, operating environment, capabilities, and resources are used to profile an attacker, determine the threat they pose and evaluate the performance impact through metrics of interest. While this is from a higher, more interpretable level, scenarios have a paired config file that contains detailed information on the attack to be performed, the dataset to use, the defense to test, and various other properties. Using these lends itself to a high standard of repeatability and potential for automation.
Foolbox – Jonas Rauber, Roland S. Zimmermann
Foolbox is built to perform fast attacks on ML models, having been rewritten to use EagerPy, which allows for native execution with multiple frameworks such as PyTorch, TensorFlow, JAX, and NumPy, without having to make any code changes. Foolbox boasts many gradient- and decision-based attacks, covering many routes of attack.
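A minimal sketch of Foolbox’s PyTorch interface, assuming Foolbox 3 and using a torchvision ResNet purely for illustration:

```python
import torch
import torchvision.models as models
import foolbox as fb

# On newer torchvision, prefer models.resnet18(weights=models.ResNet18_Weights.DEFAULT).
model = models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# Foolbox bundles a handful of sample ImageNet images for quick experiments.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=4)

attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
print("successfully perturbed:", int(is_adv.sum()), "of", len(labels))
```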
TextAttack – QData

TextAttack is a powerful model-agnostic NLP attack framework that can perform adversarial text attacks, text augmentation, and model training. While many offensive scenarios can be conducted from within the framework, TextAttack also enables the user to use the framework and related libraries as the basis for the development of custom adversarial attacks. TextAttack’s powerful text augmentation capabilities can also be used to generate data to help increase model generalization and robustness.
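As a small example of the augmentation side, TextAttack’s EmbeddingAugmenter swaps words for nearest neighbors in embedding space to generate perturbed variants (API details may differ between versions):

```python
from textattack.augmentation import EmbeddingAugmenter

augmenter = EmbeddingAugmenter()
# Returns a list of perturbed versions of the input sentence.
print(augmenter.augment("What I cannot create, I do not understand."))
```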
MLSploit – Georgia Tech & Intel

MLSploit is an extensible cloud-based framework built to enable rapid security evaluation of ML models. Under the hood, MLSploit uses libraries such as Barnum, AVPass, and ShapeShifter to attack malware classifiers, intrusion detectors, and object detectors, and to identify control-flow anomalies in documents, to name a few use cases. However, MLSploit does not appear to have been as actively developed as other frameworks mentioned in this blog.
AugLy – FacebookResearch

AugLy, developed by Meta Research (Formerly Facebook Research), is not quite an offensive security framework but deals more specifically with data augmentation. AugLy can augment audio, image, text, and video to generate examples to increase model robustness and generalization. Counterfit uses AugLy for testing for ‘common corruptions,’ which they define as a bug class.
Fault Injection
As the name suggests, fault injection is the act of injecting faults into a system to understand how it behaves when it performs in unusual scenarios. In the case of ML, fault injection typically refers to the manipulation of weights and biases in a model during runtime. Fault Injection can be performed for several reasons, but predominantly to evaluate how models respond to software and hardware faults.
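Conceptually, weight-level fault injection can be as simple as corrupting part of a parameter tensor and observing how the model’s predictions change. The hand-rolled sketch below is illustrative only; tools such as PyTorchFi wrap this idea in a proper, configurable API:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
x = torch.randn(8, 16)

with torch.no_grad():
    clean = model(x).argmax(dim=1)

    # Inject a fault: corrupt one row of the first layer's weight matrix.
    model[0].weight[3] = 1e6

    faulty = model(x).argmax(dim=1)

print("predictions changed for", (clean != faulty).sum().item(), "of", len(x), "inputs")
```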
PyTorchFi
PyTorchFi is a fault injection tool for Deep Neural Networks (DNNs) that were trained using PyTorch. PyTorchFi is highly versatile and straightforward to use, supporting several use cases for reliability and dependability research, including:
- Resiliency analysis of classification or object detection networks
- Analysis of robustness to adversarial attacks
- Training resilient models
- DNN interpretability
TensorFI – DependableSystemsLab
TensorFI is a fault injection tool that provides runtime perturbations to models trained using TensorFlow. It operates by hooking TensorFlow operators such as LRN, softmax, div, and sub for specific layers and provides methods for altering results via YAML configuration. TensorFI ships with support for a few existing DNNs, such as AlexNet, VGG, and LeNet.
Reinforcement-Learning/GAN-based Attack Tools
Over the last few years, there has been an interesting emergence of attack tooling utilizing machine learning – more precisely, reinforcement learning and Generative Adversarial Networks (GANs) – to conduct attacks against machine learning systems. The aim: to produce an adversarial example for a target model. An adversarial example is essentially a piece of input data (be it an image, a PE file, an audio snippet, etc.) that has been modified in a particular way to induce a specific reaction from an ML model. In many cases, this is what we refer to as an evasion attack, also known as a model bypass.
Adversarial examples can be created in many ways, be it through mathematical means, randomly perturbing the input, or iteratively changing features. This process can be lengthy, but can be accelerated through the use of reinforcement learning and GANs.
Reinforcement learning in this context essentially weights input perturbations against the prediction value from the model. If the perturbation alters the predicted value in the desired direction, it weights it more positively and so on. This allows for a ‘smarter’ perturbation selection approach.
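A stripped-down way to picture that feedback loop is a greedy search that keeps only the perturbations that move the model’s score in the attacker’s favor. Real tools such as MalwareGym learn a policy over structured file-modification actions instead, but the reward signal is analogous (the sketch below uses a synthetic tabular classifier purely for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
sample = X[y == 1][0].copy()              # start from a "malicious" (class 1) sample
score = clf.predict_proba([sample])[0, 1]

for _ in range(200):
    candidate = sample + rng.normal(scale=0.1, size=sample.shape)
    new_score = clf.predict_proba([candidate])[0, 1]
    if new_score < score:                 # "reward": the malicious score dropped
        sample, score = candidate, new_score
    if score < 0.5:                       # evasion achieved: now classified as benign
        break

print("final malicious probability:", round(float(score), 3))
```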
GANs, on the other hand, typically consist of two networks – a generator and a discriminator – that train in tandem by pitting themselves against each other. The generator model generates ‘fake’ data, while the discriminator model attempts to determine what is real or fake.
Both of these methods enable fast and effective adversarial example generation and can be applied to many domains. GANs are used in a wide variety of settings and can generate almost any kind of input; for brevity, this blog looks more closely at the tools that are security-centric.
MalwareGym – EndgameInc
MalwareGym was one of the first automated attack frameworks to use reinforcement learning in the modification of Portable Executable (PE) files. By taking features from clean ‘goodware’ and using them to alter malware executables, MalwareGym can be used to create adversarial examples that bypass malware classifier models (in this case, a gradient-boosted decision tree malware classifier). Under the hood, it uses OpenAI Gym, a library for building and comparing reinforcement learning solutions.
MalwareRL – Bobby Filar
While MalwareGym performed attacks against one model, MalwareRL picked up where it left off, with the tool able to conduct attacks against three different malware classifiers, Ember (Elastic Malware Benchmark for Empowering Researchers), SoRel (Sophos-ReversingLabs), and MalConv. MalwareRL also comes with Docker container files, allowing it to be spun up in a container relatively quickly and easily.
Pesidious – CyberForce
Pesidious performs a similar attack; however, it combines Generative Adversarial Networks (GANs) with its reinforcement learning methodology. Note that Pesidious currently only supports 32-bit applications.
DW-GAN – Johnnyzn
DW-GAN is a GAN-based framework for breaking captchas on the dark web, where many sites are gated to prevent automated scraping – another interesting application where ML-equipped tooling comes to the fore.
PassGAN – Briland Hitaj et al (Paper) / Brannon Dorsey (Implementation)
PassGAN uses a GAN to create novel password examples based on leaked password datasets, removing the necessity for a human to carefully create and curate a password wordlist for consequent use with tools such as Hashcat/JohnTheRipper.
Model Theft/Extraction
Model theft, also known as model extraction, is when an attacker recreates a target model without any access to the training data. While there aren’t many public tooling examples for model theft, it’s a highly worrying attack vector, given the relative ease with which a model can be stolen, leading to potentially substantial damages and business losses over time. We can posit that the scarcity of tooling is because model theft is typically quite a bespoke process, though it’s hard to tell.
KnockOffNets – Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
One such tool for the extraction of neural networks is KnockOffNets. KnockOffNets is available as its own standalone repository and as part of the Adversarial Robustness Toolbox. With only a black-box understanding of a model and no predetermined knowledge of its training data, the model can be relatively accurately reproduced for as little as $30, even performing well with interpreting data outside the target model’s training data. This tool shows the relative ease, exploitability, and success of model theft/model extraction attacks.
All Your GNN Models and Data Belong To Me – Yang Zhang, Yun Shen, Azzedine Benameur
Given its recency and relevancy, it’s worth mentioning the talk ‘All Your GNN Models and Data Belong To Me’ by Zhang, Shen and Benameur from the BlackHat USA 2022 conference. This research outlines how prevalent graph neural networks are throughout society, how susceptible they are to link reidentification attacks, and most importantly – how they can be stolen.
Deserialization Exploitation
While not explicitly pertaining to ML models, deserialization exploits are an often overlooked vulnerability within the ML sphere. These exploits arise when untrusted data is deserialized without any safety checks, allowing arbitrary code to run. One main culprit is the Pickle file format, which is used almost ubiquitously for sharing pre-trained models. Pickle is inherently vulnerable to deserialization exploits, allowing attackers to run malicious code upon load. To make matters worse, Pickle is still the preferred storage method for saving/loading models in libraries such as PyTorch and Scikit-Learn, and is widely used by other ML libraries.
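A crude illustration of what scanning for this can look like: walking a pickle’s opcode stream with the standard library and flagging imports of modules commonly abused for code execution. Purpose-built tools are far more thorough; the heuristic below is only a sketch:

```python
import pickletools

SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "__builtin__"}


def looks_suspicious(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    recent_strings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if isinstance(arg, str):
            recent_strings.append(arg)
        # GLOBAL carries "module name" inline; STACK_GLOBAL consumes the two prior strings.
        if opcode.name == "GLOBAL":
            module = arg.split()[0]
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            module = recent_strings[-2]
        else:
            continue
        if module.split(".")[0] in SUSPICIOUS_MODULES:
            return True
    return False


print(looks_suspicious("model.pkl"))
```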
Fickling – TrailOfBits

The tool Fickling by TrailOfBits is explicitly designed to exploit the Pickle format and detect malicious Pickle files. Fickling boasts a decompiler, static analyzer, and bytecode rewriter. With that, it can inject arbitrary code into existing Pickle files, trace execution, and evaluate their safety.
Keras H5 Lambda Layer Exploit – Chris Anley – NCCGroup
While not a tool itself, worth mentioning is the existence of another deserialization exploit, this time within the Keras library. While Keras supports Pickle files, it also supports the HDF5 format. HDF5 is not inherently vulnerable (that we know of), but when combined with Lambda layers, it can be. Lambdas in Keras can execute arbitrary code as part of the neural network architecture and can be persisted within the HDF5 format. If a Lambda bundled within a pre-trained model in said format contains a remote backdoor or reverse shell, Keras will trigger it automatically upon model load.
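A minimal, harmless sketch of that mechanism is below. Behavior varies by TensorFlow/Keras version; recent Keras releases refuse to deserialize lambdas unless safe_mode=False is passed to load_model:

```python
import tensorflow as tf

# A Lambda layer wraps arbitrary Python code into the model architecture.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Lambda(lambda x: x * 2.0),  # harmless here; could be anything
])
model.save("lambda_model.h5")

# Loading deserializes the embedded Python code along with the weights; a malicious
# Lambda would run its payload as soon as the model is loaded or invoked.
reloaded = tf.keras.models.load_model("lambda_model.h5")  # add safe_mode=False on Keras 3
print(reloaded(tf.ones((1, 4))))
```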
Charcuterie – Will Pearce
Last but certainly not least is Charcuterie, a collection of attacks for ML and ML-adjacent libraries. Released at LabsCon 2022 by Will Pearce, AKA MooHax, Charcuterie ties together a multitude of code execution and deserialization exploits in one place, acting as a demonstration of the many ways ML models are vulnerable outside of their algorithms. While it provides several examples of Pickle and Keras deserialization (though the Keras functionality is commented out), it also includes methods of abusing shared objects in popular ML libraries to load malicious DLLs, Jupyter Notebook AutoLoad abuse, JSON deserialization, and many more. We recommend checking out the presentation slides for further reading.
Conclusions
Hopefully, by now, we’ve painted a vivid enough picture to show that the volume of offensive tooling, exploitation, and research in the field is growing, as is our collective attack surface. The tools we’ve looked at in this blog showcase what’s out there in terms of publicly available, open-source tooling, but don’t forget that actors with enough resources (and motivation) have the capability to create more advanced methods of attack. Fear the state-aligned university researcher!
On the other side of the coin, the term ‘script-kiddie’ has been thrown around for a long time, referring to those who rely predominantly on premade tools to attack a system without wholly understanding the field behind it. While not as point-and-shoot as offensive tooling in the traditional sense, the bar has been dramatically lowered for adversaries to conduct attacks on AI/ML systems. Whichever designation one gives them, the reality is that they pose a threat and, no matter the skill level, shouldn’t be ignored.
While these tools require varying skill levels to use and some far more to master, they all contribute to the communal knowledge-base and serve, at the very least, as educational waypoints both for researchers and those stepping into the field for the first time. From an industry perspective, they serve as important tools to harden AI/ML systems against attack, improve model robustness, and evaluate security posture through red and blue team exercises. Ensuring AI model security is critical in this context, as these frameworks enable researchers and practitioners to identify vulnerabilities and mitigate risks before adversaries can exploit them.
As with all technology, we stand on the shoulders of giants; the development and use of these tools will spur research that builds on them and will drive both offensive and defensive research to new heights.
About HiddenLayer
HiddenLayer helps enterprises safeguard the machine learning models behind their most important products with a comprehensive security platform. Only HiddenLayer offers turnkey AI/ML security that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded in March of 2022 by experienced security and ML professionals, HiddenLayer is based in Austin, Texas, and is backed by cybersecurity investment specialist firm Ten Eleven Ventures. For more information, visit www.hiddenlayer.com and follow us on LinkedIn or Twitter.
About SAI
Synaptic Adversarial Intelligence (SAI) is a team of multidisciplinary cybersecurity experts and data scientists on a mission to increase general awareness surrounding the threats facing machine learning and artificial intelligence systems. Through education, we aim to help data scientists, MLDevOps teams, and cybersecurity practitioners better evaluate the vulnerabilities and risks associated with ML/AI, ultimately leading to more security-conscious implementations and deployments.

Analyzing Threats to Artificial Intelligence: A Book Review
An Interview with Dan Klinedinst
Introduction
At HiddenLayer, we keep a close eye on everything in AI/ML security and are always on the lookout for the latest research, detailed analyses, and prescient thoughts from within the field. When Dan Klinedinst’s recently published book, ‘Shall We Play A Game? Analyzing Threats to Artificial Intelligence’, appeared in our periphery, we knew we had to investigate.
Shall We Play A Game opens with an eerily human-like paragraph generated by a text generation model – we didn’t expect to see a reference to a ‘gigantic death spiral’ either, but here we are! What comes after is a wide-ranging and well-considered exploration of the threats facing AI, written in an engaging and accessible manner. From GPU attacks and Generative Adversarial Networks to the abuse of financial AI models, cognitive bias, and beyond, Dan’s book offers a comprehensive introduction to the topic and should be considered essential reading for anyone interested in understanding more about the world of adversarial machine learning.
We were fortunate enough to speak with Dan and ask for his views on the state of the industry, how taxonomies, frameworks, and lawmakers can help play a role in securing AI, and where we’re headed in the future – oh, and some Sci-Fi, too.
Q&A
Beyond reading your book, what other resources are available to someone starting to think about ML security?
The first source I’d like to call out is the AI Village at the annual DefCon conference (aivillage.org). They have talks, contests, and a year-round discussion on Discord. Second, a lot of the information on AI security is still found in academic papers. While researching the book, I found it useful to go beyond media reports and review the original sources. I couldn’t always follow the math, but I found their hypotheses and conclusions more actionable than media reports. MITRE is also starting to publish applied research on adversarial ML, such as the ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) framework mentioned in the next question. Finally, Microsoft has published some excellent advice on threat modeling AI.
You mention NISTIR 8269, “A Taxonomy and Terminology of Adversarial Machine Learning.” There are other frameworks, such as MITRE ATLAS™. Are such frameworks helpful for existing security teams to start thinking about ML-specific security concerns?
These types of frameworks and models are useful for providing a structured approach to examine the security of an AI or ML system. However, it’s important to remember that these types of tools are very broad and can’t provide a risk assessment of specific systems. For example, a Denial of Service attack against a business analytics system is likely to have a much different impact than a Denial of Service on a self-driving bus. It’s also worth remembering that attackers don’t follow the rules of these frameworks and may well invent innovative classes of attacks that aren’t currently represented.
Traditional computer security incidents have evolved over many years - from no security to simple exploration, benign proof of concept, entertainment/chaos, damage/harm, and the organized criminal enterprises we see today. Do you think ML attacks will evolve in the same way?
I think they’ll evolve in different ways. For one thing, we’ll jump straight to the stage of attacking ML systems for financial damage, whether that’s through ransomware, fraud, or subversion of digital currency. Beyond that, attacks will have different goals than past attacks. Theft of data was the primary goal of attackers until recently, when they realized ransomware is more profitable and arguably easier. In other words, they’ve moved from attacking confidentiality to attacking availability. I can see attacks on ML systems changing targets again to focus on subverting integrity. It’s not clear yet what the impact will be if we cannot trust the answers we get from ML systems.
Where do you foresee the future target of ML attacks? Will they focus more on the algorithm, model implementation, or underlying hardware/software?
I see attacks on model implementation as being similar to reverse engineering of proprietary systems today. It will be widespread but it will often be a means to enable further attacks. Attacks on the algorithm will be more challenging but will potentially give attackers more value. (For an interesting but relatively understandable example of attacks on the algorithm, see this recent post). The primary advantage of using AI and ML systems is that they can learn, so as an attacker the primary goal is to affect what and how it learns. All of that said, we still need to secure the underlying hardware and software! We have in no way mastered that component as an industry.
What defensive countermeasures can organizations adopt to help secure themselves from the most critical forms of AI attack?
Create threat models! This can be as simple as brainstorming possible vulnerabilities on a whiteboard or as complex as very detailed MBSE models or digital twins. Become familiar with techniques to make ML systems resistant to adversarial actions. For example, feature squeezing and feature denoising are methods for detecting violations of model input integrity (https://docs.microsoft.com/en-us/security/engineering/threat-modeling-aiml). Finally, focus on securing interfaces, just like you would in traditional-but-complex systems. If a classifier is created to differentiate between “dog” and “cat”, you should never accept the answer “giraffe”!
Currently, organizations are not required to disclose an attack on their ML systems/assets. How do you foresee tighter regulatory guidelines affecting the industry?
We’ve seen relatively little appetite for regulating cybersecurity at the national and international level. Outside of critical infrastructure, compliance tends to be more market-based, such as PCI and cyber insurance. I think regulation of AI is likely to come out of the regulatory bodies for specific industries rather than an overarching security policy framework. For example, financial lenders will have to prove that their models aren’t biased and are transparent enough that you can show exactly what transactions are being made. Attacks on ML systems might have to be reported in financial disclosures, if they’re material to a public company’s stock price. Medical systems will be subject to malpractice guidelines and autonomous vehicles will be liable for accidents. However, I don’t anticipate an “AI Security Act of 2028” or anything in most countries.
EU regulators recently proposed legislation that would require AI systems to meet certain transparency obligations. With the growing complexity of advanced neural networks, is explainable AI a viable way forward?
Explainable AI (XAI) is a necessary but insufficient control that will enable some of the regulatory requirements. However, I don’t think XAI alone is enough to convince users or regulators that AI is trustworthy. There will be some AI advances that cannot easily be explained, so creators of such systems need to establish trust based on other methods of transparency and attestation. I think of it as similar to how we trust humans - we can’t always understand their thought processes, but if their externally-observable actions are consistently trustworthy, we grant them more trust than if they are consistently wrong or dishonest. We already have ways to measure wrongness and dishonesty, from technical testing to courts of law.
And finally, are you a science fiction fan? As a total moonshot, how do you think the industry will look in 50 years compared to past and present science fiction writing? *cough* Battlestar Galactica *cough*
I’m a huge science fiction fan; my editor made me take a lot of sci-fi references out of my book because they were too obscure. Fifty years is a long time in this field. We could even have human-equivalent AI by then (although I personally doubt it will be that soon.) I think in 50 years – or possibly much sooner – AI will be performing most of the functions that cybersecurity professionals do now – vulnerability analysis, validation & verification, intrusion detection and threat hunting, et cetera. The massive state space of interconnected global systems, combined with vast amounts of data from cheap sensors, will be far greater than what humans can mentally process in a usable timeframe. AIs will be competing with each other at high speed to attack and defend. These might be considered adversarial attacks or they might just be considered how global competition works at that stage (think of the AIs and zaibatsus in early William Gibson novels). Humans in the industry will have to focus on higher order concerns - algorithms, model robustness, the security of the information as opposed to the security of the computers, simulation/modeling, and accurate risk assessment. Oh and don’t forget all the new technology that AI will probably enable - nanotech, biotech, mixed reality, quantum foo. I don't lose sleep over our world becoming like those in the Matrix or Terminator movies; my concerns are more Ex Machina or Black Mirror.
Closing Notes
We hope you found this conversation as insightful as we did. By having these conversations and bringing them into the public sphere – we aspire to raise more awareness surrounding the potential threats to AI/ML systems, the outcomes thereof, and what we can do to defend against them. We’d like to thank Dan for his time in providing such insightful answers and look forward to seeing his future work. For more information on Dan Klinedinst, or to grab yourself a copy of his book ‘Shall We Play A Game? Analyzing Threats to Artificial Intelligence’, be sure to check him out on Twitter or visit his website.
About Dan Klinedinst
Dan Klinedinst is an information security engineer focused on emerging technologies such as artificial intelligence, autonomous robots, and augmented / virtual reality. He is a former security engineer and researcher at Lawrence Berkeley National Laboratory, Carnegie Mellon University’s Software Engineering Institute, and the CERT Coordination Center. He currently works as a Distinguished Member of Technical Staff at General Dynamics Mission Systems, designing security architectures for large systems in the aerospace and defense industries. He has also designed and implemented numerous offensive security simulation environments; and is the creator of the Gibson3D security visualization tool. His hobbies include travel, cooking, and the outdoors. He currently resides in Pittsburgh, PA.
About HiddenLayer
HiddenLayer helps enterprises safeguard the machine learning models behind their most important products with a comprehensive security platform. Only HiddenLayer offers turnkey AI/ML security that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Founded in March of 2022 by experienced security and ML professionals, HiddenLayer is based in Austin, Texas, and is backed by cybersecurity investment specialist firm Ten Eleven Ventures. For more information, visit www.hiddenlayer.com and follow us on LinkedIn or Twitter.
In the News
HiddenLayer’s research is shaping global conversations about AI security and trust.

HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative
Austin, TX – December 23, 2025 – HiddenLayer, the leading provider of Security for AI, today announced it has been selected as an awardee on the Missile Defense Agency’s (MDA) Scalable Homeland Innovative Enterprise Layered Defense (SHIELD) multiple-award, indefinite-delivery/indefinite-quantity (IDIQ) contract. The SHIELD IDIQ has a ceiling value of $151 billion and serves as a core acquisition vehicle supporting the Department of Defense’s Golden Dome initiative to rapidly deliver innovative capabilities to the warfighter.
The program enables MDA and its mission partners to accelerate the deployment of advanced technologies with increased speed, flexibility, and agility. HiddenLayer was selected based on its successful past performance with ongoing US Federal contracts and projects with the Department of Defense (DoD) and United States Intelligence Community (USIC). “This award reflects the Department of Defense’s recognition that securing AI systems, particularly in highly classified environments, is now mission-critical,” said Chris “Tito” Sestito, CEO and Co-founder of HiddenLayer. “As AI becomes increasingly central to missile defense, command and control, and decision-support systems, securing these capabilities is essential. HiddenLayer’s technology enables defense organizations to deploy and operate AI with confidence in the most sensitive operational environments.”
Underpinning HiddenLayer’s unique solution for the DoD and USIC is HiddenLayer’s Airgapped AI Security Platform, the first solution designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.
HiddenLayer’s Airgapped AI Security Platform delivers comprehensive protection across the AI lifecycle, including:
- Comprehensive Security for Agentic, Generative, and Predictive AI Applications: Advanced AI discovery, supply chain security, testing, and runtime defense.
- Complete Data Isolation: Sensitive data remains within the customer environment and cannot be accessed by HiddenLayer or third parties unless explicitly shared.
- Compliance Readiness: Designed to support stringent federal security and classification requirements.
- Reduced Attack Surface: Minimizes exposure to external threats by limiting unnecessary external dependencies.
“By operating in fully disconnected environments, the Airgapped AI Security Platform provides the peace of mind that comes with complete control,” continued Sestito. “This release is a milestone for advancing AI security where it matters most: government, defense, and other mission-critical use cases.”
The SHIELD IDIQ supports a broad range of mission areas and allows MDA to rapidly issue task orders to qualified industry partners, accelerating innovation in support of the Golden Dome initiative’s layered missile defense architecture.
Performance under the contract will occur at locations designated by the Missile Defense Agency and its mission partners.
About HiddenLayer
HiddenLayer, a Gartner-recognized Cool Vendor for AI Security, is the leading provider of Security for AI. Its security platform helps enterprises safeguard their agentic, generative, and predictive AI applications. HiddenLayer is the only company to offer turnkey security for AI that does not add unnecessary complexity to models and does not require access to raw data and algorithms. Backed by patented technology and industry-leading adversarial AI research, HiddenLayer’s platform delivers supply chain security, runtime defense, security posture management, and automated red teaming.
Contact
SutherlandGold for HiddenLayer
hiddenlayer@sutherlandgold.com

HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments
AUSTIN, TX — December 1, 2025 — HiddenLayer, the leading AI security platform for agentic, generative, and predictive AI applications, today announced expanded integrations with Amazon Web Services (AWS) Generative AI offerings and a major platform update debuting at AWS re:Invent 2025. HiddenLayer offers additional security features for enterprises using generative AI on AWS, complementing existing protections for models, applications, and agents running on Amazon Bedrock, Amazon Bedrock AgentCore, Amazon SageMaker, and SageMaker Model Serving Endpoints.
As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.
“As organizations embrace generative AI to power innovation, they also inherit a new class of risks unique to these systems,” said Chris Sestito, CEO and Co-Founder of HiddenLayer. “Working with AWS, we’re ensuring customers can innovate safely, bringing trust, transparency, and resilience to every layer of their AI stack.”
Built on AWS to Accelerate Secure AI Innovation
HiddenLayer’s AI Security Platform and integrations are available in AWS Marketplace, offering native support for Amazon Bedrock and Amazon SageMaker. The company complements AWS infrastructure security by providing AI-specific threat detection, identifying risks within model inference and agent cognition that traditional tools overlook.
Through automated security gates, continuous compliance validation, and real-time threat blocking, HiddenLayer enables developers to maintain velocity while giving security teams confidence and auditable governance for AI deployments.
Alongside these integrations, HiddenLayer is introducing a complete platform redesign and the launch of a new AI Discovery module and an enhanced AI Attack Simulation module, further strengthening its end-to-end AI Security Platform that protects agentic, generative, and predictive AI systems.
Key enhancements include:
- AI Discovery: Identifies AI assets within technical environments to build AI asset inventories.
- AI Attack Simulation: Automates adversarial testing and Red Teaming to identify vulnerabilities before deployment.
- Complete UI/UX Revamp: Simplified sidebar navigation and reorganized settings for faster workflows across AI Discovery, AI Supply Chain Security, AI Attack Simulation, and AI Runtime Security.
- Enhanced Analytics: Filterable and exportable data tables, with new module-level graphs and charts.
- Security Dashboard Overview: Unified view of AI posture, detections, and compliance trends.
- Learning Center: In-platform documentation and tutorials, with future guided walkthroughs.
HiddenLayer will demonstrate these capabilities live at AWS re:Invent 2025, December 1–5 in Las Vegas.
To learn more or request a demo, visit https://hiddenlayer.com/reinvent2025/.
About HiddenLayer
HiddenLayer, a Gartner-recognized Cool Vendor for AI Security, is the leading provider of Security for AI. Its platform helps enterprises safeguard agentic, generative, and predictive AI applications without adding unnecessary complexity or requiring access to raw data and algorithms. Backed by patented technology and industry-leading adversarial AI research, HiddenLayer delivers supply chain security, runtime defense, posture management, and automated red teaming.
For more information, visit www.hiddenlayer.com.
Press Contact:
SutherlandGold for HiddenLayer
hiddenlayer@sutherlandgold.com

HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity
On September 30, Databricks officially launched its Data Intelligence Platform for Cybersecurity, marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.
Why Databricks’ Data Intelligence Platform for Cybersecurity Matters for AI Security
Cybersecurity and AI are now inseparable. Modern defenses rely heavily on machine learning models, but that also introduces new attack surfaces. Models can be compromised through adversarial inputs, data poisoning, or theft. These attacks can result in missed fraud detection, compliance failures, and disrupted operations.
Until now, data platforms and security tools have operated mainly in silos, creating complexity and risk.
The Databricks Data Intelligence Platform for Cybersecurity is a unified, AI-powered, and ecosystem-driven platform that empowers partners and customers to modernize security operations, accelerate innovation, and unlock new value at scale.
How HiddenLayer Secures AI Applications Inside Databricks
HiddenLayer adds the critical layer of security for AI models themselves. Our technology scans and monitors machine learning models for vulnerabilities, detects adversarial manipulation, and ensures models remain trustworthy throughout their lifecycle.
By integrating with Databricks Unity Catalog, we make AI application security seamless, auditable, and compliant with emerging governance requirements. This empowers organizations to demonstrate due diligence while accelerating the safe adoption of AI.
The Future of Secure AI Adoption with Databricks and HiddenLayer
The Databricks Data Intelligence Platform for Cybersecurity marks a turning point in how organizations must approach the intersection of AI, data, and defense. HiddenLayer ensures the AI applications at the heart of these systems remain safe, auditable, and resilient against attack.
As adversaries grow more sophisticated and regulators demand greater transparency, securing AI is an immediate necessity. By embedding HiddenLayer directly into the Databricks ecosystem, enterprises gain the assurance that they can innovate with AI while maintaining trust, compliance, and control.
In short, the future of cybersecurity will not be built solely on data or AI, but on the secure integration of both. Together, Databricks and HiddenLayer are making that future possible.
FAQ: Databricks and HiddenLayer AI Security
What is the Databricks Data Intelligence Platform for Cybersecurity?
The Databricks Data Intelligence Platform for Cybersecurity delivers the only unified, AI-powered, and ecosystem-driven platform that empowers partners and customers to modernize security operations, accelerate innovation, and unlock new value at scale.
Why is AI application security important?
AI applications and their underlying models can be attacked through adversarial inputs, data poisoning, or theft. Securing models reduces risks of fraud, compliance violations, and operational disruption.
How does HiddenLayer integrate with Databricks?
HiddenLayer integrates with Databricks Unity Catalog to scan models for vulnerabilities, monitor for adversarial manipulation, and ensure compliance with AI governance requirements.
Get all our Latest Research & Insights
Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.

