Innovation Hub

Featured Posts

Insights
xx
min read

Introducing Workflow-Aligned Modules in the HiddenLayer AI Security Platform

Insights
xx
min read

Inside HiddenLayer’s Research Team: The Experts Securing the Future of AI

Insights
xx
min read

Why Traditional Cybersecurity Won’t “Fix” AI

Get all our Latest Research & Insights

Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.

Research

Research
xx
min read

Agentic ShadowLogic

Research
xx
min read

MCP and the Shift to AI Systems

Research
xx
min read

The Lethal Trifecta and How to Defend Against It

Research
xx
min read

EchoGram: The Hidden Vulnerability Undermining AI Guardrails

Videos

Report and Guides

Report and Guide
xx
min read

Securing AI: The Technology Playbook

Report and Guide
xx
min read

Securing AI: The Financial Services Playbook

Report and Guide
xx
min read

AI Threat Landscape Report 2025

HiddenLayer AI Security Research Advisory

CVE-2025-62354
XX
min read

Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode

When in autorun mode with the secure ‘Follow Allowlist’ setting, Cursor checks commands sent to run in the terminal by the agent to see if a command has been specifically allowed. The function that checks the command has a bypass to its logic, allowing an attacker to craft a command that will execute non-whitelisted commands.

SAI-ADV-2025-012
XX
min read

Data Exfiltration from Tool-Assisted Setup

Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. This means an attacker can hide instructions within a project file to read and extract sensitive data from project files (such as a .env file) and insert it into web requests for the purposes of exfiltration.

CVE-2025-62353
XX
min read

Path Traversal in File Tools Allowing Arbitrary Filesystem Access

A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.

CVE-2025-62356
XX
min read

Symlink Bypass in File System MCP Server Leading to Arbitrary Filesystem Read

A symlink bypass vulnerability exists inside of the built-in File System MCP server, allowing any file on the filesystem to be read by the model. The code that validates allowed paths can be found in the file: ai/codium/mcp/ideTools/FileSystem.java, but this validation can be bypassed if a symbolic link exists within the project.

In the News

News
XX
min read
HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

Underpinning HiddenLayer’s unique solution for the DoD and USIC is HiddenLayer’s Airgapped AI Security Platform, the first solution designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.

News
XX
min read
HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.

News
XX
min read
HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

On September 30, Databricks officially launched its <a href="https://www.databricks.com/blog/transforming-cybersecurity-data-intelligence?utm_source=linkedin&amp;utm_medium=organic-social">Data Intelligence Platform for Cybersecurity</a>, marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.

Insights
xx
min read

The Beginners Guide to LLMs and Generative AI

Large Language Models are quickly sweeping the globe. In a world driven by artificial intelligence (AI), Large Language Models (LLMs) are leading the way, transforming how we interact with technology. The unprecedented rise to fame leaves many reeling. What are LLM’s? What are they good for? Why can no one stop talking about them? Are they going to take over the world? As the number of LLMs grows, so does the challenge of navigating this wealth of information. That’s why we want to start with the basics and help you build a foundational understanding of the world of LLMs.

Insights
xx
min read

Securing Your AI System with HiddenLayer

Amidst escalating global AI regulations, including the European AI Act and Biden’s Executive AI Order, in addition to the release of recent AI frameworks by prominent industry leaders like Google and IBM, HiddenLayer has been working diligently to enhance its Professional Services to meet growing customer demand. Today, we are excited to bring upgraded capabilities to the market, offering customized offensive security evaluations for companies across every industry, including an AI Risk Assessment, ML Training, and, maybe most excitingly, our Red Teaming services.

Insights
xx
min read

A Guide to Understanding New CISA Guidelines

Artificial intelligence (AI) is the latest, and one of the largest, advancements of technology to date. Like any other groundbreaking technology, the potential for greatness is paralleled only by the potential for risk. AI opens up pathways of unprecedented opportunity. However, the only way to bring that untapped potential to fruition is for AI to be developed, deployed, and operated securely and culpably. This is not a technology that can be implemented first and secured second. When it comes to utilizing AI, cybersecurity can no longer trail behind and play catch up. The time for adopting AI is now. The time for securing it was yesterday.

Insights
xx
min read

What SEC Rules Mean for your AI

On July 26th, 2023 the Securities and Exchange Commission (SEC) released its final rule on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure. Organizations now have 5 months to craft and confirm a compliance plan before the new regulations go into effect mid-December. The revisions from these proposed rules aim to streamline the disclosure requirements in many ways. But what exactly are these SEC regulations requiring you to disclose, and how much? And does this apply to my organization’s AI?

Insights
xx
min read

The Real Threats to AI Security and Adoption

AI is the latest, and likely one of the largest, advancements in technology of all time. Like any other new innovative technology, the potential for greatness is paralleled by the potential for risk. As technology evolves, so do threat actors. Despite how state-of-the-art Artificial Intelligence (AI) seems, we’ve already seen it being threatened by new and innovative cyber security attacks everyday.&nbsp;

Insights
xx
min read

A Beginners Guide to Securing AI for SecOps

Artificial Intelligence (AI) and Machine Learning (ML), the most common application of AI, are proving to be a paradigm-shifting technology. From autonomous vehicles and virtual assistants to fraud detection systems and medical diagnosis tools, practically every company in every industry is entering into an AI arms race seeking to gain a competitive advantage by utilizing ML to deliver better customer experiences, optimize business efficiencies, and accelerate innovative research.&nbsp;

Insights
xx
min read

MITRE ATLAS: The Intersection of Cybersecurity and AI

At HiddenLayer, we publish a lot of technical research about Adversarial Machine Learning. It’s what we do. But unless you are constantly at the bleeding edge of cybersecurity threat research and artificial intelligence, like our SAI Team, it can be overwhelming to understand how urgent and important this new threat vector can be to your organization. Thankfully, MITRE has focused its attention towards educating the general public about Adversarial Machine Learning and security for AI systems.

Insights
xx
min read

Safeguarding AI with AI Detection and Response

In previous articles, we’ve discussed the ubiquity of AI-based systems and the risks they’re facing; we’ve also described the common types of attacks against machine learning (ML) and built a list of adversarial ML tools and frameworks that are publicly available. Today, the time has come to talk about countermeasures.

Insights
xx
min read

The Tactics Techniques of Adversarial Machine Learning

Previously, we discussed the emerging field of adversarial machine learning, illustrated the lifecycle of an ML attack from both an attacker’s and defender’s perspective, and gave a high-level introduction to how ML attacks work. In this blog, we take you further down the rabbit hole by outlining the types of adversarial attacks that should be on your security radar.

research
xx
min read

Agentic ShadowLogic

research
xx
min read

MCP and the Shift to AI Systems

research
xx
min read

The Lethal Trifecta and How to Defend Against It

research
xx
min read

EchoGram: The Hidden Vulnerability Undermining AI Guardrails

research
xx
min read

Same Model, Different Hat

research
xx
min read

The Expanding AI Cyber Risk Landscape

research
xx
min read

The First AI-Powered Cyber Attack

research
xx
min read

Prompts Gone Viral: Practical Code Assistant AI Viruses

research
xx
min read

Persistent Backdoors

research
xx
min read

Visual Input based Steering for Output Redirection (VISOR)

research
xx
min read

How Hidden Prompt Injections Can Hijack AI Code Assistants Like Cursor

research
xx
min read

Introducing a Taxonomy of Adversarial Prompt Engineering

Report and Guide
xx
min read

Securing AI: The Technology Playbook

Report and Guide
xx
min read

Securing AI: The Financial Services Playbook

Report and Guide
xx
min read

AI Threat Landscape Report 2025

Report and Guide
xx
min read

HiddenLayer Named a Cool Vendor in AI Security

Report and Guide
xx
min read

A Step-By-Step Guide for CISOS

Report and Guide
xx
min read

AI Threat landscape Report 2024

Report and Guide
xx
min read

HiddenLayer and Intel eBook

Report and Guide
xx
min read

Forrester Opportunity Snapshot

news
xx
min read

HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

news
xx
min read

HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

news
xx
min read

HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

news
xx
min read

HiddenLayer Appoints Chelsea Strong as Chief Revenue Officer to Accelerate Global Growth and Customer Expansion

news
xx
min read

HiddenLayer Listed in AWS “ICMP” for the US Federal Government

news
xx
min read

New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes

news
xx
min read

Beating the AI Game, Ripple, Numerology, Darcula, Special Guests from Hidden Layer… – Malcolm Harkins, Kasimir Schulz – SWN #471

news
xx
min read

All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

news
xx
min read

One Prompt Can Bypass Every Major LLM’s Safeguards

news
xx
min read

Cyera and HiddenLayer Announce Strategic Partnership to Deliver End-to-End AI Security

news
xx
min read

HiddenLayer Unveils AISec Platform 2.0 to Deliver Unmatched Context, Visibility, and Observability for Enterprise AI Security

news
xx
min read

HiddenLayer AI Threat Landscape Report Reveals AI Breaches on the Rise;

SAI Security Advisory

Command Injection in CaptureDependency Function

A command injection vulnerability exists inside of the capture_dependencies function of the src/sagemaker/serve/save_retrive/version_1_0_0/save/utils.py python file. The command injection allows for arbitrary system commands to be run on the compromised machine. While this may not normally be an issue, the parameter can be altered by a user when used in the save_handler.py file in the same directory.

SAI Security Advisory

Command Injection in Capture Dependency

A deserialization vulnerability exists inside of the NumpyDeserializer.deserialize function of the base_deserializers python file. The deserializer allows the user to set an optional argument called allow_pickle which is passed to np.load and can be used to safely load a numpy file. By default the optional parameter was set to true, resulting in the loading and execution of malicious pickle files. Throughout the codebase the optional parameter is not used allowing code execution to potentially occur.

SAI Security Advisory

R-bitrary Code Execution Through Deserialization Vulnerability

HiddenLayer researchers have discovered a vulnerability, CVE-2024-27322, in the R programming language that allows for arbitrary code execution by deserializing untrusted data. This vulnerability can be exploited through the loading of RDS (R Data Serialization) files or R packages, which are often shared between developers and data scientists. An attacker can create malicious RDS files or R packages containing embedded arbitrary R code that executes on the victim’s target device upon interaction.

SAI Security Advisory

Out of bounds read due to lack of string termination in assert

When assert is called the message is copied into a buffer and then printed. The copying will fill the whole buffer and fail to add a string terminator at the end of the copied buffer allowing an attacker to read some bytes from memory.

SAI Security Advisory

Path sanitization bypass leading to arbitrary read

A path traversal vulnerability exists inside of load_external_data_for_tensor function of the external_data_helper python file. This vulnerability requires the user to have downloaded and loaded a malicious model, leading to an arbitrary file read. The vulnerability exists because the _sanitize_path doesn’t properly sanitize the path.

SAI Security Advisory

Credentials Stored in Plaintext in MongoDB Instance

An attacker could retrieve ClearML user information and credentials using a tool such as mongosh if they have access to the server. This is because the open-source version of the ClearML Server MongoDB instance lacks access control and stores user information and credentials in plaintext.

SAI Security Advisory

Web Server Renders User HTML Leading to XSS

An attacker can provide a URL rather than uploading an image to the Debug Samples tab of an Experiment. If the URL has the extension .html, the web server retrieves the HTML page, which is assumed to contain trusted data. The HTML is marked as safe and rendered on the page, resulting in arbitrary JavaScript running in any user’s browser when they view the samples tab.

SAI Security Advisory

Cross-Site Request Forgery in ClearML Server

An attacker can craft a malicious web page that triggers a CSRF when visited. When a user browses to the malicious web page a request is sent which can allow an attacker to fully compromise a user’s account.

SAI Security Advisory

Improper Auth Leading to Arbitrary Read-Write Access

An attacker can, due to lack of authentication, arbitrarily upload, delete, modify, or download files on the fileserver, even if the files belong to another user.

SAI Security Advisory

Path Traversal on File Download

An attacker can upload or modify a dataset containing a link pointing to an arbitrary file and a target file path. When a user interacts with this dataset, such as when using the Dataset.squash method, the file is written to the target path on the user’s system.

SAI Security Advisory

Pickle Load on Artifact Get

An attacker can create a pickle file containing arbitrary code and upload it as an artifact to a Project via the API. When a victim user calls the get method within the Artifact class to download and load a file into memory, the pickle file is deserialized on their system, running any arbitrary code it contains.

Stay Ahead of AI Security Risks

Get research-driven insights, emerging threat analysis, and practical guidance on securing AI systems—delivered to your inbox.

By submitting this form, you agree to HiddenLayer's Terms of Use and acknowledge our Privacy Statement.

Thanks for your message!

We will reach back to you as soon as possible.

Oops! Something went wrong while submitting the form.