HiddenLayer AI Security Advisory
HiddenLayer's AI Security Research team consists of multidisciplinary cybersecurity experts and data scientists dedicated to raising awareness about threats to machine learning and artificial intelligence systems.

November 26, 2025
Allowlist Bypass in Run Terminal Tool Allows Arbitrary Code Execution During Autorun Mode
When in autorun mode with the secure ‘Follow Allowlist’ setting enabled, Cursor checks each command the agent sends to the terminal to see whether it has been specifically allowed. The function that performs this check has a bypass in its logic, allowing an attacker to craft a command that executes non-allowlisted commands.
October 17, 2025
Data Exfiltration from Tool-Assisted Setup
Windsurf’s automated tools can execute instructions contained within project files without asking for user permission. This means an attacker can hide instructions within a project file to read and extract sensitive data from project files (such as a .env file) and insert it into web requests for the purposes of exfiltration.
October 17, 2025
Path Traversal in File Tools Allowing Arbitrary Filesystem Access
A path traversal vulnerability exists within Windsurf’s codebase_search and write_to_file tools. These tools do not properly validate input paths, enabling access to files outside the intended project directory, which can provide attackers a way to read from and write to arbitrary locations on the target user’s filesystem.
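Proper validation typically resolves the candidate path and checks containment before any read or write. The sketch below is a generic, hypothetical illustration of that check (it is not Windsurf's actual code; the function name `resolve_inside` is invented for this example):

```python
import os

def resolve_inside(project_root: str, user_path: str) -> str:
    """Resolve user_path and refuse anything outside project_root."""
    root = os.path.realpath(project_root)
    # realpath collapses "../" sequences and symlinks before the check,
    # so the comparison runs against the path that would actually be opened
    target = os.path.realpath(os.path.join(root, user_path))
    if os.path.commonpath([root, target]) != root:
        raise ValueError(f"path escapes project directory: {user_path}")
    return target
```

With this check in place, a traversal attempt such as `resolve_inside("/tmp/proj", "../../etc/passwd")` raises instead of returning a path outside the project.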
October 17, 2025
Symlink Bypass in File System MCP Server Leading to Arbitrary Filesystem Read
A symlink bypass vulnerability exists in the built-in File System MCP server, allowing any file on the filesystem to be read by the model. The code that validates allowed paths can be found in the file ai/codium/mcp/ideTools/FileSystem.java, but this validation can be bypassed if a symbolic link exists within the project.
October 17, 2025
Data Exfiltration through Web Search Tool
The Web Search functionality within the Qodo Gen JetBrains plugin is set up as a built-in MCP server through ai/codium/CustomAgentKt.java. It does not ask for user permission when called, meaning that an attacker can enumerate code project files on a victim’s machine and call the Web Search tool to exfiltrate their contents via a request to an external server.
October 17, 2025
Unsafe deserialization function leads to code execution when loading a Keras model
An arbitrary code execution vulnerability exists in the TorchModuleWrapper class due to its usage of torch.load() within the from_config method. The method deserializes model data with the weights_only parameter set to False, which causes Torch to fall back on Python’s pickle module for deserialization. Since pickle is known to be unsafe and capable of executing arbitrary code during the deserialization process, a maliciously crafted model file could allow an attacker to execute arbitrary commands.
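The underlying danger is generic to pickle: any object can define `__reduce__` so that a callable of the attacker's choosing is invoked during deserialization. The minimal, harmless sketch below illustrates the mechanism (it is not Keras or Torch code; `os.getcwd` stands in for the command an attacker would actually run):

```python
import os
import pickle

class Exploit:
    def __reduce__(self):
        # pickle.loads invokes the callable returned here with the given
        # arguments; a real attack would return something like
        # (os.system, ("malicious command",)). os.getcwd is a harmless stand-in.
        return (os.getcwd, ())

payload = pickle.dumps(Exploit())
result = pickle.loads(payload)  # invokes os.getcwd() during deserialization
```

Passing `weights_only=True` to `torch.load` avoids this pickle fallback when loading untrusted files.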
July 31, 2025
How Hidden Prompt Injections Can Hijack AI Code Assistants Like Cursor
When in autorun mode, Cursor checks commands against those that have been specifically blocked or allowed. The function that performs this check has a bypass in its logic that can be exploited by an attacker to craft a command that will be executed regardless of whether or not it is on the block-list or allow-list.
June 9, 2025
Exposure of sensitive information allows account takeover
By default, BackendAI’s agent will write to /home/config/ when starting an interactive session. These files are readable by the default user, yet they contain sensitive information such as the user’s email, access key, and session settings. A threat actor accessing that file can perform operations on behalf of the user, potentially granting the threat actor super administrator privileges.
June 9, 2025
Improper access control allows arbitrary account creation
By default, BackendAI doesn’t enable account creation. However, an exposed endpoint allows anyone to sign up with a user-privileged account. This flaw allows threat actors to initiate their own unauthorized sessions and exploit the resources (for example, to install cryptominers or use the session as a malware distribution endpoint), or to access exposed data through user-accessible storage.
June 9, 2025
Missing Authorization for Interactive Sessions
BackendAI interactive sessions neither authenticate users nor verify that they are authorized. These missing checks allow attackers to take over sessions and access the data (models, code, etc.), alter the data or results, and stop the user from accessing their session.
April 3, 2025
Unsafe Deserialization in DeepSpeed utility function when loading the model file
The convert_zero_checkpoint_to_fp32_state_dict utility function contains an unsafe torch.load call, which will execute arbitrary code on a user’s system when a maliciously crafted file is loaded.
December 16, 2024
keras.models.load_model when scanning .pb files leads to arbitrary code execution
A vulnerability exists inside the unsafe_check_pb function within the watchtower/src/utils/model_inspector_util.py file. This function runs keras.models.load_model on a .pb file that the user wants to scan for malicious payloads. A maliciously crafted .pb file will execute its payload when run with keras.models.load_model, allowing for a user’s device to be compromised when scanning a downloaded file.
December 16, 2024
keras.models.load_model when scanning .h5 files leads to arbitrary code execution
A vulnerability exists inside the unsafe_check_h5 function within the watchtower/src/utils/model_inspector_util.py file. This function runs keras.models.load_model on the .h5 file the user wants to scan for malicious payloads. A maliciously crafted .h5 file will execute its payload when run with keras.models.load_model, allowing for a user’s device to be compromised when scanning a downloaded file.
October 24, 2024
Unsafe extraction of NeMo archive leading to arbitrary file write
The _unpack_nemo_file function used by the SaveRestoreConnector class for model loading uses tarfile.extractall() in an unsafe way which can lead to an arbitrary file write when a model is loaded.
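A common mitigation is to validate each archive member's resolved destination before extracting anything. The sketch below is a hypothetical illustration of that pattern, not NeMo's code; the helper name `safe_extractall` is invented here:

```python
import io
import os
import tarfile

def safe_extractall(tar: tarfile.TarFile, dest: str) -> None:
    """Extract only if every member resolves inside dest."""
    root = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(root, member.name))
        if os.path.commonpath([root, target]) != root:
            raise ValueError(f"blocked traversal via member: {member.name}")
    tar.extractall(root)

# Build an in-memory archive whose member name climbs out of the target dir
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as t:
    info = tarfile.TarInfo(name="../evil.txt")
    data = b"pwned"
    info.size = len(data)
    t.addfile(info, io.BytesIO(data))
buf.seek(0)

with tarfile.open(fileobj=buf, mode="r") as t:
    try:
        safe_extractall(t, "extracted")  # refuses before writing anything
        blocked = False
    except ValueError:
        blocked = True
```

On Python 3.12 and later, the `filter="data"` argument to `tarfile.extractall` performs comparable member filtering natively.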
September 18, 2024
Eval on XML parameters allows arbitrary code execution when loading RAIL file
An arbitrary code execution vulnerability exists inside the parse_token function of the guardrails/guardrails/validatorsattr.py Python file. The vulnerability requires the victim to load a malicious XML guardrails file, allowing an attacker to run arbitrary Python code on the victim’s machine when the file is loaded. The vulnerability exists because of the use of an unprotected eval function.
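The general fix for evaluating attacker-influenced parameter strings is to reject anything that is not a plain literal. The generic sketch below (not the Guardrails code; the string is a hypothetical crafted attribute value) contrasts `eval` with `ast.literal_eval`:

```python
import ast

attacker_param = "__import__('os').getcwd()"  # stand-in for a crafted XML attribute

# eval() executes the embedded call outright
evaluated = eval(attacker_param)

# ast.literal_eval() accepts only Python literals (numbers, strings,
# tuples, lists, dicts, ...), so the same string is rejected
try:
    ast.literal_eval(attacker_param)
    rejected = False
except (ValueError, SyntaxError):
    rejected = True
```

When parameters can legitimately contain expressions rather than literals, a dedicated parser with an explicit grammar is safer than any eval variant.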
September 12, 2024
Web UI renders JavaScript code in ML Engine name leading to XSS
An attacker authenticated to a MindsDB instance can create an ML Engine, database, project, or upload a dataset within the UI and give it a name (or value in the dataset) containing arbitrary malicious JavaScript code. Whenever another user enumerates the items within the UI, the malicious JavaScript code will run.
September 12, 2024
Pickle Load on inhouse BYOM model finetune leads to arbitrary code execution
A vulnerability exists within the finetune method of the ModelWrapperUnsafe class in the mindsdb/integrations/handlers/byom_handler/byom_handler.py file, which will perform pickle.loads on a custom model built via the Build Your Own Model process. An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into the BYOM model build process using the ‘Upload Custom Model’ feature. This object will be deserialized when the model is loaded via the ‘finetune’ method, executing the arbitrary code on the server. Note that this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse’.
September 12, 2024
Pickle Load on inhouse BYOM model describe query leads to arbitrary code execution
A vulnerability exists within the describe method of the ModelWrapperUnsafe class in the mindsdb/integrations/handlers/byom_handler/byom_handler.py file, which will perform pickle.loads on a custom model built via the Build Your Own Model process. An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into the BYOM model build process using the ‘Upload Custom Model’ feature. This object will be deserialized when the model is loaded via the ‘describe’ method, executing the arbitrary code on the server. Note that this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse’.
September 12, 2024
Pickle Load on inhouse BYOM model prediction leads to arbitrary code execution
A vulnerability exists within the predict method of the ModelWrapperUnsafe class in the mindsdb/integrations/handlers/byom_handler/byom_handler.py file, which will perform pickle.loads on a custom model built via the Build Your Own Model process. An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into the BYOM model build process using the ‘Upload Custom Model’ feature. This object will be deserialized when the model is loaded via the ‘predict’ method, executing the arbitrary code on the server. Note that this can only occur if the BYOM engine is changed in the config from the default ‘venv’ to ‘inhouse’.
September 12, 2024
Pickle Load on BYOM model load leads to arbitrary code execution
A vulnerability exists within the decode function of the mindsdb/integrations/handlers/byom_handler/proc_wrapper.py file, which will perform a pickle.loads on a custom model built via the Build Your Own Model process. An attacker authenticated to a MindsDB instance can inject a malicious pickle object containing arbitrary code into the BYOM model build process using the ‘Upload Custom Model’ feature. This object will be deserialized when the model is loaded via a ‘predict’ or ‘describe’ query, executing the arbitrary code on the server.
September 12, 2024
Eval on query parameters allows arbitrary code execution in SharePoint integration list item creation
An arbitrary code execution vulnerability exists inside the create_an_item function of the mindsdb/integrations/handlers/sharepoint_handler/sharepoint_api.py file in the Microsoft SharePoint integration. The vulnerability requires the attacker to be authorized on the MindsDB instance and allows them to run arbitrary Python code on the machine the instance is running on. The vulnerability exists because of the use of an unprotected eval function.
September 12, 2024
Eval on query parameters allows arbitrary code execution in SharePoint integration site column creation
An arbitrary code execution vulnerability exists inside the create_a_site_column function of the mindsdb/integrations/handlers/sharepoint_handler/sharepoint_api.py file in the Microsoft SharePoint integration. The vulnerability requires the attacker to be authorized on the MindsDB instance and allows them to run arbitrary Python code on the machine the instance is running on. The vulnerability exists because of the use of an unprotected eval function.
September 12, 2024
Eval on query parameters allows arbitrary code execution in SharePoint integration list creation
An arbitrary code execution vulnerability exists inside the create_a_list function of the mindsdb/integrations/handlers/sharepoint_handler/sharepoint_api.py file in the Microsoft SharePoint integration. The vulnerability requires the attacker to be authorized on the MindsDB instance and allows them to run arbitrary Python code on the machine the instance is running on. The vulnerability exists because of the use of an unprotected eval function.
September 12, 2024
Eval on query parameters allows arbitrary code execution in ChromaDB integration
An arbitrary code execution vulnerability exists inside the insert function of the mindsdb/integrations/handlers/chromadb_handler/chromadb_handler.py file in the ChromaDB integration. The vulnerability requires the attacker to be authorized on the MindsDB instance, and allows them to run arbitrary Python code on the machine the instance is running on. The vulnerability exists because of the use of an unprotected eval function.
September 12, 2024
Eval on query parameters allows arbitrary code execution in Vector Database integrations
An arbitrary code execution vulnerability exists inside the _dispatch_update function of the mindsdb/integrations/libs/vectordatabase_handler.py file. The vulnerability requires the attacker to be authorized on the MindsDB instance and allows them to run arbitrary Python code on the machine the instance is running on. The vulnerability exists because of the use of an unprotected eval function, which can be used with multiple integrations.
September 12, 2024
Eval on query parameters allows arbitrary code execution in Weaviate integration
An arbitrary code execution vulnerability exists inside the select function of the mindsdb/integrations/handlers/weaviate_handler/weaviate_handler.py file in the Weaviate integration. The vulnerability requires the attacker to be authorized on the MindsDB instance and allows them to run arbitrary Python code on the machine the instance is running on. The vulnerability exists because of the use of an unprotected eval function.
September 12, 2024
Unsafe deserialization in Datalab leads to arbitrary code execution
An arbitrary code execution vulnerability exists inside the serialize function of the cleanlab/datalab/internal/serialize.py file in the Datalabs module. The vulnerability requires a maliciously crafted datalabs.pkl file to exist within the directory passed to the Datalabs.load function, executing arbitrary code on the system loading the directory.
September 12, 2024
Eval on CSV data allows arbitrary code execution in the ClassificationTaskValidate class
An arbitrary code execution vulnerability exists inside the validate function of the ClassificationTaskValidate class in the autolabel/src/autolabel/dataset/validation.py file. The vulnerability requires the victim to load a malicious CSV dataset with the optional parameter ‘validate’ set to True while using a specific configuration. The vulnerability allows an attacker to run arbitrary Python code on the machine the CSV file is loaded on because of the use of an unprotected eval function.
September 12, 2024
Eval on CSV data allows arbitrary code execution in the MLCTaskValidate class
An arbitrary code execution vulnerability exists inside the validate function of the MLCTaskValidate class in the autolabel/src/autolabel/dataset/validation.py Python file. The vulnerability requires the victim to load a malicious CSV dataset with the optional parameter ‘validate’ set to True while using a specific configuration. The vulnerability allows an attacker to run arbitrary Python code on the program’s machine because of the use of an unprotected eval function.
August 30, 2024
Safe_eval and safe_exec allows for arbitrary code execution
Execution of arbitrary code can be achieved via the safe_eval and safe_exec functions of the llama-index-experimental/llama_index/experimental/exec_utils.py Python file. The functions allow the user to run untrusted code via an eval or exec function while only permitting whitelisted functions. However, an attacker can leverage the whitelisted pandas.read_pickle function or other third-party library functions to achieve arbitrary code execution. This can be exploited in the Pandas Query Engine.
August 30, 2024
Exec on untrusted LLM output leading to arbitrary code execution on Evaporate integration
Execution of arbitrary code can be achieved through an unprotected exec statement within the run_fn_on_nodes function of the llama_index/llama-index-integrations/program/llama-index-program-evaporate/llama_index/program/evaporate/extractor Python file in the ‘evaporate’ integration. This may be triggered if a victim runs the evaporate function on a malicious information source, such as a page on a website, containing a hidden prompt that is indirectly injected into the LLM, causing it to return a malicious function that is then run via the exec statement.
July 19, 2024
Crafted WiFi network name (SSID) leads to arbitrary command injection
The net_service_thread function in libwyzeUtilsPlatform.so spawns a shell command containing a user-specified WiFi network name (SSID) in an unsafe way, which can lead to arbitrary command injection as root during the camera setup process.
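The advisory concerns native code, but the unsafe pattern is language-agnostic: splicing an untrusted SSID into a shell command string. The Python sketch below (the `iwconfig` invocation is illustrative, not the camera's actual command) shows the difference shell-quoting makes:

```python
import shlex

ssid = 'Cafe"; touch /tmp/pwned; echo "'  # attacker-controlled network name

# Vulnerable: the SSID is interpolated into the shell string verbatim,
# so the embedded `; touch ...;` would execute as its own command
vulnerable = f'iwconfig wlan0 essid "{ssid}"'

# Safer: shell-quote the untrusted value (or avoid the shell entirely by
# passing an argument list to subprocess.run without shell=True)
safer = f"iwconfig wlan0 essid {shlex.quote(ssid)}"
```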
July 11, 2024
Deserialization of untrusted data leading to arbitrary code execution
Execution of arbitrary code can be achieved through the deserialization process in the tensorflow_probability/python/layers/distribution_layer.py file within the function _deserialize_function. An attacker can inject a malicious pickle object into an HDF5 formatted model file, which will be deserialized via pickle when the model is loaded, executing the malicious code on the victim machine. An attacker can achieve this by injecting a pickle object into the DistributionLambda layer of the model under the make_distribution_fn key.
June 4, 2024
Remote Code Execution on Local System via MLproject YAML File
A code injection vulnerability exists within the ML Project run procedure in the _run_entry_point function, within the projects/backend/local.py file. An attacker can package an MLflow Project where the MLproject main entrypoint command contains arbitrary code (or an operating-system-appropriate command), which will be executed on the victim machine when the project is run.
June 4, 2024
Pickle Load on Recipe Run Leading to Code Execution
A deserialization vulnerability exists within the recipes/cards/__init__.py file within the class BaseCard, in the static method load. An attacker can create an MLProject Recipe containing a malicious pickle file (e.g., pickle.pkl) and a Python script that calls BaseCard.load('pickle.pkl'). The pickle file will be deserialized when the project is run, executing the arbitrary code on the victim machine.
June 4, 2024
Cloudpickle Load on PyTorch Model Load Leading to Code Execution
A deserialization vulnerability exists within the mlflow/pytorch/__init__.py file, within the function _load_model. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.
June 4, 2024
Cloudpickle Load on Langchain AgentExecutor Model Load Leading to Code Execution
A deserialization vulnerability exists within the mlflow/langchain/utils.py file, within the function _load_from_pickle. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.
June 4, 2024
Cloudpickle Load on TensorFlow Keras Model Leading to Code Execution
A deserialization vulnerability exists within the mlflow/tensorflow/__init__.py file, within the function _load_custom_objects. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.
June 4, 2024
Cloudpickle Load on LightGBM SciKit Learn Model Leading to Code Execution
A deserialization vulnerability exists within the mlflow/lightgbm/__init__.py file, within the function _load_model. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.
June 4, 2024
Pickle Load on Pmdarima Model Load Leading to Code Execution
A deserialization vulnerability exists within the pmdarima/__init__.py file, within the function _load_model. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.
June 4, 2024
Cloudpickle Load on PyFunc Model Load Leading to Code Execution
A deserialization vulnerability exists within the mlflow/pyfunc/model.py file, within the function _load_pyfunc. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.
June 4, 2024
Cloudpickle and Pickle Load on Sklearn Model Load Leading to Code Execution
A deserialization vulnerability exists in the sklearn/__init__.py file, within the function _load_model_from_local_file. An attacker can inject a malicious pickle object into a model file on upload which will then be deserialized when the model is loaded, executing the malicious code on the victim machine.
June 4, 2024
Pickle Load in Read Pandas Utility Function
The YData profiling library allows users to load pandas datasets from their filesystem using the read_pandas function. This function grabs the extension of the file and dispatches to a loading function based on that extension. One of the supported file formats is Python’s pickle format. As a result, when a user loads a maliciously crafted dataset, arbitrary code will run on their system.
June 4, 2024
XSS Injection in HTML Profile Report Generation
ProfileReports can be saved as an HTML file so that they can be viewed directly in the browser. To do this, the program leverages Jinja2 to create templates. However, by default, Jinja2 doesn’t auto-escape rendered HTML, allowing an attacker to inject an XSS payload that runs arbitrary JavaScript when a report is viewed.
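The underlying fix is HTML-escaping untrusted values before rendering (in Jinja2, enabling `autoescape=True` when constructing the environment). The stdlib sketch below illustrates what escaping changes, using an invented report field as the attacker-controlled value:

```python
from html import escape

user_value = "<script>alert('xss')</script>"  # attacker-supplied report field

unsafe_cell = f"<td>{user_value}</td>"        # script tag rendered verbatim
safe_cell = f"<td>{escape(user_value)}</td>"  # tag neutralized to plain text
```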
June 4, 2024
Pickle Load in Serialized Profile Load
Profile reports can be serialized and deserialized through the load/loads and dump/dumps functions, allowing users to share reports with each other. Reports are serialized using the Python pickle module, which is inherently insecure and can lead to arbitrary code execution once a file is loaded.
June 4, 2024
Model Deserialization Leads to Code Execution
When loading nodes of type OperatorFuncNode, Skops allows a model to call functions from within the operator module, specifying both the function and the arguments passed to it. This mechanism allows an attacker to craft a specialized payload, in the form of a model, that achieves arbitrary code execution when the malicious model is loaded and compiled.
April 30, 2024
Command Injection in CaptureDependency Function
A command injection vulnerability exists inside the capture_dependencies function of the src/sagemaker/serve/save_retrive/version_1_0_0/save/utils.py Python file. The command injection allows arbitrary system commands to be run on the compromised machine. While this may not normally be an issue, the parameter can be altered by a user when the function is used in the save_handler.py file in the same directory.
April 30, 2024
Unsafe Deserialization in NumpyDeserializer
A deserialization vulnerability exists inside the NumpyDeserializer.deserialize function of the base_deserializers Python file. The deserializer exposes an optional allow_pickle argument that is passed to np.load; setting it to False would safely load a NumPy file. By default, the parameter was set to True, resulting in the loading and execution of malicious pickle files. Throughout the codebase, the optional parameter is never overridden, allowing code execution to potentially occur.
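The `allow_pickle` flag's effect is easy to demonstrate directly, assuming NumPy is installed. Plain numeric arrays round-trip safely with `allow_pickle=False`, while object arrays, which are stored via pickle, are rejected instead of being deserialized:

```python
import io

import numpy as np

# Plain numeric data round-trips safely with allow_pickle=False
buf = io.BytesIO()
np.save(buf, np.arange(3))
buf.seek(0)
arr = np.load(buf, allow_pickle=False)

# Object arrays are stored via pickle, so a strict loader rejects them
# rather than unpickling (and potentially executing) attacker-controlled bytes
buf2 = io.BytesIO()
np.save(buf2, np.array([{"k": 1}], dtype=object), allow_pickle=True)
buf2.seek(0)
try:
    np.load(buf2, allow_pickle=False)
    rejected = False
except ValueError:
    rejected = True
```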
April 1, 2024
R-bitrary Code Execution Through Deserialization Vulnerability
HiddenLayer researchers have discovered a vulnerability, CVE-2024-27322, in the R programming language that allows for arbitrary code execution by deserializing untrusted data. This vulnerability can be exploited through the loading of RDS (R Data Serialization) files or R packages, which are often shared between developers and data scientists. An attacker can create malicious RDS files or R packages containing embedded arbitrary R code that executes on the victim’s target device upon interaction.
February 23, 2024
Out of bounds read due to lack of string termination in assert
When assert is called, the message is copied into a buffer and then printed. The copy can fill the whole buffer without adding a string terminator at the end, allowing an attacker to read some bytes of adjacent memory.
February 23, 2024
Path sanitization bypass leading to arbitrary read
A path traversal vulnerability exists inside the load_external_data_for_tensor function of the external_data_helper Python file. This vulnerability requires the user to have downloaded and loaded a malicious model, leading to an arbitrary file read. The vulnerability exists because the _sanitize_path function doesn’t properly sanitize the path.
February 6, 2024
Credentials Stored in Plaintext in MongoDB Instance
An attacker could retrieve ClearML user information and credentials using a tool such as mongosh if they have access to the server. This is because the open-source version of the ClearML Server MongoDB instance lacks access control and stores user information and credentials in plaintext.
February 6, 2024
Web Server Renders User HTML Leading to XSS
An attacker can provide a URL rather than uploading an image to the Debug Samples tab of an Experiment. If the URL has the extension .html, the web server retrieves the HTML page, which is assumed to contain trusted data. The HTML is marked as safe and rendered on the page, resulting in arbitrary JavaScript running in any user’s browser when they view the samples tab.
February 6, 2024
Cross-Site Request Forgery in ClearML Server
An attacker can craft a malicious web page that triggers a CSRF attack when visited. When a user browses to the malicious web page, a request is sent that can allow an attacker to fully compromise the user’s account.
February 6, 2024
Improper Auth Leading to Arbitrary Read-Write Access
An attacker can, due to lack of authentication, arbitrarily upload, delete, modify, or download files on the fileserver, even if the files belong to another user.
February 6, 2024
Path Traversal on File Download
An attacker can upload or modify a dataset containing a link pointing to an arbitrary file and a target file path. When a user interacts with this dataset, such as when using the Dataset.squash method, the file is written to the target path on the user’s system.
February 6, 2024
Pickle Load on Artifact Get
An attacker can create a pickle file containing arbitrary code and upload it as an artifact to a Project via the API. When a victim user calls the get method within the Artifact class to download and load a file into memory, the pickle file is deserialized on their system, running any arbitrary code it contains.

