Research

Machine Learning Models are Code

Researchers uncovered critical code execution flaws in TensorFlow and Keras models via Lambda layers and I/O operations.
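
Older Keras serialization (before safe-mode loading) stores a Lambda layer's Python function as marshaled CPython bytecode inside the model file, and loading rebuilds and executes it without any check. A minimal stdlib-only sketch of that mechanism (the `payload` function and the environment-variable marker are illustrative stand-ins, not the actual exploit from the research):

```python
import marshal
import types

# Function an attacker could hide inside a Lambda layer.
# (The env-var marker stands in for arbitrary code such as a shell command.)
def payload(x):
    import os
    os.environ["LAMBDA_PAYLOAD_RAN"] = "1"
    return x

# Lambda layers are persisted as marshaled bytecode inside the saved model:
stored_bytes = marshal.dumps(payload.__code__)

# Loading the model rebuilds the function -- no signature check, no sandbox:
restored = types.FunctionType(
    marshal.loads(stored_bytes),
    {"__builtins__": __builtins__},
)

restored(0)  # invoking the "layer" executes the embedded code
```

This is why a downloaded `.h5` or Keras model file should be treated as executable code, not inert data.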

The Dark Side of Large Language Models Part 2

The Feedback Loop: The rapid proliferation of AI-generated content online creates a "Dead Internet" risk, where future models are trained on low-quality AI data, leading to a permanent degradation of information quality.

The Dark Side of Large Language Models Part 1

Malware on the Fly: Attackers now use LLM APIs to synthesize polymorphic malware. In these scenarios, the malicious code (like a keylogger) is generated uniquely each time it executes, making it nearly invisible to traditional signature-based antivirus.

Machine Learning Threat Roundup

Modern ML models saved with frameworks like PyTorch contain a data.pkl file. This pickle file is meant to reconstruct the network's weights, but it can be "poisoned" so that arbitrary system calls execute during deserialization.
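
The mechanism is easy to demonstrate with the standard library alone: pickle lets any object specify, via `__reduce__`, a function call that runs while the file is being deserialized. A minimal sketch (the `PoisonedWeights` class and the harmless `print` payload are invented for illustration; real attacks substitute `os.system` or similar):

```python
import pickle

# Illustrative stand-in for a tampered checkpoint object: __reduce__ tells
# pickle to CALL print(...) during deserialization, not merely store data.
class PoisonedWeights:
    def __reduce__(self):
        return (print, ("arbitrary call executed at load time",))

blob = pickle.dumps(PoisonedWeights())  # what would ship inside data.pkl
restored = pickle.loads(blob)           # "loading the model" runs the call
```

Swap `print` for `os.system` and the same load operation spawns a shell, which is why loading untrusted pickle-based checkpoints is dangerous.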

Supply Chain Threats: Critical Look at Your ML Ops Pipeline

ML supply chain attacks leverage data poisoning and hijacked models to steal data and compromise cloud environments.

Pickle Files: The New ML Model Attack Vector

Adversaries are weaponizing Python's pickle format to hide Cobalt Strike and Mythic C2 agents in machine learning models.

Weaponizing ML Models with Ransomware

Machine learning models can hide ransomware in their weights using steganography and execute it via insecure pickle deserialization.
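
As a toy illustration of the steganography half: payload bytes can ride in the low-order mantissa bits of float32 weights with negligible effect on the values. The helper names and three-byte payload below are invented for the sketch, not taken from the research:

```python
import struct

def embed_byte(weight: float, secret: int) -> float:
    """Hide one byte in the least-significant bits of a float32 weight."""
    bits = struct.unpack("<I", struct.pack("<f", weight))[0]
    bits = (bits & ~0xFF) | (secret & 0xFF)   # overwrite the low 8 bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def extract_byte(weight: float) -> int:
    bits = struct.unpack("<I", struct.pack("<f", weight))[0]
    return bits & 0xFF

weights = [0.123, -0.456, 0.789]   # toy "model weights"
payload = b"RAN"                   # toy stand-in for ransomware bytes
stego = [embed_byte(w, b) for w, b in zip(weights, payload)]

recovered = bytes(extract_byte(w) for w in stego)
```

The perturbation per weight is far below typical training noise, so the model's behavior is essentially unchanged; a loader-side payload (e.g. via pickle) then reassembles and executes the hidden bytes.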

Machine Learning is the New Launchpad for Ransomware

AI models can be weaponized with hidden ransomware, exploiting insecure serialization to bypass traditional security.

Unpacking the AI Adversarial Toolkit

The Rise of Autonomy: Tools have evolved from static libraries to Autonomous Pentesting Agents (e.g., Penligent, XBOW) that use "Chain-of-Thought" reasoning to execute end-to-end attack chains without human intervention.

Analyzing Threats to Artificial Intelligence: A Book Review

Dan Klinedinst discusses AI security frameworks, shifting threat landscapes, and the vital role of proactive threat modeling.

Synaptic Adversarial Intelligence Introduction

HiddenLayer’s SAI team educates professionals and develops countermeasures to defend AI/ML systems against adversarial threats.

Sleeping With One AI Open

AI systems face rising threats from model hacking, including evasion, poisoning, and theft.
