webinar

Offensive and Defensive Security for Agentic AI

March 17, 2025

Agentic AI systems are already being targeted because of what makes them powerful: autonomy, tool access, memory, and the ability to execute actions without constant human oversight. The same architectural weaknesses discussed in Part 1 are actively exploitable.

In Part 2 of this series, we shift from design to execution. This session demonstrates real-world offensive techniques used against agentic AI, including prompt injection across agent memory, abuse of tool execution, privilege escalation through chained actions, and indirect attacks that manipulate agent planning and decision-making.

We’ll then show how to detect, contain, and defend against these attacks in practice, mapping offensive techniques back to concrete defensive controls. Attendees will see how secure design patterns, runtime monitoring, and behavior-based detection can interrupt attacks before agents cause real-world impact.

This webinar closes the loop by connecting how agents should be built with how they must be defended once deployed.

Key Takeaways

Attendees will learn how to:

  • Understand how attackers exploit agent autonomy and toolchains

  • See live or simulated attacks against agentic systems in action

  • Map common agentic attack techniques to effective defensive controls

  • Detect abnormal agent behavior and misuse at runtime

Apply lessons from attacks to harden existing agent deployments

Register

Speakers

Jim Simpson

ML Threat Intel Specialist

HiddenLayer

Conor McCauley

Adversarial ML Researcher

HiddenLayer

Kenneth Yeung

Associate Threat Researcher

HiddenLayer

Related webinars

webinar
xx
min read

Offensive and Defensive Security for Agentic AI

webinar
xx
min read

How to Build Secure Agents

webinar
xx
min read

Beating the AI Game, Ripple, Numerology, Darcula, Special Guests from Hidden Layer… – Malcolm Harkins, Kasimir Schulz – SWN #471

Beating the AI Game, Ripple (not that one), Numerology, Darcula, Special Guests, and More, on this edition of the Security Weekly News. Special Guests from Hidden Layer to talk about this article: https://www.forbes.com/sites/tonybradley/2025/04/24/one-prompt-can-bypass-every-major-llms-safeguards/

Ready to See Every AI Asset?

Get complete visibility into your organization’s models, agents, datasets, and AI workflows.