research

AI Agents in Production: Security Lessons from Recent Incidents

By

Tom Bonner

March 25, 2026

Table of Contents

Share:

Overview

Two recent incidents at Meta and Amazon have brought renewed attention to the security risks of deploying agentic AI in enterprise environments. Neither was catastrophic, but both were instructive and helpful for framing the risks associated with agentic AI. In this post, we review what happened, examine why agents present a distinct risk profile compared to conventional tooling, and outline the control gaps that organisations should aim to close.

The Incidents

In mid-March 2026, it was widely reported that a Meta engineer asked an internal AI agent for help with a technical problem via an internal forum. The agent provided guidance which, when acted upon, exposed a significant volume of sensitive company and user data to employees without the appropriate authorisation. The exposure lasted approximately two hours before it was contained. Meta classified it as a "Sev 1," its second-highest internal severity rating.

Previously, in February 2026, the Financial Times also alleged that Amazon's agentic coding tool, Kiro, was responsible for a 13-hour outage that impacted AWS Cost Explorer in December. Engineers had purportedly allowed the tool to carry out changes to a customer-facing system without requiring peer approval, a control that would normally be mandatory for a human engineer. The tool determined that the optimal resolution was to delete and recreate the environment. Amazon's internal briefing notes described a pattern of incidents with "high blast radius" linked to “gen-AI assisted changes,” and acknowledged that best practices for these tools were "not yet fully established."

Meta confirmed the incident and stated that no user data was mishandled, while noting that a human engineer could equally have provided erroneous advice. The company has pointed to the severity classification itself as evidence of how seriously it treats data protection. Amazon publicly characterised its incidents as user errors rather than AI failures. Both responses may be technically defensible in a narrow sense, but they do not resolve the underlying governance question: if agents are given the same access and trust as human engineers, without equivalent controls, the distinction between "user error" and "agent error" is largely academic.

Why Agents Present a Different Risk Profile

Most enterprise security frameworks were designed around human actors and deterministic software. AI agents fit neither model cleanly.

Agents interpret goals, not just instructions. When tasked with fixing a problem, an agent will determine the steps it believes are necessary to reach the desired outcome. In the AWS case, Kiro was not instructed to delete the environment; it concluded that it was the right approach. The risk is autonomous decision-making operating without clearly defined boundaries.

Agents lack operational context. Human engineers carry accumulated knowledge about what systems are sensitive, what changes carry risk, and when to escalate. Agents do not carry that institutional memory. They optimise for the task at hand, and that gap in contextual awareness can lead to decisions that would be immediately recognisable as wrong to an experienced person but are entirely invisible to the agent itself.

Agents scale the impact of misconfiguration. A single overly broad permission or a missing approval step can have consequences that propagate quickly across systems. Both incidents demonstrated that a single autonomous action, taken without intervention, can expose data or disrupt services at a scale unlikely for a cautious human operator.

Agents inherit permissions without discrimination. In the Amazon case, Kiro operated with permissions equivalent to a human engineer and without the peer-review controls that would apply to a person. Trust was granted implicitly rather than scoped appropriately.

Control Gaps and How to Address Them

Both incidents were, in hindsight, preventable. The required controls are largely extensions of existing security practices, applied consistently to a new class of system.

Least-privilege access. Agents should be granted only the permissions necessary for the specific task they are performing, not the broad access typical of a human engineer role. This is standard practice for service accounts and should apply equally to AI agents.

Mandatory human authorisation for high-risk actions. Any action that is irreversible, involves sensitive data, or has the potential to cause systemic impact should require explicit approval before execution. Where agents have configurable defaults around authorisation, as Kiro did, those defaults should be reviewed and enforced at the organisational level, not left to individual engineers to manage.

Runtime visibility, investigation, and enforcement. Both incidents involved patterns of behaviour that should have been detectable in progress, not just in retrospect. It is worth distinguishing three related but distinct capabilities here. Visibility means being able to reconstruct a full agent session, including which tools were called, what data was accessed, and how a sequence of actions evolved, providing the operational context behind any given outcome. Investigation and threat hunting means being able to search and pivot across sessions and execution paths to identify anomalous behaviour before it becomes an incident. Enforcement means being able to act on that visibility in real time: blocking unsafe actions, redacting sensitive data, or halting execution based on policy. Most organisations currently have limited versions of the first and almost none of the latter two. All three should be treated as requirements for any production agentic deployment.

Protection against indirect prompt injection. The Meta and Amazon incidents were caused by misconfiguration and over-permissioning, but a distinct and under-addressed risk is that agents can also be manipulated through the content they process. Prompt injection, for instance, arriving via documents, tool responses, retrieved data, or MCP interactions, can corrupt agent memory, override system instructions, or redirect behaviour without any change to the initiating prompt or the access controls around it. This is an attack surface that access governance controls do not address, and it requires specific detection at the input and context layer of agent execution.

Staged rollout and sandboxing. Agents should be introduced in restricted environments before being granted access to production systems. Amazon's acknowledgement that best practices were "not yet fully established" at the point of deployment is a useful signal: if the governance framework is not mature, the deployment scope should reflect that.

Distinct agent identities. Agents should not share identity or permissions with human accounts. Operating under separate, purpose-scoped identities makes their activity easier to monitor, limits the impact of any individual failure, and ensures actions are attributable in audit logs.

Organisational Considerations

Beyond technical controls, both incidents reflect a governance challenge. Agents are being deployed at scale, in some cases with internal adoption targets and leadership pressure to drive usage, while the security and risk frameworks needed to govern them are still being developed. That sequencing creates exposure.

Security teams need to be involved in agent deployment decisions from the outset, not brought in after an incident to implement retrospective safeguards. That means establishing clear policies on what agents are permitted to do, what requires human oversight, and how exceptions are handled, before deployment.

As reported in our 2026 AI Threat Landscape Report, 31% of organisations cannot determine whether they have experienced an agentic breach. That figure is relevant not just as a risk indicator but as a baseline capability question. Before an organisation can remediate, it needs to know something happened. Investing in runtime visibility is therefore a prerequisite for everything else.

It is also worth noting that the "user error" framing, while convenient, can obscure systemic issues. If an agent is routinely being granted excessive permissions, or approval requirements are routinely being bypassed, that is a process failure, not an isolated human mistake. Root cause analysis should examine the system, not just the individual.

Conclusions

Agentic AI tools offer genuine operational value, and adoption across enterprise environments is accelerating. The incidents at Meta and Amazon are useful reference points, not because they were uniquely severe, but because they illustrate predictable failure modes and highlight emerging security challenges related to agentic security.

The controls required to close the security gap are largely extensions of existing security practice: least-privilege access, human authorisation for high-risk actions, runtime visibility and enforcement, and protection against prompt injection at the execution layer. The main challenge is ensuring these controls are applied consistently to AI agents, which are often treated as a special case exempt from the scrutiny applied to other systems with equivalent access.

As recent incidents have shown, they should not be.

Related Research

Research
xx
min read

AI Agents in Production: Security Lessons from Recent Incidents

Research
xx
min read

LiteLLM Supply Chain Attack

Research
xx
min read

Exploring the Security Risks of AI Assistants like OpenClaw

OpenClaw (formerly Moltbot and ClawdBot) is a viral, open-source autonomous AI assistant designed to execute complex digital tasks, such as managing calendars, automating web browsing, and running system commands, directly from a user's local hardware. Released in late 2025 by developer Peter Steinberger, it rapidly gained over 100,000 GitHub stars, becoming one of the fastest-growing open-source projects in history. While it offers powerful "24/7 personal assistant" capabilities through integrations with platforms like WhatsApp and Telegram, it has faced significant scrutiny for security vulnerabilities, including exposed user dashboards and a susceptibility to prompt injection attacks that can lead to arbitrary code execution, credential theft and data exfiltration, account hijacking, persistent backdoors via local memory, and system sabotage.

Stay Ahead of AI Security Risks

Get research-driven insights, emerging threat analysis, and practical guidance on securing AI systems—delivered to your inbox.