Zero Trust is well understood for users. Verify identity, check device health, enforce least privilege, assume breach. Most mature IT organisations have some version of this in place.

AI agents break that model. Not because Zero Trust principles are wrong, but because agents operate in ways that existing Zero Trust architectures were never designed to handle.

The Fundamental Difference

A user authenticates, accesses a system, performs a task, and logs out. The trust boundary is the session. Controls verify the user’s identity and the device’s posture at the point of access.

An agent does not work this way. An agent authenticates once and then operates autonomously: querying data sources, calling APIs, chaining tools, making decisions, and executing actions across multiple systems. There is no session in the traditional sense. There is no human reviewing each action before it happens.

This creates a set of architectural problems that user-centric Zero Trust does not address.

Where User-Centric Zero Trust Falls Short

Identity is not behaviour. For users, identity verification is a strong signal. If a verified user on a compliant device accesses a system they are authorised to use, the risk is manageable. For an agent, verifying identity tells you almost nothing about what the agent will do next. An authenticated agent with overprivileged access can query sensitive data, chain tool calls in unexpected sequences, or take autonomous actions that no human has reviewed, all within a valid session.

Session-based controls do not apply. User Zero Trust architectures evaluate trust at login, at resource access, and periodically during the session. Agents operate continuously, often across sessions, and may persist for hours or days. A policy that checks trust at the start of an agent session and then allows unrestricted action for the duration is not Zero Trust. It is implicit trust with identity verification.

Lateral movement is the default. For users, lateral movement between systems is a threat indicator. For agents, moving between systems is the entire point. An agent that queries a CRM, pulls data from a finance system, and writes a report in a collaboration tool is doing exactly what it was designed to do. The challenge is distinguishing authorised lateral movement from unauthorised scope expansion.

Outputs are attack surfaces. Users produce outputs that other humans review. Agents produce outputs that other agents or automated systems may consume directly, creating a chain of trust that compounds risk. A manipulated agent output that feeds into a downstream decision system can cause damage without any human ever seeing it.

What a Zero Trust Architecture for Agents Requires

Extending Zero Trust to AI agents requires controls that operate at a fundamentally different layer.

Per-action verification. Instead of verifying trust at session start, every significant action an agent takes should be evaluated against policy. This includes data queries, API calls, tool invocations, and any action that changes state. Microsoft’s Zero Trust for AI framework describes this as “continuous verification throughout every interaction.”
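A per-action policy gate can be sketched in a few lines. This is an illustrative example, not a reference implementation: the policy format, agent names, and limits are hypothetical, and a production system would back the gate with a real policy engine.

```python
# Minimal per-action policy gate (illustrative; all names are hypothetical).
# Every tool call or data query passes through check_action before execution,
# rather than trusting the agent for the duration of a session.

POLICY = {
    "procurement-agent": {
        "allowed_actions": {"query_erp", "read_contract"},
        "max_rows_per_query": 500,
    },
}

def check_action(agent_id: str, action: str, rows_requested: int = 0) -> bool:
    """Return True only if this specific action is permitted right now."""
    rules = POLICY.get(agent_id)
    if rules is None:
        return False                      # unknown agent: deny by default
    if action not in rules["allowed_actions"]:
        return False                      # action outside declared scope
    if rows_requested > rules["max_rows_per_query"]:
        return False                      # oversized data pull
    return True

# The agent runtime calls this before every state-changing step:
assert check_action("procurement-agent", "query_erp", rows_requested=100)
assert not check_action("procurement-agent", "delete_invoice")
```

The key design point is the default-deny posture: an action is blocked unless an explicit rule allows it, which is the per-action analogue of least privilege.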

Scoped permissions per task. Agents should receive the minimum permissions required for each specific task, not a standing set of permissions for all possible tasks. This means dynamic permission scoping: an agent performing a procurement review should not have the same data access as when it is drafting a marketing summary, even if it is the same agent.
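One way to realise dynamic scoping is to issue a short-lived grant derived from the task, not from the agent's standing identity. A minimal sketch, assuming a hypothetical task-to-scope mapping and expiry window:

```python
# Task-scoped permission grants (sketch; the task names, scope strings, and
# TTL are hypothetical). The agent holds a grant only while the task runs.
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskGrant:
    agent_id: str
    task: str
    scopes: frozenset
    expires_at: float

def grant_for_task(agent_id: str, task: str, ttl_seconds: int = 600) -> TaskGrant:
    # Scopes are derived from the task, not from the agent's identity.
    task_scopes = {
        "procurement_review": frozenset({"erp:read", "contracts:read"}),
        "marketing_summary": frozenset({"cms:read"}),
    }
    return TaskGrant(agent_id, task, task_scopes[task],
                     time.time() + ttl_seconds)

def allowed(grant: TaskGrant, scope: str) -> bool:
    return scope in grant.scopes and time.time() < grant.expires_at

g = grant_for_task("agent-7", "marketing_summary")
assert allowed(g, "cms:read")
assert not allowed(g, "erp:read")   # same agent, different task: no ERP access
```

Because the grant expires with the task, a compromised agent cannot carry finance-system access into its next job.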

Output validation and containment. Agent outputs should be validated before they are consumed by downstream systems. This includes checking for prompt injection artifacts, data leakage, and outputs that exceed the agent’s intended scope. Assume that agent outputs can be compromised and design systems to contain the blast radius.
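A validation gate can be as simple as a pair of checks run before any downstream system consumes the output. The patterns and topic labels below are illustrative placeholders; real deployments would substitute their own classifiers and scope rules:

```python
import re

# Output validation gate (sketch; patterns and topic labels are illustrative).
# Downstream systems only receive outputs that pass both checks.

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),              # possible payment card number
    re.compile(r"(?i)\bconfidential\b"),    # classification marker leaking out
]

def validate_output(text: str, allowed_topics: set, detected_topic: str) -> bool:
    if detected_topic not in allowed_topics:
        return False                         # output exceeds the agent's scope
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)

assert validate_output("Q3 supplier summary", {"procurement"}, "procurement")
assert not validate_output("CONFIDENTIAL draft terms", {"procurement"}, "procurement")
```

The containment point is structural: the gate sits between the agent and its consumers, so a compromised output is stopped at the boundary rather than discovered after it has propagated.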

Behavioural monitoring and anomaly detection. Because identity verification is insufficient for agents, continuous behavioural monitoring becomes essential. Baseline what normal agent behaviour looks like (which systems it queries, how many API calls it makes, what data volumes it moves) and alert on deviations.
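The simplest baseline is statistical: record an agent's normal activity rate and alert when the current rate deviates by several standard deviations. The figures and threshold below are illustrative, not recommendations:

```python
from statistics import mean, stdev

# Behavioural baseline sketch: flag an agent whose hourly API-call count
# deviates sharply from its own history. Threshold and data are illustrative.

def is_anomalous(history: list, current: int, sigmas: float = 3.0) -> bool:
    mu, sd = mean(history), stdev(history)
    # max(sd, 1.0) avoids a zero threshold when history is nearly constant.
    return abs(current - mu) > sigmas * max(sd, 1.0)

baseline = [40, 45, 38, 50, 42, 47, 44]     # calls per hour over the past week
assert not is_anomalous(baseline, 48)       # within normal variation
assert is_anomalous(baseline, 400)          # sudden ~10x spike triggers alert
```

The same pattern applies to any per-agent metric: systems touched, rows returned, bytes moved. What matters is that the baseline is per-agent and the alert fires automatically.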

Agent-to-agent trust boundaries. When agents communicate with other agents, each interaction should be treated as a trust boundary crossing. An agent that receives input from another agent should not inherently trust that input. This is the agentic equivalent of “assume breach.”
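Treating each agent-to-agent hop as a boundary crossing means authenticating the sender and validating the message before using it. A minimal sketch using HMAC signing and a schema check; the key registry and message fields are hypothetical:

```python
import hashlib
import hmac
import json

# Agent-to-agent boundary sketch: inbound messages from other agents are
# authenticated and schema-checked before use, never trusted implicitly.
# The shared-key registry and message schema here are hypothetical.

SHARED_KEYS = {"reporting-agent": b"demo-key"}   # per-sender keys (demo only)

def verify_message(sender: str, payload: bytes, signature: str):
    """Return the parsed message, or None if it must not be trusted."""
    key = SHARED_KEYS.get(sender)
    if key is None:
        return None                               # unknown sender: reject
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None                               # forged or tampered payload
    msg = json.loads(payload)
    if set(msg) != {"task", "data"}:
        return None                               # unexpected fields: reject
    return msg

body = json.dumps({"task": "summarise", "data": "q3 figures"}).encode()
sig = hmac.new(b"demo-key", body, hashlib.sha256).hexdigest()
assert verify_message("reporting-agent", body, sig) is not None
assert verify_message("reporting-agent", body, "bad-signature") is None
```

Note that even an authenticated message is still only data: the receiving agent should treat the `data` field as untrusted content, not as instructions.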

The Mid-Market Challenge

Large enterprises with dedicated security engineering teams can build custom agent monitoring and policy enforcement systems. Mid-market organisations need frameworks and tools that provide these capabilities without requiring bespoke development.

Microsoft’s Zero Trust for AI framework, released in March 2026, provides a starting point, with reference architectures and a patterns library covering AI threat modelling, agentic security, and defence-in-depth for prompt injection. For Australian organisations, aligning these controls with Essential Eight requirements (particularly application control, restricting administrative privileges, and multi-factor authentication) creates a defensible governance baseline.

The critical gap for most mid-market organisations is not awareness. It is the assumption that existing Zero Trust implementations already cover AI agents. They do not.

Three Starting Points

Audit agent permissions across all AI deployments. For every AI agent or AI-integrated application in use, document what data it can access, what actions it can take, and whether those permissions are scoped to specific tasks. Most organisations will find agents with standing broad access.
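An audit of this kind can start as a simple script over a permissions inventory. The inventory format and findings below are hypothetical; the point is to surface wildcard scopes and standing (non-task-scoped) access automatically:

```python
# Permission audit sketch: flag agents whose grants are broad and standing
# rather than scoped to a task. The inventory format is hypothetical.

inventory = [
    {"agent": "finance-bot", "scopes": ["erp:*"], "task_scoped": False},
    {"agent": "helpdesk-bot", "scopes": ["tickets:read"], "task_scoped": True},
]

def audit(agents: list) -> list:
    findings = []
    for a in agents:
        if any(s.endswith(":*") for s in a["scopes"]):
            findings.append(f'{a["agent"]}: wildcard scope')
        if not a["task_scoped"]:
            findings.append(f'{a["agent"]}: standing (non-task-scoped) access')
    return findings

assert audit(inventory) == [
    "finance-bot: wildcard scope",
    "finance-bot: standing (non-task-scoped) access",
]
```

Even this crude pass turns "we think our agents are scoped" into a concrete findings list that can be worked through.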

Implement output validation gates. Before agent outputs feed into decision systems, reports, or other agents, add a validation layer. This does not need to be complex โ€” even basic checks for data classification leakage and scope violations reduce risk materially.

Separate agent identity from user identity. Agents should have their own identity objects, their own permission sets, and their own audit trails. Piggybacking agent actions on user credentials creates blind spots that are difficult to audit and impossible to scope effectively.
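Separating the identities shows up most clearly in the audit trail. A sketch, with hypothetical field names: the agent acts under its own identity, and the initiating human is recorded as context rather than as the credential:

```python
import datetime
from dataclasses import dataclass

# Sketch of separating agent identity from user identity. Actions are logged
# against the agent's own identity object; the initiating user appears as
# "on_behalf_of" context instead of the agent borrowing user credentials.

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner_team: str

def log_action(agent: AgentIdentity, action: str, on_behalf_of: str) -> dict:
    return {
        "actor": agent.agent_id,           # auditable, scopable agent identity
        "on_behalf_of": on_behalf_of,      # human context, not a credential
        "action": action,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

entry = log_action(AgentIdentity("report-agent-01", "finance"),
                   "query_erp", "j.smith")
assert entry["actor"] == "report-agent-01"
assert entry["on_behalf_of"] == "j.smith"
```

With this split, you can answer "what did this agent do last week?" directly from the audit trail, and revoke or re-scope the agent without touching any user's access.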

The Architecture Gap Is an Operational Risk

The organisations that treat AI agent security as an extension of existing user security will find themselves with blind spots that grow proportionally with agent adoption. The organisations that recognise the architectural difference early and build for it will have a structural advantage as agentic AI scales.

Our team advises mid-market Australian organisations on building Zero Trust architectures that account for the specific trust model, permission requirements, and monitoring needs of AI agents, without requiring dedicated AI security headcount.

If your organisation has deployed AI agents but has not extended its Zero Trust architecture to cover them, this is a conversation worth starting now.


CloudProInc is a Microsoft Partner and Wiz Security Integrator, working with Australian organisations on cloud, AI, and cybersecurity strategy.