Court Ruling in Amazon-Perplexity Case Raises New Questions for Agentic AI in Enterprise Systems


Key Takeaways

A US federal court ruling has established that AI agents accessing password-protected systems without platform authorization may violate state and federal laws, highlighting the dominance of platform rules over user consent.

The case raises questions about the role of AI agents as intermediaries, with the ruling indicating that explicit terms of service and formal access agreements are necessary for AI agents to operate safely across third-party systems.

The ruling emphasizes the need for enterprises and AI developers to incorporate platform-level controls into their architectures, shifting focus from user permissions to a requirement for formal integrations and compliance with legal frameworks.

A US federal court ruling in Amazon.com Services LLC v. Perplexity AI is emerging as an early test case for how agentic AI systems will be governed as they begin interacting directly with enterprise platforms.

A March 9 preliminary decision from the Northern District of California found that AI agents may violate state and federal law when accessing password-protected systems without platform authorization, even if acting with user consent. Analyses from JD Supra and Forbes highlight the broader implication: Control over digital systems may ultimately rest with platform operators, not end users or the AI agents acting on their behalf.

At issue is whether an AI agent can act as a proxy for a user across third-party systems, or whether platform-level permissions override user intent.

The Case: User Consent vs. Platform Control

Amazon alleged that Perplexity’s AI agent, Comet, accessed users’ password-protected Amazon accounts to browse products and make purchases without identifying itself as an AI system. According to the complaint, this violated Amazon’s terms of service, which restrict agent access to public areas and require identification of automated traffic.
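Terms like Amazon's typically map to two long-standing web conventions: a published robots policy restricting automated access to certain areas, and an honest identifier in the traffic itself. The sketch below, in Python with only standard-library modules, illustrates what a compliant agent might check before acting; the agent name, header, and robots rules are all hypothetical, not Amazon's actual policy or Comet's actual behavior.

```python
# Hypothetical sketch: an AI agent that identifies itself and checks a
# platform's published rules before any automated access. The agent name,
# custom header, and robots.txt contents below are illustrative only.
import urllib.robotparser

AGENT_NAME = "ExampleShoppingAgent/1.0"  # hypothetical agent identifier

# A platform might restrict automated agents to public areas like this:
ROBOTS_TXT = """\
User-agent: *
Disallow: /account/
Allow: /
"""

def allowed_to_fetch(url_path: str) -> bool:
    """Check the platform's published robots rules for a given path."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())
    return rp.can_fetch(AGENT_NAME, url_path)

def build_headers() -> dict:
    """Identify the agent explicitly instead of mimicking a human browser."""
    return {"User-Agent": AGENT_NAME, "X-Automated-Client": "true"}

print(allowed_to_fetch("/products/widget"))  # public catalog page -> True
print(allowed_to_fetch("/account/orders"))   # password-protected area -> False
```

The key point the case turns on is visible even in this toy version: the permission check consults the platform's published rules, not the end user's wishes.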

The court sided with Amazon at the preliminary injunction stage, finding the company was likely to succeed under both the federal Computer Fraud and Abuse Act (CFAA) and California’s Comprehensive Computer Data Access and Fraud Act (CDAFA).

Critically, the court rejected the argument that user consent alone constituted authorization. Instead, it found that Amazon’s terms—and its explicit revocation of access via cease-and-desist—controlled whether access was permitted.

The ruling effectively establishes a hierarchy in which platform rules may override user instructions when it comes to automated access.

Preliminary Ruling with Broad Implications

While the decision is now on appeal to the Ninth Circuit and enforcement has been temporarily stayed, it marks one of the first judicial interpretations of how agentic systems interact with existing computer access laws.

Platforms like Amazon are seeking to preserve control over customer interactions and system access, while AI agent providers argue that agents are simply extensions of user intent. That tension goes beyond legal doctrine. It points to a structural question about the future of digital systems: whether AI agents will operate as independent intermediaries across platforms, or be constrained to tightly controlled, platform-approved pathways.

What This Means for ERP Insiders

The ruling lands as enterprises begin exploring agentic AI across ERP, HCM, and supply chain systems, often with the expectation that agents will orchestrate workflows across multiple platforms.

  • Platform control is gaining legal reinforcement. For platform operators, the ruling provides an early playbook. Explicit terms of service restricting AI agent behavior, requirements for agent identification, and formal revocation of access may strengthen legal claims against unauthorized automated activity.
  • Agent interoperability may depend on formal access models. For AI developers, the implications are more complex. The decision suggests that building agents capable of interacting with third-party systems, particularly in authenticated environments, may require deeper integration models, such as APIs, partnerships, or formal access agreements.
  • Agent-based architectures will need to account for platform-level control, not just user-level permissions. As AI agents move into transactional and system-of-record environments, enterprises should expect tighter access controls, formal integration requirements, and potential legal constraints on how agents interact across systems. Designing for API-first, governed access rather than autonomous system navigation will be critical as the legal framework evolves.
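What "platform-level control, not just user-level permissions" might mean in an architecture can be sketched as a platform-issued, scoped, revocable grant, in the spirit of OAuth-style delegated authorization. The Python sketch below is a hypothetical illustration; all names (`AgentGrant`, the scope strings, the agent and user IDs) are invented for this example and do not describe any vendor's actual API.

```python
# Hypothetical sketch of platform-governed agent access: the platform issues
# a scoped credential, and every agent action is checked against that grant,
# not merely against the end user's intent. All names are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentGrant:
    """A platform-issued grant: which agent, on whose behalf, with what scopes."""
    agent_id: str
    user_id: str
    scopes: frozenset = field(default_factory=frozenset)
    revoked: bool = False  # platform can revoke (e.g., via cease-and-desist)

def authorize(grant: AgentGrant, action: str) -> bool:
    """Platform-side check: user consent is necessary but not sufficient;
    the grant must exist, be unrevoked, and cover the requested scope."""
    return not grant.revoked and action in grant.scopes

grant = AgentGrant("example-agent", "user-123",
                   scopes=frozenset({"catalog:read"}))
print(authorize(grant, "catalog:read"))   # within the platform-approved scope
print(authorize(grant, "orders:create"))  # outside the granted scope

# After the platform revokes the grant, all access fails regardless of consent:
revoked = AgentGrant("example-agent", "user-123",
                     scopes=frozenset({"catalog:read"}), revoked=True)
print(authorize(revoked, "catalog:read"))
```

The design choice mirrors the court's reasoning: the revocation flag and scope list live on the platform's side of the boundary, so the agent's access can be narrowed or withdrawn without any change in what the user asked for.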