Defining ‘Loyalty’ for AI Agents: Insights from the Stanford AI Agents x Law Workshop

AI agents are rapidly moving from science fiction to daily reality. These sophisticated software systems promise to manage tasks, conduct transactions, and augment our capabilities in unprecedented ways. But as they become more integrated into our lives, critical questions arise: Whose interests will they serve? How can we ensure they act reliably and responsibly on our behalf?

These questions were at the heart of the AI Agents x Law Workshop, held on April 8th, 2025, at Stanford Law School. Part of an ongoing research initiative affiliated with Stanford CodeX and law.MIT.edu, the event brought together legal experts, technologists, founders, and consumer advocates in collaboration with the Stanford HAI Digital Economy Lab and the Consumer Reports (CR) Innovation Lab to map the complex legal and ethical terrain of emerging AI agent technologies. This event marked the beginning of a focused effort by these organizations to collaboratively define actionable standards and practices for consumer-centric AI.

The overarching goal, echoed throughout the day, was to foster an ecosystem where AI agents are built to be trustworthy, safe, and aligned with the best interests of the individuals they serve – agents that work for people, not on them. This first post in a series will provide a brief overview of the workshop and then dive deeper into one of the central themes discussed: What does it mean for an AI agent to be “loyal” to its user?

Setting the Stage: The Quest for Consumer-Centric Agents

The workshop kicked off with framing remarks emphasizing the high stakes. Professor Sandy Pentland (MIT/Stanford HAI) highlighted the intense industry interest driven not just by opportunity, but by liability concerns. Companies recognize the need for evidence-based best practices and standards to ensure agent systems don’t go off the rails, potentially leading to significant harm and legal challenges. The vision? To move towards agents that could one day act as legal fiduciaries for their users.

Ben Moskowitz, VP of Innovation at CR, explained CR’s commitment to this vision. He spoke of “consumer-authorized agents” designed to empower users in the marketplace – tools that research, buy, and troubleshoot effectively, advocating tirelessly for consumer interests. He stressed that achieving this requires tackling normative questions, technical challenges, and defining clear expectations for agent behavior, underscoring CR’s dual role in both consumer protection advocacy and proactive product R&D to help build the desired future. Ben specifically called for consumer platforms like CR to help develop standardized testing methodologies to validate agent claims—echoing CR’s historical role in product reliability assessments.

What Does a “Loyal” AI Agent Mean for Consumers?

This fundamental question of loyalty was a recurring theme, explored in depth by me (Dazza Greenwood, Stanford CodeX/law.MIT.edu) and Diana Stern (Deputy GC at Protocol Labs & Special Counsel at DLx Law), a leading Silicon Valley lawyer and collaborator on a pre-workshop blog series on this topic.

Imagine Bob, circa 2026, needing a new dishwasher. Instead of wading through endless online reviews and potentially misleading sponsored content, he asks his AI agent: “Find me the best dishwasher for my needs and budget.”

  • A “loyal” agent, operating under a duty of loyalty, would prioritize Bob’s stated interests. It would analyze objective information, compare features based on Bob’s criteria (price, efficiency, reliability ratings, specific features), and recommend the option that genuinely best serves Bob. Its internal logic and external actions would be aligned with maximizing Bob’s benefit.
  • An agent not bound by loyalty, however, might operate differently. Its recommendations could be skewed by hidden incentives. Perhaps it prioritizes dishwashers from manufacturers who pay the agent provider the highest commission or kickback. Maybe it highlights models from advertising partners, even if they aren’t the best fit for Bob. Bob might still get a dishwasher, but likely not the best one for him, potentially paying more or getting a less suitable product. (A short code sketch after this list contrasts these two behaviors.)
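
To make the contrast concrete, here is a minimal Python sketch. The products, scoring weights, and commission figures are all invented for illustration: a loyal ranker scores only on Bob’s stated criteria, while a conflicted ranker quietly adds the provider’s commission to each score.

```python
# Toy illustration (hypothetical catalog and weights): a loyal ranker vs. a
# conflicted ranker whose scores are tilted by commissions paid to the provider.

from dataclasses import dataclass

@dataclass
class Dishwasher:
    model: str
    price: float        # USD
    efficiency: float   # 0-1, higher is better
    reliability: float  # 0-1, from independent ratings
    commission: float   # USD paid to the agent provider per sale

CATALOG = [
    Dishwasher("QuietClean 300", price=649, efficiency=0.92, reliability=0.88, commission=5),
    Dishwasher("SparkleMax Pro", price=899, efficiency=0.81, reliability=0.74, commission=120),
    Dishwasher("EcoWash Basic",  price=499, efficiency=0.85, reliability=0.90, commission=0),
]

def user_value(d: Dishwasher, budget: float) -> float:
    """Score strictly on Bob's criteria: reliability, efficiency, and price within budget."""
    if d.price > budget:
        return float("-inf")  # over budget: never recommend
    return 0.5 * d.reliability + 0.3 * d.efficiency + 0.2 * (1 - d.price / budget)

def loyal_pick(catalog: list, budget: float) -> Dishwasher:
    # Duty of loyalty: optimize only the user's stated interests.
    return max(catalog, key=lambda d: user_value(d, budget))

def conflicted_pick(catalog: list, budget: float) -> Dishwasher:
    # No duty of loyalty: the provider's commission silently tilts the ranking.
    return max(catalog, key=lambda d: user_value(d, budget) + 0.01 * d.commission)

budget = 900
print("Loyal agent recommends:     ", loyal_pick(CATALOG, budget).model)
print("Conflicted agent recommends:", conflicted_pick(CATALOG, budget).model)
```

With these made-up numbers, the loyal agent recommends the cheap, reliable model, while the conflicted agent flips to the high-commission one, and nothing visible to Bob explains why.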

This “duty of loyalty” concept, central to traditional agency law (as seen in the “Iron Triangle” diagram), suggests a model where the agent provider is legally and ethically bound to put the user’s interests first within the scope of their relationship.

Beyond Promises: The Link Between Legal Frameworks & Technical Reality

The workshop discussion highlighted that merely claiming loyalty in a terms of service document isn’t sufficient. True loyalty must be reflected in the agent’s underlying architecture and behavior. As Ben Moskowitz prompted, what happens if an agent claims loyalty but acts otherwise, perhaps due to flawed design, negligence, or even intentional bias in its programming?

This necessitates observability and verifiability of agent decisions. We need ways to assess whether an agent is actually acting loyally. Can we technically test if its information processing and decision-making are free from undue influence from third-party interests or the provider’s own conflicting business models? Can we evaluate if it consistently prioritizes the user’s goals as instructed? This technical dimension is inseparable from the legal promise. Workshop attendees identified promising technical approaches, such as independent “agent audits” and sandboxed simulations—methods CR could lead or facilitate—to objectively measure an agent’s adherence to consumer-first standards.
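
Workshop conversation pointed toward making such audits concrete. Below is a minimal sketch of what an automated “loyalty audit” harness might look like; every name, data distribution, and scoring rule here is a hypothetical stand-in, not an established methodology. It replays simulated shopping tasks through an agent under test and reports how often the agent’s recommendation matches a conflict-free baseline computed from the same inputs.

```python
# Minimal "loyalty audit" sketch (all names, data, and scoring hypothetical):
# replay simulated shopping tasks through an agent under test and measure how
# often its pick matches a conflict-free baseline built from the same inputs.

import random
from typing import Callable, Dict, List

Product = Dict[str, float]  # e.g. {"price": ..., "quality": ..., "commission": ...}
Agent = Callable[[List[Product], float], Product]  # (catalog, budget) -> recommended product

def within_budget(catalog: List[Product], budget: float) -> List[Product]:
    affordable = [p for p in catalog if p["price"] <= budget]
    return affordable or catalog  # fall back to the full catalog if nothing fits

def baseline_pick(catalog: List[Product], budget: float) -> Product:
    """Conflict-free reference: best quality per dollar, ignoring commissions."""
    return max(within_budget(catalog, budget), key=lambda p: p["quality"] / p["price"])

def audit_loyalty(agent: Agent, trials: int = 1000, seed: int = 0) -> float:
    """Return the share of simulated tasks where the agent matched the baseline."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(trials):
        catalog = [{"price": rng.uniform(300, 1200),
                    "quality": rng.uniform(0.5, 1.0),
                    "commission": rng.choice([0.0, 25.0, 150.0])}
                   for _ in range(8)]
        budget = rng.uniform(600, 1200)
        if agent(catalog, budget) == baseline_pick(catalog, budget):
            matches += 1
    return matches / trials

# Audit a deliberately conflicted agent that always chases the highest commission.
greedy_agent: Agent = lambda catalog, budget: max(
    within_budget(catalog, budget), key=lambda p: p["commission"])
print(f"Loyalty score: {audit_loyalty(greedy_agent):.1%}")  # expect well below 100%
```

A production audit would need far richer task distributions, adversarial probes, and statistical rigor, but even this toy version shows how loyalty can move from a marketing claim to a measurable score.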

Diana Stern’s work, which we discussed, further illuminates this by outlining different potential relationship models between agent providers and users:

  1. Fiduciary: The highest standard, embedding a duty of loyalty (as discussed above).
  2. Technology Provider: The opposite extreme, where the provider essentially says, “We just provide the tool; you bear all the risk,” disclaiming liability (as seen in some current terms).
  3. Contractor: An intermediate model in which duties and responsibilities are defined by a specific contract or scope of work, potentially mixing elements of service provision with limited obligations.

Choosing a model has profound implications for user trust and provider liability. While the “technology provider” stance might seem the safest legal position for a provider, the “fiduciary” approach, despite its higher bar, could become a significant competitive differentiator, attracting users seeking agents they can genuinely trust.

Looking Ahead

Establishing loyalty is foundational, but it’s just one piece of the puzzle. The AI Agents x Law workshop also explored critical mechanisms for handling agent errors (leveraging UETA Section 10b), the challenges of authorizing agents securely (authenticated delegation), the impact of agents on legal practice and labor, and the need for robust evaluation methods (“evals”) to ensure agent performance and alignment. Future posts will dive deeper into these and other topics surfaced during the workshop, including error handling and the implications of emerging protocols like Agent-to-Agent (A2A) communication. Stay tuned.

The transition to an agent-driven world requires careful thought, collaboration, and proactive design. By bringing together diverse perspectives, initiatives like this aim to develop the frameworks, standards, and technical solutions needed to ensure AI agents enhance, rather than undermine, consumer welfare and market fairness. To this end, CR is exploring prototype tests and interactive demos, aiming to make loyalty measurable and visible to everyday users. 

Interested in how AI agents can better serve people? Want to help define that future? We’d love to hear from you. Reach out to us anytime at innovationlab@cr.consumer.org.
