Engineering “Loyalty By Design” In Agentic Systems

From managing personal data to facilitating complex transactions, AI agents will be acting on behalf of users in new and interesting ways. With these innovations comes a need to examine how these agents can truly serve the interests of consumers, free from conflicting incentives.

Consumer Reports and others are investing in R&D around personal AI agents that advocate and act on behalf of consumers. A key differentiator for public interest or not-for-profit offerings might be the notion of “loyalty by design”—in other words, assurances that an AI agent serves the user’s interests above all, rather than the interests of counterparties that may not necessarily be aligned.

Imagine an LLM-based agent that manages your travel bookings or e-commerce purchases. An agent that is “loyal by design” would understand your intent and act in ways that are explainable and expected. It would resist the influence of hidden kickbacks or decision frameworks that might not serve your best interests. It might even be designed to uphold a legal obligation, much like a fiduciary, that ensures it serves your interests first.

The legal concept of agency offers a starting basis to design such systems. The term “agent” itself can cause confusion, holding different meanings in the realms of software development and law. In software, it broadly refers to systems that perform tasks on behalf of users. However, the legal definition is much more specific, encompassing obligations that AI systems alone cannot fulfill. According to the Restatement (Second) of Agency § 1(1) (1958), agency is defined as “the fiduciary relation which results from the manifestation of consent by one person to another that the other shall act on his behalf and subject to his control, and consent by the other so to act.”

With that in mind, let’s clarify terms:

  • An AI agent is a digital assistant powered by advanced AI technologies, particularly Large Language Models (LLMs), which enable it to understand and respond to complex instructions, reason about different options, and use tools. 
  • An Agentic System encompasses both the AI agent and the entity that provides and operates the agent – ideally, an entity that can be trusted to ensure the agent acts in your best interest. While LLMs represent a significant leap forward, AI agents still require human oversight, especially in situations requiring nuanced judgment, ethical considerations, or legal interpretation. And practically, we are a long way from systems that can handle complex transactions without a lot of deterministic logic, human oversight, and “humans in the loop.” This is one reason why the role of the trusted entity within the Agentic System is so crucial. 
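
The oversight point can be made concrete with a small sketch. The code below (all class names, fields, and the $100 threshold are our own illustrative assumptions, not any real system’s policy) shows how an Agentic System might wrap an AI agent’s proposed actions in deterministic logic that escalates risky ones to a human in the loop:

```python
# Hypothetical sketch: deterministic gating plus human-in-the-loop review.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    amount_usd: float   # cost to the consumer, if any
    reversible: bool    # can the action be undone?

APPROVAL_THRESHOLD_USD = 100.0  # assumed policy: large spends need a human

def requires_human_review(action: ProposedAction) -> bool:
    """Deterministic policy: escalate risky actions instead of auto-executing."""
    return (not action.reversible) or action.amount_usd > APPROVAL_THRESHOLD_USD

def execute(action: ProposedAction, human_approved: bool = False) -> str:
    if requires_human_review(action) and not human_approved:
        return f"ESCALATED: '{action.description}' awaits consumer approval"
    return f"EXECUTED: '{action.description}'"
```

So a free, reversible action (canceling a trial) runs automatically, while an expensive, nonrefundable booking waits for explicit consumer approval. The thresholds would be set by the trusted entity, not by the LLM itself.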

Agency law defines the roles and responsibilities of a fiduciary relationship. In this relationship, one person (the “principal”) gives authority to another person (the “agent”) to act on their behalf, with the agent being subject to the principal’s control. Consent is key: both the principal and agent must agree to the agency relationship for it to exist. The principal has the right to direct the agent’s actions, setting the goals or objectives to be achieved. However, in fiduciary relationships, the agent—trusted for their expertise—exercises independent judgment in determining how to carry out those duties in the principal’s best interest.

By applying this framework to Agentic Systems, we can establish clear lines of accountability and ensure that–by agreement and design–such systems can be built to prioritize the consumer’s (principal’s) interests above all others.

To summarize: in this framework, the principal (you, the consumer) can choose and direct an “Agentic System” (an AI agent operated by a trusted entity), which is authorized to interact with third parties (the companies or services you do business with) on your behalf. This forms what we might call an “iron triangle” of agency:
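
As a rough illustration (the class and field names here are ours, not a specification), the consent-based links of the triangle might be modeled like this: authority flows from the principal to the Agentic System as explicit, scoped grants, and the agent can act toward third parties only within those scopes.

```python
# Hypothetical data model for the "iron triangle": a principal, an agentic
# system, and the scoped grants of authority that connect them.
from dataclasses import dataclass, field

@dataclass
class Principal:
    name: str                     # the consumer

@dataclass
class AgenticSystem:
    operator: str                 # the trusted entity running the agent
    principal: Principal          # whose interests come first
    authorized_scopes: set = field(default_factory=set)

    def grant(self, scope: str) -> None:
        """Consent is key: authority exists only where the principal grants it."""
        self.authorized_scopes.add(scope)

    def may_act(self, scope: str) -> bool:
        return scope in self.authorized_scopes

consumer = Principal("Alex")
agent = AgenticSystem(operator="Example Nonprofit", principal=consumer)
agent.grant("travel:book")

agent.may_act("travel:book")      # True: within the granted scope
agent.may_act("payments:send")    # False: no authority without consent
```

The point of the sketch is the shape of the relationship: the third party never appears as a source of authority, only as a counterparty the agent reaches on the principal’s behalf.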

Diagram 1: The “Iron Triangle” of Agency

Several levels of system guarantees are needed to transform this “iron triangle” from a theoretical legal framework into a real-world implementation of an Agentic System that is loyal by design. These might include:

  • Governance: The rules and bylaws governing the Agentic System’s operation, ensuring transparency and accountability. Supports the duty of loyalty by keeping the trusted entity’s primary obligation focused on serving consumer interests above all else. 
  • Data Stewardship: How consumer data is collected, used, and protected. Supports the duty of loyalty by preventing misuse of consumer data for any purpose not directly benefiting the consumer. 
  • Instructions & Tooling: The mechanisms used to control and direct the AI agent’s actions. Supports the duty of loyalty by ensuring all agent actions align with explicit consumer preferences and interests, avoiding conflicts of interest or self-dealing. 
  • Agent-to-Agent Communication: Secure and transparent protocols for interactions between different AI agents. Supports the duty of loyalty during third-party interactions by clarifying intents and scopes of authorization and preventing compromises that could harm consumer interests. 
  • Identity & Payments: Secure methods for verifying identity and processing payments on the consumer’s behalf. Supports the duty of loyalty by facilitating transactions while limiting systemic risks (e.g., security, privacy, compliance, reputational) for all participants. 
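
To sketch what loyalty enforcement might look like at the Instructions & Tooling level (a simplified illustration with invented names, not a real implementation): the agent ranks options strictly by the consumer’s stated criterion, and any revenue the option would pay the operator is deliberately never consulted.

```python
# Hypothetical loyalty filter: rank offers only by the consumer's interest.
from dataclasses import dataclass

@dataclass
class Offer:
    vendor: str
    price: float
    operator_kickback: float  # payment to the agent's operator, if any

def choose_loyally(offers: list) -> Offer:
    # Loyal ranking: only the consumer's criterion (here, lowest price)
    # matters; the operator_kickback field is never read.
    return min(offers, key=lambda o: o.price)

offers = [
    Offer("VendorA", price=120.0, operator_kickback=15.0),
    Offer("VendorB", price=95.0, operator_kickback=0.0),
]
best = choose_loyally(offers)  # picks VendorB: cheapest for the consumer
```

A disloyal variant would be the mirror image, maximizing `operator_kickback` instead; the design goal is to make that path structurally unavailable, and auditable when it is not.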

 

We’ll be digging into each of these levels and sharing our thoughts on how to build AI agents that are loyal by design. In the meantime, we’d love to hear your thoughts on this framework. Get in touch at innovationlab@cr.consumer.org.
