
In my previous post, I shared highlights from Stanford CodeX’s AI Agents x Law Workshop exploring how we might foster an ecosystem where AI agents are built to be trustworthy, safe, and aligned with the best interests of the individuals they serve. In this post, I’ll dive into Section 10(b) of the Uniform Electronic Transactions Act (UETA), a previously obscure provision that has suddenly become critically relevant as AI-driven agents increasingly mediate commercial transactions.
Setting the Scene
Imagine asking your new AI shopping assistant to order a specific book, only to find 10 copies arriving at your door. Or perhaps it books a flight to Paris, France, instead of Paris, Texas, for that crucial conference. As AI agents move beyond providing information to actively conducting transactions on our behalf – buying goods, booking services, managing finances – the potential for costly errors increases. What happens then? Who is responsible, and what recourse do you have?
While the technology feels cutting-edge, part of the answer lies in a surprisingly relevant piece of legislation from the dawn of the internet age: the UETA. Promulgated in 1999, and since adopted by 49 states plus the District of Columbia and the U.S. Virgin Islands, to give legal validity to electronic signatures and records, UETA showed remarkable foresight by including provisions specifically addressing “electronic agents.” These rules, particularly Section 10(b) concerning errors, are once again pertinent with the rise of powerful LLM-driven agents.
UETA Section 10(b): The Right to Undo Agent Errors
UETA Section 10(b) provides a critical safeguard for individuals when an electronic agent introduces an error into a transaction. In plain terms:
- If an electronic agent makes a mistake during a transaction (one you didn’t intend), and…
- You, the user, were not provided with a reasonable “means to prevent or correct the error” by the agent’s provider…
- Then, you generally have the legal right to “avoid the effect” of the erroneous transaction – essentially, to reverse or undo it.
This isn’t about agents giving bad advice – that might fall under different legal principles like negligence or deceptive practices. UETA Section 10(b) specifically targets situations where the agent itself, operating autonomously, errs in executing the transaction.
Crucially, this right to reverse the transaction cannot simply be waived by fine print in the terms of service.
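To make the structure of the rule concrete, here’s a minimal sketch of those three conditions expressed as a simple check. The function and field names are illustrative assumptions, not drawn from the statute, and this is obviously not legal advice:

```python
from dataclasses import dataclass

@dataclass
class AgentTransaction:
    """Illustrative record of a transaction executed by an electronic agent."""
    intended_by_user: bool          # did the user actually intend this outcome?
    error_by_agent: bool            # did the agent itself introduce the error?
    correction_means_offered: bool  # was a reasonable means to prevent or
                                    # correct the error provided to the user?

def may_avoid_transaction(txn: AgentTransaction) -> bool:
    """Rough encoding of the Section 10(b) conditions described above: an
    unintended, agent-introduced error, with no reasonable means to prevent
    or correct it, generally gives the user the right to avoid the effect
    of the transaction."""
    return (txn.error_by_agent
            and not txn.intended_by_user
            and not txn.correction_means_offered)
```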
The Provider’s Role: Building the Escape Hatch
The key phrase here is the “means to prevent or correct the error.” This puts the onus squarely on the company providing the AI agent service. If they want to ensure the transactions conducted by their agents are considered final and legally binding, they must build mechanisms that give the user a fair chance to catch and fix mistakes before they become irreversible problems.
What does this look like in practice? At Stanford CodeX’s AI Agents x Law Workshop, Andor Kesselman presented a compelling open-source demo showcasing exactly this. Implementations might include the following (a minimal code sketch follows the list):
- Clear Confirmation Prompts: “You are about to purchase 10 widgets for $100. Confirm or Cancel?”
- Review Steps: Allowing users to review order details before final submission
- Spending Limits or Threshold Alerts: Flagging unusually large or atypical transactions for human verification
- Accessible Error Reporting: Clear paths for users to report issues promptly
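As a rough sketch of how the first three of these guardrails might be wired together, consider the snippet below. The thresholds, function names, and prompt wording are illustrative assumptions of mine, not taken from Kesselman’s demo:

```python
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    quantity: int
    total_usd: float

# Illustrative thresholds for flagging atypical transactions.
SPENDING_THRESHOLD_USD = 500.0
QUANTITY_THRESHOLD = 5

def is_atypical(order: Order) -> bool:
    """Spending limit / threshold alert: flag unusually large orders."""
    return (order.total_usd > SPENDING_THRESHOLD_USD
            or order.quantity > QUANTITY_THRESHOLD)

def confirm_with_user(order: Order) -> bool:
    """Review step plus clear confirmation prompt: show the user exactly
    what is about to happen, and require an explicit yes before acting."""
    summary = (f"You are about to purchase {order.quantity} x {order.item} "
               f"for ${order.total_usd:.2f}.")
    if is_atypical(order):
        summary += " (This order is unusually large; please verify.)"
    return input(summary + " Confirm? [y/N] ").strip().lower() == "y"

def place_order(order: Order) -> None:
    if not confirm_with_user(order):
        print("Order cancelled; nothing was finalized.")
        return
    # Hand off to the actual checkout flow here; the transaction only
    # becomes final after the user has had a chance to prevent the error.
    print(f"Order placed: {order.quantity} x {order.item}")
```

The essential design point is that the confirmation happens before finalization, so the user genuinely has the “means to prevent or correct the error” that Section 10(b) contemplates.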
As Diana Stern and I noted in a recent Stanford CodeX article:
“By implementing a user interface and process flow that enables customers to review and correct transactions before they are finalized, providers not only comply with UETA but also establish a strong argument for ratification… This design pattern – proactively building in error prevention and correction mechanisms – is therefore not just about legal compliance; it’s a fundamental aspect of responsible Transactional Agent development that helps define the point of finality and clarify the allocation of risk. But it’s also just good practice and a fair rule.”
Why This Matters Now More Than Ever
While UETA is over two decades old, its provisions on automated transactions and error handling are stepping into the spotlight. The “electronic agents” envisioned then were largely deterministic; today’s LLM-powered agents are far more complex and unpredictable, making robust error handling even more vital.
Because of UETA Section 10(b), consumers have a powerful legal remedy if an agent transaction goes wrong and they weren’t given a chance to fix it. For businesses deploying AI agents, UETA Section 10(b) is a clear mandate: building effective, transparent error prevention and correction isn’t just good customer service – it’s a legal necessity for ensuring transaction finality, mitigating liability, and ultimately, earning user trust in this new era of automated commerce.
Looking Ahead
While we’ve explored the importance of loyalty in AI agents and the legal frameworks for handling their errors, it’s also crucial to recognize that agents are no longer acting alone—they’re starting to talk to each other. My final post in this series will dive into the emerging world of Agent-to-Agent (A2A) communication and what it means for consumers.
Interested in how AI agents can better serve people? Want to help define that future? We’d love to hear from you. Reach out to us anytime at innovationlab@cr.consumer.org.