What to Know About California’s Final Rules for Automated Decision-Making Technologies

Companies are increasingly using AI—and less sophisticated algorithms—to make important decisions about consumers: who is selected for a rental unit, who is approved for a mortgage, who is hired for a dream job, who gets what medical care, and more. While these products can speed decision-making, they can also be flawed. They can bake in bias, rely on incorrect information, or make recommendations based on spurious connections.

On top of that, they pose a challenge to enforcing many existing laws, like civil rights laws and consumer protection laws. Those laws were written with human decision-makers in mind. Some rely on humans with intent, or humans who can be subpoenaed in court, or the type of documentation humans often create as they reach a decision. It’s hard to build a case when enforcers don’t know how an algorithmic decision product recommends one candidate over another, or how it’s used by a company.

Luckily, states are stepping up. In 2024, Colorado passed a law to make sure that consumers and workers know when companies are using automated decision-making technology (ADMT) to assist in consequential decisions about them. In July, the California Privacy Protection Agency voted to finalize rules on the same topic.

The final rules are a significant policy development; California will become the second state in the nation with safeguards specifically for the use of automated decision systems in a range of high-stakes decisions. (A few other jurisdictions, including Illinois, Minnesota, New Jersey, and New York City, have enacted laws or promulgated rules addressing some types of automated decision-making, but none are as comprehensive as Colorado’s law or the California ADMT rules.) The California rules will be an important reference point for regulators and legislators in other states who want to tackle this issue. This analysis is aimed at helping those people—along with advocates and other stakeholders—understand the core components of the rules.

Ultimately, while certain provisions of the final regulations are reasonable, the regulations as a whole became progressively weaker throughout the process. The end result is rules that do not protect consumers in many situations where an ADMT plays an influential role in major life decisions, that render important rights unlikely to be used in practice, and that carve workers out of numerous protections.

What situations do the rules cover?

The rule-making package covered a variety of subjects, but this analysis will focus on the key provisions of the new rules related to ADMTs, and briefly discuss new rules for when and how companies need to conduct risk assessments. 

Most of the rights related to ADMTs apply only when a company uses an ADMT in a “significant decision.” This means, in short, a decision that results in the provision or denial of financial services, housing, education enrollment opportunities, employment and independent contracting opportunities, compensation, or healthcare services. 

The final rules define ADMT in a manner significantly narrower than the definition used in earlier drafts. In a Fall 2024 draft, a system was covered if it “substantially facilitate[d]” human decision-making. The “substantially facilitate” standard applied to systems where the ADMT output was a “key factor” in a consequential decision.

In the final version of the rules, this “substantially facilitate” standard was replaced with a stricter “substantially replace” standard, which was defined as meaning that the decision was made “without human involvement.” In practice, this narrowing means that Californians often won’t be able to use their new rights in the relatively common situation where a human makes the final decision, but an ADMT plays an influential role in the process. Agency staff testified that, due in part to these changes, the final rules would likely cover only about 10 percent of California businesses subject to the California Consumer Privacy Act.

For example, consider the AI hiring tool HireVue, which uses applicants’ device cameras to analyze their facial movements, manner of speaking, and word choice to score their “employability” against other candidates, as the Washington Post reported in 2019. The ADMT rules likely would not cover a product like HireVue as long as the human making the final hiring decision understands the program’s output and reviews other relevant information, such as a resume — even if HireVue’s score ends up being a decisive factor in many candidates’ applications. Similarly, programs that rank potential renters, AI systems that screen and score college applicants, or systems that recommend courses of treatment to doctors may not be covered — even if the landlord, college admissions officer, or doctor defaults to the ADMT’s recommendation in almost every case.

This is worrisome, because research suggests that humans tend to view automated systems as authoritative and trustworthy, and are inclined to defer to a system’s recommendations even when they suspect it is malfunctioning. Unfortunately, these rules trust human reviewers to protect Californians whenever an ADMT is anything short of the sole decision-maker, no matter how influential it is in a significant decision.

What are the core consumer rights under the new rules?

There are three key consumer rights regarding ADMT use under the new rules:

  • the right to information before the significant decision;
  • the right to opt out or the right to appeal; and
  • the right to access more information about the ADMT.

These rights are important components of transparency and accountability, although key limitations on how consumers and workers can utilize them (along with the rules’ narrow definitions) make it an open question how often Californians will be able to use them in practice.

Right to key information before the decision

Before an ADMT is used to make a consequential decision about a consumer, businesses are required to provide consumers with a disclosure, which the rules refer to as a “Pre-use Notice.” The disclosure must include a plain language description of the specific purpose for which the business plans to use the ADMT, how the ADMT processes personal information to make the decision about the consumer, what categories of personal information affect the output of the ADMT, the type of output generated by the ADMT, and how the output is used to make the significant decision. The disclosure also needs to remind consumers of their rights, including the right to opt out if applicable, as well as their right to access more information.  

While knowing the categories of personal information that drive the output is important, it is unclear how detailed the categories will be. The notice would be more useful to consumers if businesses were required to be specific about the personal information that affects the output of the ADMT — like sex, age, someone’s public Facebook posts, their credit score, or information from their resume.

One improvement the agency made towards the end of the rule-making process is that businesses will be required to tell consumers in the pre-use notice how the significant decision will be made if they opt out of ADMT use. This is crucial: If you’re applying for a rental unit and you get a notice that AI will be used to review your application — but you can opt out — you may wonder what the ramifications of opting out would be. Would you be out of the running? Would a human look at your application instead? Spelling out the alternative process is part of the bare minimum information consumers need to make an informed decision about whether to opt out.

Right to opt out of ADMT in a consequential decision, or right to appeal

The rules give consumers a right to opt out of the use of ADMT for significant decisions in some situations. As described above, the business will have to explain what the alternative decision-making process is if the consumer opts out. 

There are several significant exemptions under which businesses don’t have to provide an opt-out. One is if the business instead offers a right to appeal the decision to a human reviewer. To qualify, a company must designate a human reviewer who has the authority to overturn the decision, who knows how to interpret and use the output of the ADMT, and who will analyze the ADMT’s output along with any other information relevant to potentially changing the significant decision in question. The rules also outline a timeline for the appeals process.

Companies also don’t have to provide an opt-out in some cases related to allocation of work and compensation, or in decisions surrounding admissions or hiring. 

Right to access more information

Consumers can also submit a request to the company to receive more information about the company’s use of an ADMT to make a consequential decision about them. The company must provide:

  • The specific purpose for which the ADMT was used;
  • Information about the logic of the system, such that a consumer can understand how the ADMT processed their personal information to generate an output about them;
  • How the business used the output of the ADMT to make the decision (e.g., was it the sole factor, and if not, what were the other factors?);
  • The outcome of the decision-making process for the consumer (e.g., whether they were selected for a rental unit, turned down for a job, denied insurance, etc.).

In earlier drafts of the rules, this disclosure would have contained more information, but the agency removed several requirements. For example, previous versions would have required businesses to disclose the actual output the ADMT generated about the consumer. This might be something like a “do not hire” recommendation, a score of seven out of ten, a prediction that a consumer is 99% likely to repay the loan, or a red flag on an AI background check. Earlier drafts also required businesses to disclose key parameters that affected the output, and how those parameters applied to the specific consumer. These now appear to be optional. Businesses also aren’t required to provide the personal information, or the sources of that information, that impact the output as part of this disclosure. That means that if they are using an ADMT that relies on information of dubious quality — such as inferences about a person’s lifestyle, drug use, or family size from their social media profiles — the consumer may not know, and therefore may not be able to challenge incorrect information in practice.

The rules also previously required companies to send a reminder notice to consumers after using an ADMT to make certain “adverse significant decisions” about them, such as firing someone or denying them financial services, housing, insurance, or healthcare. The notice would have reminded consumers of their right to access more information about the decision and their right to appeal if the business offers an appeal in lieu of an opt-out, and would have explained that the business is prohibited by law from retaliating against consumers for exercising their privacy rights.

Instead, the final version of the rules cut this reminder notice entirely. In practice, this means consumers are less likely to use their right to access. If a consumer is denied a home, or is fired, or is turned down for a loan by a business relying on ADMT, they will need to remember that they have the right to access more information about that decision — if they even noticed it in the pre-use disclosure in the first place. 

Risk Assessments

The rules also require businesses to conduct risk assessments when they are processing a consumer’s personal data in a way that presents a “significant risk to privacy.” Those circumstances include:

  • When they sell or share personal information with other entities; 
  • When they process sensitive information; 
  • When they use ADMT for a significant decision about a consumer; 
  • When they process personal information to train an ADMT; and
  • When they use automated processing to infer important traits about the consumer, like the consumer’s intelligence, health, or economic situation, among other traits, based upon the consumer’s presence at a sensitive location (such as doctors’ offices, food pantries, places of worship, or political party offices), or based on systematic observation of a consumer who is a student, an applicant to a school or job, or acting in their capacity as an employee or independent contractor.

At a high level, the risk assessment rules require businesses to document their personal data processing practices. This includes information such as how they are collecting personal data, the approximate number of consumers whose data will be processed, the names and categories of service providers or third parties the company will disclose the data to, the benefits of the data processing and possible negative impacts to consumer privacy, as well as safeguards to address those negative impacts, and the names of individuals who reviewed or approved the assessment. Businesses submit information about their risk assessments to the agency on an ongoing basis, and the agency and the Attorney General can require a business to submit their full risk assessment at any time.

An earlier draft of the rules prohibited companies from moving ahead with data processing if they found that the risks to consumers’ privacy outweighed the benefits of the processing. The final rules weakened this important provision into a statement of purpose: that the goal of the risk assessment is restricting or prohibiting the processing of personal information when the risks to consumers’ privacy outweigh the benefits to the consumer, the business, other stakeholders, and the public.

Consumer Reports and the Center for Democracy & Technology both pushed back on this change in our respective comments to the agency, arguing that this particular provision should be stronger. Not only should businesses be prohibited from certain kinds of personal data processing when the risks outweigh the benefits, but the privacy agency should be able to challenge businesses’ assessments of the tradeoffs. In other words, if the agency reviews a risk assessment and finds the business hasn’t captured all of the key risks, or has underplayed them, it could step in, hold a hearing, and if it determines a violation has happened, order the business to restrict or prohibit the worrisome processing. Knowing their risk assessments could be challenged might have provided businesses with greater motivation not to understate the risks.

Conclusion

In sum, the California privacy agency undertook the challenging work of being the first in the nation to detail regulations on the use of automated decision-making technology. But, facing intense pushback from business and tech lobbyists, the agency narrowed the rules and struck important provisions.

If you’re a legislator, staffer, or regulator working on policy related to the use of automated decision-making technology in high stakes decisions and want to learn more, we’d love to chat. Feel free to reach out at grace.gedye@consumer.org, mscherer@cdt.org, justin.brookman@consumer.org, rshetty@cdt.org, and thall@cdt.org.
