What to Know about California’s Proposed Rules for Automated Decision-making Tech

If your grocery store is using cameras to track you around the produce section, should you have the right to opt out? If a company is running emotion-detection software on your video job interview, should you have the right to know?

On December 8th, California privacy regulators met to discuss new draft rules aimed at addressing these exact kinds of questions. 

The draft rules are an innovative effort to protect consumers, as algorithmic decision-making tools and profiling technology play a growing role in our lives. The rules would grant Californians new opt-out rights, giving people more control over how technology tracks and makes decisions about them. 

But the rules will have policy implications outside of California too. With gridlock in Congress and the EU pushing ahead on AI regulation, states are likely to step in. The rules that California is considering would set a new high-water mark for consumer protections. Other states could follow California’s lead and implement similar rules. 

Why should I care? 

Typically, laws that create new rights leave plenty of work up to regulators, and important protections for consumers often don’t come to fruition until the rules are written. 

Ultimately, how these rules take shape will determine which Californians get the right to opt out of automated decision-making and profiling tools – like keystroke loggers and location trackers – and when. And, unlike the many bills that die while navigating a gauntlet of votes, these rules are actually going to happen. 

What do the rules on automated decision-making technology do?

Broadly, the draft rules would create new opt-out rights for consumers, job applicants, employees, and students subject to automated decision-making and profiling technology. They would also require companies to disclose a lot more information about how these tools work than they currently do. 

Key parts of the rules include:

  • Consumers have the right to opt out when the tools are being used to make a decision with legal or similarly significant effects. That includes decisions about whether you get access to credit; decisions about insurance coverage; decisions in the justice system; decisions about job opportunities; and more.
  • Consumers have the right to opt out when the tools are being used to profile you in a publicly accessible place, like a mall, movie theater, stadium, or hospital. Profiling means processing information about you to analyze or predict certain things about you, and it includes Wi-Fi tracking, drones, facial and speech recognition tools, and more. 
  • Employees, independent contractors, job applicants, and students would also have the right to opt out of profiling. That’s important because it’s difficult to switch schools or change jobs if tech is being used in an invasive or unfair way.

The privacy agency is also discussing giving consumers the right to opt out of profiling for behavioral advertising. The rules also include exceptions under which businesses don’t need to provide an opt-out option.

Businesses would also have to give you notice before using a tool covered by the rules, and would need to provide easy access to information about its use. And, if a business denies you an important good or service, it must notify you, make information about the tool it used available, and tell you that you can file a complaint with the privacy agency and the Attorney General. 

Our take on the rules

These draft rules are a huge step forward for consumers. They include the strongest and clearest protections surrounding the use of profiling and automated decision-making tools that we’ve seen so far in the country. Consumer Reports’ advocates provided feedback at the privacy agency’s December board meeting, applauding that progress.

We think it’s smart that the rules define “automated decision-making technology” broadly, and don’t limit new opt-out rights to situations where software is making decisions on its own. It’s clear that decision-making tools can be risky even if a human is empowered to intervene. Humans can wind up rubber-stamping machine recommendations. Sometimes, adding human discretion to the process makes decisions more biased.

Where do we go from here?

The rules are still in the early stages, and the California Privacy Protection Agency will continue to work on them. At some point, the formal rulemaking process will begin. Once that happens, the agency generally has a year to complete the rules, and the rules will go into effect some time after that. 

There will likely be other opportunities for the public to provide input, both during agency board meetings and in writing. You can sign up to get email updates about the rulemaking process.

It’s important that advocates for consumers, workers, job seekers, independent contractors, and students weigh in, along with advocates for fairness in housing, the justice system, health care, insurance, and consumer finance. Businesses are making their voices heard through their lobbyists. The interests of consumers, employees, gig workers, patients, and renters should shape these rules too.
