CR Publishes Artificial Intelligence Policy Recommendations

Artificial intelligence and algorithms aren’t just for chatbots. They’re also used behind the scenes to make critical decisions about our lives: reviewing and filtering out resumes, prioritizing who gets extra medical care, screening potential tenants, setting your car insurance premium or your rent, and more.

AI tools can benefit consumers. But they can also make the market less transparent and less fair, impinge on privacy, and put consumers’ safety at risk. 

We’ve developed policy positions to share with legislators, regulators, partner organizations – and anyone curious about where we stand. 

To make AI work well for consumers, we think lawmakers in Congress and in the states will have to enact some new protections, and agencies may need to create new rules. But existing laws and regulations also protect against many of the harms posed by AI. These provisions include Section 5 of the FTC Act, the Fair Credit Reporting Act, the Equal Credit Opportunity Act, civil rights laws, state consumer protection laws, and state privacy laws.  

You can read all of our positions here. Some key proposals include:

Transparency:

  • Require clear disclosure when an algorithmic tool is being used to help make a consequential decision about a consumer, like whether they qualify for a loan, are selected for a rental apartment, get a promotion, or see their insurance rates or utility bills go up.  
  • Require companies to explain why a consumer received an adverse decision when an algorithmic tool was used to help make a consequential decision about them. Explanations must be clear enough that, at a minimum, a consumer could tell if the decision was based on inaccurate information, and they should include actionable steps consumers can take to improve their outcome. If a tool is so complex that the company using it cannot provide specific, accurate, clear, and actionable explanations for the outputs it generates, it should not be used in consequential decisions.
  • Require companies that develop algorithmic tools used in consequential decisions to give vetted, public-interest researchers access to those tools. Thoroughly understanding how these tools work and where they fall short requires independent, high-quality research.
  • Require companies to meaningfully substantiate the claims they make when describing or marketing their AI products. 

Fairness:

  • Prohibit algorithmic discrimination. Existing civil rights laws prohibit many forms of discrimination, but policymakers should identify any gaps and clarify how these laws apply to companies developing and deploying AI and algorithmic tools, how those companies are expected to comply, and how the laws will be enforced for particular uses of AI.
  • Require companies that make tools used in consequential decisions to undergo independent, third-party testing for bias, accuracy, and other issues before deployment, and regularly afterward. Clear, consistent testing standards should be developed, and third-party auditors should be overseen by the relevant regulators or bodies.
  • Limit self-preferencing: Big Tech companies shouldn’t use their AI models to preference their own products or services when doing so would materially harm competition.

Privacy:

  • Require data minimization across AI tools. Companies should be required to limit data collection, use, retention, and sharing to what is reasonably necessary to provide the service or conduct the activity that a consumer has requested, with limited additional permitted uses.
  • Prohibit the sale or sharing of personal data collected by generative AI tools to third parties.
  • Ban remote biometric identification in publicly accessible spaces, including retail, with limited exceptions. Consumers cannot meaningfully consent to being tracked if their only alternative is to avoid public and semi-public spaces.

Safety:

  • Impact and risk assessments: Require companies creating tools that help make consequential decisions, or that are otherwise risky, to conduct their own risk and impact assessments and to make changes based on the issues those assessments raise.
  • Whistleblower protections and incentives: Whistleblowers can expose problems to the public that companies have no real incentive to disclose or address. Right now, however, there are few protections for people who want to disclose issues surrounding AI; lawmakers should establish them, along with incentives to come forward.
  • Clarify liability for developers and deployers who fail to take appropriate precautions to prevent both malicious uses and unintended consequences of AI that put consumers at risk.

Enforcement and Government Capacity:

  • Provide additional resources for the Federal Trade Commission and state regulators: The nation’s consumer protection agency is vastly under-resourced and understaffed, especially relative to the companies it is charged with overseeing.
  • Create a private right of action for individuals or groups hurt by biased or faulty algorithms. Law enforcement agencies, like federal regulators and state attorneys general, have nowhere near the capacity required to effectively enforce laws prohibiting algorithmic bias.

You can read the rest of our policy positions here.

For partner organizations, lawmakers, legislative staff, and regulators who want to learn more about these recommendations or work with Consumer Reports (CR), contact:

Justin Brookman
Director of Technology Policy
justin.brookman@consumer.org

Grace Gedye
Policy Analyst, Artificial Intelligence
grace.gedye@consumer.org
