Survey: Majority of Americans Are Uncomfortable with AI in Employment, Housing, Healthcare Decisions

Artificial intelligence and algorithms are increasingly being used to help make important decisions about your life. Some landlords use algorithmic tools to assess your suitability as a potential renter; some employers use AI to scan your resume and recommend whether or not you should be interviewed for a job. 

These types of AI systems – made by businesses, for other businesses, to help make decisions about you – often don’t get as much attention as generative AI tools like ChatGPT or DALL-E. That’s in part because consumers rarely interact with these tools and often don’t know they are being used at all. But the stakes are high: if the software that’s matching you to a job opening is biased, or the AI tool that’s running your background check is error-prone, you could lose out on a major opportunity. 

AI tools used in high-stakes decisions like these are drawing scrutiny from regulators and state legislators. This year, Colorado passed a law that creates new rules for companies that develop and use AI to help make decisions about access to housing, insurance, spots in school, loans, employment, and more. Legislators in Connecticut, California, and several other states considered similar bills.

We at Consumer Reports (CR) wanted to get a better understanding of what Americans thought about the use of AI in these highly consequential scenarios. In May 2024, CR’s survey research team conducted a nationally representative, multi-mode survey on several topics, including AI and algorithmic decision making. The survey was administered to 2,022 US adults by NORC at the University of Chicago. The full report on the AI and algorithmic decision making survey results is available here.

The majority of Americans are uncomfortable with the use of AI in high-stakes decisions about their lives 

We asked Americans how comfortable they felt with the use of AI and algorithms in a variety of situations, such as banks using algorithms to make underwriting decisions, landlords using AI to screen potential tenants, hospitals using AI to help make diagnoses, and employers using AI to analyze video job interviews. 


For each question, more than 50% of respondents were either somewhat uncomfortable or very uncomfortable with the prospect. The highest share of respondents said they were ‘very uncomfortable’ with the use of AI to grade video job interviews. This is not a hypothetical scenario: In recent years, companies such as HireVue have developed AI assessments used by major employers to analyze candidates’ choice of words, facial expressions, perceived “enthusiasm,” and more to generate an “employability” score.

CR also looked at demographic differences among the respondents who said they were ‘very uncomfortable.’ In the hospital scenario, women were more likely than men to be uncomfortable with AI or algorithms being used to make a diagnosis or treatment plan (72% versus 56%). In the job interview, loan application, and tenant screening scenarios, older Americans were more likely than younger Americans to be uncomfortable. The full report includes more demographic insights.

The vast majority of Americans want to know what information about them an AI tool relies on when making job decisions

Building on the job interview scenario, we instructed respondents to imagine that an AI program or algorithm had been used to determine whether they would be interviewed for a job they had applied for, and asked them if they would like to know specifically what information about them the program used to make that decision. 

Most Americans (83%) said yes, they would want to know. While there were differences across age groups, income levels, and political orientation, at least 75% of each demographic or political group said they’d want to know. 


CR has been pushing for state-level legislation that would require companies to disclose information about how an AI tool will assess you before the tool is used, and to provide an explanation after adverse decisions. Colorado’s new law, which goes into effect in 2026, requires that companies disclose the “principal reason or reasons” for adverse decisions. California lawmakers are considering a similar bill, currently backed by CR, that would provide consumers with more detailed information about how they will be assessed by an AI tool, as well as actionable post-decision explanations. 

Americans overwhelmingly want the opportunity to correct any incorrect personal information an AI hiring tool relies on 

AI tools used in high-stakes decisions sometimes make mistakes, or draw on erroneous information. For example, consumers have sued companies that make algorithmic tenant screening tools, such as RentGrow and CoreLogic, after discovering they were rejected because the software confused them with someone else who had a similar name and a record of criminal convictions.

Building again on the job interview scenario, CR asked respondents whether they’d want the opportunity to correct any incorrect data if they found out that an AI program or computer algorithm had relied, in part, on incorrect information when deciding whether they got a job interview. An overwhelming 91% of Americans said yes. 


Right now, Americans have the right to correct inaccurate information in their credit reports under the Fair Credit Reporting Act. It’s an important protection, but it doesn’t cover many scenarios in which a business might rely on incorrect information when making a consequential decision. That’s why CR supports giving consumers the chance to correct any incorrect personal data when companies use AI to make certain high-stakes decisions. Colorado’s first-in-the-nation AI bias law includes a right to correct, as does AB 2930, the AI bias bill in California.

CR published an AI policy guide in March that outlines our key positions and recommendations for policymakers. 
