When Ian Russell set out to understand the role of technology platforms in the death of his 14-year-old daughter Molly, the search for answers became a multi-year quest for data. While Russell had access to Molly’s phone, tech companies dragged their feet for years before making data about her final months available to a UK coroner’s court.
Mr. Russell’s experience is emblematic of a wider challenge in technology policy, from social media and online marketplaces to chatbots. Because tech products behave differently for different people, it’s difficult to generalize the risks or harms that any single person might face. Then, when tech companies obstruct visibility into their products, it’s easy to cast doubt on people’s claims about the harms they experience.
Even worse, when researchers face similar data limitations, they can struggle to reproduce the problems that people report. As a result, people can feel unheard, angry, and even gaslit by the lack of evidence for things that are happening in front of their eyes. Others are harmed by what they don’t see – unaware that they might be getting a higher price or a worse experience for the same service because they don’t know how others are treated by the same tech.
How can we break through this traffic jam in science? This is the challenge that Dr. Nathan Matias took up in an article for Science co-authored with Dr. Amy Orben of the UK Medical Research Council, a leading psychologist who studies technology and youth mental health.
The people who experience digital harms aren’t anti-science. But their timelines are urgent, and when scientists and experts shrug and ask them to wait for more evidence, they alienate people whose problems can’t wait decades for answers. The erosion of science-based regulation in the US has made this problem worse by shuttering many of the early warning systems we once had, however imperfect. As a result, we expect a growing number of people to see science (and, to a similar extent, the expertise of organizations like Consumer Reports (CR)) as an obstruction rather than a benefit.
CR has long observed how industry exacerbates this problem to ward off regulation. From tobacco and flame retardants to auto safety and now social media platforms, cryptocurrency markets, and artificial intelligence snake oil, corporations routinely cite a “lack of conclusive evidence” as a justification for delaying regulation, avoiding liability, or continuing harmful practices.
CR’s work often focuses on surfacing consumer harms early – sometimes before the science has had time to catch up. But this puts us in a bind: when we move too fast, we’re accused of overreacting or lacking rigor. When we wait for the scientific community to reach consensus, the harm compounds, often quietly, invisibly, or unequally across communities.
The Traffic Jam in Safety Testing Personalized Products
The article summarizes how this traffic jam came to be in the study of social media and young people’s mental health. The problem starts when companies scale social technologies to millions or billions of people with limited safety testing or transparent mechanisms to study harms. Scientists get co-opted when leaders of those same companies pretend to prioritize the most rigorous science and dangle token promises, knowing full well that this will delay accountability.
What makes social media, AI, and other tech products different from this classic corporate game is that they are designed to deliver a different (personalized) experience to each user. As a result, methods like lab testing and population surveys tend to miss important harms that occur to smaller groups or people at the margins. And, while Nathan and Amy focus their article on tech policy, many services across many different marketplaces are hard for CR to study because of personalization.
If science is to serve as an aid to consumer protection rather than a tool of delay, Nathan and Amy argue, we need to rethink the process and pace at which science acts as a gatekeeper for public grievances about markets and regulation.
Consumer Reports is a Scientific Institution
Nathan and Amy are describing a situation that we’re deeply familiar with here at CR. As an institution with a mission to build a fairer, safer, more transparent marketplace, it’s our job to keep up with the potential harms of platforms like Amazon, Facebook, and Google. Our own teams have to reckon with the pace at which digital technologies, be they algorithms, platforms, wearables, or connected appliances, are deployed to millions of people in real time and updated daily or even hourly, without pre-market testing or meaningful regulatory oversight. Because millions of people rely on us for reliable advice, it’s sometimes hard for us to talk about high-urgency issues that are still scientifically uncertain.
Of the suggestions in Nathan and Amy’s article, two are especially relevant for CR:
- Accelerating how scientists receive evidence from broad publics and
- Accelerating coordination with a “Bad if true” list informed by public reports
Both of these come down to the institution doing a better job of structured listening.
Accelerating How Scientists Receive Evidence from the Public
First, we can become the place that captures stories of consumer harm across different products and services. Nathan and Amy argue that scientists can more quickly learn what to prioritize by directly and rapidly collecting people’s complaints – infrastructure that we’ve already started to build with our robust survey team and member engagement projects like Community Reports, even though mainstream science has been slow to embrace this approach. With digital products that behave differently for different people, this kind of public engagement has become essential. Many damaging tech policy issues – including AI systems that are biased in their pricing and performance – went unnoticed for years because the tech industry includes few people from the affected groups.
Second, we can turn people’s experiences into rigorous science while minimizing delays in efforts to understand and fix the serious harms of digital technologies. Nathan and Amy argue that scientists, civil society, governments, and industry need to find ways to coordinate and prioritize around possible harms before those harms have been fully confirmed. Part of the answer, they think, comes from the regulation of chemicals, where a system promoted by Greenpeace and the chemicals industry alike offers a model: a priority list of possible harms.
A “Bad if True” List of Possible Digital Harms
With a “Bad if true” list of digital harms, early evidence from severe incidents could be logged and archived in a central registry. Instead of confirming harms, the list would alert scientists, organizers, and companies about possible problems that might need solutions. Scientists could use the list to prioritize what to study and to justify their priorities when applying for funding.
Harmful designs would move up, down, or off the “Bad if true” list on the basis of new evidence and assessments of the extent of possible harm. But no one would be prevented from working on something lower on the list if they care about it. With this public coordination function, investors, entrepreneurs, and regulators would get advance notice and regular updates about the state of a possible harm and could prepare accordingly.
If managed well, the “Bad if true” list would treat aggrieved publics with respect and dignity, engaging them as consequential partners as they seek help and validation from scientific institutions. At the same time, the list would allow scientists to keep pursuing appropriately high standards of evidence for questions that require conclusive answers. By pooling evidence, the list would also combat the risk of people rushing to conclusions after a solitary study in cases where multiple studies are necessary. Just as importantly, such a system would incentivize entrepreneurs to use the list to develop design solutions to possible harms long before the evidence reaches a threshold that requires explicit regulation.
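To make the coordination mechanics above concrete, here is a minimal, hypothetical sketch in Python. It is not anything Nathan and Amy or CR have built; the class names, fields, and priority formula are our own assumptions about how a “Bad if true” registry might log public reports, track severity and evidence, and re-rank possible harms as new evidence arrives.

```python
from dataclasses import dataclass, field


@dataclass
class HarmEntry:
    """One possible digital harm logged on a hypothetical 'Bad if true' list."""
    design: str                                         # e.g. "personalized pricing"
    reports: list[str] = field(default_factory=list)    # incident summaries from the public
    severity: float = 0.0                                # assessed extent of possible harm (0-1)
    evidence_strength: float = 0.0                       # how well-confirmed the harm is so far (0-1)

    def priority(self) -> float:
        # Assumed ranking rule: severe-but-unconfirmed harms stay near the top
        # for study, while well-confirmed harms graduate toward regulation.
        return self.severity * (1.0 - self.evidence_strength)


class BadIfTrueRegistry:
    """Central log of possible harms, re-ranked as new evidence arrives."""

    def __init__(self) -> None:
        self.entries: dict[str, HarmEntry] = {}

    def log_report(self, design: str, summary: str, severity: float) -> None:
        # Archive a new public report and keep the highest severity seen so far.
        entry = self.entries.setdefault(design, HarmEntry(design))
        entry.reports.append(summary)
        entry.severity = max(entry.severity, severity)

    def update_evidence(self, design: str, evidence_strength: float) -> None:
        # Record the current strength of scientific evidence for this harm.
        self.entries[design].evidence_strength = evidence_strength

    def ranked(self) -> list[HarmEntry]:
        # Highest-priority possible harms first, for scientists, funders, and regulators.
        return sorted(self.entries.values(), key=lambda e: e.priority(), reverse=True)


# Example: two public reports about hypothetical designs, then a re-ranking.
registry = BadIfTrueRegistry()
registry.log_report("personalized pricing", "charged more than a neighbor for the same plan", severity=0.7)
registry.log_report("autoplay for minors", "child reports compulsive late-night viewing", severity=0.9)
registry.update_evidence("autoplay for minors", evidence_strength=0.4)
for entry in registry.ranked():
    print(f"{entry.design}: priority={entry.priority():.2f}, reports={len(entry.reports)}")
```

The one design choice worth noting in this sketch is that priority falls as evidence accumulates: the list’s job is to direct attention to possible harms that still need study, not to re-litigate harms that are already well established.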
Building Trust in Science with Science that Helps People
We’re living in a moment where faith in institutions is waning. We’ve described one of the ways that science is leaving people feeling (perhaps rightfully) that it is out of touch, but you don’t have to look further than the headlines to see the doubt in public health, in public safety, and in public policy. We think that one vital way to regain faith in those institutions is not to browbeat people with facts, but to show them the value of science through experiential learning and participation.
Participatory science is a solution both to the problems of broadness and slowness described by Nathan and Amy, and to the problem of a lack of trust in the work that we do at CR. We are leaving a cultural age in which trusted reporting set the agenda, and entering (perhaps have already entered) one in which people’s conception of truth is shaped by what they or their communities have experienced for themselves.
We have an obligation to help tackle these problems by doing what we can to loosen the traffic jam around studying digital technology – and we’re positioned to do so effectively. For nearly 90 years, CR scientists have assuaged the fear and uncertainty of consumers with level-headed guidance and useful tips and tricks. We now need to lean into that vital role of cutting through the grifters and snake oil salesmen, but with a new structural process where we listen to and validate fear and frustration, then engage people in finding solutions that take some of the heat out of the room and let science work.
The ideal Community Reports project is one where we understand a problem statement with help from the community, gather data to confirm or disprove a hypothesis, and then work with our participants to make sense of the data we’ve collected and to advocate for change. That means we’re collecting inputs for a potential “Bad if true” list, while also validating the consumer harms on the list and designing studies to understand how bad individual hot spots actually are.
And we’re creating alternative ways of sourcing data that are resilient and lasting, and that bring people into the process of the work that CR and other consumer advocates do. Have a problem our team of community scientists should investigate? Interested in joining a community of scientists building a safer, stronger marketplace? Email us at community@cr.consumer.org!