CR’s Section 230 2020 Legislative Round-Up

Along with all else it proffered, the year 2020 witnessed an unprecedented number of bills introduced in Congress that would alter or amend Section 230 of the Communications Decency Act. Passed in 1996, the law provides legal shields that remove disincentives which could otherwise discourage consumers from sharing content and information, internet services from hosting consumers’ content in the first place, or those same services from moderating content at all.

In that regard, Section 230 has been essential to enabling speech, commerce, innovation, and some level of content moderation for consumers online. However, those same shields also remove the disincentive of possible lawsuits — a potential avenue of accountability — for the wide array of harms that internet services scale, worsen, and profit from. In this regard, by removing the threat of enormous litigation costs (let alone any potential damages or state criminal liability), Section 230 gives internet services more leeway to ignore the compounded scale of harm that their products and policies enable when they design systems that prioritize engagement and profit over consumer safety, civic well-being, and civil rights.

As with any policy lever, modifying Section 230 should not be viewed as a silver bullet for all that ails the internet, but as a useful tool to consider — deserving of serious engagement and analysis — in the legislative repair shop’s broad toolbox.

A Quick Primer on Section 230’s Key Subsections


Section 230 contains a few key provisions that lawmakers have focused their attention and reform efforts on: subsections (c)(1), (c)(2), (e), and (f).

A quick review of these subsections proves helpful to understanding the legislation that attempts to amend the current law.

  • Subsection (c)(1) immunity shields “interactive computer services” and the users who use them from liability for content that someone else created and published by preventing these entities from being treated as a “publisher or speaker” of the third party’s material. While its focus is on the hosting and dissemination of content, courts have also applied (c)(1) immunity to prevent suits based on content moderation decisions.
  • Subsection (c)(2) immunity ensures that neither voluntarily choosing to moderate content in good faith nor providing technical means for other content providers to moderate content will expose an interactive computer service or user to civil liability, further enabling moderated and curated spaces online. Specifically, this subsection permits users and interactive computer services to moderate any material they consider to be: “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
  • Subsection (e) explicitly enumerates exceptions that subsection immunities do not cover: federal criminal law, intellectual property law, state law (consistent with the statute), communications privacy law, and sex trafficking law.
  • Subsection (f) defines the law’s key terms — the most relevant definitions addressed by the various reform proposals include: “interactive computer service” and “information content provider.” Some proposals alter what these terms mean, while others add further definitions to subsection (f).


A Review of Section 230 Reform Legislation Introduced in 2020


Members of Congress introduced twenty-three bills modifying Section 230 in 2020, and the conversations begun then are poised to evolve well into the 117th Congress.

Broadly, these proposals fall into three main categories: those eliminating or conditioning (c)(1) immunities for certain types of content; those eliminating permissible reasons to moderate, curate, and fact-check content by way of eliminating a combination of (c)(1) and (c)(2) immunities; and those either wholly upending Section 230 or using it wholesale as an incentive to modify various business models.

The first category of proposals amends Section 230 either to condition or eliminate subsets of (c)(1) immunity — immunity for hosting and disseminating third-party content. This category, broadly, contains proposals designed to incentivize more vigilant content curation and moderation online. Bills conditioning immunity would require compliance with new regulatory requirements, such as transparency reports, implementation of notice-and-takedown systems, appeals processes, or law enforcement reporting mechanisms. Proposals that would eliminate additional subsets of (c)(1) immunity would do so based either on content — such as child sexual abuse material (CSAM) and content violating civil rights or terrorism laws — or on the way that content is delivered to users — algorithmically-recommended material, for instance.

The second category goes in the opposite direction. These bills would disincentivize moderation, either by removing Section 230 immunity for many subsets of content moderation, or by conditioning (c)(1) immunity on “neutral” enforcement. Both alterations would open services up to litigation for taking a broad range of actions to remove or restrict harmful content and users, thereby deterring moderation.

The third category consists of those bills designed to repeal Section 230 in full, or to condition all Section 230 applicability on structural changes in online business models (most often those concerning advertising).

Here, CR charts in full those bills introduced in 2020 that amend or repeal Section 230. We offer an abridged chart below, and evaluate six of these proposals and what they contribute to the legislative ecosystem of Section 230 reform:

Six Proposals for Reform


S. 3398/H.R. 8454: Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act

Sponsored and Introduced by Sen. Graham (R-SC), Sen. Feinstein (D-CA), Sen. Hawley (R-MO), & Sen. Blumenthal (D-CT)

The EARN IT Act removes (c)(1) immunity for federal and state civil actions and state criminal prosecution concerning CSAM, and establishes a nineteen-member Attorney General-led commission to develop best practices to prevent, reduce, and respond to CSAM online. Responding to civil society concerns about undermining encryption, Senator Leahy amended the bill in July before it passed out of the Judiciary Committee to clarify that an interactive computer service utilizing end-to-end encryption or failing to take an action that would otherwise undermine encryption would not give rise to the liability in question.

CR’s Analysis

While attempting to address a serious, grave, and urgent problem, EARN IT is misguided, and may ultimately cause more harm. Alternative starting points for addressing the harms that EARN IT targets merit investigation, however, and include Senator Wyden’s Invest in Child Safety Act, alongside careful consideration of other potential reforms to Section 230 — some of which are considered in other legislation discussed below.

Despite the Leahy amendment’s attempt to preserve encryption, the loss of liability protections could incentivize companies to implement technologies — such as client-side scanning — that would broadly compromise the effectiveness of encryption. Unencumbered encryption secures financial systems, healthcare data, and emergency communications for consumers everywhere, so undermining it could destabilize the safety and security of incredibly sensitive consumer data spanning critical industries.

The design of the commission that EARN IT proposes is weighted heavily against consumer protection, privacy, civil liberties, and civil rights expertise: it requires only that two of the nineteen members have expertise in any one of those four categories. The weight of the commission’s recommendations is unclear: while the bill’s first iteration permitted services to “earn” back (c)(1) immunity through certifying best-practice usage, the present version does not. The commission’s weighted structure, however, still runs the risk of unduly influencing future policymaking with one-sided recommendations.

This is especially concerning given that the first adjustment to Section 230, the Stop Enabling Sex Traffickers Act and Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA-SESTA), passed in 2018, also sought to carve out one particular type of harm — sex trafficking — from Section 230’s immunities with the aim of reducing that harm. In practice, it seems to have worsened the harm it purportedly sought to solve and disproportionately impacted marginalized communities. EARN IT would repeat the structural change utilized in FOSTA-SESTA — introducing a new categorical exception under 230(e) — without any clear adjustments to account for or redress the harms such a change can cause.

S. 4066: Platform Accountability and Consumer Transparency (PACT) Act

Sponsored and Introduced by Sen. Schatz (D-HI) & Sen. Thune (R-SD)

The PACT Act first mandates publishing an easily accessible acceptable use policy, setting up the basis for its dual notice-and-takedown regime covering illegal content and potentially-policy-violating content (PPVC). For illegal content and activity, it removes (c)(1) immunity 24 hours after notice of the offending content is provided to the interactive computer service. It simultaneously empowers the Federal Trade Commission (FTC) to treat services’ failure to (1) review notices of PPVC within 14 days, or (2) inform affected parties and review appeals of such takedowns, as “unfair or deceptive acts or practices” (UDAP) violations, illegal under Section 5 of the FTC Act.

It further mandates that live company representatives be regularly accessible via toll-free phone numbers to receive such notices, along with the publication of quarterly transparency reports. Failures to make either available are also FTC-enforceable as UDAP violations. Small businesses — defined as those with fewer than 1,000,000 monthly active users/visitors and revenues under $25,000,000 in the 24 months prior — are exempted from both, and given leeway on the turnaround times. The bill also excludes “internet infrastructure services” — web hosting, domain registration, caching, cloud management, content delivery networks, etc. — from the proposed notice-and-takedown regime entirely. Infrastructure and small businesses aside, however, the bill otherwise applies to all interactive computer services originally considered in Section 230.

The bill expands Subsection (e) exceptions to include federal civil statutes, executive agency regulations, and federal legislative and judicial establishments, and empowers State Attorneys General to enforce federal civil laws.

Finally, looking to longer-term horizons and further research, it commissions a Government Accountability Office (GAO) report for Congress on the viability of a whistleblower protection program for interactive computer service workers to raise consumer protection concerns, and directs the National Institute of Standards and Technology to develop a voluntary framework of best practices for managing risk in good faith moderation.

CR’s Analysis

Publishing acceptable use and enforcement policies, informing consumers of content removals and successful appeals, and producing regular, thorough transparency reports are all crucial to a transparent consumer experience online. CR is glad to see the PACT Act prioritize transparency so thoroughly, and on this front would primarily encourage lawmakers to double down even further. Content moderation is rapidly moving beyond a “take-down/leave-up binary” — it may make sense for PACT to acknowledge that a company may take alternative actions (e.g., applying labels or fact-checks, preventing sharing or algorithmic amplification, and otherwise reducing distribution) in response to offending content, and to ensure transparency and disclosure when services take actions that affect the distribution of online material, even when they don’t result in outright removal.

The bill’s willingness to explore protecting industry consumer-protection whistleblowers, if successful and executed fairly, could prove a novel accountability measure where other transparency mechanisms fail. Regular scandals, like Facebook’s attempt to shut down NYU’s ad transparency research on the basis of privacy just before the 2020 election and Google’s firing of artificial intelligence ethicist Timnit Gebru, have made clear that independent sources of transparency and accountability are sorely needed to understand and bring to light harms that interactive computer services knowingly accelerate for consumers, competitors, and society. As such, Congress should provide mechanisms and reform existing law to enable diverse, independent researchers and consumer watchdogs, in line with appropriate privacy practices, to access the data needed to research, understand, and report on the effects of platform data collection and usage, algorithmic recommendations, policies, and enforcement decisions. Developing such mechanisms would help expose and prevent further bad faith efforts to evade accountability for the opaque but far-reaching consequences of online business models and content moderation.

The bill is not without its concerns, however. Because notice-and-takedown regimes expose companies to liability for content once they are made aware of it, even the best-resourced companies will be incentivized to take down potentially controversial content rather than risk litigation by leaving it up. The brunt of a platform’s moderation resources would likely be devoted to moderating with that very litigation risk in mind, prioritizing takedown requests from the powerful (those most likely and able to bring expensive lawsuits) — disproportionately affecting those without such resources: everyday consumers, whistleblowers, activists, and members of already-marginalized communities. These considerations must factor into the discussion, and if a new notice-and-takedown regime is indeed the best way to pursue reform of Section 230, Congress must provide strong counter-incentives against over-enforcement (mandatory appeals, which the PACT Act enables, as a bare minimum), and provide mechanisms for diverse, independent researchers and consumer watchdogs to study, audit, and publish on the realities of platform enforcement decisions in aggregate.

The resource-based distinction language, “within a reasonable period of time based on the size and capacity of the provider,” in the small business exemption is important, given the thorough framework and operational cost that the PACT Act would introduce. Indeed, it would be prudent to explain — and ultimately, perhaps raise — the given small business threshold, which, at first glance, could fail to cover a variety of online entities far too small to afford the required operational costs. Raising it could preserve consumer choice in smaller online services and forums that may not have such capacity, but whose existence in various niches of the online marketplace is a boon for consumers. A 14-day turnaround may prove too slow to effectively curb policy-rooted problems — viral misinformation campaigns, for instance — while perhaps the largest platforms should be held to an even higher standard. But for smaller platforms, even the 14-day window could be cost-prohibitive in the face of a well-coordinated spam attack, and a 24-hour turnaround could prove an enormous, perhaps existential, burden. A good faith standard should perhaps apply to all but the most dominant platforms. And even though a 24-hour turnaround time is likely feasible for the most dominant platforms, it may prove inadvisable absent other good faith catch-alls, as shorter turnaround times can be powerful incentives toward over-moderation, which can disproportionately impact marginalized communities.

Finally, the creation of a notice-and-takedown regime demonstrates an understanding of the complexities and importance of content moderation for the modern internet, and considering employee and vendor whistleblower protections is a good starting point. While designing these solutions, however, Congress should also work to prioritize, protect, and learn from the commercial content moderation workers who do the psychologically dangerous and taxing work of spending their days engaging with and making complex decisions about the most vile, disturbing content online in order to keep consumers safe, and should empower those moderators with the autonomy to continue, expand on, and improve that moderation ecosystem.

While the ideal end result for Section 230 may not mirror the present-day PACT Act, it certainly may bear a resemblance, and the bill provides a thoughtful basis for productive further discussion.

H.R. 8636: Protecting Americans from Dangerous Algorithms Act

Sponsored and Introduced by Rep. Tom Malinowski (D-NJ-7) & Rep. Anna Eshoo (D-CA-18)

H.R. 8636, aimed at reducing the algorithmic amplification of extremist content, creates an exception to (c)(1) immunity. This legislation opens large platforms (more than 50,000,000 users or unique monthly visitors) up to private actions for claims of (1) conspiracy to interfere with civil rights, or, having knowledge, failure to prevent the same, and (2) international terrorism — so long as those platforms algorithmically ranked, ordered, promoted, recommended, or amplified the displayed information directly relevant to the claim. But services would maintain their (c)(1) immunity if the algorithms displaying the information in question are “obvious, understandable, and transparent to the user” (chronological, alphabetical, ranked by rating, etc.) or are used to furnish information for which a user specifically searched.

CR’s Analysis

The Protecting Americans from Dangerous Algorithms Act is a solid starting point for the discussion of Section 230 as it relates to algorithmic amplification, recommended content, and input-transparent displays for consumers. Narrowed specifically to algorithmic promotion of material concerning international terrorism or conspiracies to interfere with civil rights, the bill would incentivize large interactive computer services to act in one of two ways. First, it removes (c)(1) immunity for algorithmically displayed or recommended material that promotes international terrorism or conspiracies to interfere with civil rights — and extends liability to “every person who, having knowledge” and the power to act, neglects to prevent or help prevent the latter. This, on its own, would motivate large platforms toward algorithmic redesign and more aggressive moderation of potentially related content. Alternatively, it allows large platforms to retain the original (c)(1) immunity if they opt to deliver or display information in an obvious, transparent manner — chronologically, for instance — prescriptively discouraging all algorithmic recommendations. While this approach balances the systemic problems of online user-generated content against legitimate concerns about over-moderation, it remains a very difficult call whether this framework — broadly disincentivizing all algorithmic recommendations to mitigate the clear harms that come from unaccountable recommendations of extremist content — ultimately strikes the ideal balance for Section 230 reform. It is also worth consideration, from a broader group of stakeholders, whether the narrow claims available are the only claims that should be included, and whether they are best suited to their intended purpose.

As a practical matter, even with the most careful curation of the algorithms used online, if the bill were to stand as-is there would likely need to be, at minimum, an adjustment period for newly competitive apps, services, and platforms that cross the user threshold before the effects of algorithms functioning as expected at one size can be fully understood (consider the potential for near-instantaneous virality: Pokémon Go reached the 50-million download mark only 19 days after launching), to ensure a vibrant consumer ecosystem of services unafraid to responsibly innovate. More research and data would be useful to fully understand how the changes ultimately implemented would alter the broader ecosystem of harm, speech, and extremism online, as would further examination of how such a bill’s design could best account for potential First Amendment challenges. All told, while the bill’s present form requires further discussion, it drives a critical conversation very much worth having.

S. 4534: Online Freedom and Viewpoint Diversity Act

Sponsored and Introduced by Sen. Graham (R-SC), Sen. Wicker (R-MS), & Sen. Blackburn (R-TN)

The full text of this bill was also incorporated into the Online Content Policy Modernization Act, which has since been withdrawn.

The Online Freedom and Viewpoint Diversity Act would eliminate (c)(1) immunities for content moderation decisions, driving service providers to rely exclusively on (c)(2) protections for moderation decisions before having to litigate them. At present, the (c)(2) immunities shielding users and services from civil liability for content moderation include a crucial catch-all as a basis: considering material “otherwise objectionable.” After limiting moderation immunity to (c)(2), the bill also significantly narrows (c)(2)’s scope, striking “otherwise objectionable” and replacing it with “promoting self-harm, promoting terrorism, or unlawful.” It further constrains the realm of protected moderation, shifting from what a service or user “considers to be [lewd/obscene/harassing/etc.]” to what it “has an objectively reasonable belief is [lewd/obscene/harassing/etc.].”

Finally, it alters the definition of what constitutes an information content provider, but in a way that only clarifies what was already true: services are responsible and liable for content they create — fact-checks, for instance — though the altered definition may result in further lawsuits on the matter. The threat of new litigation could chill services’ attempts to provide important context around content on their platforms.

This bill is similar in substance to a wide swath of other proposals, documented in Category 2 of this chart, that would specifically seek to limit (c)(2) immunities.

CR’s Analysis

Taken together, the bill would pose a significant threat to consumer safety: these changes would significantly impair platforms’ ability to moderate for medical and civic misinformation, spam, phishing, dangerous products, and a variety of other harmful online content at a time when consumers are relying on online services for safe, reliable information and commerce more than ever before. Furthermore, losing the “otherwise objectionable” catch-all could even open consumers themselves, as user moderators, up to a host of frivolous lawsuits for things as benign as trying to keep a themed forum on-topic. And while dominant platforms could perhaps afford such litigation, smaller competitors and individual users likely could not. This could chill attempts to curate content across the internet, and even further consolidate the power of already-dominant platforms, depriving consumers of their choice across a variety of online spaces. In short, the bill would be a one-two punch to truth, safety, and fairness in the online marketplace and must not advance.

S. 4337: Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act

Sponsored and Introduced by Sen. Hawley (R-MO)

The BAD ADS Act, Senator Hawley’s most recent bill addressing Section 230, takes aim at the business models of the largest, most dominant online platforms, removing Section 230 immunities for interactive computer services with greater than 30 million US users, 300 million worldwide users, or $1.5 billion in annual revenue. The bill invalidates both (c)(1) and (c)(2) immunities for a 30-day period beginning any time a provider displays “behavioral advertising” to a user or provides data about a user to another entity with the knowledge it will be used for behavioral advertising.

CR’s Analysis

Addressing the present business model of data collection and behavioral advertising, and its effects on consumers and society, should be a priority for consumer advocates everywhere. The BAD ADS Act targets the largest platforms. Yet while these entities certainly deserve significant scrutiny, and resource-based distinctions would be critical to many potential Section 230 reforms, in this context the bill could inadvertently create a mass market for small-to-medium-sized firms still permitted to carry out the invasive data collection, dark pattern design, and selling of data that underlie behavioral advertising. This bill is an attempt to use Section 230 immunities as an incentive to change an admittedly concerning business model, but the two issues are likely best addressed separately — and indeed, tying them together could further complicate and frustrate attempts to reform Section 230, or the incentive itself could be rendered obsolete by further Section 230 reform. A stronger argument to consider instead, as far as advertising and Section 230 are concerned, would be removing Section 230 immunity for paid ads, paired with comprehensive federal privacy reform to address concerns around data collection and usage.

Senator Hawley also introduced two prior bills concerning Section 230: S. 3983 and S. 1914, both of which would require neutrality in content moderation in order to retain Section 230 immunities. S. 3983 provides that large online platforms would lose (c)(1) immunity if they fail to maintain written terms of service or if they ‘intentionally selectively enforce’ those same terms. While companies should have some obligation to adhere to their own written policies, the standard of “selective enforcement” is unduly vague and could chill legitimate moderation of dangerous medical misinformation and extremist conspiracy theories. S. 1914 would require the FTC to certify platform moderation practices as politically unbiased — including a requirement that content moderation not disproportionately affect a given candidate or political party — in order to maintain platform immunity. Empowering a government body to certify whether or not private content moderation decisions are unbiased would raise serious First Amendment concerns, and ultimately would similarly serve to chill moderation of dangerous and harmful misinformation.

S. 5020 — A bill to repeal section 230 of the Communications Act of 1934.

Sponsored and Introduced by Sen. Graham (R-SC)

Senator Graham’s proposal, in conjunction with calls from other lawmakers and executives to repeal Section 230 entirely — even rumors of including such a repeal in the must-pass NDAA — rounded out the end of 2020. Senator Graham’s bill would repeal Section 230, effective January 1, 2023, and was intended to incentivize timely action on Section 230 reform in the 117th Congress. The implication of that incentive is that Congress does not actually wish to repeal Section 230 in full. Let’s explore why.

CR’s Analysis

Section 230 discourages frivolous lawsuits against both interactive computer services and their users in two broad categories: hosting, displaying, and transmitting third-party content, and moderating that content. A significant portion of the platform and user actions that Section 230 covers is already protected by the First Amendment, with Section 230 acting as a safeguard against expensive litigation. Without it, internet users could be sued for forwarding emails, retweeting Tweets, or moderating online forums. Total repeal would open internet services up to the possibility of expensive lawsuits for both hosting and moderating content — even more so for especially controversial content, or in response to especially powerful, well-resourced, litigious parties affronted by content or moderation choices. Such costs might entirely shutter smaller platforms unable to afford potential litigation over user-generated content, reducing consumer choice on an already highly centralized internet. And the shift would incentivize platforms with the capacity to do so to curate and curtail a great deal of content — not all of it harmful, and plenty of it from marginalized communities — while ultimately giving monied and powerful interests with the capacity to threaten expensive litigation more power to determine which content stays online. While certain modifications to Section 230 and related legal regimes, as discussed in part above, merit serious consideration to incentivize more responsible platform design and accountability, total repeal is not in consumers’ or society’s best interests.

What’s Next


Above, we examined a variety of last year’s legislative proposals concerning Section 230. Looking ahead to 2021, Consumer Reports is encouraged to see thoughtful proposals already emerging, both from Representative Clarke and from Senators Warner, Hirono, & Klobuchar. We look forward to engaging further with these and future proposals as we work with Congress and our colleague organizations to strive toward a fairer, more just internet ecosystem for all consumers.
