The Future of Section 230 | What Does It Mean For Consumers?

Section 230 of the Communications Decency Act of 1996[1] was enacted to give internet platforms legal protection for the content that users post on their sites, while allowing those sites to serve as forums for free speech. By specifying that internet platforms would not be treated as publishers of third-party content, the Act ensured that they would not be held liable for user-posted content. In recent years, however, critics have called this immunity into question as a policy matter. Some say the platforms are failing to appropriately remove harmful content such as misinformation and hate speech, while others contend that the platforms remove constitutionally protected political speech in a biased fashion. In consumer protection cases, enforcers have been examining whether the platforms’ recommendation algorithms violate consumers’ rights or reinforce biases that unfairly disadvantage consumers.[2]

One argument advanced by those who want to limit platform immunity is that these algorithms are a form of content creation and should therefore fall outside the scope of Section 230 immunity. Under this theory, social media companies could be held liable for harmful consequences related to content otherwise created by a third party. Recently, the Supreme Court heard two cases involving allegations that social media giants Twitter and Google aided and abetted terrorists who posted content to their platforms. Many expected the Court to take the opportunity to address whether social media companies can be held liable for their targeted recommendation algorithms. Instead, the Court resolved the cases on other grounds, leaving the issue in limbo for now. The question is likely to return to the Court soon, though, and the answer could have major implications for consumers worldwide.

In Twitter, Inc. v. Taamneh,[3] the Court held that Twitter was not liable for deaths that occurred during a 2017 ISIS terrorist attack on a nightclub in Istanbul, Turkey. In a unanimous opinion written by Justice Thomas, the Court concluded that the plaintiffs—the family of a victim of the attack—had failed to state a sufficient claim. The plaintiffs had sought to hold Twitter liable for aiding and abetting the terrorist attack in violation of the Antiterrorism Act.[4] Under the Antiterrorism Act, as amended by the Justice Against Sponsors of Terrorism Act, U.S. nationals who are injured in an act of terrorism can sue anyone who “aids and abets” international terrorism “by knowingly providing substantial assistance.”[5] In Taamneh, the plaintiffs alleged that Twitter had provided substantial assistance to the 2017 terrorist attack by knowingly allowing ISIS and its supporters to use Twitter’s recommendation algorithms as tools for recruiting and fundraising.

The Supreme Court rejected this argument on the grounds that the link between Twitter and the 2017 ISIS attack was too attenuated to justify holding Twitter liable. There was no evidence that ISIS used Twitter or other social-media platforms to plan the specific attack in question, and mere knowledge of a terrorist group’s use of Twitter to promote terroristic activities generally was deemed insufficient to qualify as aiding and abetting. As Justice Thomas explained, “defendants’ relationship with ISIS and its supporters appears to have been the same as their relationship with their billion-plus other users: arm’s length, passive, and largely indifferent.”

To establish aiding and abetting liability under the Antiterrorism Act, there must be a “conscious, voluntary, and culpable participation in another’s wrongdoing.”[6] According to the Court, there was no reason to think that Twitter consciously intended to help or encourage the terrorist attack that resulted in the death of the plaintiffs’ son. Twitter’s alleged failure to prevent ISIS from taking advantage of its recommendation algorithms was inadequate to establish such culpability. As the Court stated, “[t]he fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting.”[7]

Based on its decision in Taamneh, the Court also issued a summary per curiam decision in a companion case, Gonzalez v. Google.[8] In that case, the plaintiffs—parents of a victim of a 2015 ISIS terrorist attack in Paris—alleged that online platforms, including YouTube (which is owned by Google), should be held liable for recommending third-party content to users through their algorithms. More specifically, they claimed that Google knowingly provided support to ISIS by permitting the terrorist organization to post hundreds of videos on YouTube, as well as by recommending similar content to viewers through its algorithms.

The plaintiffs claimed that, because the platform’s algorithm directed specific content to specific users, the platform was essentially a content creator rather than a mere distributor, and that Section 230’s immunity therefore did not apply.[9] The Ninth Circuit rejected that argument and held that the claim was barred by Section 230. The Supreme Court had been expected to address this Section 230 issue in Gonzalez, having granted certiorari to consider the question as framed in the cert. petition:

“Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?”

Despite that expectation, the Court declined to address the issue. Instead, the Court remanded the case to the Ninth Circuit in light of the Taamneh decision. In its brief unsigned opinion, the Court wrote, “We decline to address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief.” Because the facts did not support a claim for liability under the Antiterrorism Act, there was no need for the Court to determine whether such a claim would be barred by Section 230.

For now, Section 230 remains intact, and social media companies will continue to benefit from broad protections against liability for content posted on their sites.[10] Had the Court decided to address Section 230 as many had expected, the way that social media platforms operate might have been significantly affected, potentially opening the door to a flood of litigation. After all, algorithms are a key part of the infrastructure of social media platforms.[11] On the other hand, according to an amicus brief filed by several state attorneys general, “Reining in Section 230 immunity thus would not open the floodgates; it would simply allow cases to proceed ‘where a plaintiff’s injury is causally connected to third-party-created content, [and] the defendant’s alleged wrongdoing is not based on a failure to moderate that content.’”[12]

Government Officials Seek Reinterpretation and Reform of Section 230

Government officials at both the federal and state level have taken aim at tech platform immunity through court filings that attack the broad interpretation which courts have generally afforded Section 230, as well as through statutory proposals intended to limit Section 230’s scope.

In an amicus brief for Gonzalez, the United States Department of Justice argued that the use of targeted recommendation algorithms shifts social media platforms from mere publishers of online content to information content providers.[13] As content providers, the companies would not be protected by Section 230 immunity, and thus could be subject to liability for harmful consequences of the recommendation algorithms.[14] In other words, according to the Justice Department, “A claim premised on YouTube’s use of its recommendation algorithms thus falls outside of Section 230(c)(1) because it seeks to hold YouTube liable for its own conduct and its own communications . . . .”[15]

Led by Tennessee Attorney General Jonathan Skrmetti, a bipartisan coalition of more than two dozen state attorneys general also filed an amicus brief in the Gonzalez case, asserting that Section 230’s far-reaching scope of immunity, as interpreted by the Ninth Circuit and other courts, prevents states from allocating “losses for internet-related wrongs.”[16] The attorneys general argued that Section 230 should be interpreted narrowly in order to “preserve the States’ broad authority” and to prevent the “widespread preemption of state laws and the concomitant erosion of traditional state authority to allocate loss among private parties.”[17] Section 230’s broad scope, they argued, is antithetical to fundamental principles of federalism. The brief contends that the broad interpretation embraced by many courts not only prevents consumers from holding internet companies liable for harms, but also displaces state consumer protection laws.[18] According to the attorneys general, online platforms should be subject to liability for harms that result from their recommendation algorithms.

Many government officials, including Senate Judiciary Committee Chairman Dick Durbin, have called for Congressional reforms to Section 230 as well. In a statement, Senator Durbin said, “Congress must step in, reform Section 230, and remove platforms’ blanket immunity from liability.”[19] In a similar vein, U.S. Representative Paul A. Gosar has stated that “the broad and undue immunity for content and user removal granted by Section 230 must be reined in by Congress.”[20] From the attorney general ranks, Virginia Attorney General Jason Miyares commented, “In order for our technology laws to be effective and ensure consumers are protected, these laws must modernize as technology does to ensure that social media companies claiming Section 230 immunity are not exploiting users.”[21] Similarly, in a statement, California Attorney General Rob Bonta said:

“Under the lower courts’ current, overly broad interpretation of Section 230, states are severely hampered from holding social media companies accountable for harms facilitated or directly caused by their platforms. This was certainly not Congress’s intent when it carved out a narrow exception in the Communications Decency Act. Companies like Google are not just publishing material from users, they are exploiting it to make a profit. I urge the Supreme Court to adopt a more reasonable view of ‘publisher immunity’ under the Communications Decency Act that is in line with Congress’s intent.”[22]

Does Section 230 Harm Consumers?

Censorship and Hate Speech

Critics claim that Section 230 immunity provides social media companies with too much power over consumers. For example, social media companies have broad discretion in controlling what information can be posted and what content is removed,[23] which is characterized by some critics as censorship. Some states have enacted statutes to address this concern.[24] According to former District of Columbia Attorney General Karl Racine, in a 2021 brief supporting an effort to hold Facebook accountable for failing to remove hate speech, “no company is entitled to mislead consumers, and there is nothing in local or federal law that shields companies like Facebook from the consequences of their own deception.”[25] In the case for which the brief was filed, a consumer advocacy group alleged that Facebook had violated the D.C. Consumer Protection Procedures Act by publicly misrepresenting its efforts to remove offensive anti-Muslim content online.[26]

To maximize profit, social media companies seek to maximize user engagement, keeping users engrossed for as long as possible. Recommendation algorithms feed users content based on what is likely to captivate their attention, even if that content includes harmful speech or misinformation. According to Guillaume Chaslot, a former software engineer at Microsoft and Google who holds a Ph.D. in artificial intelligence, recommendation algorithms designed to maximize engagement create “filter bubbles.”[27] Once a user watches a certain video, the algorithm will continue to recommend the same kind of video over and over. For example, Chaslot observed these filter bubbles on YouTube during the 2019 demonstrations in Cairo, Egypt. He described the phenomenon, saying, “You would see a video from the side of the protesters, and then it will recommend another video from the side of protesters. So you would only see the side of protesters. If you start with the side of the police, you would only see the side of the police. Then you had only one side of reality. You couldn’t see both sides.”
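To make Chaslot’s point concrete, the following minimal sketch (written in Python, with a made-up video catalog and a toy engagement score; it is purely illustrative and does not describe any actual platform’s system) shows how a recommender that greedily maximizes predicted engagement collapses a feed into a single topic:

```python
# Illustrative sketch only: a toy engagement-maximizing recommender.
# The catalog, topics, and engagement scores are hypothetical and do not
# describe any real platform's algorithm.

CATALOG = (
    [{"id": i, "topic": "protesters"} for i in range(1, 6)]
    + [{"id": i, "topic": "police"} for i in range(6, 11)]
)

def predicted_engagement(video, last_watched):
    """Toy proxy: assume users keep watching more of what they just watched."""
    return 1.0 if video["topic"] == last_watched["topic"] else 0.1

def recommend_next(last_watched, watched_ids):
    """Greedily pick the unwatched video with the highest predicted engagement."""
    candidates = [v for v in CATALOG if v["id"] not in watched_ids]
    return max(candidates, key=lambda v: predicted_engagement(v, last_watched))

# Start from a single "protesters" video and follow the recommendations.
feed = [CATALOG[0]]
for _ in range(4):
    feed.append(recommend_next(feed[-1], {v["id"] for v in feed}))

print([v["topic"] for v in feed])
# ['protesters', 'protesters', 'protesters', 'protesters', 'protesters']
# Starting from a "police" video instead yields an all-"police" feed:
# the greedy engagement objective never surfaces the other side.
```

Real recommendation systems are, of course, vastly more complex, but this greedy engagement objective is the dynamic that Chaslot and other critics describe.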

Bias and Discrimination

Another concern is the perpetuation of bias and discrimination online through targeted advertising. For example, researchers at Northeastern University found that, on Facebook, “ads for supermarket jobs were shown primarily to women, while ads for jobs in lumber (sic) industry were presented mostly to men.”[28] Moreover, online advertisers may be able to easily exclude certain groups—including ethnic affinity groups—from seeing their advertisements, effectively targeting only white audiences.[29] Although Facebook’s new algorithm is designed to distribute housing ads in a non-discriminatory way,[30] critics worry that the company will continue to claim immunity even when advertisements on its site violate anti-discrimination laws such as the Civil Rights Act and the Fair Housing Act.[31] Even when algorithms are facially neutral, critics contend that they can produce discriminatory effects by prioritizing certain content over other content.[32]

Effects on Children

A particular area of concern regarding recommendation algorithms is the exposure of children to harmful or dangerous content on websites such as YouTube and apps like TikTok. According to a research study funded by the European Union, “Young children are not only able, but likely to encounter disturbing videos when they randomly browse the platform starting from benign videos.”[33] Because of Section 230’s far-reaching protections, there is substantial uncertainty about whether YouTube and other social media platforms are subject to any liability when their algorithms lead children to such content. Parents and school districts have expressed concern that social media websites use algorithms that “exploit the psychology and neurophysiology of their users,” a technique which is “particularly effective and harmful” to children.[34] According to research from the Pew Research Center, three out of five YouTube users reported that they have seen “videos that show people engaging in dangerous or troubling behavior.”[35] This suggests that there is a high probability that YouTube algorithms will lead children to dangerous or disturbing content, and will then keep them engaged for as long as possible.

Holding Social Media Companies Accountable

So far, social media companies have not been held liable for the consequences of their recommendation algorithms. But critics have suggested that social media companies should not retain immunity when their algorithms lead to harmful consequences. For example, if an online user comes across a YouTube video promoting a fraudulent business, and then the platform continues to recommend similar content, perhaps YouTube should be held liable if the user then decides to engage with the business and is ultimately defrauded.

In the consumer protection context, liability for companies that aggressively market harmful or deceptive messaging is not a novel concept. For example, advertising companies and consulting firms have been found liable for causing harm to consumers. In 2021, McKinsey & Company, one of the world’s leading consulting firms, agreed to pay $573 million to settle claims that the firm helped fuel the opioid addiction crisis. Attorneys general alleged that McKinsey, in its work for Purdue Pharma, had pushed aggressive strategies to increase OxyContin sales.[36]

Moreover, in the past two years, there have been dozens of legislative proposals aimed at reforming Section 230. While some legislators have called for repealing Section 230 entirely, others have suggested reforms, such as establishing carve-outs for larger online companies or for certain types of content, requiring online platforms to remove certain content upon receiving notice that such content is unlawful, and adding exemptions for state criminal law or expanding federal criminal laws. There are also proposals to limit Section 230’s scope by eliminating platforms’ immunity for paid advertisements and other sponsored content, thus allowing victims of online fraud to hold platforms accountable under certain conditions.[37] For example, just as print newspapers may be held liable when they knowingly publish fraudulent or deceptive content,[38] perhaps online platforms should be subject to such liability as well.

Although most consumers support some kind of reform,[39] partisan disagreements and the lack of consensus about how best to reform the statute suggest that Congress is unlikely to pass any meaningful Section 230 reform in the near future.[40] And because of federal preemption, states are limited in their ability to narrow the scope of Section 230 immunity at the state level. For now, this means that Section 230 is here to stay.

Section 230 as a Safeguard for Free Speech

For many analysts, this is a good thing. Those in favor of keeping Section 230 intact claim that imposing liability on social media companies could lead to increased censorship of online content, and they assert that the statute’s protection is essential to the free flow of information on the internet.[41] If social media companies could be held liable for user-posted content, they might be less willing to allow any content that could be considered controversial or offensive. Aside from the difficulties of determining what counts as offensive, this would make it harder for users to share their ideas freely.[42] Many proponents of Section 230 contend that social media platforms, as they currently exist, serve as channels for promoting creativity and community.

Moreover, many civil liberties groups, including the ACLU, have praised the Taamneh and Gonzalez rulings as a win for free speech. According to Patrick Toomey, deputy director of the ACLU’s National Security Project, “Today’s decisions should be commended for recognizing that the rules we apply to the internet should foster free expression, not suppress it.” Without Section 230, Toomey argues, social media platforms would be more likely to censor user-posted content.[43] For example, after the Fight Online Sex Trafficking Act (FOSTA)[44] was enacted in 2018, many critics claimed that, rather than preventing the online exploitation of sex-trafficked persons as intended, the law led to increased censorship of online discussions of sex work and made it harder for law enforcement to locate victims and prosecute traffickers. In short, by imposing liability on platforms for hosting certain content, the law has produced widespread online censorship and other undesirable, unintended consequences.

Another argument made by supporters of keeping Section 230 intact is that recommendation algorithms help users by customizing and optimizing their online experience. For example, social media websites use their algorithms to create personalized search results based on users’ browsing history. Consumers trying to make informed online purchases may benefit from targeted advertisements for products or services they have searched for. Although some consumers may view this activity as suspicious or worry that companies are collecting and tracking their personal information, they may also benefit from customized advertisements and search results. But it could be possible to preserve these positive effects while reducing the harms that recommendation algorithms create. In 2021, Congressman Frank Pallone and other members of the House Energy and Commerce Committee introduced legislation that would amend Section 230 by imposing liability when online platforms knowingly or recklessly make personalized recommendations that directly cause harm to a user.[45] In effect, this would allow social media companies to retain immunity for the use of recommendation algorithms, except when they knowingly or recklessly contribute to an injury.

What’s Next?

The Supreme Court’s recent rulings have done little to resolve the debate over Section 230. In Taamneh and Gonzalez, the Court concluded that the links between the recommendation algorithms and the terrorist harms were too attenuated to support liability for the social media companies. Those rulings do not mean that social media companies can never be held accountable for the consequences of their use of algorithms. In a case where the causal chain is more direct—for example, a case concerning liability for discriminatory advertisements on social media platforms—the Court may finally address just how far Section 230’s scope extends.

In fact, the Court is likely to grant certiorari next year in two other cases concerning social media companies and their authority, under the First Amendment, to moderate content.[46] Both cases challenge the constitutionality of state laws that restrict social media companies’ ability to moderate content. In particular, NetChoice, a coalition of social media companies and trade associations, is challenging state laws in Florida and Texas that require social media companies to host all lawful speech—regardless of viewpoint—and to provide users with notice and explanation of any content they remove.[47] NetChoice contends that these laws violate the First Amendment by interfering with social media companies’ discretion to regulate their own platforms.[48] According to NetChoice, these restrictions would harm consumers online by preventing social media companies from removing harmful or offensive content.[49] But the attorneys general of Florida and Texas argue that these laws are necessary to protect free speech and prevent the censorship of certain viewpoints online.[50]

In the Texas case, the Fifth Circuit held that Section 230 does not give social media companies the ability to exercise “editorial judgment” over user-posted content.[51] If the Supreme Court agrees, there could be major changes in the way that social media companies are able to moderate content on their platforms. Although the NetChoice cases do not address the scope of Section 230 directly, they implicate many of the same issues, and could have significant effects on the regulation of free speech online and the protection of consumers online.

For now, the future of Section 230 remains unclear. Although the Supreme Court did not address the issue this time, the Court will almost certainly have another opportunity to weigh in soon. There is no easy solution, but as algorithms begin to play an increasingly significant role in our online experiences, lawmakers will need to find a balance between free speech and consumer safety. Social media platforms are valuable forums for individuals to freely express and share their ideas, but if the platforms’ algorithms directly harm children and consumers, traditional notions of fairness demand accountability. There is widespread agreement that reform is necessary, but determining what that reform looks like is a much more difficult question to answer.

[6] Twitter, Inc. v. Taamneh, 143 S. Ct. 1206 (2023).

[9] See 47 U.S.C. § 230(c)(1) (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”). In other words, the law treats interactive computer services, such as social media platforms, as distributors of content rather than as publishers. As distributors—like bookstores and newsstands—the sites are not liable for the content that third parties post.

[12] Brief for the States of Tennessee et al. as Amici Curiae in Support of Petitioners at 9, Gonzalez v. Google LLC, 143 S. Ct. 762 (2023) (No. 21-1333).

[16] Brief for the States of Tennessee et al. as Amici Curiae in Support of Petitioners at 9, Gonzalez v. Google LLC, 143 S. Ct. 762 (2023) (No. 21-1333).

[19] Press Release, U.S. Senate Committee on the Judiciary, Durbin Statement on Supreme Court Ruling in Gonzalez v. Google (May 18, 2023).

[21] Press Release, Virginia Office of the Attorney General, Attorney General Miyares Joins Bipartisan Multistate Coalition in U.S. Supreme Court to Hold Big Tech Accountable (Dec. 7, 2022).

[22] Press Release, California Department of Justice, Attorney General Bonta Urges U.S. Supreme Court to Allow Social Media Companies to be Held Liable for Recommending Harmful Third-Party Content, Narrow Interpretation of Communications Decency Act (Dec. 7, 2022).

[24] See, e.g., Fla. Stat. § 106.72 (2023); Tex. Bus. & Com. Code Ann. § 120.001 et seq.

[28] Piotr Sapiezynski et al., Algorithms that “Don’t See Color”: Measuring Biases in Lookalike and Special Ad Audiences, Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (May 31, 2022).

[29] Julia Angwin & Terry Parris Jr., Facebook lets advertisers exclude users by race, ProPublica (Oct. 28, 2016); see also Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (holding that the website acted as a content creator by using discriminatory search mechanisms to match users with rooms in a way that violates the Fair Housing Act, and is therefore not immune from liability under Section 230 of the Communications Decency Act).

[32] Id.; see also Aylin Caliskan et al., Semantics derived automatically from language corpora contain human-like biases, 356 Science 183 (2017).

[33] Kostantinos Papadamou et al., Disturbed YouTube for Kids: Characterizing and Detecting Inappropriate Videos Targeting Young Children, 14 Proceedings of the International AAAI Conference on Web and Social Media 1 (2020).

[34] Complaint at 2, Seattle School District No. 1 v. Meta Platforms, Inc., No. 2:23-cv-00032 (W.D. Wash. 2023).

[36] See Press Release, Office of Attorney General Maura Healey, AG’s Office Secures $573 Million Settlement With McKinsey for ‘Turbocharging’ Opioid Sales and Profiting From the Epidemic (Feb. 4, 2021); see also Press Release, FTC, Payment Processor for MOBE Business Coaching Scheme Settles FTC Charges (June 1, 2020) (explaining that, in 2020, the FTC alleged that Qualpay, a payment processor, had ignored obvious signs, over the course of several years, that its client was operating a fraudulent business scheme. As a consequence, the FTC reached a settlement with Qualpay, barring the company from processing payments for business coaching companies).

[39] Jordan Marlatt, Consumers Say Tech Companies Should Be Liable for Content on Their Platforms, Morning Consult (Apr. 12, 2023) (according to a 2023 survey, “a majority of U.S. adults (67%) said companies should be legally liable for some, if not all, content found on their platforms.”).

[40] Benjamin Wittes et al., The future of Section 230 reform, Panel at The Brookings Institution (Mar. 14, 2022).

[41] Section 230: An Overview, Congressional Research Service, R46751 (Apr. 7, 2021).

[44] Allow States and Victims to Fight Online Sex Trafficking Act of 2017, H.R. 1865, 115th Cong. (2018) (amending Section 230 of the Communications Act of 1934 to clarify that the Act does not prevent law enforcement officers from holding online platforms liable for hosting content that promotes or facilitates sex work or sex trafficking).

[45] Justice Against Malicious Algorithms Act of 2021, H.R. 5596, 117th Cong. (2021).

[47] NetChoice, LLC v. Att’y General of Fla., No. 22-393, Petition for a Writ of Certiorari, 143 S. Ct. 744 (2022).

[51] NetChoice, LLC v. Paxton, 49 F.4th 439 (5th Cir. 2022).