Online speech less protected, thanks to (checks notes) the First Amendment?

Caitlin Vogus

Senior Advisor

A new decision by a federal court of appeals on Section 230 isn’t just nonsense; it could seriously undermine free speech online, including by journalists.

If it sounds backward to use the First Amendment to undermine a law meant to protect free speech, that’s because it is.

Yet that’s just what’s been done in a recent decision on Section 230 of the Communications Decency Act — the federal law that shields online services from legal liability for posts made by their users. A federal court of appeals used the First Amendment itself to sweep away many of the law’s protections for online content, including posts by journalists.


In late August, the court held that Section 230 doesn’t apply to claims based on platforms’ recommendation algorithms. The decision allowed a case against TikTok to go forward, based on the platform’s algorithm recommending a “blackout challenge” video to a child who later died attempting it.

Distorting the First Amendment

The court of appeals claimed it was being guided by the Supreme Court, even though the justices have never weighed in on Section 230’s applicability to recommendation algorithms and, just last year, went out of their way to sidestep the question.

Nevertheless, the appeals court reasoned that because a different Supreme Court decision held that the First Amendment protects platforms’ choices about whether and how to display online content, Section 230 doesn’t protect them from being sued for those very choices.

Many have pointed out how nonsensical the court’s reasoning is, especially because Congress passed Section 230 to ensure First Amendment rights were protected online.

It’s possible the decision will be reversed, but if not, its practical effects could have dire consequences for journalists and everyone else who uses the internet.

To appreciate why, you need to understand two things: first, how much online speech the court of appeals’ decision applies to, and second, how platforms will likely respond to the ruling.

Inevitable censorship of news

On the first point, the court of appeals’ decision seems to apply to everything posted on social media, because it’s all been sorted by some kind of algorithm. Platforms have to make choices about what content to display and how to display it, and they use algorithms to do it. If Section 230 doesn’t apply to content that platforms recommend, it’s hard to see what content it applies to at all.

Platforms can’t avoid this result by shutting off what most people think of when they hear about recommendation algorithms: those annoying systems that push content that users never asked to see. Even simple algorithms — like ones that show you only content posted by the people you follow or display content in reverse-chronological order — are still types of recommendation algorithms.
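To see just how low that bar is, here’s a minimal sketch in Python (the Post record and its fields are hypothetical, not any platform’s actual code) of the plainest feed imaginable: only posts from accounts you follow, newest first.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical post record, for illustration only.
@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime

def simple_feed(posts: list[Post], following: set[str]) -> list[Post]:
    # Selection: display only posts from accounts the user follows.
    visible = [p for p in posts if p.author in following]
    # Ranking: reverse-chronological order, newest first.
    return sorted(visible, key=lambda p: p.posted_at, reverse=True)
```

Even this filter-and-sort chooses which content to display and in what order, which is exactly the kind of choice the court’s reasoning treats as the platform’s own unprotected “recommendation.”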

On the second point, how are platforms likely to respond to this decision? Because it means that platforms can’t rely on Section 230 for any user-generated content they recommend (which, again, is all content), they’ll be much more likely to aggressively remove content they believe could get them sued. But they’re not going to have a legal team parse every single post for liability risk — that would be prohibitively expensive. They’ll use flawed technological tools to detect risk and will err on the side of takedowns.
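As a rough illustration of that dynamic, here’s a deliberately crude sketch (the scoring function, terms, and threshold are all hypothetical, not any real platform’s system) of risk-based takedown filtering. Because a missed lawsuit costs far more than a wrongly removed post, the threshold gets set low:

```python
def risk_score(text: str) -> float:
    # Stand-in for an imperfect automated classifier: it can't tell
    # defamation from accurate reporting, only that a topic is risky.
    risky_terms = ("fraud", "lawsuit", "allegation", "scandal")
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / len(risky_terms))

# Set low: the platform would rather overremove than get sued.
TAKEDOWN_THRESHOLD = 0.2

def moderate(posts: list[str]) -> list[str]:
    # Anything scoring at or above the threshold is taken down,
    # including legitimate news that merely mentions a risky topic.
    return [p for p in posts if risk_score(p) < TAKEDOWN_THRESHOLD]

# Both of these score identically, and both get removed:
#   "CEO committed fraud, court filing alleges"  (accurate reporting)
#   "My neighbor is a fraud, trust me"           (potential defamation)
```

A filter like this can’t distinguish reporting on a fraud allegation from the defamation itself, which is the overremoval problem described next.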

That, in turn, will lead to the overremoval of news from social media, especially news that’s critical of wealthy or powerful people or corporations who may sue. Imperfections in content moderation will also mean that platforms will overremove news stories about controversial or illegal matters even more than they already do.

It’s not just news on social media that will be impacted. Other online services, like search engines, also rely on Section 230 and also use recommendation algorithms. If Google or DuckDuckGo knows it can be sued for “recommending” a potentially defamatory news report by ranking it highly in a user’s search results, it may delist the report, making it much harder for users to find.

Without Section 230’s protections for content they algorithmically recommend, platforms will also remove posts by regular internet users for fear of potential liability, meaning that everyday people will be less able to make their voices heard online. Journalists, in turn, may have a harder time finding information and sources about controversial topics online, because that material will have been taken down.

The right way to respond to algorithmic abuses

While the court of appeals’ decision will be disastrous for online free speech if allowed to stand, that doesn’t mean that anger and concern over how platforms moderate content or use recommendation algorithms are unjustified. The toxic content allowed and recommended by platforms is horrifying, as journalists and researchers have repeatedly exposed.

The press must continue to investigate recommendation algorithms to uncover these problems. The public must pressure platforms to improve. Congress needs to pass comprehensive privacy legislation that prevents platforms from hoovering up the private information that powers some of the most noxious recommendation algorithms.

But excluding algorithmically sorted content from Section 230 won’t end recommendation algorithms, which are baked into how platforms sort and display content online. Instead, it will create a strong incentive for powerful platforms to silence journalism and the voices of individual users.

The court of appeals’ decision isn’t the end of online algorithms. But it may be the beginning of the end of online free speech.
