
Recent court cases show a growing gap between constitutional protections and how speech actually functions online.

In Murthy v. Missouri (2024), the Supreme Court examined whether federal officials improperly pressured social media companies to moderate certain content. The Court ultimately ruled that the plaintiffs lacked standing, leaving unresolved questions about where lawful government communication ends and unconstitutional coercion begins.

Around the same time, the Court reviewed state laws from Texas and Florida that sought to limit how large platforms moderate content (Moody v. NetChoice and NetChoice v. Paxton). Rather than issuing a broad ruling, the Court sent the cases back to the lower courts, signaling that existing free speech doctrine may not yet fit the modern platform environment.

Together, these cases point to a legal reality that is still unsettled:

The First Amendment restricts government action, not private platforms

Online platforms now function as primary public forums

Courts are struggling to apply old frameworks to new forms of power

As a result, speech is rarely banned outright. Instead, its reach is shaped by ranking systems, moderation policies, and enforcement discretion, tools that operate largely outside constitutional review.

This does not mean free speech no longer exists. It means its practical impact increasingly depends on private governance rather than public law.

The long-term question facing the U.S. legal system is not whether platforms have rights (they do) but whether democratic norms can survive when the modern public square is governed primarily by terms of service.

The law is still catching up. Until it does, the boundaries of free expression will remain unclear.

How should courts balance platform rights with their role as modern public forums?