On Monday, the Supreme Court will hear oral arguments in NetChoice, LLC v. Paxton and Moody v. NetChoice, LLC, two cases that, David Cole writes in the March 21, 2024, issue of the Review, are among several this term that “will determine the future of free speech” on the Internet. In the NetChoice cases, a trade association representing social media companies has challenged laws passed in Texas and Florida in 2021 “that seek to regulate the content moderation choices of large platforms…. The question presented…before the Supreme Court is whether the First Amendment permits governments to set the rules by which platforms choose what messages to accept, reject, amplify, or deemphasize.”
David Cole is the national legal director of the American Civil Liberties Union, where he oversees the organization’s Supreme Court litigation and has argued several cases before the Court, among them Masterpiece Cakeshop v. Colorado Civil Rights Commission, in which he represented the gay couple denied a wedding cake, and Bostock v. Clayton County, which extended federal law prohibiting sex discrimination in employment to include discrimination based on sexual orientation and gender identity. Since 2004 Cole has written dozens of essays for The New York Review about American jurisprudence and civil liberties.
This week, I e-mailed with Cole to ask about the NetChoice cases and the right of publishers and citizens to decide for themselves what to put online and what to take down.
Daniel Drake: Could you sketch an outline of the legal framework governing speech on the Internet? What content is the government allowed to moderate, under the First Amendment?
David Cole: That’s the very question raised by the NetChoice cases being argued before the Supreme Court on Monday. But unless the Court changes it, here’s where the law stands.
In 1997, in Reno v. ACLU, the Supreme Court ruled that speech should be just as free online as it is in the real world and struck down a federal law that limited “indecent” and “offensive” sexually explicit speech on the Internet. Under that decision, the government has no greater authority to regulate speech online than on the streets and in newspapers. Other than certain narrow categories, such as obscenity, defamation, “fighting words,” and true threats, the government cannot restrict speech online unless doing so is necessary to serve a compelling interest—an extraordinarily high bar that is very rarely satisfied.
But in the two cases the Court will consider Monday, Texas and Florida argue that those rules, established in 1997, practically prehistoric times in Internet years, should no longer govern, at least with respect to large social media companies. They argue that because large platforms like Facebook and X effectively control some of the most important public forums today, the state should be able to restrict their ability to curate their sites. Both states claim that they are seeking to further free speech by restricting the platforms’ rights to “censor” speech.
In your essay, you note that social media platforms rely on the precedent of a 1974 case in which the Supreme Court struck down a law mandating that newspapers provide a “right of reply” to political candidates who’d been subject to negative press. What was the Court’s reasoning? Has the government ever succeeded in such efforts to regulate what appears in the media?
In the 1974 case, Miami Herald Publishing Co. v. Tornillo, the Court held that the “right of reply” law intruded on the newspaper’s First Amendment right to make its own editorial decisions about what to publish or not publish. If the same rule applies to social media platforms, the Florida and Texas laws are equally invalid.
In defense of their laws, the states point to a 1969 decision, Red Lion Broadcasting Co. v. FCC, in which the Court allowed government control of content decisions. There, the Court upheld the “fairness doctrine,” a federal rule that required radio and television stations to cover public affairs in a balanced manner. The Court reasoned that such control was permissible because broadcast frequencies are a scarce resource that the government had assigned to a small number of companies, and it could therefore oblige those companies to serve the public interest. But in Reno v. ACLU, the Court refused to apply that logic to the Internet, reasoning that there is nothing scarce about speech opportunities online.
One argument people on both the left and right make to justify government moderation of social media platforms is that they are, in fact, utilities, and should be regulated as such. Their size and necessity, the reasoning goes, require a different legal framework than that afforded other private publishers. Is there something to this argument? What essential features distinguish more traditional utilities from social media sites?
I understand the appeal of the utility argument, but I don’t think it works. The Court has allowed states to impose “common carrier” obligations on mail and train services, for example, requiring them to take all comers without assessing the content of their views or speech. But those businesses are not themselves speaking through the provision of their services. Social media platforms, by contrast, are in the business of speech, and must regulate the content of what is on their sites; if they didn’t, the sites would be filled with garbage, spam, porn, and all sorts of stuff we have no interest in. If you think social media is filled with junk now, and it surely is, it would be completely overrun by garbage without someone moderating content. It’s the platforms’ curation that makes their sites useful to users (to the extent they are). So the platforms are more like bookstores, which choose what books to sell, than like Federal Express. And as such, they have their own First Amendment rights to curate the content on their sites as they choose. Some, like Elon Musk, make terrible decisions in doing so. But they are his decisions to make.
In any event, would we really want websites to be required to be “viewpoint neutral,” as the Texas law requires? That would mean that if they publish posts urging people not to commit suicide and pointing them to hotlines for help, they would also have to publish posts urging people to commit suicide. Publishing an anti-bulimia post would require sites to accept posts urging young women to be bulimic. The reality is that we want some content moderation. We just want the “right” content moderation. But under the First Amendment, that’s not a decision we can entrust the government to make.
You seem confident that the justices will strike down the Florida and Texas laws. Is there a narrower regulatory framework that you imagine the current Court applying?
Look, I’m no happier about social media than the next person. But giving every state, or the next president of the United States, the power to control what we see or don’t see online is not the answer. Under the First Amendment, the government can’t decide what private entities can and cannot say (or publish). But that doesn’t mean there’s nothing that can be done. A major source of the problem is the sheer concentration of power in the hands of a few companies. Antitrust law is designed to address that and does not involve government regulation of speech. Breaking up the large platforms would make the decisions of any particular platform less consequential for public debate.
An important protection you identify against misinformation and disinformation on Facebook, Google, and X is their need for legitimacy. My sense is that, on the one hand, the Facebook Oversight Board was formed in part out of a desire to keep the government at bay, and, on the other hand, that Elon Musk cares for such legitimacy not at all, and has allowed X to degrade into a greenhouse for hate speech, spam, and fascism. In the past, The New York Times’s reputation for legitimacy, for example, allowed it to smuggle poorly sourced reporting on Iraq’s weapons capabilities into the public sphere. Is legitimacy a strong enough force to affect how these platforms do business? Are there other kinds of legal recourse that could address some of the harms caused by social media?
I think you undersell the power of legitimacy. As a matter of law, Facebook need offer no explanation whatsoever for its content decisions, and need apply no consistent criteria. It has a First Amendment right to publish what it wants, just like The New York Times. Yet it adopts content moderation criteria, and subjects its decisions to review by the Oversight Board—not to forestall government regulation, which would almost certainly be declared unconstitutional, but to reassure its audience that it is adhering to a reasonable open access policy. It is certainly true that Elon Musk has made X much less desirable as a forum than Twitter, but as a result he’s lost audience share and advertising revenue. He’s being punished for his lack of legitimacy. Through perceptions of legitimacy, the market, in a sense, rewards good content moderation and punishes bad moderation.
Is it perfect? Obviously not. But at the scale on which social media operates, with millions of posts daily, perfection is impossible. So if you don’t like a platform’s content decisions, the answer is not to empower Governors DeSantis and Abbott (or Newsom, for that matter) to start regulating speech. The “marketplace of ideas,” in which we leave content decisions to private people and entities, has never been perfect, and never will be. It doesn’t inexorably lead us to truth. But it’s better than the alternative: official control over the ideas and views that are allowed online. And it’s also better than no content moderation at all, which would yield only garbage and static. So in the end, we are left with advocating for the norms that legitimacy demands.