On January 10 the Supreme Court will hear argument in an unprecedented First Amendment case that will determine the future of TikTok, a social media platform used by about 170 million Americans, or more than half the country’s population. In 2023 alone Americans uploaded 5.5 billion videos to the platform, which were viewed thirteen trillion times. The content ranges from dance videos to cooking and makeup tutorials, but TikTok is also a central locus for political speech. Nearly 40 percent of American adults under thirty regularly get news from the site, which offers information, political and otherwise, from across the globe. During the presidential campaign, Biden and Trump campaign TikToks were viewed over 6 billion times.1
Unless the Supreme Court intervenes, however, TikTok will effectively be off-limits to Americans in less than two weeks. Last April, President Joe Biden signed the Protecting Americans from Foreign Adversary Controlled Applications Act, which provides that unless TikTok’s owner—a private company called ByteDance headquartered in China—sells it to a new owner approved by the US government before January 19, 2025, it will become illegal for any mobile application store or Internet hosting service, such as Google, to host the platform. Everyone agrees that ByteDance won’t sell: its algorithm is central to TikTok’s success, and without it TikTok would be far less attractive to both users and any potential buyer. Absent the Supreme Court’s intervention, then, the US will soon become the latest nation to ban the platform, joining Afghanistan, India, Iran, Jordan, Kyrgyzstan, Nepal, North Korea, Senegal, Somalia, Uzbekistan, and, ironically, China itself—not a group of countries known for their fealty to free speech.
In May 2024 TikTok, ByteDance, and a number of its users challenged the law as a violation of the First Amendment. After the US Court of Appeals for the D.C. Circuit unanimously ruled against them, they asked the Supreme Court to intercede on an emergency basis. Shortly before Christmas the Court agreed to hear the matter and ordered expedited briefing over the holidays, to preserve the option of stopping the law from taking effect—and TikTok from being taken down—before the statute’s deadline.
The case pits the speech interests of half the country’s citizens and a major US media outlet against the national security concerns of Congress and the executive branch. Federal officials of both parties have worried for some years that China’s government could exploit the platform to undermine the country’s national security, either by mining the personal data the platform collects on American users or by deploying its recommendation algorithm to manipulate what users see. The Trump and Biden administrations both made efforts to contain those risks, and the law now at issue passed with broad bipartisan support. (Although Trump ordered TikTok’s divestiture in 2020, he has now filed what is essentially a vanity brief in the Supreme Court taking no position on the legal issues but urging the Court to let him fix it all once he takes office on January 20. “President Trump is one of the most powerful, prolific and influential users of social media in history,” the brief boasts, and he “alone possesses the consummate dealmaking expertise…to negotiate a resolution.”)
The government notes that even though TikTok is a US corporation, headquartered in California, its algorithm for recommending content is managed by ByteDance, which is chartered in the Cayman Islands but headquartered in Beijing. ByteDance is entirely privately owned, but the US asserts that, like all companies in China, it is vulnerable to influence by the Chinese government. The US offers no evidence that China has ever actually tried to coerce TikTok to do its bidding or sought access to its customers’ data—through ByteDance or otherwise. But it cites other instances in which China has engaged in cyberespionage, including hacking the US Office of Personnel Management and stealing financial data on 147 million people from a credit reporting company. It also claims that China has directed ByteDance to censor content in other countries, although it cites no evidence in the open record to support that charge.
*
The TikTok case is unprecedented in multiple ways. Never before has the government attempted to shut down an entire speech forum, much less one used by half the nation. Never before, however, has such a major platform been subject to the potential influence of a foreign adversary. Concerns about China’s hacking are well-founded—although they are hardly limited to TikTok, as China’s prior data-breach operations illustrate. Nor is it fanciful that foreign enemies might exploit the Internet to sow discord here; one need only recall the former FBI director Robert Mueller’s finding that Russia carried out “sweeping and systematic” interference in the 2016 presidential election, including spreading disinformation via social media and hacking the Clinton campaign’s internal emails.2 As the Internet makes national borders more porous, the US’s foes can exploit its commitment to free speech in ways inconceivable not just to the Constitution’s founders but to virtually all our forebears.
At the same time, this is far from the first instance in which the government has invoked foreign influence and national security to justify suppressing speech. As James Madison warned in a 1798 letter to Thomas Jefferson, “Perhaps it is a universal truth that the loss of liberty at home is to be charged to provisions agst. danger real or pretended from abroad.” Indeed, today’s First Amendment jurisprudence was forged in response to such efforts, especially during the Cold War, when government officials targeted millions for their suspected sympathies with the Communist Party. Congress declared that the American Communist Party was part of an international movement led by the Soviet Union and dedicated to the violent overthrow of the US, even though the Party engaged in widespread lawful conduct, including advocacy on behalf of civil rights and economic justice. Congress made it a crime to associate with the Party in any way, and the executive branch subjected millions of Americans who worked for the government or its contractors to loyalty inquiries, examining their reading habits, opinions, and associations for any evidence of the taint of communism.
Initially the Supreme Court did little to slow, much less halt, this campaign of political repression. In hindsight, however, the Court recognized that it had been too deferential to the government’s national security assertions. Drawing on that lesson, it eventually developed robust speech protections against anticommunist hysteria—ruling, for example, that the government could not punish association with the Party absent proof that an individual specifically intended to further its illegal ends, and that even advocating overthrow of the government was protected without proof that the threat was likely to come to pass imminently.
Those later precedents contributed to one of the Court’s finest moments. In 1971, by a 6–3 vote, it denied the government’s effort to block publication of the Pentagon Papers, forthrightly rejecting claims about the threat they posed to national defense. The question now is whether the Court will learn from its earlier mistakes in overindulging invocations of national security—or whether it will repeat them.
*
The principal doctrinal questions in the TikTok case concern, first, what level of scrutiny the Court should apply to the law and, second, whether the government’s national security rationales justify the law’s restriction on free speech under the applicable standard. The first question should be easily resolved. As a general rule the Court applies its most skeptical analysis to laws that discriminate on the basis of content and applies more deferential review to laws that are content-neutral. That’s because the First Amendment above all demands that the government remain neutral in regulating speech; the content of public debate is to be shaped by the people, not official fiat.
The government insists that the TikTok law is content-neutral because it does not single out particular messages or views for prohibition: it targets who controls the platform, not what appears on it. All the law demands is that TikTok divest; if it does, the government contends, the material on the app can remain the same. But that is too cute by half. The US government has stated explicitly that Congress passed the law because it fears China will covertly manipulate what TikTok users see on the platform—an interest that is inextricably related to the platform’s content. If Congress forced a sale of The Washington Post because it feared that Jeff Bezos might be influenced by foreign governments to manipulate the paper’s coverage in ways that further foreign interests, no one would treat that law as content-neutral. The only reason the government is concerned about China’s potential manipulation is the content that would result. The TikTok law is therefore content-based and should be subject to the Court’s most demanding review, or “strict scrutiny.”
The harder question is whether the law satisfies that standard. Statutes can survive strict scrutiny only if they are the “least restrictive means” to further a “compelling interest.” The government cites two interests: forestalling China from covertly manipulating content and preventing the mining of Americans’ personal data on the site. The first interest, far from compelling, is directly contrary to the central premise of the First Amendment, which prohibits the government from seeking to control what Americans see, hear, or read. That is true whether the source of the information is foreign or domestic: in 1965 the Court held that Americans had a First Amendment right to receive Communist literature from foreign governments abroad.
The government maintains that its concern is justified because China may manipulate content “covertly.” But that ought not change the result. While it’s true that algorithms are particularly obscure, they are not categorically different in this respect from everyday editorial decisions. The reader of a newspaper, after all, sees only the final text, not the process that led to that result. The fact that The Washington Post’s current publisher is a foreign national and that his staff’s editorial decisions covertly shape the paper’s content on a daily basis does not give the government a legitimate interest in ousting him.
The second interest—protecting Americans’ private data—is indisputably compelling. But is shutting down TikTok the “least restrictive means” to pursue this aim? When Congress enacted the TikTok law, it also prohibited data brokers from transferring any personal data to “foreign adversaries,” among them China. Extending such a prohibition to TikTok would be far less restrictive than banning the platform altogether.
The government objects that it cannot trust TikTok to obey such a law because of its corporate parent’s vulnerability to China’s influence. But laws prohibiting conduct are not built on trust; they are based on the threat of penalties. We don’t “trust” people not to commit arson, murder, or fraud. We prohibit the conduct and back the prohibition with sanctions. That approach applies equally to individuals and businesses with no foreign ties and to those with such ties. The government has not shown why this approach would not work here—especially as it offers no evidence that TikTok has shared any of its data with China to date, even in the absence of a legal prohibition. So while protecting privacy is a compelling interest, closing down TikTok is not the least restrictive way to achieve that end.
Strict scrutiny is meant to be the most demanding standard the Court applies. The Court has found only three laws restricting speech that survive it. One was in a case I argued, Holder v. Humanitarian Law Project, on which the government relies heavily here. In that case I represented a humanitarian law nonprofit that worked with the Kurdistan Workers’ Party (PKK) in Turkey to help it file human rights complaints about Turkey’s mistreatment of the Kurdish minority. The group challenged a federal law that makes it a crime to provide “material support” to foreign groups designated “terrorist” by the Secretary of State, including the PKK. The law defined support expansively to include my client’s efforts, even though they consisted only of speech furthering lawful ends.
The Court agreed that the law had to satisfy strict scrutiny, rejecting the government’s argument that it was content-neutral and therefore warranted more deferential review. But the Court then applied an extraordinarily anemic version of strict scrutiny, ostensibly because the case involved foreign affairs and national security. It reasoned that it had to defer to the government’s claim that literally any support to a terrorist group, even filing a human rights complaint, could ultimately further its terrorist aims. It did so even though the government cited no evidence that human rights advocacy had ever led to terrorism, and even though the Court had previously rejected a similarly broad contention that any association with the Communist Party furthered its illegal ends, instead requiring proof that the supporter in question specifically intended to further such ends. As a result, it is now a crime in the United States to advocate for human rights if you do so with a group the government disfavors.
If the Court again applies a version of “strict scrutiny” that is strict in name only, the government may well prevail, and 170 million Americans will be denied access to TikTok based on nothing more than fears about what might happen in the future. But the justices would do better to require a more tailored response; strict means strict, after all, and the standard is designed to ensure that speech restrictions are truly necessary and as narrowly framed as possible. As our history demonstrates, the federal government has all too often pointed to speculative threats from abroad to justify the unnecessary punishment of speech at home.