To understand why a number of Silicon Valley tech moguls are supporting Donald Trump’s third presidential campaign after shunning him in 2016 and 2020, look no further than chapter three, bullet point five, of this year’s Republican platform. It starts with the party’s position on cryptocurrency, that ephemeral digital creation that facilitates money laundering, cybercrime, and illicit gun sales while greatly taxing energy and water resources:

Republicans will end Democrats’ unlawful and unAmerican Crypto crackdown and oppose the creation of a Central Bank Digital Currency. We will defend the right to mine Bitcoin, and ensure every American has the right to self-custody of their Digital Assets, and transact free from Government Surveillance and Control.

The platform then pivots to artificial intelligence, the technology that brings us deepfake videos, voice cloning, and a special kind of misinformation that goes by the euphemistic term “hallucination,” as if the AI happened to accidentally swallow a tab of LSD:

We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI development rooted in Free Speech and Human Flourishing.

According to the venture capitalist Ben Horowitz, who, along with his business partner Marc Andreessen, is all in for the former president,1 Trump wrote those words himself. (This may explain the random capitalizations.) As we were reminded a few months ago when Trump requested a billion dollars from the oil and gas industry in exchange for favorable energy policies once he is in office again, much of American politics is transactional. Horowitz and Andreessen are, by their own account, the biggest crypto investors in the world, and their VC firm, Andreessen Horowitz, holds two AI funds worth billions. It makes sense, then, that they and others who are investing in or building these nascent industries would support a craven, felonious autocrat; the return on investment promises to be substantial. Or, as Andreessen wrote last year in a five-thousand-plus-word ramble through his neoreactionary animating philosophy called “The Techno-Optimist Manifesto,” “Willing buyer meets willing seller, a price is struck, both sides benefit from the exchange or it doesn’t happen.”2

The fact is, more than 70 percent of Americans, Democrats and Republicans alike, favor the establishment of standards to test and ensure the safety of artificial intelligence, according to a survey conducted by the analytics consultancy Ipsos last November. An earlier Ipsos poll found that 83 percent “do not trust the companies developing AI systems to do so responsibly,” a view that was also held across the political spectrum. Even so, as shown by both the Republican platform and a reported draft executive order on AI prepared by Trump advisers that would require an immediate review of “unnecessary and burdensome regulations,” public concern is no match for corporate dollars.

Perhaps to justify this disregard, Trump and his advisers are keen to blame China. “Look, AI is very scary, but we absolutely have to win, because if we don’t win, then China wins, and that is a very bad world,” Trump told Horowitz and Andreessen. (They agreed.) Pitching AI as “a new geopolitical battlefield that must somehow be ‘won,’” to quote Verity Harding, the former head of public policy at Google DeepMind, has become a convenient pretext for its unfettered development. Harding’s new book, AI Needs You, an eminently readable examination of the debates over earlier transformational technologies and their resolutions, suggests—perhaps a bit too hopefully—that it doesn’t have to be this way.

Artificial intelligence, a broad category of computer programs that automate tasks that might otherwise require human cognition, is not new. It has been used for years to recommend films on Netflix, filter spam e-mails, scan medical images for cancer, play chess, and translate languages, among many other things, with relatively little public or political interest. That changed in November 2022, when OpenAI, a company that started as a nonprofit committed to developing AI for the common good, released ChatGPT, an application powered by the company’s large language model. It and subsequent generative AI platforms, with their seemingly magical abilities to compose poetry, pass the bar exam, create pictures from words, and write code, captured the public imagination and ushered in a technological revolution.

It quickly became clear, though, that the magic of generative AI could also be used to practice the dark arts: with the right prompt, it could explain how to make a bomb, launch a phishing attack, or impersonate a president. And it can be wildly yet confidently inaccurate, pumping out invented facts that sound entirely plausible, as well as perpetuating stereotypes and reinforcing social and political biases. Generative AI is trained on enormous amounts of data—the early models were essentially fed the entire Internet—including copyrighted material that was appropriated without consent. That is bad enough and has led to a number of lawsuits, but, worse, once material is incorporated into a foundational model, the model can be prompted to write or draw “in the style of” someone, diluting the original creator’s value in the marketplace. When, in May 2023, the Writers Guild of America went on strike, in part to restrict the use of AI-generated scripts, and was joined by the Screen Actors Guild two months later, it was a blatant warning to the rest of us that generative AI was going to change all manner of work, including creative work that might have seemed immune from automation because it is so fundamentally human and idiosyncratic.

It also became apparent that generative AI is going to be extremely lucrative, not only for the billionaires of Silicon Valley, whose wealth has already more than doubled since Trump’s 2017 tax cuts, but for the overall economy, potentially surpassing the economic impact of the Internet itself. By one account, AI will add close to $16 trillion to the global economy by 2030. OpenAI, having shed its early idealism, is, by the latest accounting, valued at $157 billion. Anthropic, a rival company founded by OpenAI alumni, is in talks to increase its valuation to $40 billion. (Amazon is an investor.) Meta, Google, and Microsoft, too, have their own AI chatbots, and Apple recently integrated AI into its newest phones. As the cognitive scientist Gary Marcus proclaims in his short but mighty broadside Taming Silicon Valley: How We Can Ensure That AI Works for Us, after ChatGPT was released, “almost overnight AI went from a research project to potential cash cow.”

Arguably, artificial intelligence’s most immediate economic effect, and the most obvious reason it is projected to add trillions to the global economy, is that it will reduce or replace human labor. While it will take time for AI agents to be cheaper than human workers (because the cost of training AI is currently so high), a recent survey of chief financial officers conducted by researchers at Duke University and the Federal Reserve found that more than 60 percent of US companies plan to use AI to automate tasks currently done by people. In a study of 750 business leaders, 37 percent said AI technology had replaced some of their workers in 2023, and 44 percent reported that they expected to lay off employees this year due to AI. But in her new book, The Mind’s Mirror: Risk and Reward in the Age of AI, written with Gregory Mone, the MIT computer scientist Daniela Rus offers a largely sunny take on the digital future:

The long-term impact of automation on job loss is extremely difficult to predict, but we do know that AI does not automate jobs. AI and machine learning automate tasks—and not every task, either.

This is a semantic feint: tasks are what jobs are made of. Goldman Sachs estimates that 300 million jobs globally will be lost or degraded by artificial intelligence.

What does degraded mean? Rus believes that technologies such as ChatGPT “will not eliminate writing as an occupation, yet they will undoubtedly alter many writing jobs.” But consider the case of Olivia Lipkin, a twenty-five-year-old copywriter at a San Francisco tech start-up. As she told The Washington Post, her assignments dropped off after the release of ChatGPT, and managers began referring to her as “Olivia/ChatGPT.” Eventually her job was eliminated because, as noted in her company’s internal Slack messages, using the bot was cheaper than paying a writer. “I was actually out of a job because of AI,” she said.

Lipkin is one person, but she represents a trend that has only just begun to gather steam. The outplacement firm Challenger, Gray & Christmas found that nearly four thousand US jobs were lost to AI in May 2023 alone. In many cases, workers are now training the technology that will replace them, either inadvertently, by modeling a given task—i.e., writing ad copy that the machine eventually mimics—or explicitly, by teaching the AI to see patterns, recognize objects, or flag the words, concepts, and images that the tech companies have determined to be off-limits.

In Code Dependent: Living in the Shadow of AI, the journalist Madhumita Murgia documents numerous cases of people, primarily in the Global South, whose “work couches a badly kept secret about so-called artificial intelligence systems—that the technology does not ‘learn’ independently, and it needs humans, millions of them, to power it.” They include displaced Syrian doctors who are training AI to recognize prostate cancer, college graduates in Venezuela labeling fashion items for e-commerce sites, and young people in Kenya who spend hours each day poring over photographs, identifying the many objects that an autonomous car might encounter. Eventually the AI itself will be able to find the patterns in the prostate cancer scans and spot the difference between a stop sign and a yield sign, and the humans will be left behind.

And then there is the other kind of degradation, the kind that subjects workers to horrific content in order to train artificial intelligence to recognize and reject it. At a facility in Kenya, Murgia found workers subcontracted by Meta who spend their days watching “bodies dismembered from drone attacks, child pornography, bestiality, necrophilia and suicides, filtering them out so that we don’t have to.” “I later discovered that many of them had nightmares for months and years,” she writes: “Some were on antidepressants, others had drifted away from their families, unable to bear being near their own children any longer.” The same kind of work was being done elsewhere for OpenAI. In some of these cases, workers are required to sign agreements that absolve the tech companies of responsibility for any mental health issues that arise in the course of their employment and forbid them from talking to anyone, including family members, about the work they do.

It may be some consolation that tech companies are trying to keep the most toxic material out of their AI systems. But they have not prevented bad actors from using generative AI to inject venomous content into the public square. Deepfake technology, which can replace a person in an existing image with someone else’s likeness or clone a person’s voice, is already being used to create political propaganda. Recently the Trump patron Elon Musk posted on X, the social media site he owns, a manipulated video of Kamala Harris saying things she never said, without any indication that it was fake. Similarly, in the aftermath of Hurricane Helene, a doctored photo of Trump knee-deep in the floodwaters went viral. (The picture first appeared on Threads and was flagged by Meta as fake.) While deepfake technology can also be used for legitimate reasons, such as to create a cute Pepsi ad that Rus writes about, it has been used primarily to make nonconsensual pornography: of all the deepfakes found online in 2023, 98 percent were porn, and 99 percent of those depicted were women.

For the most part, those who do not give permission for their likenesses to be used in AI-generated porn have no legal recourse in US courts. Though thirteen states currently have laws penalizing the creation or dissemination of sexually explicit deepfakes, there are no federal laws prohibiting the creation or consumption of nonconsensual pornography (unless it involves children). Section 230 of the Communications Decency Act, which has shielded social media companies from liability for what is published on their platforms, may also provide cover for AI companies whose technology is used to create this material.3 The European Union’s AI Act, which was passed in the spring, has the most nuanced rules to curb malicious AI-generated content. But, as Murgia points out, trying to get nonconsensual images and videos removed from the Internet is nearly impossible.

The EU AI Act is the most comprehensive legislation to address some of the more egregious harms of artificial intelligence. The European Commission first began exploring the possibility of regulating AI in the spring of 2021, and it took three years, scores of amendments, public comments, and vetting by numerous committees to get it passed. The act was almost derailed by lobbyists working on behalf of OpenAI, Microsoft, Google, and other tech companies, who spent more than 100 million euros in a single year trying to persuade the EU to make the regulations voluntary rather than mandatory. When that didn’t work, Sam Altman, the CEO of OpenAI, who has claimed numerous times that he would like governments to regulate AI, threatened to pull the company’s operations from Europe because he found the draft law too onerous. He did not follow through, but Altman’s threat was a billboard-size announcement of the power that the tech companies now wield. As the political scientist Ian Bremmer warned in a 2023 TED Talk, the next global superpower may well be those who run the big tech companies:

These technology titans are not just men worth 50 or 100 billion dollars or more. They are increasingly the most powerful people on the planet, with influence over our futures. And we need to know: Are they going to act accountably as they release new and powerful artificial intelligence?

It’s a crucial question.

So far, tech companies have been resisting government-imposed guidelines and regulations, arguing instead for extrajudicial, voluntary rules. To support this position, they have trotted out the age-old canard that regulation stifles innovation and relied on conservative pundits like James Pethokoukis, a senior fellow at the American Enterprise Institute, for backup. The real “danger around AI is that overeager Washington policymakers will rush to regulate a fast-evolving technology,” Pethokoukis wrote in an editorial in the New York Post.

We shouldn’t risk slowing a technology with vast potential to make America richer, healthier, more militarily secure, and more capable of dealing with problems such as climate change and future pandemics.

The tech companies are hedging their bets by engaging in a multipronged effort of regulatory capture. According to Politico,

an organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.

If they succeed, the fox will not only be guarding the henhouse—the fox will have convinced legislators that this will increase the hens’ productivity.

Another common antiregulation stance masquerading as its opposite is the assertion—like the one made by Michael Schwarz, Microsoft’s chief economist, at last year’s World Economic Forum Growth Summit—that “we shouldn’t regulate AI until we see some meaningful harm that is actually happening.” (A more bizarre variant of this was articulated by Marc Andreessen on an episode of the podcast The Ben and Marc Show, when he said that he and Horowitz are not against regulation but believe it “should happen at the application level, not at the technology level…because to regulate AI at the technology level, then you’re regulating math.”) Those harms are already evident, of course, from AI-generated deepfakes to algorithmic bias to the proliferation of misinformation and cybercrime.

Murgia writes about an AI algorithm used by police in the Netherlands that identifies children who may, in the future, commit a crime; another whose seemingly neutral dataset led to more health care for whites than Blacks because it used how much a person paid for health care as a proxy for their health care needs; and an AI-guided drone system deployed by the United States in Yemen that determined which people to kill based on certain predetermined patterns of behavior, not on their confirmed identities. Predictive systems, whose parameters are concealed by proprietary algorithms, are being used in an increasing number of industries, as well as by law enforcement and government agencies and throughout the criminal justice system. Typically, when machines decide to deny parole, reject an application for government benefits, or toss out the résumé of a job seeker, the rebuffed party has few, if any, remedies: How can they appeal to a machine that will always give them the same answer?

There are also very real, immediate environmental harms from AI. Large language models have colossal carbon footprints. By one estimate, the carbon emissions resulting from the training of GPT-3 were the equivalent of those from a car driving the 435,000 or so miles to the moon and back, while for GPT-4 the footprint was three hundred times that. Rus cites a 2023 projection that if Google were to swap out its current search engine for a large language model, the company’s “total electricity consumption would skyrocket, rivaling the energy appetite of a country like Ireland.” Rus also points out that the amount of water needed to cool the computers used to train these models, as well as their data centers, is enormous. According to one study, it takes between 700,000 and two million liters of fresh water just to train a large language model, let alone deploy it. Another study estimates that a large data center requires between one million and five million gallons of water a day, or what’s used by a city of 10,000 to 50,000 people.

Microsoft, which has already integrated its AI chatbot, Copilot, into many of its business and productivity products, is looking to small modular nuclear reactors to power its AI ambitions. It’s a long shot. No Western nation has begun building any of these small reactors, and in the US only one company has had its design approved, at a cost of $500 million. To come full circle, Microsoft is training an LLM on documents relating to the licensing of nuclear power plants, in an effort to expedite the regulatory process. Not surprisingly, there is already opposition in communities where these new nuclear plants may be located. In the meantime, Microsoft has signed a deal with the operators of the Three Mile Island nuclear plant to bring the part of the facility that did not melt down in 1979 back online by 2028. Microsoft will purchase all of the energy created there for twenty years.

No doubt Gary Marcus would applaud the EU AI Act and other attempts to hold the big tech companies to account, since he wrote his book as a call to action. “We can’t realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,” he writes. “We can’t count on governments driven by campaign finance contributions to push back. The only chance at all is for the rest of us to speak up, really loudly.”

Marcus details the demands that citizens should make of their governments and the tech companies. They include transparency on how AI systems work; compensation for individuals if their data is used to train LLMs and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating Section 230, imposing cash penalties, and passing stricter product liability laws, among other things. Marcus also suggests—as does Rus—that a new, AI-specific federal agency, akin to the FDA, the FCC, or the FTC, might provide the most robust oversight. As he told the Senate when he testified in May 2023:

The number of risks is large. The amount of information to keep up on is so much…. AI is going to be such a large part of our future and is so complicated and moving so fast…[we should consider having] an agency whose full-time job is to do this.

It’s a fine idea, and one that a Republican president who is committed to decimating the so-called administrative state would surely never implement. And after the Supreme Court’s recent decision overturning the Chevron doctrine, Democratic presidents who try to create a new federal agency—at least one with teeth—will likely find the effort hamstrung by conservative jurists. That doctrine, established by the Court’s 1984 decision in Chevron v. Natural Resources Defense Council, granted federal agencies the power to use their expertise to interpret congressional legislation. As a consequence, it gave the agencies and their nonpartisan civil servants considerable leeway in applying laws and making policy decisions. The June decision reverses this. In the words of David Doniger, one of the NRDC lawyers who argued the original Chevron case, “The net effect will be to weaken our government’s ability to meet the real problems the world is throwing at us.”

A functional government, committed to safeguarding its citizens, might be keen to create a regulatory agency or pass comprehensive legislation, but we in the United States do not have such a government. In light of congressional dithering,4 regulatory capture, and a politicized judiciary, pundits and scholars have proposed other ways to ensure safe AI. Harding suggests that the Internet Corporation for Assigned Names and Numbers (ICANN), the international, nongovernmental group responsible for maintaining the Internet’s core functions, might be a possible model for international governance of AI. While it’s not a perfect fit, especially because AI assets are owned by private companies, and it would not have the enforcement mechanism of a government, a community-run body might be able, at least, to determine “the kinds of rules of the road that AI will need to adhere to in order to protect the future.”

In a similar vein, Marcus proposes the creation of something like the International Atomic Energy Agency or the International Civil Aviation Organization but notes that “we can’t really expect international AI governance to work until we get national AI governance to work first.” By far the most intriguing proposal has come from the Fordham law professor Chinmayi Sharma, who suggests that the way to ensure both the safety of AI and the accountability of its creators is to establish a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. “What if, like doctors,” she asks in the Washington University Law Review, “AI engineers also vowed to do no harm?”

Sharma’s concept, were it to be adopted, would overcome the obvious obstacles currently stymieing effective governance: it bypasses the tech companies, it does not require a new government bureaucracy, and it is nimble. It would accomplish this, she writes,

by establishing academic requirements at accredited universities; creating mandatory licenses to “practice” commercial AI engineering; erecting independent organizations that establish and update codes of conduct and technical practice guidelines; imposing penalties, suspensions or license revocations for failure to comply with codes of conduct and practice guidelines; and applying a customary standard of care, also known as a malpractice standard, to individual engineering decisions in a court of law.

Professionalization, she adds, quoting the network intelligence analyst Angela Horneman, “would force engineers to treat ethics ‘as both a software design consideration and a policy concern.’”

Sharma’s proposal, though unconventional, is no more or less aspirational than Marcus’s call for grassroots action to curb the excesses of Big Tech or Harding’s hope for an international, inclusive, community-run, nonbinding regulatory group. Were any of these to come to fruition, they would be likely targets of a Republican administration and its tech industry funders, whose ultimate goal, it seems, is a post-democracy world where they decide what’s best for the rest of us.5 The danger of allowing them to set the terms of AI development now is that they will amass so much money and so much power that this will happen by default.

