On November 2, 2022, I attended a Google AI event in New York City. One of the themes was responsible AI. As I listened to executives talk about how they aligned their technology with human values, I realized that the malleability of AI models was a double-edged sword. Models could be tweaked to, say, minimize biases, but also to enforce a specific point of view. Governments could demand manipulation to censor unwelcome facts and promote propaganda. I envisioned this as something that an authoritarian regime like China might employ. In the United States, of course, the Constitution would prevent the government from messing with the outputs of AI models created by private companies.
This Wednesday, the Trump administration released its AI manifesto, a far-ranging action plan for one of the most vital issues facing the country—and even humanity. The plan generally focuses on besting China in the race for AI supremacy. But one part of it seems more in sync with China’s playbook. In the name of truth, the US government now wants AI models to adhere to Donald Trump’s definition of that word.
You won’t find that intent plainly stated in the 28-page plan. Instead it says, “It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas.”
That’s all fine until the last sentence, which raises the question—truth according to whom? And what exactly is a “social engineering agenda”? We get a clue about this in the very next paragraph, which instructs the Department of Commerce to look at the Biden AI rules and “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” (Weird uppercase as written in the published plan.) Acknowledging climate change is social engineering? As for truth, in a fact sheet about the plan, the White House says, “LLMs shall be truthful and prioritize historical accuracy, scientific inquiry, and objectivity.” Sounds good, but this comes from an administration that limits American history to “uplifting” interpretations, denies climate change, and regards Donald Trump’s claims about being America’s greatest president as objective truth. Meanwhile, just this week, Trump’s Truth Social account reposted an AI video of Obama in jail.
In a speech touting the plan in Washington on Wednesday, Trump explained the logic behind the directive: “The American people do not want woke Marxist lunacy in the AI models,” he said. Then he signed an executive order entitled “Preventing Woke AI in the Federal Government.” While specifying that the “Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace,” it declares that “in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.” Since all the big AI companies are courting government contracts, the order appears to be a backdoor effort to ensure that LLMs in general show fealty to the White House’s interpretation of history, sexual identity, and other hot-button issues. In case there’s any doubt about what the government regards as a violation, the order spends several paragraphs demonizing AI that supports diversity, calls out racial bias, or values gender equality. Pogo alert—Trump’s executive order banning top-down ideological bias is a blatant exercise in top-down ideological bias.
Marx Madness
It’s up to the companies to determine how to handle these demands. I spoke this week to an OpenAI engineer working on model behavior who told me that the company already strives for neutrality. In a technical sense, they said, meeting government standards like being anti-woke shouldn’t be a huge hurdle. But this isn’t a technical dispute: It’s a constitutional one. If companies like Anthropic, OpenAI, or Google decide to try minimizing racial bias in their LLMs, or make a conscious choice to ensure the models’ responses reflect the dangers of climate change, the First Amendment presumably protects those decisions as exercises of the “freedom of speech and expression” touted in the AI Action Plan. A government mandate that denies contracts to companies exercising that right is the essence of interference.
You might think that the companies building AI would fight back, citing their constitutional rights on this issue. But so far no Big Tech company has publicly objected to the Trump administration’s plan. Google celebrated the White House’s support of its pet issues, like boosting infrastructure. Anthropic published a positive blog post about the plan, though it complained about the White House’s apparent abandonment of strong export controls earlier this month. OpenAI says it is already close to achieving objectivity. Nothing about asserting their own freedom of expression.
In on the Action
The reticence is understandable because, overall, the AI Action Plan is a bonanza for AI companies. While the Biden administration mandated scrutiny of Big Tech, Trump’s plan is a big fat green light for the industry, which it regards as a partner in the national struggle to beat China. It allows the AI powers to essentially blow past environmental objections when constructing massive data centers. It pledges support for AI research that will flow to the private sector. There’s even a provision that limits some federal funds for states that try to regulate AI on their own. That’s a consolation prize for a failed portion of the recent budget bill that would have banned state regulation for a decade.
For the rest of us, though, the “anti-woke” order is not so easily brushed off. AI is increasingly the medium by which we get our news and information. A founding principle of the United States has been the independence of such channels from government interference. We have seen how the current administration has cowed parent companies of media giants like CBS into apparently compromising their journalistic principles to favor corporate goals. If this “anti-woke” agenda extends to AI models, it’s not unreasonable to expect similar accommodations. Senator Edward Markey has written directly to the CEOs of Alphabet, Anthropic, OpenAI, Microsoft, and Meta urging them to fight the order. “The details and implementation plan for this executive order remain unclear,” he writes, “but it will create significant financial incentives for the Big Tech companies … to ensure their AI chatbots do not produce speech that would upset the Trump administration.” In a statement to me, he said, “Republicans want to use the power of the government to make ChatGPT sound like Fox & Friends.”
As you might suspect, this view isn’t shared by the White House team working on the AI plan. They believe their goal is true neutrality, and that taxpayers shouldn’t have to pay for AI models that don’t reflect unbiased truth. Indeed, the plan itself points a finger at China as an example of what happens when truth is manipulated. It instructs the government to examine frontier models from the People’s Republic of China to determine “alignment with Chinese Communist Party talking points and censorship.” Unless the corporate overlords of AI get some backbone, a future evaluation of American frontier models might well reveal lockstep alignment with White House talking points and censorship. But you might not find that out by querying an AI model. Too woke.
This is an edition of Steven Levy’s Backchannel newsletter.