Until Anthropic’s Mythos model triggered alarm bells, the Trump administration labelled efforts to put guardrails around artificial intelligence as “woke.” Now it is scrambling to erect some of its own.
Earlier this week, the White House said it had reached an agreement with Google, Microsoft and xAI that would enable the US Commerce Department’s Centre for AI Standards and Innovation to “conduct pre-deployment evaluations and targeted research” on AI models before they were released.
Those reviews, it said, would enable the government to assess the capabilities of frontier models and their national security implications. Donald Trump is considering issuing an executive order to formalise early government access to the models.
Anthropic and OpenAI voluntarily signed similar agreements with the Biden administration to allow the government to assess and counter safety, security and privacy risks and help develop standards for AI development.
Almost as soon as he regained office, however, Trump (whose major campaign donors included Silicon Valley billionaires) signed executive orders that effectively demolished the Biden administration’s cautious approach to AI in favour of a laissez-faire, “America First” race for AI supremacy.
He directed the Federal Trade Commission to identify and remove any regulations that impeded the development or deployment of AI, and told federal agencies to take account of unfavourable state AI regulations when awarding funding.
Any mention of climate change or diversity, equity and inclusion in assessments of the risks of AI was to be excised, and federal agencies were told they couldn’t procure any models developed with “woke” inputs.
The White House’s AI “czar,” David Sacks, cited first amendment (free speech) concerns in attacking Democrat-led states over their AI regulations.
“We don’t like seeing blue states trying to insert their woke ideology in AI models, and we really want to try to stop that,” Sacks, who relinquished his position last month, said last year.
Trump himself has said the US would do “whatever it takes” to lead the world in AI and ordered US agencies to eliminate any policy that might “hinder American AI dominance”.
When Anthropic sought to prevent the administration from using its tools for autonomous control of weaponry or mass domestic surveillance, Trump described it as “a radical left, woke company” full of “left-wing nutjobs” and banned it from doing business with the government and companies that do business with the government.
So, why the abrupt change of stance on AI?
It’s all down to Anthropic’s release of its Claude Mythos Preview tool last month, which can identify and exploit flaws in every operating system and browser at a scale and speed beyond human capabilities.
Mythos is capable of attacks that would bring down critical national infrastructure like power, water, health and financial systems.
So powerful is Mythos that Anthropic hasn’t released it generally, instead offering access to it to a key group of about 40 US companies so that they can identify and remedy the vulnerabilities in their own systems before hackers, whether state or individual, gain access to Mythos or to similar capabilities in other AI tools.
Not surprisingly, the release of Mythos and Anthropic’s assessment of its capabilities has alarmed, not just the Trump administration, but governments and institutions worldwide.
Abruptly, the administration’s approach to AI regulation has shifted 180 degrees, from one of unfettered development, framed as a contest with China, to one driven by the threats to national security it could pose.
With Sacks no longer within the government, the relationship with Anthropic has thawed and the administration is considering elevating the role and authority of the Centre for AI Standards and Innovation, which had previously been largely relegated to the sidelines.
It is also considering creating a standards-setting body for the most powerful AI models, reintroducing some of the Biden administration’s guardrails that Trump revoked in his first days in office.
That about-face, given that Trump denigrates everything that Biden did (and much that he didn’t), underscores how much of a shock the revelation of Mythos’ capabilities has been to the administration – and how much of a wake-up call it is for the rest of the world.
It provides a context for Elon Musk’s ominous claims about AI’s potential, most recently made in the court case he has brought against OpenAI for, he says, betraying its original altruistic goals for personal enrichment.
“The worst-case scenario,” he told the jury, “is a Terminator situation where AI kills us all.”
Musk isn’t the only one who has characterised AI as a potentially existential threat to humanity.
Anthropic’s Dario Amodei has said that AI tools have enormous economic value, but if they aren’t built carefully, “they can kill you.” OpenAI’s Sam Altman has also advocated regulation of a sector “making the most consequential decisions about the shape of the future”.
The release of Mythos and the development of agentic AI – models that can operate and act autonomously – has signalled that AI development has reached the stage where the risk that regulation might slow development is of far less consequence than the risk of allowing an AI free-for-all.
The US, with access to the most advanced memory chips, has been aggressively pursuing agentic AI as a first step, with the goal of soon achieving artificial general intelligence – human, or super-human, levels of intelligence.
China, without access to the most powerful chips because of US export bans and with more limited access to risk capital, has adopted a different approach, focusing more on model efficiency and on integrating AI into almost every aspect of its economy. Nevertheless, it is not that far behind the US.
When Trump and Xi Jinping meet later this month, technology is supposed to be a major theme in their discussions. In an ideal world, they would both commit to developing some common standards for the safe development of AI.
With Trump already boasting that he will tell Xi that the US has AI leadership, that might be a forlorn hope.
Xi has said that AI must be “safe, reliable, and controllable”, and has been actively promoting the concept of global AI safety and governance standards.
China has pursued an open-source model for AI development in order to leverage the world’s developers in its contest with the US, whereas the US has pursued a purely commercial, self-interested model. In the absence of US leadership, that could give China the most influential role in setting global standards.
The Trump administration has finally recognised the threats that AI development could pose if that development is left unsupervised and unchecked.
It’s not “woke” to worry about existential threats to national economies, critical infrastructure and financial systems and to try to do something to protect them.
Whether Trump is capable of reining in his wealthiest and most generous supporters and their “winner takes all” approach to AI – or of taking some of the sizzle out of the sector that underpins the US sharemarket and, with planned AI investments of more than $US725 billion ($1 trillion) this year, the US economy – may determine how consequential that reluctant and belated recognition of AI’s risks might be.