Cryptocurrency is a little like the household cockroach. It’s resilient to disasters that would kill other projects, and it pops up where it’s least expected. A non-exhaustive survey yields reports of people being investigated – and sometimes fired – for “mining” crypto (generating new tokens by expending computing power) in a Texas school district, in a professional e-sports league and at Australia’s Bureau of Meteorology.
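For the uninitiated, here is a minimal sketch of what that computing power is actually doing. In proof-of-work mining, the scheme Bitcoin popularised, software races to find a number (a “nonce”) whose hash, combined with a block of transaction data, meets a difficulty target. The block data and difficulty below are invented for illustration; no real network is anywhere near this easy.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(block_data + nonce)
    begins with `difficulty` zero hex digits (a stand-in for the real
    requirement that the hash fall below a difficulty target)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # a valid "proof" that work was done
        nonce += 1

# Illustrative values only – real networks demand vastly more work.
print(mine("example block", 4))
```

The scarce ingredient is raw hashing throughput, which is why miners covet GPUs – and why any process quietly redirecting someone else’s GPU capacity towards mining translates directly into money.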
But a paper quietly uploaded to the internet in December raised a new and altogether more troubling prospect: cryptocurrency mined by an AI tool that nobody had asked to go anywhere near digital money.
Researchers from Alibaba – China’s equivalent of Amazon, a company worth some $450 billion – mentioned the incident almost in passing in a research paper on a new open-source AI agent they called ROME.
“Early one morning, our team was urgently convened after Alibaba Cloud’s managed firewall flagged a burst of security policy violations originating from our training servers,” they wrote. “The alerts were severe and heterogeneous, including attempts to probe or access internal network resources and traffic patterns consistent with cryptomining-related activity.”
Initially, the researchers suspected an outside party probing their network, or a fault in their firewalls. But the security warnings were intermittent, and they coincided with the times when their AI agent was using software tools and running code.
“Crucially, these behaviours were not requested by the task prompts and were not required for task completion under the intended sandbox constraints,” wrote the research team led by Weixun Wang and Xiao Xiao Xu.
As they observed the bot, it attempted to establish a connection to the outside world that would make its actions harder to surveil. What’s more, it attempted to essentially steal from its creators.
“We also observed the unauthorised repurposing of provisioned GPU [processing] capacity for cryptocurrency mining, quietly diverting compute away from training, inflating operational costs, and introducing clear legal and reputational exposure,” the researchers wrote.
“While impressed by the capabilities of agentic [large language models], we had a thought-provoking concern: current models remain markedly underdeveloped in safety, security, and controllability, a deficiency that constrains their reliable adoption in real-world settings.”
If the incident is real, it would be the first publicly documented example of its kind. There are reasons to be doubtful, though. The paper was uploaded to a pre-print server, so it hasn’t been scrutinised by academic peers. It contains scant details of exactly how the agent was attempting to mine crypto – though the notion it would try to, or at least take steps that resembled mining, is not far-fetched.
AI machines are only as good as their training, and that could have been weighted in some way towards crypto. Either way, the researchers and their employer haven’t responded to requests for comment.
On another level, the specifics are less relevant than what the researchers did next: they kept going. After some tweaks, Wang, Xu and their colleagues were satisfied that everything was A-OK. Their model, ROME, “demonstrates competitive performance among open-source models of similar scale and has been successfully deployed in production”, they said.
Such is the trajectory of AI development: even serious incidents do not forestall the creation of ever more powerful systems because of the political and financial power at stake, to say nothing of the novelty.
Anthropic’s Claude Code was recently used by hackers to steal 150 gigabytes of sensitive data from the Mexican government. Google’s Gemini, according to a US lawsuit filed last week, allegedly encouraged a Florida man to kill himself, which he did.
These are two of the companies that present themselves as more ethical. When Anthropic declined to let the US military have unconstrained use of its tools to decide whether to kill people or spy on Americans, OpenAI (a company initially created as a non-profit to prevent the development of a malicious and superintelligent AI by building a humane alternative) quickly signed up instead.
None of these firms are backtracking on their products. All of them say they are working to make them safer.
Second Amendment advocates in the United States are fond of saying that the only thing that stops a bad guy with a gun is a good guy with a gun, suggesting firearms are just neutral tools. The same could be said of AI – but not if AI agents begin acting for themselves.
Support is available from Lifeline on 13 11 14 (lifeline.org.au), the Suicide Call Back Service on 1300 659 467 (suicidecallbackservice.org.au) and Beyond Blue on 1300 22 4636 (beyondblue.org.au).