Mark Zuckerberg spent the better part of a billion dollars – some reports suggest considerably more – wooing AI researchers to dinner at his Lake Tahoe compound over the past year. He handed out nine-figure pay packages – in some cases to Australians – and effectively blew up his entire AI operation. He fired people, hired new people, and installed a 29-year-old as the man responsible for salvaging his company’s reputation in the field he has declared the defining bet of his career.
On Thursday, the first results arrived.
Muse Spark, Meta’s new AI model, landed with enough fanfare to push the company’s stock up 6.5 per cent in a single day. It’s the first step in what Zuckerberg has promised investors will be “personal superintelligence for everyone.”
By all accounts the model itself is good, and it arrives in a fiercely competitive race with very high stakes. Independent testers who got early access say Meta is now, credibly, a competitive AI lab for the first time in years. The model performs well on reasoning benchmarks, handles health queries with depth and draws on social content – think Instagram posts, Facebook threads and Reels – that no competitor can replicate.
That last piece is potentially very valuable and is underpinning Meta’s AI competitive strategy. It is also, depending on your view of the tech giant’s relationship with privacy and personal data, something worth watching carefully.
What it isn’t, and what Meta has acknowledged directly, is cutting-edge across the board. Coding remains a weak spot, while long-horizon agentic tasks – the kind where an AI works autonomously through complex, multi-step problems – are still a work in progress.
To understand why Thursday’s announcement mattered so much, you have to go back just over a year, to when Meta’s previous model Llama 4 arrived with a thud.
It underperformed, and was followed by revelations that Meta had been quietly gaming a third-party benchmark used by the industry to rank models – essentially, fiddling the scoreboard. The company later admitted as much. Then, the tech giant’s biggest planned model, something called Behemoth, was shelved without ever being released publicly. Senior AI staff left, with some heading straight to competitors.
It was, by any reading, an embarrassing period for a company that had loudly proclaimed its open-source AI strategy as a gift to humanity and a competitive masterstroke simultaneously.
Zuckerberg, as he usually does, responded not with humility but with money and rapid reorganisation. He brought in Alexandr Wang, the former Scale AI chief executive whose data-labelling startup Meta had just valued at $US14 billion, and handed him the keys to a new entity called Meta Superintelligence Labs. Wang assembled a tight inner circle of elite researchers and set them to work.
Their first publicly released work – Muse Spark – arrived on Thursday. It’s a closed model, meaning that Meta holds on to the underlying code, training data and model weights. Meta says it hopes to open-source future versions.
Meanwhile, across town in San Francisco just a couple of days earlier, Anthropic set off a different kind of alarm.
Where Meta was managing investor expectations and talking up health chatbot features, Anthropic was doing something unusual: announcing a model it says is too dangerous to release to the public.
Claude Mythos Preview, the company disclosed this week, is already capable of finding and exploiting so-called zero-day vulnerabilities in software – flaws that even the software’s own developers don’t know exist.
Those capabilities weren’t merely hypothetical, either – the model has reportedly already located a 27-year-old bug in OpenBSD, the operating system quietly underpinning a significant chunk of the world’s secure network infrastructure. It also found a 16-year-old flaw in the video encoding tool FFmpeg and has catalogued vulnerabilities in Linux software running on most of the world’s servers.
Rather than a public launch, Anthropic is routing access through something it calls Project Glasswing – a coalition of more than 40 companies, including Apple, Amazon, Microsoft, Google, CrowdStrike and Palo Alto Networks, backed by $US100 million in usage credits. The idea is to use AI to find and patch vulnerabilities before malicious actors do the same thing.
The juxtaposition of these two stories in the same week is something worth sitting with.
Meta is celebrating because it has rebuilt itself into a credible AI competitor after a humiliating collapse. Anthropic, meanwhile – which announced its annual revenue had tripled to more than $US30 billion – is ringing the loudest alarm bell the industry has heard in some time, warning that AI’s capabilities have now outpaced the security protecting some of the world’s critical infrastructure.
Zuckerberg’s model can recommend nearby restaurants while Anthropic’s model, which the company has decided you cannot be trusted with yet, can dismantle security systems that have been considered safe for decades.
Both things are true at once, and together they sum up what you need to know about the state of AI in April 2026.