
Toni Martin
April 23, 2026 · 6 min read

Earlier this month, something unusual happened in the world of AI.
Anthropic announced a new model called Claude Mythos Preview - and then immediately said you couldn't have it.
Not because it wasn't ready. Not because it needed more testing in the usual sense. But because during development, they discovered it was capable of something that made them stop and think very carefully about what they'd built.
Mythos can find and exploit previously unknown security vulnerabilities - in every major operating system and every major web browser - faster and more autonomously than almost any human expert. It found bugs that had been sitting undetected for decades. The oldest was a 27-year-old flaw in OpenBSD, an operating system known specifically for its security. It didn't just find these vulnerabilities. It built working exploits to take advantage of them.
Anthropic's own red team described the findings as warranting "substantial coordinated defensive action across the industry."
So instead of a public release, Anthropic launched something called Project Glasswing - a controlled programme giving early access to a select group of around 50 organisations including Amazon, Apple, Google, Microsoft, Nvidia and JPMorgan. The idea: give defenders access first, so they can find and patch vulnerabilities in their own systems before bad actors get anywhere near a model with these capabilities.
This is where it gets interesting for anyone who thought Mythos was a niche AI story.
Bank of England Governor Andrew Bailey told the BBC he was having to look "very carefully" at what this development could mean for cybercrime risk. Barclays CEO CS Venkatakrishnan - who confirmed Barclays has access to the model - told the BBC: "It's serious enough that people have to worry." He added, with what sounds like resigned pragmatism: "This is what the new world is going to be."
Canadian Finance Minister François-Philippe Champagne, speaking at the IMF spring meetings in Washington, described Mythos as an "unknown, unknown" - more concerning than a conventional geopolitical risk like the Strait of Hormuz because at least with that, you know where it is and how large it is. With Mythos, the full implications are still being mapped.
Federal Reserve Chair Jerome Powell and US Treasury Secretary Scott Bessent held discussions with major US bank CEOs about the risks. The UK's Cross Market Operational Resilience Group - which includes the Bank of England, the FCA, HM Treasury and the National Cyber Security Centre - convened to brief major banks and financial institutions. The UK Government's own AI Security Institute independently evaluated Mythos and concluded it represented "a step up over previous frontier models."
These are not excitable tech commentators. These are the people responsible for the stability of the global financial system. When they hold emergency meetings about an AI model, it is worth paying attention.
But could this all just be hype - a safety-conscious company marketing its own caution? It is a fair question and worth asking honestly.
Anthropic has built its entire brand around being the safety-conscious AI company. Warning that your own product is too dangerous to release is, if nothing else, extremely on-brand. And this is not the first time an AI company has done it. In 2019, OpenAI claimed its GPT-2 model was too powerful to release publicly. That decision is now widely regarded as having been overblown - GPT-2 looks relatively modest compared to what followed.
Ciaran Martin, former head of the UK's National Cyber Security Centre, told the BBC that the claims had "really shaken people" - but also acknowledged the tension: "For some this is an apocalyptic event, for others it seems to be a lot of hype."
Here is my honest take. When central bank governors, finance ministers and the CEOs of the world's largest banks are holding emergency meetings and going on record with named quotes about something, it is probably not purely a marketing exercise. You can question Anthropic's motives. You cannot easily dismiss Andrew Bailey, Christine Lagarde and Jerome Powell simultaneously.
And even if you remain sceptical about Mythos specifically, the broader point stands. Anthropic did not train Mythos to have these capabilities. They emerged as a side effect of general improvements in reasoning and coding. That means similar capabilities will emerge in other models - from other companies, with different attitudes to safety. The question is not whether this kind of AI exists. It is who has it and what they do with it.
Let's bring this back to where most of us actually live - running small businesses, building products, trying to understand and use AI without getting lost in the noise.
The honest answer is that Mythos itself is not an immediate threat to your business. You are not a high-value target for nation-state level cyberattacks. The organisations in the crosshairs right now are banks, critical infrastructure providers and government agencies - the kinds of systems that run on decades-old code with vulnerabilities that have never been found because finding them at scale was previously impossible.
Still, there are two takeaways worth holding onto. The first is a reminder about security basics. If you are building products with AI - particularly anything that handles customer data, payments or sensitive information - the security of what you build matters. It has always mattered. What Mythos tells us is that the tools available to bad actors are becoming significantly more powerful. That is not a reason to panic. It is a reason to make sure you are not taking shortcuts. Row-level security, proper access controls, not shipping code you don't understand - these things matter now more than ever.
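To make the "row-level security" point concrete: the core idea is that the ownership check lives next to the data access itself, so no caller can forget it. Here is a minimal Python sketch - all names (`Record`, `fetch_record`, `Forbidden`) are illustrative, not from any particular framework:

```python
from dataclasses import dataclass


@dataclass
class Record:
    id: int
    owner_id: int
    body: str


# A stand-in for a database table of customer data.
DB = {
    1: Record(id=1, owner_id=42, body="customer notes"),
    2: Record(id=2, owner_id=7, body="payment details"),
}


class Forbidden(Exception):
    pass


def fetch_record(record_id: int, requesting_user_id: int) -> Record:
    """Return a record only if the requesting user owns it.

    The authorisation check is baked into the only data-access path,
    rather than left to each calling endpoint to remember.
    """
    record = DB[record_id]
    if record.owner_id != requesting_user_id:
        raise Forbidden(
            f"user {requesting_user_id} cannot read record {record_id}"
        )
    return record
```

In a real product you would push this check even lower - for example into the database itself with Postgres row-level security policies (`CREATE POLICY`) - so that even a buggy query cannot leak another customer's rows.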
The second is a bigger picture point. What we are watching with Mythos is AI doing something that was not explicitly designed into it. The capabilities emerged. That is remarkable and it is also the thing that makes this moment different from previous technology waves. The pace of what AI can do is not following a predictable roadmap. It is surprising the people who build it.
As a founder using these tools every day, I find that both exciting and worth sitting with - not with fear, but with the kind of attention it deserves.
Navigating the AI landscape as a founder? Come and join the conversation at Vibe Coding Lab - a free community for founders building in the AI era.
More from The Vibe Check

Entry-level developer roles are disappearing fast. The data from Stanford's 2026 AI Index is stark. But there's a twist coming that nobody's really talking about yet.
Read Story→
I built a business teaching automation. Now I'm using it less than ever. A refund request made me stop and ask why - and whether the tools I used to swear by still have a future.
Read Story→
"Claude Mania" is real. Founders and enterprises are going all in on one tool. But is that actually a smart strategy - or are we sleepwalking into a dependency we haven't thought through?
Read Story→