Finance ministers and top bankers raise serious concerns about Mythos AI model

Faisal Islam, Economics editor

Mythos AI model could ‘create vulnerabilities for security for the entire banking system’

Finance ministers, central bankers and financiers have expressed serious concerns about a powerful new AI model they fear could undermine the security of financial systems.

The development of the Claude Mythos model by Anthropic has led to crisis meetings, after it found vulnerabilities in many major operating systems.

Experts say it potentially has an unprecedented ability to identify and exploit cyber-security weaknesses – though others caution further testing is needed to properly understand its capabilities.

Canadian Finance Minister François-Philippe Champagne told the BBC that Mythos had been discussed extensively at the International Monetary Fund (IMF) meeting in Washington DC this week.

“Certainly it is serious enough to warrant the attention of all the finance ministers,” he said.

“The difference is that the Strait of Hormuz – we know where it is and we know how large it is… the issue that we’re facing with Anthropic is that it’s the unknown, unknown.”

“This is requiring a lot of attention so that we have safeguards, and we have process in place to make sure that we ensure the resiliency of our financial systems,” he added.

What is Claude Mythos?

Mythos is one of Anthropic’s latest models developed as part of its broader AI system called Claude, a rival to OpenAI’s ChatGPT and Google’s Gemini.

It was revealed by Anthropic earlier this month, when developers responsible for testing AI models’ performance on so-called “misaligned” tasks – those that go against human values, goals and behaviour – said it was “strikingly capable at computer security tasks”.

Citing concerns it could surface old software bugs or find ways to easily exploit system vulnerabilities, Anthropic has not released the model.

Instead it has made Mythos available to tech giants like Amazon Web Services, CrowdStrike, Microsoft and Nvidia as part of an initiative called Project Glasswing – which it calls an “effort to secure the world’s most critical software”.


On Thursday, Anthropic released a new version of an existing model, Claude Opus, saying it would allow Mythos’ cyber capabilities to be tested in less powerful systems.

Concerns raised about Mythos may exceed the chatter around previous AI models, but some cyber-security experts have questioned how justified they are, especially given the model has not been tested by the wider industry to see how capable it actually is.

The UK’s AI Security Institute has been given access to a preview version of it, and has published the only independent report into the model’s cyber-security skills.

Its researchers noted it was a powerful tool able to find many security holes in undefended environments, but suggested Mythos was not dramatically better than its predecessor, Claude Opus 4.

“Our testing shows that Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed,” the report authors said.

It is also not the first time an AI developer has claimed that its models’ capabilities mean they should not be released – something critics argue is a tactic to build hype.

In February 2019, OpenAI cited similar fears when it chose to stagger the release of GPT-2, an earlier version of the models that now power its biggest tool, ChatGPT.

‘Understand the vulnerabilities’

Top bankers are to be given access to the model in advance to test out their systems.

The chief executive of Barclays, CS Venkatakrishnan, told the BBC: “It’s serious enough that people have to worry.

“We have to understand it better, and we have to understand the vulnerabilities that are being exposed and fix them quickly.”

He added that “this is what the new world is going to be” – referencing a much more connected financial system, with both opportunities and vulnerabilities.

While developer Anthropic has said the model has already exposed multiple security vulnerabilities in some critical operating systems, financial systems and web browsers, governments and banks are being offered access in advance of its public release to help protect their own systems.

Bank of England governor Andrew Bailey told the BBC the development had to be taken very seriously: “We are having to look very carefully now what this latest AI development could mean for the risk of cyber crime.”

He added: “The consequence could be that there is a development of AI, of modelling, which makes it easier to detect existing vulnerabilities in sort of core IT systems, and then obviously cyber criminals – the bad actors – could seek to exploit them.”

The US Treasury confirmed it had raised the issue with major banks, encouraging them to test their systems before any public release of Mythos by Anthropic.

Financial industry sources indicated that another prominent US AI company could soon release a similarly powerful model but without the same safeguards.

James Wise, a partner at Balderton Capital, is chair of the Sovereign AI unit, a venture capital fund that will invest in British AI companies, backed by £500m of government funding.

He said Mythos is “the first of what will be many more powerful models” that can expose systems’ vulnerabilities.

His unit is “investing in British AI companies that are tackling that – companies working in AI security and safety”, he told the BBC’s Today Programme.

“We hope the models that expose vulnerabilities are also the models which will fix them.”
