AI safety leader says ‘world is in peril’ and quits to study poetry
Liv McMahon, Technology reporter, and Ottilie Mitchell

An AI safety researcher has quit US firm Anthropic with a cryptic warning that the “world is in peril”.
In his resignation letter shared on X, Mrinank Sharma told the firm he was leaving amid concerns about AI, bioweapons and the state of the wider world.
He said he would instead look to pursue writing and studying poetry, and move back to the UK to “become invisible”.
His departure comes as Anthropic, best known for its Claude chatbot, has released a series of commercials aimed at OpenAI, criticising its rival’s move to show adverts to some users.
The company, which was formed in 2021 by a breakaway team of early OpenAI employees, has positioned itself as having a more safety-orientated approach to AI research compared with its rivals.
Sharma led a team there which researched AI safeguards.
He said in his resignation letter that his contributions included investigating why generative AI systems suck up to users, combating AI-assisted bioterrorism risks and researching “how AI assistants could make us less human”.
But he said that, despite enjoying his time at the company, it was clear “the time has come to move on”.
“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote.
He said he had “repeatedly seen how hard it is to truly let our values govern our actions” – including at Anthropic, where he said people “constantly face pressures to set aside what matters most”.
Sharma said he would instead pursue a poetry degree and his writing, adding in a reply: “I’ll be moving back to the UK and letting myself become invisible for a period of time.”
Those who depart AI firms which have loomed large in the latest generative AI boom – and which have sought to retain talent with huge salaries and compensation packages – often leave with plenty of shares and benefits intact.
Eroding principles
Anthropic calls itself a “public benefit corporation dedicated to securing [AI’s] benefits and mitigating its risks”.
In particular, it has focused on preventing risks it believes are posed by more advanced frontier systems, such as models becoming misaligned with human values, being misused in areas such as conflict, or growing too powerful.
It has released reports on the safety of its own products, including when it said its technology had been “weaponised” by hackers to carry out sophisticated cyber attacks.
But it has also come under scrutiny over its practices. In 2025, it agreed to pay $1.5bn (£1.1bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.
Like OpenAI, the firm also seeks to capitalise on the technology’s benefits, including through its own AI products such as its ChatGPT rival Claude.
Its recent commercials criticised OpenAI’s move to start running ads in ChatGPT.
OpenAI boss Sam Altman had previously said he hated ads and would use them as a “last resort”.
A former OpenAI researcher who resigned this week, in part over fears about the use of advertising in ChatGPT, has told BBC Newsnight she feels “really nervous about working in the industry”.
Zoe Hitzig said her concerns stemmed from the possible psychosocial impacts of a “new type of social interaction” that were not yet understood.
She said “early warning signs” of dependence on AI tools were “worrisome”, and that such dependence could “reinforce certain kinds of delusions” as well as negatively impact users’ mental health in other ways.
“Creating an economic engine that profits from encouraging these kinds of new relationships before we understand them is really dangerous,” she continued.
“We saw what happened with social media,” she said, noting “there’s still time to set up the social institutions, the forms of regulation that can actually govern this”. It was, she said, a “critical moment”.
Responding to BBC News, a spokesperson for OpenAI pointed to the firm’s principles, which state: “Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission and making AI more accessible.”
They added: “We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers.”