Anthropic Safeguards head Mrinank Sharma. Photo: LinkedIn Profile.

‘World in peril’: Anthropic Safeguards head Mrinank Sharma quits, citing AI safety concerns

@indiablooms | Feb 10, 2026, at 08:58 pm

Mrinank Sharma, the head of safeguards research at artificial intelligence firm Anthropic, has resigned, triggering widespread debate in the tech community over whether commercial pressures are eclipsing AI safety priorities.

Sharma announced his decision in a post on X on Monday, February 9.

His post, written in a reflective and poetic tone referencing writers Rainer Maria Rilke and William Stafford, was quickly dissected by AI researchers and commentators, many of whom suggested that compromises on safety may have prompted his exit.

In his resignation note, Sharma said it had become clear to him that it was time to move on, warning that the world faces danger not only from AI but from “a whole series of interconnected crises unfolding in this very moment”.

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” he wrote.

While Sharma did not cite specific incidents or decisions, he pointed to sustained pressure that made it difficult to uphold core values.

“I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he said. “I’ve seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most.”

Sharma added that one of his final projects focused on examining how AI assistants may “make us less human or distort our humanity”, a comment that fuelled online speculation that Anthropic’s recent push to accelerate product releases may have come at the cost of safety safeguards.

His resignation comes just days after Anthropic launched Claude Opus 4.6, an upgraded model aimed at improving office productivity and coding performance.

The company is also reported to be in talks to raise fresh funding that could value it at around $350 billion.

Sharma is not the only senior figure to have recently exited Anthropic. According to reports, Harsh Mehta from the research and development team and AI scientist Behnam Neyshabur also announced last week that they had left the company to pursue new ventures.
