Anthropic Safeguards head Mrinank Sharma. Photo: LinkedIn Profile.

‘World in peril’: Anthropic Safeguards head Mrinank Sharma quits, citing AI safety concerns

| @indiablooms | Feb 10, 2026, at 08:58 pm

Mrinank Sharma, the head of safeguards research at artificial intelligence firm Anthropic, has resigned, triggering widespread debate in the tech community over whether commercial pressures are eclipsing AI safety priorities.

Sharma announced his decision in a post on X on Monday, February 9.

His post, written in a reflective and poetic tone referencing writers Rainer Maria Rilke and William Stafford, was quickly dissected by AI researchers and commentators, many of whom suggested that compromises on safety may have prompted his exit.

In his resignation note, Sharma said it had become clear to him that it was time to move on, warning that the world faces danger not only from AI but from “a whole series of interconnected crises unfolding in this very moment”.

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” he wrote.

While Sharma did not cite specific incidents or decisions, he pointed to sustained pressure that made it difficult to uphold core values.

“I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he said. “I’ve seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most.”

Sharma added that one of his final projects focused on examining how AI assistants may “make us less human or distort our humanity”, a comment that fuelled online speculation that Anthropic’s recent push to accelerate product releases may have come at the cost of safety safeguards.

His resignation comes just days after Anthropic launched Claude Opus 4.6, an upgraded model aimed at improving office productivity and coding performance.

The company is also reported to be in talks to raise fresh funding that could value it at around $350 billion.

Sharma is not the only senior figure to have recently exited Anthropic. According to reports, Harsh Mehta from the research and development team and AI scientist Behnam Neyshabur also announced last week that they had left the company to pursue new ventures.
