Anthropic Safeguards head Mrinank Sharma. Photo: LinkedIn Profile.

‘World in peril’: Anthropic Safeguards head Mrinank Sharma quits, citing AI safety concerns

@indiablooms | Feb 10, 2026, at 08:58 pm

Mrinank Sharma, the head of safeguards research at artificial intelligence firm Anthropic, has resigned, triggering widespread debate in the tech community over whether commercial pressures are eclipsing AI safety priorities.

Sharma announced his decision in a post on X on Monday, February 9.

His post, written in a reflective and poetic tone referencing writers Rainer Maria Rilke and William Stafford, was quickly dissected by AI researchers and commentators, many of whom suggested that compromises on safety may have prompted his exit.

In his resignation note, Sharma said it had become clear to him that it was time to move on, warning that the world faces danger not only from AI but from “a whole series of interconnected crises unfolding in this very moment”.

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” he wrote.

While Sharma did not cite specific incidents or decisions, he pointed to sustained pressure that made it difficult to uphold core values.

“I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he said. “I’ve seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most.”

Sharma added that one of his final projects focused on examining how AI assistants may “make us less human or distort our humanity”, a comment that fuelled online speculation that Anthropic’s recent push to accelerate product releases may have come at the cost of safety safeguards.

His resignation comes just days after Anthropic launched Claude Opus 4.6, an upgraded model aimed at improving office productivity and coding performance.

The company is also reported to be in talks to raise fresh funding that could value it at around $350 billion.

Sharma is not the only senior figure to have recently exited Anthropic. Harsh Mehta from the research and development team and AI scientist Behnam Neyshabur also announced last week that they had left the company to pursue new ventures, according to reports.
