Anthropic logo. Photo: Anthropic/X

US-based AI company Anthropic has unveiled Claude Code Security, a new feature within its web-based Claude Code platform designed to detect and fix software vulnerabilities.

Why does it matter?

According to Anthropic, Claude Code Security can scan entire codebases for security flaws and suggest targeted patches for human review. The goal is to help security teams identify and resolve vulnerabilities that traditional rule-based tools often miss.

Security teams today face a growing imbalance: too many vulnerabilities and too few experts to address them. While existing tools detect known patterns, they struggle with subtle, context-dependent flaws — the kind attackers frequently exploit. Anthropic claims Claude can reason about code like a human researcher and detect novel, high-severity vulnerabilities.

At the same time, the company acknowledges the dual-use risk. “The same capabilities that help defenders find and fix vulnerabilities could help attackers exploit them,” it said, adding that the tool is designed to place this power firmly in the hands of defenders.

Claude Code Security is being released as a limited research preview for Enterprise and Team customers, with expedited access for maintainers of open-source repositories; Anthropic says the gradual rollout is intended to ensure responsible deployment.

How does it work?

Instead of simply scanning for known signatures, Claude analyzes how components interact, traces data flows through applications, and identifies complex vulnerabilities that rule-based systems overlook.
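To illustrate the kind of flaw that data-flow analysis catches and signature matching misses, consider this hypothetical Python sketch (not Anthropic's code or tooling): each function looks innocuous on its own, but tracing untrusted input across the call chain reveals a SQL injection.

```python
import sqlite3

# Hypothetical illustration: the vulnerability is only visible when
# tracing data flow across functions, not from any single line's
# signature.

def normalize(username: str) -> str:
    # Looks harmless in isolation: just trims whitespace.
    return username.strip()

def build_query(username: str) -> str:
    # Looks like ordinary string handling in isolation, but the
    # untrusted value is interpolated directly into the SQL text.
    return f"SELECT id FROM users WHERE name = '{username}'"

def lookup(conn: sqlite3.Connection, raw_input: str):
    # Vulnerable path: raw_input -> normalize -> build_query -> execute.
    return conn.execute(build_query(normalize(raw_input))).fetchall()

def lookup_fixed(conn: sqlite3.Connection, raw_input: str):
    # Patched version: a parameterized query keeps the untrusted value
    # out of the SQL text entirely.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (normalize(raw_input),)
    ).fetchall()
```

A payload such as `x' OR '1'='1` makes `lookup` return every row, while `lookup_fixed` treats the same payload as literal data and matches nothing.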

Each finding undergoes a multi-stage verification process. Claude attempts to validate or refute its own results to reduce false positives, assigns severity ratings, and provides a confidence score. Findings appear in a dedicated dashboard, where teams can review suggested patches. Nothing is implemented automatically — human approval is required for every fix.
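The review flow described above can be sketched in a few lines of Python. All names here (`Finding`, `apply_patch`) are hypothetical illustrations of the human-in-the-loop gate, not Anthropic's API:

```python
from dataclasses import dataclass

# A minimal sketch, assuming a finding carries a severity rating and a
# confidence score, and that no patch is applied without explicit
# human approval. Names and fields are invented for illustration.

@dataclass
class Finding:
    description: str
    severity: str           # e.g. "low" | "medium" | "high" | "critical"
    confidence: float       # model's self-assessed confidence, 0.0-1.0
    suggested_patch: str
    approved: bool = False  # set only by a human reviewer

def apply_patch(finding: Finding) -> str:
    # Nothing is implemented automatically: an unapproved finding is
    # never applied.
    if not finding.approved:
        raise PermissionError("human approval required before applying a patch")
    return finding.suggested_patch
```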

Built on real-world testing

Anthropic says the tool builds on over a year of cybersecurity research. Its Frontier Red Team has tested Claude in competitive Capture-the-Flag events and collaborated with institutions such as Pacific Northwest National Laboratory to explore AI-driven defense of critical infrastructure.

Using Claude Opus 4.6, the company claims it identified more than 500 previously undetected vulnerabilities in production open-source codebases — some of which had gone unnoticed for decades. Anthropic is currently working through responsible disclosure with maintainers.

The company says it also uses Claude internally to secure its own systems and aims to raise the overall security baseline across the industry.

“This is a pivotal time for cybersecurity,” Anthropic said, warning that attackers will increasingly use AI to find weaknesses faster — but defenders who act quickly can do the same and patch systems before exploitation occurs.

Claude Code Security is currently available in limited preview for Enterprise and Team customers.
