Anthropic logo. Photo: Anthropic/X

Claude Code Security: Anthropic is now using AI to find flaws in software

@indiablooms | Feb 21, 2026, at 05:15 pm

US-based AI company Anthropic has unveiled Claude Code Security, a new feature within its web-based Claude Code platform designed to detect and fix software vulnerabilities.

Why does it matter?

According to Anthropic, Claude Code Security can scan entire codebases for security flaws and suggest targeted patches for human review. The goal is to help security teams identify and resolve vulnerabilities that traditional rule-based tools often miss.

Security teams today face a growing imbalance: too many vulnerabilities and too few experts to address them. While existing tools detect known patterns, they struggle with subtle, context-dependent flaws — the kind attackers frequently exploit. Anthropic claims Claude can reason about code like a human researcher and detect novel, high-severity vulnerabilities.

At the same time, the company acknowledges the dual-use risk. “The same capabilities that help defenders find and fix vulnerabilities could help attackers exploit them,” it said, adding that the tool is designed to place this power firmly in the hands of defenders.

Claude Code Security is being released as a limited research preview for Enterprise and Team customers, with expedited access for maintainers of open-source repositories to ensure responsible deployment.

How does it work?

Instead of simply scanning for known signatures, Claude analyzes how components interact, traces data flows through applications, and identifies complex vulnerabilities that rule-based systems overlook.
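The distinction the article draws, between matching known signatures and reasoning about how data flows through a program, can be illustrated with a toy example. The regex rule and code snippets below are hypothetical, invented for illustration, and are not part of Claude Code Security:

```python
import re

# A hypothetical signature-based rule: flag string concatenation
# appearing inside an SQL execute() call on a single line.
SIGNATURE = re.compile(r'execute\(.*\+.*\)')

# Flaw and sink on the same line: the signature fires.
SNIPPET_A = 'cursor.execute("SELECT * FROM users WHERE id=" + user_id)'

# The same injection flaw, but the tainted query is built two steps
# before it reaches execute(), so a line-local pattern never sees it.
# Finding it requires tracing the value from source to sink.
SNIPPET_B = '''
query = "SELECT * FROM users WHERE id=" + user_id
wrapped = query
cursor.execute(wrapped)
'''

print(bool(SIGNATURE.search(SNIPPET_A)))  # True: pattern matches directly
print(bool(SIGNATURE.search(SNIPPET_B)))  # False: needs data-flow reasoning
```

Both snippets contain the same vulnerability; only the first is visible to the pattern matcher. Context-dependent flaws like the second are the kind Anthropic says rule-based tools tend to miss.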

Each finding undergoes a multi-stage verification process. Claude attempts to validate or refute its own results to reduce false positives, assigns severity ratings, and provides a confidence score. Findings appear in a dedicated dashboard, where teams can review suggested patches. Nothing is implemented automatically — human approval is required for every fix.
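The workflow described above, verified findings carrying severity and confidence ratings, with no patch applied until a human signs off, can be sketched roughly as follows. The class and field names are illustrative assumptions, not Anthropic's actual API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One candidate vulnerability, as a reviewer might see it on a dashboard."""
    description: str
    severity: str           # e.g. "low" / "medium" / "high" / "critical"
    confidence: float       # 0.0-1.0, from the model's self-verification pass
    suggested_patch: str
    approved: bool = False  # nothing is implemented automatically

    def apply(self) -> str:
        # Human approval gates every fix; unapproved patches never ship.
        if not self.approved:
            raise PermissionError("Patch requires human review before applying")
        return self.suggested_patch

finding = Finding(
    description="SQL query built from unsanitized user input",
    severity="high",
    confidence=0.87,
    suggested_patch="use a parameterized query",
)

finding.approved = True   # a reviewer signs off on the dashboard
print(finding.apply())    # only now is the patch released
```

The design point the article emphasizes is the gate itself: the tool proposes, a human disposes.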

Built on real-world testing

Anthropic says the tool builds on over a year of cybersecurity research. Its Frontier Red Team has tested Claude in competitive Capture-the-Flag events and collaborated with institutions such as Pacific Northwest National Laboratory to explore AI-driven defense of critical infrastructure.

The company claims that, using Claude Opus 4.6, it identified more than 500 previously undetected vulnerabilities in production open-source codebases — some of which had gone unnoticed for decades. Anthropic is currently working through responsible disclosure with maintainers.

The company says it also uses Claude internally to secure its own systems and aims to raise the overall security baseline across the industry.

“This is a pivotal time for cybersecurity,” Anthropic said, warning that attackers will increasingly use AI to find weaknesses faster — but defenders who act quickly can do the same and patch systems before exploitation occurs.

Claude Code Security is currently available in limited preview for Enterprise and Team customers.
