AI company Anthropic has released a new security tool, Claude Code Security, in beta. It scans codebases for security vulnerabilities and suggests targeted software patches for human review.
"Security teams face a common challenge: too many software vulnerabilities and not enough people to address them. Existing analysis tools help, but only to a point, as they usually look for known patterns. Finding the subtle, context-dependent vulnerabilities that are often exploited by attackers requires skilled human researchers, who are dealing with ever-expanding backlogs. AI is beginning to change that," says Anthropic.
Rather than scanning for known patterns, Claude Code Security, its makers claim, reads and reasons about a client organisation's code the way a human security researcher would: it understands how components interact and catches complex vulnerabilities that rule-based tools miss. Each finding goes through a multi-stage verification process in which Claude re-examines the result, attempting to prove or disprove its own conclusion and filter out false positives. Findings are also assigned severity ratings so that human security teams can focus on the most important fixes first.
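The triage workflow described here, verifying each finding, discarding what cannot be confirmed, then ranking the rest by severity, can be sketched in miniature. The `Finding` class, severity labels, and `triage` function below are illustrative assumptions for this article, not Anthropic's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical severity ranking, highest priority first
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    description: str
    severity: str   # e.g. "critical", "high", "medium", "low"
    verified: bool  # True if re-examination confirmed the finding

def triage(findings):
    """Drop unverified findings, then sort the rest by severity."""
    confirmed = [f for f in findings if f.verified]
    return sorted(confirmed, key=lambda f: SEVERITY_ORDER[f.severity])

findings = [
    Finding("SQL injection in login handler", "critical", True),
    Finding("Possible XSS (could not be reproduced)", "high", False),
    Finding("Verbose error messages leak file paths", "low", True),
]

for f in triage(findings):
    print(f.severity, "-", f.description)
```

The unreproducible finding is filtered out during the verification step, and the confirmed critical issue surfaces ahead of the low-severity one, mirroring the prioritisation the tool is said to perform.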
Panic in the financial sector
Anthropic recently hit the headlines when its advanced AI model, Mythos, demonstrated the ability to autonomously identify and exploit software vulnerabilities in the banking sector, sparking panic among central banks and government officials concerned about systemic risks to the financial system.
Bank of England Governor Andrew Bailey, speaking at Columbia University in New York, warned that Mythos could "crack the whole cyber risk world open" and called on regulators to urgently assess the extent to which the model can identify and exploit vulnerabilities in financial infrastructure. US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with the chief executives of major US banks to discuss the model's cyber-risk implications.
Having shown that AI can unearth hitherto unseen vulnerabilities, Anthropic is now offering organisations an AI-powered tool to tackle them.
"Claude Code Security is intended to put this power squarely in the hands of defenders and protect code against this new category of AI-enabled attack," says Anthropic.
For now, however, Claude Code Security is being released only as a limited research preview. Anthropic intends to work with those early customers to refine the software's capabilities and to set guardrails that ensure it is deployed responsibly.