Claude Security — Anthropic Sells the Cure, Not the Audit
Anthropic opened Claude Security to Enterprise customers. The same team behind Mythos zero-day discovery now offers to defend your code. That conflict is the product.
Anthropic opened Claude Security — its Opus 4.7-powered vulnerability scanner — to public beta for all Claude Enterprise customers on April 30, 2026. The product had been quietly running as “Claude Code Security” in a limited research preview since February, tested by hundreds of organizations. Now it is live at claude.ai/security, with Team and Max plan access listed as “coming soon.” The timing is not subtle: three weeks earlier, Anthropic’s Frontier Red Team published Mythos Preview results showing their models had identified thousands of previously unknown zero-day vulnerabilities across every major operating system and every major web browser. The company that proved it can break everything is now selling the patch layer.
TL;DR
- What: Claude Security hit public beta for Enterprise customers — Opus 4.7 scanning codebases for vulnerabilities with fix generation via Claude Code
- How it differs: Stochastic data-flow tracing across entire codebases, not deterministic pattern matching like Semgrep or Snyk
- The catch: No explicit Zero Data Retention guarantee — your codebase flows through Anthropic’s infrastructure with tenant isolation and non-training commitments, but not the formal ZDR documentation some auditors require
- Action: Evaluate whether your compliance posture allows it; if it does, this is the most capable code scanner available today
Why This Matters
I’ve watched security tooling vendors try to “AI-wash” static analysis for three years. Bolt a language model onto Semgrep rules, call it “AI-powered,” ship it. Claude Security is different in one specific way: it does not match patterns. It traces data flows across files, reads component interactions, and reasons over entire codebases the way a human security researcher would — because the model doing the reasoning was trained by the same team that built the most capable autonomous exploit-discovery system on earth.
Traditional static analysis tools like Semgrep and Snyk operate deterministically. You run them twice, you get the same results. That is comforting and also fundamentally limited — they can only find what their rules describe. Claude Security is stochastic by design. Each scan adapts its analysis path, which means consecutive runs may surface different findings. For teams accustomed to deterministic tooling, this feels like a bug. For anyone who has watched a human pentester work through a codebase, it feels correct. Real vulnerabilities hide in the interactions between components, not in isolated pattern matches against known signatures.
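To make that distinction concrete, here is a toy sketch in Python. The vulnerable snippet, the regex rule, and the taint edges are all invented for illustration — this is not Semgrep's rule syntax or anything from Claude Security, just the shape of the two approaches side by side:

```python
import re

# Hypothetical snippet under review: the injection only exists across functions.
SOURCE = '''
def get_user_input(req):
    return req.params["q"]

def build_query(term):
    return "SELECT * FROM docs WHERE body LIKE '%" + term + "%'"

def handler(req, db):
    return db.execute(build_query(get_user_input(req)))
'''

# Deterministic pattern matching: flags only lines that literally match the rule.
RULE = re.compile(r'execute\(\s*["\']')  # "string literal passed straight to execute"

def pattern_scan(code: str) -> list[int]:
    return [i for i, line in enumerate(code.splitlines(), 1) if RULE.search(line)]

print(pattern_scan(SOURCE))  # [] -- the concatenation hides inside build_query

# Toy data-flow view: propagate taint from source to sink across call edges.
# (A hand-built stand-in for what whole-codebase tracing reasons about.)
FLOWS = {"handler": ["get_user_input", "build_query", "db.execute"]}
TAINT_SOURCES = {"get_user_input"}  # returns attacker-controlled data
SINKS = {"db.execute"}              # executes SQL

def trace(fn: str) -> list[str]:
    callees = FLOWS.get(fn, [])
    tainted = any(c in TAINT_SOURCES for c in callees)
    return [c for c in callees if c in SINKS and tainted]

print(trace("handler"))  # ['db.execute'] -- tainted data reaches the sink
```

The pattern rule returns nothing because no single line matches it; the flow trace flags the sink because it follows the data, not the syntax. That gap is exactly where cross-file vulnerabilities live.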
The false-positive problem is where this gets interesting. Snyk’s historical false-positive rate eroded security teams’ trust so badly that many orgs stopped triaging its output. Anthropic’s answer is a multi-stage validation pipeline where Claude challenges its own findings before surfacing them. Every result comes with a confidence rating and a detailed explanation of why the model believes the vulnerability is real. The architecture — model argues with itself before reporting — is fundamentally different from “pattern matched, here’s your ticket.”
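Anthropic has not published the pipeline’s internals, but the self-challenge shape can be sketched. Everything below is invented for illustration — the `Finding` fields, the critique heuristic, and the threshold are assumptions, with an actual model pass standing in for a toy rule:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    location: str
    confidence: float  # model's belief the issue is real
    rationale: str

def critique(finding: Finding) -> Finding:
    # Stand-in for a second model pass that argues against its own finding.
    # Here: discount confidence when the rationale lacks concrete reachability.
    if "reachable" not in finding.rationale:
        finding.confidence *= 0.5
    return finding

def validate(findings: list[Finding], threshold: float = 0.6) -> list[Finding]:
    # Multi-stage gate: challenge every candidate, surface only the survivors.
    return [f for f in map(critique, findings) if f.confidence >= threshold]

candidates = [
    Finding("sql-injection", "api/query.py:42", 0.9,
            "user input reachable at execute() without sanitization"),
    Finding("xss", "web/render.py:10", 0.8, "template variable, context unclear"),
]

survivors = validate(candidates)
print([(f.rule, f.confidence) for f in survivors])
# [('sql-injection', 0.9)] -- the weakly-evidenced finding is dropped, not filed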
Scans are stochastic. Running the same scan twice may produce different findings. This is by design, not a defect — but it means Claude Security cannot serve as a deterministic compliance gate the way traditional SAST tools do. Plan your workflow accordingly.
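One way to plan for that: treat individual scans as samples and gate only on findings that reproduce. The sketch below assumes a hypothetical workflow where you run N scans per commit and split the union into stable and advisory sets — nothing here is a documented Claude Security feature:

```python
def finding_counts(runs: list[set[str]]) -> dict[str, int]:
    """Count how many of N stochastic runs surfaced each finding."""
    counts: dict[str, int] = {}
    for run in runs:
        for finding in run:
            counts[finding] = counts.get(finding, 0) + 1
    return counts

# Three scans over the same commit; output varies run to run by design.
runs = [
    {"sqli:api/query.py", "ssrf:jobs/fetch.py"},
    {"sqli:api/query.py"},
    {"sqli:api/query.py", "xss:web/render.py"},
]

stable = {f for f, n in finding_counts(runs).items() if n == len(runs)}
advisory = set().union(*runs) - stable

print(sorted(stable))    # reproduced in every run -> candidates for blocking triage
print(sorted(advisory))  # surfaced in some runs -> review, but don't fail the build
```

Your deterministic SAST tools stay the hard CI gate; the stochastic scanner feeds the triage queue.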
The scan-to-fix loop is the workflow bet that matters most. Findings from Claude Security link directly into a Claude Code session where you can apply patches in context. Anthropic’s pitch is “detection to remediation in a single sitting” — no more filing a Jira ticket, waiting for a sprint, losing context. Whether this actually closes the loop or just shortens the handoff depends on your team’s review discipline, but collapsing the time between “found it” and “fixed it” from days to minutes is a genuine structural advantage if the fixes are sound.
Then there is the ecosystem play. CrowdStrike, Palo Alto Networks, SentinelOne, TrendAI, and Wiz are all integrating Opus 4.7 into their cybersecurity platforms as technology partners. Snyk — which you would expect to be a competitor — announced its own Claude integration on May 7, one week after Claude Security’s public beta launch, embedding Claude for automated vulnerability discovery and prioritization within Snyk’s platform. The security toolchain is consolidating around Anthropic’s models faster than most engineering leaders have noticed. This is not a product launch. It is a platform play.
If your org already uses CrowdStrike, Palo Alto Networks, or Snyk, check whether Opus 4.7 is already running in your security stack through a partner integration. You may be closer to Claude Security’s capabilities than you think — without needing to pipe your codebase directly into claude.ai.
The data retention question is the blocker nobody wants to talk about. Anthropic’s Enterprise terms confirm that code is not used to train future models and remains tenant-isolated. But an explicit Zero Data Retention guarantee — the kind SOC 2 Type 2 auditors and HIPAA compliance officers need to see documented in writing — has not been publicly committed for scan artifacts. Anthropic may retain scan data where required by law or for Usage Policy enforcement. For teams operating in regulated environments or running air-gapped infrastructure, this is not a “we’ll figure it out later” issue. It is a deal-breaker until Anthropic publishes formal ZDR commitments that auditors can point to.
Compare this to how Semgrep or Snyk deploy: on-premise runners, no data leaving your network, deterministic audit trails. Claude Security cannot offer that today. The capability gap is real — Claude Security sees things those tools cannot — but so is the compliance gap.
Research preview feedback already drove meaningful improvements: scheduled scans, the ability to dismiss findings with documented reasons, and CSV/Markdown exports for integration into existing workflows. These are table-stakes features for any security tool that wants enterprise adoption, and their addition signals Anthropic is listening to the hundreds of preview organizations that tested it.
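The CSV export is what makes downstream automation possible. Anthropic has not published the export schema, so the column names below are purely illustrative — a sketch of wiring dismissals-with-reasons into an auditable triage step:

```python
import csv
import io

# Hypothetical export format -- column names are assumptions, not documented.
EXPORT = """\
finding_id,severity,confidence,file,status,dismiss_reason
F-101,high,0.92,api/query.py,open,
F-102,low,0.41,web/render.py,dismissed,test-only code path
F-103,medium,0.77,jobs/fetch.py,open,
"""

def triage(csv_text: str, min_confidence: float = 0.6):
    """Keep open findings above a confidence floor; preserve dismissals for audit."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    actionable = [r for r in rows
                  if r["status"] == "open" and float(r["confidence"]) >= min_confidence]
    dismissed = [(r["finding_id"], r["dismiss_reason"]) for r in rows
                 if r["status"] == "dismissed"]
    return actionable, dismissed

open_now, dismissed = triage(EXPORT)
print([r["finding_id"] for r in open_now])  # ['F-101', 'F-103']
print(dismissed)                            # [('F-102', 'test-only code path')]
```

Keeping the dismissal reasons alongside the open queue is the point: when an auditor asks why F-102 was never fixed, the answer is in the export, not in someone’s memory.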
The Take
Here is what nobody in the security vendor space will say out loud: Anthropic is simultaneously the company most capable of attacking your code and the one offering to defend it. The team behind Mythos Preview — which found thousands of zero-days across every major OS and browser, including 271 vulnerabilities in Firefox alone that shipped as fixes in Firefox 150 — is the same team whose model now powers the scanner you are being asked to trust with your codebase. That is not evil. It is probably the most defensible security product positioning of 2026. Who better to find your vulnerabilities than the people who proved they can find everyone else’s?
But defensible positioning is not the same as blind trust. Before you pipe your entire codebase into claude.ai/security, answer three questions. First: does your compliance posture allow code to traverse a third-party API without formal ZDR guarantees? Anthropic promises tenant isolation and non-training use, but if your auditor needs an explicit Zero Data Retention clause in writing, it is not there yet. Wait. Second: can your team handle stochastic findings that differ between runs, or does your audit process require deterministic reproducibility? If the latter, Claude Security supplements your SAST tools — it does not replace them. Third: are you comfortable with a vendor relationship where the provider’s offensive capabilities are a feature, not a conflict?
If the answer to all three is yes, Claude Security is the most capable vulnerability scanner available today. Not because it is perfect — the data retention gap alone disqualifies it for significant segments of the market — but because it reasons about code the way your best security engineer does, and your best security engineer cannot scan your entire codebase before lunch.
The security toolchain is consolidating around Anthropic’s models whether you choose Claude Security directly or not. Your Snyk instance may already be running Claude under the hood. The question is not whether Anthropic’s models will analyze your code. It is whether you want to be deliberate about how.