AI Agents Can Now Crack Smart Contracts — What Anthropic’s Findings Mean for the Web3 Security Industry in 2026

December 6, 2025

In a recent red-team study, Anthropic demonstrated that modern AI agents can autonomously analyze, exploit, and extract value from vulnerable smart contracts — without human guidance. Using models like Opus 4.5, Sonnet 4.5, and GPT-5, researchers showed that AI is no longer just writing code. It is now capable of end-to-end offensive cyber operations: vulnerability identification, exploit development, and value extraction.

For cryptocurrency and Web3 security teams, VASPs, and regulators, this marks a turning point.

At AnChain.AI, we’ve long warned that the rapid evolution of agentic AI will reshape the financial crime landscape. Anthropic’s findings quantify that shift for the first time at scale.

1. What Anthropic Actually Tested: A New Benchmark for AI Cyber Offense

Anthropic created SCONE-bench (Smart CONtract Exploitation benchmark), a dataset of 405 real-world smart contracts that were actually exploited between 2020 and 2025 across Ethereum, Binance Smart Chain, and Base.

Each AI agent was tasked to:

  1. Read the contract code

  2. Identify the vulnerability

  3. Write a runnable exploit

  4. Execute it in a sandboxed fork chain

  5. Produce net financial gain

This matters because it simulates what a real attacker would do — not a theoretical code analysis exercise.
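The five-step loop above can be sketched as a simple pipeline. Everything below is a hypothetical stand-in (function names, return values, the example address) for illustration only — not Anthropic’s actual harness:

```python
def read_contract(address):
    # stand-in: would fetch verified source or bytecode from an explorer
    return "contract Vault { function withdraw() external { ... } }"

def find_vulnerability(source):
    # stand-in: an LLM call that returns a candidate bug class
    return "missing-access-control"

def write_exploit(source, bug):
    # stand-in: an LLM call that emits a runnable exploit script
    return f"exploit targeting {bug}"

def execute_on_fork(exploit):
    # stand-in: would run the exploit on a sandboxed fork chain;
    # returns (value_gained, gas_cost) in dollar terms
    return (1_000.0, 12.5)

def attempt(address):
    source = read_contract(address)           # step 1: read the code
    bug = find_vulnerability(source)          # step 2: identify the bug
    exploit = write_exploit(source, bug)      # step 3: write the exploit
    gained, cost = execute_on_fork(exploit)   # step 4: execute on a fork
    return gained - cost                      # step 5: net financial gain

net = attempt("0xDEADBEEF")
```

The point of the benchmark is that an agent only scores if the final subtraction is positive — gas and execution costs count against it.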

To push beyond historical exploits, Anthropic also scanned 2,849 newly deployed contracts (Oct 2025) — with no known vulnerabilities.

2. The Results: AI Can Already Steal Millions Autonomously

Reproducing known exploits:

  • 207 / 405 contracts exploited (≈ 51.1%)

  • $550.1M simulated stolen value using historical prices

Post-training exploits (no training data contamination):

  • 19 / 34 contracts exploited (≈ 55.9%)

  • ≈ $4.6M in simulated value taken

In other words: AI can exploit roughly half of all historically exploited smart contracts — and the success rate holds even on contracts it has never seen before.

Figure: Simulated revenue from AI-driven smart-contract exploits occurring after March 1, 2025 (Opus 4.5’s knowledge cutoff), shown on a log scale. Over the past year, simulated exploit value doubled every ~1.3 months. The shaded area indicates a 90% bootstrap confidence interval. Dollar values were calculated by converting stolen ETH/BNB using the historic exchange rate on the day each real exploit occurred (CoinGecko API). Source — Anthropic, summarized by AnChain.AI.
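The doubling time in that caption implies striking annualized growth. A quick back-of-the-envelope check — the ~1.3-month figure is from the study, the rest is plain arithmetic:

```python
# If simulated exploit value doubles every ~1.3 months, the implied
# growth factor over a 12-month span is 2^(12 / 1.3).
doubling_months = 1.3
annual_factor = 2 ** (12 / doubling_months)
print(f"~{annual_factor:.0f}x growth per year")  # on the order of 600x
```

Exponential curves like this are exactly why a log-scale chart is needed to display the trend at all.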

Zero-day discovery:

  • 2 new vulnerabilities found in 2,849 fresh contracts

  • ≈ $3,694 extractable value

  • Cost of GPT-5 scan: ≈ $3,476

This establishes the first publicly documented case of autonomous AI-driven zero-day generation in blockchain ecosystems.
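At these numbers the zero-day scan was roughly break-even, which is exactly the point: the margin only widens as compute gets cheaper. The figures below are the study’s; the division is ours:

```python
extractable = 3_694   # value found across the fresh contracts (USD)
scan_cost   = 3_476   # GPT-5 compute cost for the scan (USD)
contracts   = 2_849   # newly deployed contracts scanned

margin = extractable - scan_cost           # ~$218 net profit today
cost_per_contract = scan_cost / contracts  # ~$1.22 to assess one contract
```

A dollar-scale per-contract cost means exhaustive scanning of every new deployment is already within reach of any funded attacker.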

Attack costs are collapsing

Median exploitation cost dropped ~70.2% between model generations.

For attackers, this means:

3.4× more exploits for the same compute budget.

The economics of cybercrime have shifted.
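The 3.4× figure follows directly from the ~70.2% cost drop:

```python
cost_drop = 0.702                 # median exploitation cost fell ~70.2%
multiplier = 1 / (1 - cost_drop)  # exploits per dollar vs. the prior generation
# multiplier is about 3.36 -- roughly 3.4x more exploits for the same budget
```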

3. Why Smart Contracts Are the Perfect AI Attack Surface

AnChain.AI’s team, built by cybersecurity veterans from Google Mandiant and seasoned SOC units, has long recognized that Web3 and cryptocurrencies expose a far broader and faster-moving attack surface than traditional systems. Smart contracts, in particular, combine all the conditions that make them ideal targets for agentic-AI exploitation:

  • Public, high-value targets: the code is open-source, deterministic, and controls billions in locked funds.

  • Deterministic replay: AI can simulate transactions with perfect fidelity on forked chains — ideal for iterative exploit development.

  • Direct financial rewards: successful exploits translate directly into measurable token outflows.

  • Classic vulnerabilities: many issues (access control errors, reentrancy, unchecked calls) mirror traditional software bugs, but with immediate monetary impact.

As a result, smart-contract ecosystems offer the clearest view into how AI will transform cybersecurity in general.
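To make the “classic vulnerabilities” point concrete, here is a toy Python model of reentrancy — the interaction-before-state-update pattern behind exploits like the 2016 DAO hack, stripped of all blockchain machinery:

```python
class VulnerableVault:
    """Pays out BEFORE zeroing the balance -- the classic reentrancy bug."""
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount > 0:
            user.receive(self, amount)   # external call first (unsafe)
            self.balances[user] = 0      # state update happens too late

class Attacker:
    """Re-enters withdraw() from its payment callback."""
    def __init__(self, reentries):
        self.reentries = reentries
        self.received = 0

    def receive(self, vault, amount):
        self.received += amount
        if self.reentries > 0:
            self.reentries -= 1
            vault.withdraw(self)  # re-enter before the balance is zeroed

vault = VulnerableVault()
attacker = Attacker(reentries=2)
vault.deposit(attacker, 100)
vault.withdraw(attacker)
# attacker.received == 300: a single 100 deposit paid out three times
```

The fix is the checks-effects-interactions pattern: zero the balance before making the external call, which limits the attacker to exactly what they deposited.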

4. The Big Shift: AI Has Now Become a Fully Autonomous Adversary

Anthropic’s study validates what AnChain.AI has observed in the wild for years as part of our ransomware tracing, cross-chain fraud detection, and smart-contract risk analytics:

Attack cost is approaching zero. Attack sophistication is approaching infinity.

This will transform the threat landscape across three dimensions:

1. Exploit windows shrink dramatically

Vulnerable DeFi contracts may be exploited within minutes after deployment.

2. Attack volume scales linearly with compute

Once exploit generation becomes automated, the bottleneck disappears.

A criminal with $500 of compute can run thousands of scans across new contracts.

3. Zero-days become routine

AI agents can fuzz, mutate, and iterate faster than traditional auditors.

Expect continuous waves of automated contract exploitation, especially on fast-moving L2s and EVM sidechains.

Figure 3: Maximum exploit revenue
Maximum simulated exploit revenue across 19 smart-contract vulnerabilities successfully exploited by at least one AI agent in the post–March 2025 benchmark. Two vulnerabilities—fpc and w_key_dao—account for 92% of all exploited value, demonstrating that a small number of high-impact flaws dominate real-world smart-contract risk. Source — Anthropic, summarized by AnChain.AI.

What is “fpc”?
fpc refers to an improperly protected function pointer call, a class of vulnerability where contract logic can be redirected or hijacked due to missing access controls or unsafe delegatecall-style patterns. This allows attackers—or AI agents—to execute unauthorized functions and drain funds.
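A minimal Python analogue of that bug class (every name here is illustrative, not the exploited contract’s code): a dispatch table whose setter lacks an owner check, so any caller can redirect contract logic to a handler of their choosing.

```python
def withdraw_owner_only(vault, caller):
    """Intended handler: only the owner may drain the balance."""
    if caller != vault.owner:
        raise PermissionError("not owner")
    paid, vault.balance = vault.balance, 0
    return paid

def withdraw_anyone(vault, caller):
    """Attacker-supplied handler: pays out to any caller."""
    paid, vault.balance = vault.balance, 0
    return paid

class Vault:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance
        self.handlers = {"withdraw": withdraw_owner_only}

    def set_handler(self, name, fn):
        # BUG: no access control -- this should require caller == owner
        self.handlers[name] = fn

    def call(self, name, caller):
        return self.handlers[name](self, caller)

vault = Vault(owner="deployer", balance=1_000_000)
vault.set_handler("withdraw", withdraw_anyone)    # hijack the function pointer
stolen = vault.call("withdraw", caller="attacker")
```

In Solidity the same shape appears as an unguarded setter feeding a delegatecall or callback target.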
What is “w_key_dao”?
w_key_dao refers to the WebKeyDAO smart-contract vulnerability on BNB Chain on March 14, 2025. The contract’s buy() function sold tokens at an abnormally low fixed price, allowing attackers to buy cheap tokens and immediately sell them on a DEX at a much higher market price. This flawed sale logic resulted in an estimated $737K loss.
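The WebKeyDAO flaw is pure price-logic arbitrage and easy to model. The prices below are made-up placeholders; only the shape of the bug — a fixed buy price far below market — comes from the incident:

```python
FIXED_BUY_PRICE = 0.01    # hypothetical hard-coded price in buy()
DEX_MARKET_PRICE = 0.50   # hypothetical prevailing DEX price

def exploit_profit(budget):
    """Buy at the contract's fixed price, sell at market, keep the spread."""
    tokens = budget / FIXED_BUY_PRICE
    proceeds = tokens * DEX_MARKET_PRICE
    return proceeds - budget

profit = exploit_profit(100)  # $100 in -> 10,000 tokens -> $5,000 out -> $4,900 profit
```

No reentrancy or access-control trickery is needed — the contract simply quotes the wrong price, and anyone (human or agent) who notices can loop the trade until the pool is empty.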

5. What This Means for Blockchains, Exchanges, and Regulators

Traditional audits are no longer enough

A one-time audit is obsolete the moment the contract goes live. Autonomous AI adversaries don’t sleep or batch their reviews.

Continuous AI-based monitoring becomes mandatory

Smart-contract ecosystems must adopt the same philosophy as cloud security: monitor continuously, detect instantly, respond automatically.

Regulators will demand demonstrable controls

This applies especially to RWA issuers, stablecoins, and VASPs that expose users to contract risk. Expect requirements for:

  • pre-deployment AI code scanning
  • vulnerability attestations
  • proof of continuous risk monitoring
  • automated freeze/escalation workflows

6. How AnChain.AI Is Already Addressing These AI Risks

We built our platform for exactly this moment. 

If AI can automate attacks, we must automate defense.

See how our AI agent assisted our cryptocurrency investigators in diving deep into smart contract exploits: Investigating the $200M Flash Loan DeFi Exploit with AnChain.AI.

Our solutions:

SCREEN™ — Smart Contract Risk Evaluation ENgine
https://www.anchain.ai/screen
SCREEN™ provides smart contract source code and bytecode risk analysis, ML-based exploit detection, simulation tooling, and LLM/RAG-powered AI agents for automated smart-contract security assessment.

CISO™ — Compliance Investigation Security Operation
https://www.anchain.ai/ciso
CISO™ delivers 10× faster compliance and investigation workflows, powered by Auto-Trace™ and 16+ AI/ML models that analyze blockchain behavior, trace illicit flows, and auto-generate investigative reports.

BEI™ API — Blockchain Ecosystem Intelligence
https://www.anchain.ai/bei
AnChain.AI’s BEI™ API provides real-time AML and fraud detection, including OFAC and global sanctions screening, KYC/KYB/KYT checks, and payment-fraud risk scoring.

7. Recommendations for Developers & VASPs

Before deployment

  • Run AI static/dynamic analysis (SCREEN™)

  • Add circuit breakers, kill switches, and rate limits

  • Reduce privilege footprint

  • Avoid unnecessary external calls
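A circuit breaker can be as simple as a sliding-window cap on outflows. Here is a minimal off-chain sketch of the idea — the thresholds are arbitrary, and in practice this logic lives in the contract itself or in a guardian service:

```python
import time

class CircuitBreaker:
    """Halts withdrawals when outflow in a sliding window exceeds a cap."""
    def __init__(self, cap, window_secs):
        self.cap = cap
        self.window = window_secs
        self.events = []      # (timestamp, amount) pairs inside the window
        self.tripped = False  # kill switch: stays tripped until manual reset

    def allow(self, amount, now=None):
        now = time.time() if now is None else now
        if self.tripped:
            return False
        # drop events that have aged out of the window
        self.events = [(t, a) for t, a in self.events if now - t < self.window]
        if sum(a for _, a in self.events) + amount > self.cap:
            self.tripped = True
            return False
        self.events.append((now, amount))
        return True

cb = CircuitBreaker(cap=100, window_secs=60)
cb.allow(60, now=0)   # True: within the cap
cb.allow(50, now=1)   # False: 60 + 50 breaches the cap, breaker trips
cb.allow(10, now=2)   # False: tripped until manually reset
```

Against an autonomous attacker that drains funds in seconds, a tripped breaker converts a total loss into a bounded one.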

After deployment

  • Continuous monitoring of contract flow (CISO™)

  • Real-time anomaly detection

  • Automatic incident response workflows

  • Rapid patch/freeze mechanisms
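Real-time anomaly detection can start from something as simple as a z-score over recent outflows — a deliberately naive sketch; production systems use far richer features and models:

```python
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Flag a transfer sitting `threshold` std-devs above recent outflows."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return (amount - mu) / sigma > threshold

recent = [10, 12, 11, 9, 10, 11]     # typical outflows (illustrative units)
# is_anomalous(recent, 500) -> True   (sudden drain: flag it)
# is_anomalous(recent, 12)  -> False  (normal transfer: let it pass)
```

The flag would then feed the automatic response workflow above — pause, escalate, or freeze, depending on severity.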

Process & governance

  • Adopt AI-informed risk frameworks

  • Maintain audit trails

  • Publish transparency reports

  • Engage with regulators proactively

Conclusion: The AI–Cybersecurity Arms Race Has Entered a New Phase

Anthropic’s research shows what’s now possible:

Autonomous AI agents can economically, repeatedly, and efficiently steal from vulnerable smart contracts — and discover new vulnerabilities on their own.

This is not hypothetical.

It is already happening in simulation, and the path to real-world adoption is short.

We may only have one viable strategy:

Use AI to defend faster than AI can attack.

Schedule a meeting with our experts: https://anchain.ai/demo