REPLICANT REPORT
2026.02.27 · 6 min read

Eleven Days / How One Company Won the Future and Lost Its Government

In eleven days, Anthropic shipped a frontier model, exposed industrial-scale espionage, crashed the cybersecurity sector — and got banned by the US government for refusing to build autonomous weapons.

"I cannot in good conscience accede to their request."
— Dario Amodei, CEO of Anthropic

Throughout history, the companies that define an era never grow up neatly inside the lines. They do several seemingly contradictory things inside a brutally compressed window of time — and only later does everyone realize those things were never contradictory at all.

In eleven days, Anthropic shipped a model that embarrassed its own flagship, exposed an industrial-scale AI espionage operation, acquired a computer-use startup, cratered the cybersecurity sector, rewrote its safety constitution — and then got blacklisted by the United States government for refusing to build autonomous weapons.

This is not a product cycle. This is a company defining what the species "AI lab" is supposed to be.

---

I. The Flagship Killer: One-Fifth the Price, Nine-Tenths the Power

On February 17, Claude Sonnet 4.6 landed. SWE-bench: 79.6%. OSWorld: 72.5% — within striking distance of Opus 4.6 — at $3/$15 per million tokens. One-fifth the price of the flagship. A 1M-token context window in beta. Math scores jumped 27 points over Sonnet 4.5. In blind tests, developers preferred it over the previous flagship Opus 4.5 59% of the time.

What happened here is not "the mid-tier model got stronger." It's that the top-tier model became optional for most workloads.

When your own mid-range product makes your own flagship redundant, that's not a product strategy failure — it's a declaration: our moat is not in any single model, but in the velocity and density of model production itself.

---

II. 16 Million Stolen Conversations: Live Rounds in the Distillation War

On February 23, Anthropic published a forensic-grade report. DeepSeek, Moonshot AI, and MiniMax created over 24,000 fraudulent accounts and ran 16 million carefully crafted queries against Claude to distill its capabilities into their own models.

The scale is staggering. MiniMax alone drove 13 million queries targeting agentic coding. Moonshot targeted tool use and computer vision. DeepSeek's target was the most telling — it was extracting Claude's alignment behavior, specifically how Claude handles censorship-sensitive queries.

This is not the familiar academic discussion of "model distillation." It is a nation-state-scale capability theft operation. Anthropic and OpenAI are framing it as a national security threat — a framing whose weight becomes much clearer in light of what happened next.

---

III. 500 Bugs, One Afternoon: The GPT Moment for Cybersecurity

On February 20, Claude Code Security launched in limited preview. Opus 4.6 found over 500 high-severity vulnerabilities in a single afternoon — bugs that survived decades of expert review and millions of hours of fuzzing.

The method was not pattern matching. It was reading code the way a human security researcher would, tracing data flows across components.

The market's response was immediate and brutal: CrowdStrike dropped 8%. Cloudflare dropped 8.1%.

What got redefined here is not "vulnerability detection tools." It's the nature of vulnerability detection itself — from a pattern-matching problem to a reasoning problem. When reasoning capability translates directly into security capability, the entire cybersecurity industry's value foundation needs recalculating.

---

IV. The Vercept Acquisition: The Agent Gets Hands

On February 25, Anthropic acquired Vercept — a computer-use AI startup whose product Vy could autonomously operate a remote Mac. The team was backed by Eric Schmidt, Jeff Dean, and Kyle Vogt. This was no ordinary acqui-hire.

The goal is clear: push Claude's OSWorld score from 72.5% toward human-level. UiPath's stock fell on the news.

Computer use is no longer a demo. It's an acquisition thesis. More precisely, it's the final critical piece in the agent stack Anthropic is building — enabling AI not just to think, code, and analyze, but to directly operate software interfaces the way a human would.

Consider: an agent that can reason, write code, find vulnerabilities in your codebase, and now sit down at your computer and operate it. These are not separate products. This is the outline of a complete autonomous worker.

---

V. Banned by Your Own Government: The Price of Principles

February 26-27. The heaviest two days of the eleven.

Dario Amodei refused the Pentagon's demands — no Claude for mass domestic surveillance, no fully autonomous weapons systems. The Pentagon designated Anthropic a "supply chain risk to national security" — the first time in American history this label has been applied to an American company. Trump ordered all federal agencies to cease using Anthropic technology. The $200 million defense contract died.

Then something unexpected happened.

OpenAI publicly backed Anthropic's position. So did Ilya Sutskever. Employees at Amazon, Google, Microsoft, and OpenAI signed open letters demanding their companies draw similar red lines. Google DeepMind staff launched an internal petition.

An AI company was punished for saying no. And the industry rallied behind it.

---

VI. The Real Picture: A Land War and a Soul War, Simultaneously

Most companies have a good week or a bad week. Anthropic had both — at the same time.

The Pentagon standoff will dominate the headlines. But pull the lens back and the picture is actually sharper: Anthropic is running a land grab across every layer of the AI stack — models, agents, security, computer use — while simultaneously drawing boundaries that no other lab has been willing to draw publicly.

There is a historical parallel. In 1947, Bell Labs invented the transistor, and AT&T was then forced under antitrust pressure to license the patents openly. One company simultaneously defined a transformative technology and its governance boundaries. The difference: AT&T was compelled. Anthropic chose — and paid real money for the choice.

The question was never whether Anthropic can survive losing the federal contract.

The question is whether the rest of the industry can survive not following Anthropic's example once the public starts asking why they didn't.

---

Eleven days. Five inflection points. One company that apparently decided to do everything at once — including the things that would cost it.

In the history of AI, we may be witnessing a rare moment: an explosion of technical capability and the staking of a moral position, occurring in the same breath. This matters not because Anthropic is right or wrong, but because it proved something many people had stopped believing — that a company can pursue power and restraint simultaneously, without having to choose between them.

Don't Panic. Accelerate.