# AI Intelligence Briefing: February 24, 2026
Five stories that define the state of AI today. From Wall Street carnage to geopolitical scandal to a guy who accidentally built a robot army with a coding assistant.
## 1. The SaaSpocalypse: AI Agents Just Destroyed $1 Trillion in Software Value
Jefferies equity trader Jeffrey Favuzza gave it a name: the SaaSpocalypse. An apocalypse for software-as-a-service stocks. And the numbers back up the drama.
The IGV software ETF is down 23.6% year-to-date, already 30% off its late-September 2025 peak. Over $1 trillion in total software and services market capitalization has been destroyed. Price-to-sales ratios across the sector have compressed from 9x to 6x, the lowest since the mid-2010s. Short sellers have pocketed an estimated $20-24 billion in 2026 so far.
### The Trigger: Anthropic Goes After Workflows
The carnage started on February 3 when Anthropic launched Claude Cowork with legal automation plugins: document review, risk flagging, NDA triage, compliance tracking. The significance wasn't the capability itself. It was that Anthropic shifted from model supplier to workflow owner, directly competing with enterprise software.
Roughly $285-300 billion in market value was wiped out within 24-48 hours. Thomson Reuters dropped 16-18% in a single day. LegalZoom fell 20%. CS Disco dropped 12%.
Then on February 23, Anthropic announced Claude Code for COBOL modernization. IBM crashed 13.2%, its worst single-day drop since 2000; the stock is now down 27% in February alone. The company whose legacy systems run 95% of U.S. ATM transactions suddenly faced an existential question: what happens when understanding legacy code costs less than maintaining it?
### Seats Are Dead
SaaS has been priced on a per-seat model for 20 years. More employees equals more licenses equals more revenue. AI agents break this equation completely. A company with 500 customer support licenses deploys AI agents. Same output, 50 licenses. That's a 90% seat reduction: not a growth slowdown, but structural revenue destruction.
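The revenue math implied here can be sketched with illustrative numbers; the per-seat price below is an invented assumption, not a figure from any vendor:

```python
# Illustrative per-seat SaaS revenue math for the 500-to-50 seat scenario.
SEAT_PRICE_PER_MONTH = 150  # hypothetical list price in USD, assumed for illustration

seats_before = 500
seats_after = 50  # AI agents absorb the workload of the other 450 seats

arr_before = seats_before * SEAT_PRICE_PER_MONTH * 12  # annual recurring revenue
arr_after = seats_after * SEAT_PRICE_PER_MONTH * 12

reduction = 1 - seats_after / seats_before
print(f"ARR before: ${arr_before:,}")      # $900,000
print(f"ARR after:  ${arr_after:,}")       # $90,000
print(f"Seat reduction: {reduction:.0%}")  # 90%
```

At any assumed price point, the 500-to-50 move removes 90% of that contract's recurring revenue, which is why markets read it as structural destruction rather than a cyclical slowdown.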
The scoreboard: Salesforce down 40-45% from highs. Atlassian down 50% since January. Adobe down 30-38%. Figma down 79% from IPO peak. IBM down 27% in February alone.
Anthropic is systematically going industry by industry. Legal on February 3. Cybersecurity on February 21. COBOL on February 23. Each announcement triggers another wave of selling.
## 2. The Scariest Graph in AI: METR Shows Agent Capabilities Doubling Every 4 Months
There's a single graph that has the AI industry losing sleep. An Anthropic safety researcher saw it and changed the direction of his research. Another Anthropic employee posted: "mom come pick me up, I'm scared."
It comes from METR (Model Evaluation & Threat Research), a nonprofit that evaluates frontier AI capabilities. Their methodology: assemble ~228 real software engineering tasks, have human experts complete them, and measure how long each takes. A model's "time horizon" is the length of task, measured in human completion time, that it can finish successfully roughly 50% of the time.
### The Progression
- Mid-2020: GPT-3, ~9 seconds
- Early 2023: GPT-4, ~4 minutes
- Mid-2024: GPT-4o, single-digit minutes
- Early 2025: o1/o3, 15-30 minutes
- Late 2025: Claude Opus 4.5, ~5 hours
- February 2026: Claude Opus 4.6, 14.5 hours
That last number means Opus 4.6 can reliably complete tasks that take a skilled human engineer nearly a full workday. The updated doubling time from 2023 onward: every ~123 days. The pace is accelerating.
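The extrapolation behind these figures is simple compound doubling. A minimal sketch, assuming a clean exponential from the Opus 4.6 data point (the start date is an approximation):

```python
import datetime

# Project METR-style time horizons forward, assuming the horizon keeps
# doubling every ~123 days from the Opus 4.6 data point (14.5 hours).
DOUBLING_DAYS = 123
START = datetime.date(2026, 2, 1)  # approximate Opus 4.6 data point (assumed)
START_HORIZON_HOURS = 14.5

def horizon_at(date: datetime.date) -> float:
    """Projected time horizon in hours, pure exponential extrapolation."""
    elapsed = (date - START).days
    return START_HORIZON_HOURS * 2 ** (elapsed / DOUBLING_DAYS)

for d in (datetime.date(2026, 12, 1), datetime.date(2027, 12, 1)):
    hours = horizon_at(d)
    print(d, f"~{hours:.0f} hours (~{hours / 40:.1f} 40-hour work-weeks)")
```

Under these assumptions, late 2026 lands around two work-weeks per task and late 2027 around fifteen, consistent with the "weeks, then months" trajectory the trend line implies.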
Sam Altman (February 20): "The inside view at the companies โ the world is not prepared. It's going to be a faster takeoff than I originally thought."
Dario Amodei (February 13): "We are near the end of the exponential," meaning the endgame, not a plateau. He reported that "100% of today's SWE tasks are done by the models" at Anthropic.
If the current doubling rate holds, the trend line points to AI agents handling tasks that take humans weeks by late 2026, and months by 2027. The projection for 99% of AI R&D task automation: 2032.
The scariest part isn't any single data point. It's that every update to this graph has been ahead of schedule.
## 3. DeepSeek Trained on Nvidia Blackwell Chips Despite U.S. Export Ban
Chinese AI startup DeepSeek trained its latest AI model on Nvidia's most advanced chip, the Blackwell, despite U.S. export controls explicitly barring shipments of the processor to China. That's according to a senior Trump administration official who spoke to Reuters on Monday.
The Blackwell chips are reportedly clustered at a DeepSeek data center in Inner Mongolia. U.S. officials believe DeepSeek will attempt to remove technical indicators that might reveal its use of American AI chips, essentially erasing the evidence.
DeepSeek's new model is expected to be released as soon as next week.
### The Policy Fracture
White House AI Czar David Sacks and Nvidia CEO Jensen Huang argue that shipping chips to China actually discourages Huawei from developing alternatives. China hawks see the DeepSeek revelation as proof that export controls are being openly violated.
Chris McGuire, former Biden NSC official: "This shows why exporting any AI chips to China is so dangerous. Given China's leading AI companies are brazenly violating U.S. export controls, we obviously cannot expect that they will comply with conditions that would prohibit them from supporting the Chinese military."
If China's top AI companies can obtain banned chips regardless of controls, it forces a fundamental question: do the controls actually work, or do they just make Washington feel better while driving China to build workarounds?
## 4. Guy Uses Claude to Hack His DJI Vacuum, Accidentally Controls 7,000 Devices
The setup is pure internet legend: Sammy Azdoufal, an AI strategist, drops $2,000 on a DJI Romo robot vacuum. Instead of using the normal app like everyone else, he fires up Claude Code and asks it to reverse-engineer the vacuum's communication protocol so he can drive it with an Xbox controller.
Claude delivers. The app works. There's just one problem: instead of connecting only to his vacuum, it connected to over 7,000 of them.
As reported by The Guardian, Tom's Hardware, and Popular Science, the vulnerability didn't just grant motor control. It exposed floor plans of people's homes and live video feeds from the vacuums' cameras.
This is the double-edged sword of AI-powered development: the same capability that lets a hobbyist build a fun project in an afternoon can accidentally crack open the security of thousands of devices. The barrier to entry for finding vulnerabilities just dropped to "ask an AI nicely."
As one commenter put it: "This is the funniest and most terrifying IoT story of the year, and it's only February."
## 5. Citrini's Fictional '2028 Global Intelligence Crisis' Crashes Real Markets
On February 22, Citrini Research published a macro memo written from the perspective of June 2028, modeling what happens when AI's success becomes the very thing that crashes the economy. It carries a prominent disclaimer: "What follows is a scenario, not a prediction."
Markets didn't care about the disclaimer.
### The Scenario
The fictional 2028: S&P 500 has fallen 38% from its October 2026 highs. Unemployment has hit 10.2%. The economy no longer resembles anything familiar.
The key concept: Ghost GDP, output that shows up in national accounts but never circulates through the real economy. AI boosts productivity and corporate profits, inflating nominal GDP. But actual household income collapses because the gains come from replacing humans, not empowering them. As Citrini puts it: a single GPU cluster generating the output of 10,000 white-collar workers is "more economic pandemic than economic panacea." How much money do machines spend on discretionary goods? Zero.
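The Ghost GDP mechanism reduces to simple accounting. A toy sketch with assumed figures (the salary and spending share are illustrative inputs, not numbers from the report):

```python
# Toy accounting for "Ghost GDP": output stays in national accounts
# while the wages that used to circulate as household spending vanish.
workers_replaced = 10_000
avg_salary = 100_000        # assumed average white-collar salary, USD
propensity_to_spend = 0.9   # assumed share of income spent rather than saved

output_value = workers_replaced * avg_salary  # still booked as GDP
lost_household_spending = output_value * propensity_to_spend

print(f"Output still booked in GDP: ${output_value / 1e9:.1f}B")
print(f"Household spending removed: ${lost_household_spending / 1e9:.2f}B")
# The GPU cluster producing that output spends $0 on discretionary goods.
```

The asymmetry is the whole point: the $1B of output survives in the statistics while the ~$0.9B of circulating demand it used to fund disappears.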
### The Intelligence Displacement Spiral
1. AI capabilities improve
2. Companies need fewer workers
3. Layoffs increase
4. Displaced workers spend less
5. Margin pressure drives more AI investment
6. Return to step 1
No natural brake.
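The loop can be made concrete as a toy feedback simulation. Every coefficient here is an invented assumption, chosen only to show that the loop contains no internal stopping condition:

```python
# Toy simulation of the displacement spiral. All coefficients are invented
# for illustration; the point is the absence of any brake term.
employment = 1.0     # workforce as a fraction of the starting level
ai_capability = 1.0  # relative capability index

for year in range(1, 6):
    ai_capability *= 1.5              # step 1: capabilities improve
    displaced = 0.05 * ai_capability  # steps 2-3: layoffs scale with capability
    employment = max(0.0, employment - displaced)
    demand = employment               # step 4: spending tracks employment
    # Step 5: weaker demand squeezes margins, funding more AI investment,
    # which feeds next year's capability growth. No term in this loop
    # ever pushes employment back up.
    print(f"year {year}: employment {employment:.2f}, demand {demand:.2f}")
```

Whatever coefficients you pick, employment and demand only ratchet downward; a stabilizer would have to come from outside the loop (policy, new demand for labor), which is exactly the scenario's point.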
### Real Impact
TheStreet Pro attributed a wave of selling directly to the report. Indian IT stocks โ TCS, Infosys, Wipro โ dropped as the report predicted a "structural obit" for the sector.
The irony: a hypothetical scenario about AI destroying economic value actually destroyed economic value in real time. Markets moved on a thought experiment.
Citrini's piece isn't doomer fiction. It's the logical extension of a simple question: what if AI bullishness is right, and that's actually bearish? If AI truly replaces tens of millions of white-collar workers, the very consumers who drive 70% of GDP, then the companies profiting from AI are simultaneously destroying their own customer base.
Whether the 2028 scenario plays out exactly as modeled is beside the point. The point is that nobody in power has a plan for when it starts.