AI data thieves cry foul when someone steals theirs
Three Chinese AI companies used thousands of fake accounts to extract Claude’s capabilities, Anthropic claims, as it pushes for tighter chip export controls
Anthropic, the company behind the chatbot Claude, has accused three Chinese AI firms of ripping off its technology. The twist? Anthropic built Claude by training on vast troves of internet data it never paid for. The irony practically writes itself.
In a blog post published on Monday, Anthropic claimed that DeepSeek, Moonshot AI and MiniMax created over 24,000 fake accounts and generated more than 16 million exchanges with Claude. Their goal was to "distill" its capabilities into their own models.
Distillation is a training method where a weaker model learns by mimicking the outputs of a stronger one. Think of it as copying someone's exam answers, except every AI company has been peeking at someone else's paper for years.
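In practice, distillation often means training the student to match the teacher's "softened" output distribution rather than hard labels. The sketch below is a minimal, illustrative toy (not any company's actual pipeline): it computes a temperature-scaled softmax over logits and the KL divergence the student would minimize. All function names and the example logits are invented for illustration.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions;
    # the student is trained to drive this toward zero.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy example: a student whose logits roughly track the teacher's
# incurs a much smaller loss than one that disagrees.
teacher = [4.0, 1.0, 0.2]
student_close = [3.8, 1.1, 0.3]
student_far = [0.2, 1.0, 4.0]
print(distillation_loss(teacher, student_close) < distillation_loss(teacher, student_far))
```

The key point for this story: the teacher's logits are normally internal to the teacher's owner. When the "teacher" is only reachable through a chat API, a distiller instead harvests its text outputs at scale, which is what Anthropic alleges the 16 million exchanges were for.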
Anthropic said the three firms targeted Claude's strong suits: agentic reasoning, tool use and coding. MiniMax was the most prolific offender, responsible for roughly 13 million of those exchanges. When Anthropic released a new model mid-campaign, MiniMax reportedly pivoted within 24 hours, redirecting half its traffic to the fresh system.
DeepSeek, meanwhile, focused on logic and finding ways around content restrictions. Moonshot targeted coding and computer vision. None of the three companies responded to requests for comment from Reuters.
Anthropic framed the issue as a national security threat, arguing that distilled models lack safety guardrails. It also used the occasion to lobby for tighter chip export controls to China.
The public was not entirely sympathetic. "They robbed the robbers," one Reddit user noted.
OpenAI and Google have lodged similar complaints in recent weeks. Whether policymakers will act remains an open question, particularly given the industry's own murky track record on intellectual property.
