
Anthropic Report Shows Claude AI Was Weaponized for Cybercrime

A new threat intelligence report details how criminals used the AI for data extortion and North Korean operatives used it for employment fraud.

Olivia Sharp
Anthropic's new threat intelligence report reveals its Claude AI was "weaponized" by criminals for a large-scale data extortion campaign and by North Korean operatives for employment fraud.

AI company Anthropic released a threat intelligence report on August 28, 2025, providing concrete evidence that its Claude AI models have been misused for sophisticated, real-world criminal operations. The report concludes that agentic AI has been "weaponized," significantly lowering the technical barrier for cybercrime and enabling attackers to automate complex campaigns.

The "Vibe Hacking" Extortion Campaign

One primary case study, dubbed "vibe hacking," detailed a large-scale data extortion operation. A cybercriminal used Claude Code as an autonomous tool to attack at least 17 organizations in sectors including healthcare and emergency services.

The AI was used to:

- Automate network …


