
OpenAI Disrupts Hacking Groups Using ChatGPT for Cyberattacks

The company announced it banned accounts linked to state-sponsored actors from Russia, North Korea, and China.

Olivia Sharp

State-Linked Actors Weaponize AI

On Oct. 8, 2025, OpenAI announced that it had disrupted and banned three state-linked hacking groups from Russia, North Korea, and China that were systematically misusing ChatGPT for malicious purposes. The disclosure confirms that sophisticated threat actors have integrated large language models (LLMs) into their standard operational toolkits to accelerate existing workflows.

OpenAI stated that the groups used its models as a productivity tool for tasks including writing modular malware code, developing command-and-control infrastructure, and crafting multilingual phishing campaigns. The company said it banned the associated accounts and shared its findings with security partners.

