AI News

AI Security Tools Can Be Hacked, New Research Shows

A paper from Alias Robotics and Oracle detailed how prompt injection attacks can turn defensive AI security agents into offensive weapons.

Olivia Sharp 2 min read 584 views
Free
New research published September 2 revealed that AI-powered cybersecurity tools are themselves vulnerable, showing how prompt injection attacks can compromise defensive AI agents with up to 100% success.

A new research paper published on September 2, 2025, has revealed that the AI-powered cybersecurity tools designed to protect enterprise networks are themselves vulnerable to a fundamental class of exploits known as prompt injection attacks. The study, conducted by researchers from Alias Robotics and Oracle Corporation, demonstrates how defensive AI security agents can be turned into offensive weapons, with exploitation success rates as high as 100%.[9]

Turning Defenders into Attackers

The paper, titled “Cybersecurity AI: Hacking the AI Hackers via Prompt Injection,” showed that by using a range of exploits, from simple Base64 obfuscation to sophisticated Unicode homograph attacks, …

Archive Access

This article is older than 24 hours. Create a free account to access our 7-day archive.

Share this article

Related Articles