AI Security Tools Can Be Hacked, New Research Shows
A paper from Alias Robotics and Oracle detailed how prompt injection attacks can turn defensive AI security agents into offensive weapons.
A new research paper published on September 2, 2025, has revealed that the AI-powered cybersecurity tools designed to protect enterprise networks are themselves vulnerable to a fundamental class of exploits known as prompt injection attacks. The study, conducted by researchers from Alias Robotics and Oracle Corporation, demonstrates how defensive AI security agents can be turned into offensive weapons, with exploitation success rates as high as 100%.[9]
Turning Defenders into Attackers
The paper, titled “Cybersecurity AI: Hacking the AI Hackers via Prompt Injection,” showed that by using a range of exploits, from simple Base64 obfuscation to sophisticated Unicode homograph attacks, …