Google Threat Intelligence Group Identifies Self-Mutating AI Malware
The malware, named "PROMPTFLUX," uses LLMs during execution to rewrite its own code and evade antivirus detection, Google reported.
A New Class of Malware
Google's Threat Intelligence Group (GTIG) reported that it has identified the first malware families that use large language models (LLMs) during execution to become "autonomous and adaptive."
This marks a significant development in AI-powered cyberattacks. According to GTIG's blog post, state-sponsored threat actors are experimenting with AI not just for productivity gains but for "novel AI-enabled operations." GTIG said this is the first time it has observed malware families querying LLMs mid-execution to alter their behavior.
How 'PROMPTFLUX' Works
One specific malware family, named "PROMPTFLUX," was identified as being written …