AI News

Cybersecurity Firm Details New "Model-State Hijacking" AI Attack Vector

A report from CyberTrace on August 19, 2025, described a vulnerability in open-source frameworks that can corrupt AI model behavior.

Olivia Sharp

Cybersecurity firm CyberTrace released a report on August 19, 2025, that details a new class of AI vulnerability named "Model-State Hijacking." The attack technique allows a bad actor to manipulate an AI model’s internal state through carefully crafted prompts, causing it to generate malicious or biased output.

The vulnerability reportedly affects several popular open-source AI development frameworks. Unlike traditional software bugs, the exploit targets the fundamental mechanics of a model's attention mechanism rather than any flaw in the surrounding code.
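For background (this sketch is not from the CyberTrace report, which does not disclose technical details): the attention mechanism the exploit reportedly targets mixes information across input tokens via softmax-normalized weights, which is why every token in a prompt can influence the model's internal state. A minimal NumPy illustration of standard scaled dot-product attention:

```python
# Background illustration only -- standard scaled dot-product attention,
# the mechanism the reported exploit allegedly targets. It shows how all
# input tokens shape the attention weights that mix the model's state.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- one weight row per query token."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # each row sums to 1
    return weights @ V, weights

# Three tokens with 4-dimensional embeddings (toy sizes for illustration).
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(w.shape)  # (3, 3): one attention distribution per query token
```

Because the output is a weighted blend of all value vectors, an input that skews these weights skews everything downstream, which is consistent with the report's claim that crafted prompts can steer model behavior without touching the model's code.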

The Attack Method

According to the report, an attacker can use a "poisonous prompt" to corrupt the model's internal state without triggering safety filters. Once the …
