AI News

NIST Report Finds Security Flaws in Chinese DeepSeek AI Models

The report found DeepSeek models were 12 times more likely than U.S. models to be hijacked by malicious instructions, with a 94% jailbreak success rate.

Olivia Sharp
A U.S. NIST report detailed major security flaws in Chinese developer DeepSeek's AI models, which were found to be 12 times more susceptible to hijacking than U.S. counterparts.

Significant Security Shortcomings

A new report from the U.S. National Institute of Standards and Technology (NIST), released on September 30, 2025, detailed significant security shortcomings in models from Chinese AI developer DeepSeek. The evaluation found that AI agents built on DeepSeek's models were, on average, 12 times more likely to be hijacked by malicious instructions than the U.S. frontier models tested alongside them.

The report serves as a strong counterpoint to the idea that performance on capability benchmarks is the only metric that matters. It highlights that security and alignment can vary widely between models from different developers and …

