New Research Exposes Significant AI Security and Data Poisoning Risks

Two studies found that large language models can be compromised with just 250 poisoned documents and that vision systems are vulnerable to manipulation.

Olivia Sharp
Two separate research papers published on October 9, 2025, exposed significant security vulnerabilities in foundational AI technologies, highlighting a growing tension between the rapid deployment of AI systems and their underlying security. One study focused on data poisoning in large language models (LLMs), while the other demonstrated a new method for attacking AI computer vision systems.

Data Poisoning in Large Language Models

A joint study by the UK's AI Safety Institute (AISI), the AI company Anthropic, and the Alan Turing Institute revealed that LLMs are highly susceptible to "data poisoning" attacks. The research found that as few as 250 poisoned documents inserted into a model's training data can be enough to compromise it.

