AI News

Tenable Research Finds 7 New Vulnerabilities in OpenAI's ChatGPT

The report details how prompt injection attacks can steal data from the "memories" feature, even in GPT-5.

Olivia Sharp 1 min read 672 views
Free
Security firm Tenable reported that it found seven new vulnerabilities in OpenAI's ChatGPT, including methods to exfiltrate user data from its "memories" feature.

Seven New Vulnerabilities Disclosed

Tenable Research published a report disclosing seven new, persistent vulnerabilities and attack techniques. The security firm discovered the flaws in OpenAI's ChatGPT.

The vulnerabilities were found to affect all models, including the latest GPT-5.

'Prompt Injection' Attacks

The attack methods detailed by Tenable are specific to how large language models (LLMs) function. The flaws allow for attacks including "indirect prompt injection."

This method allows an attacker to feed malicious instructions to the AI. The instructions can be hidden in an external document or website that the AI is asked to analyze.

Data Exfiltration …

Archive Access

This article is older than 24 hours. Create a free account to access our 7-day archive.

Share this article

Related Articles