AI News

Security Researchers Jailbreak Microsoft Copilot Studio Agents

Tenable report exposes vulnerabilities in no-code enterprise AI tools.

Olivia Sharp 2 min read 692 views
Free
Tenable researchers revealed a vulnerability in Microsoft Copilot Studio on December 12, showing how AI agents can be tricked into leaking data and financial fraud.

A critical vulnerability in Microsoft Copilot Studio was exposed on December 12, 2025, by researchers at cybersecurity firm Tenable. The report details a successful "jailbreak" of an AI agent built on the platform, highlighting the severe security risks inherent in the rapidly growing field of agentic AI. The exploit allowed researchers to manipulate a travel agent bot to retrieve sensitive data and commit financial fraud.

The Anatomy of the Exploit

The researchers used a technique known as prompt injection to override the agent's safety protocols. By inputting specific natural language commands, they were able to:

  • Exfiltrate Data: The …

Archive Access

This article is older than 24 hours. Create a free account to access our 7-day archive.

Share this article

Related Articles