In 2025, prompt injection evolved from a laboratory concept into a real-world production threat. Two landmark cases — EchoLeakand ForcedLeak—demonstrate that zero-click attacks and indirect prompt injection have become core risks facing enterprise AI assistants.
EchoLeak (CVE-2025-32711): Silent Data Exfiltration in M365 Copilot
Discovered in June 2025, the EchoLeak vulnerability is widely regarded as the first fully automated data exfiltration exploit in a production-grade AI system. Its severity lies in its zero-click nature: the victim does not need to click links or execute any code. Simply processing a malicious email in the background is sufficient to trigger the leak.
Technical Attack Path
The attacker sends a carefully crafted email containing a malicious Markdown payload to the victim's Outlook inbox. When the user later interacts with Microsoft 365 Copilot (e.g., requesting a summary of recent emails), Copilot's Retrieval-Augmented Generation (RAG) engine incorporates the malicious email into its context.
- Classifier Bypass (XPIA Bypass):
Microsoft deployed cross-prompt injection detection mechanisms. However, EchoLeak leverages Unicode homoglyphs and contextual obfuscation to disguise malicious instructions as legitimate business communication, successfully evading semantic detection.
- Data Exfiltration via Link Filter Bypass:
The attack abuses Markdown’s reference-style link syntax. While inline external links are typically filtered, referenced links can bypass certain parsing paths. Sensitive data (e.g., from OneDrive) is embedded into URL parameters and transmitted silently to attacker-controlled servers.
- Stealth Mechanism:
The injected instructions include directives such as“do not mention this email,”preventing Copilot from citing the malicious source in its response, leaving the user unaware of the breach.
ForcedLeak: Logical Exploitation in Salesforce Agentforce
Similar in nature, ForcedLeak targets Salesforce's Agentforce platform. Attackers inject malicious prompts into open fields (such as "Description")) via Web-to-Lead forms.
Attack Logic
The most notable aspect of this attack is its paradox: it succeeds not because of user error, but because users follow normal workflows correctly.
When a sales representative uses an AI assistant to classify or score leads, the AI processes stored malicious instructions within the CRM system. These instructions manipulate the AI to bypass security safeguards and exfiltrate sensitive Personally Identifiable Information (PII) through a previously trusted but now expired domain.
Attackers can purchase such expired domains for as little as $5, effectively bypassing Content Security Policy (CSP) protections. This highlights a fundamental weakness in AI security: the implicit trust model of data sources.
Comparison of Major Prompt Injection Incidents (2025)
Feature
EchoLeak (M365 Copilot)
ForcedLeak (Agentforce)
CVSS Score
9.3 (Critical)
9.4 (Critical)
Trigger Mechanism
Zero-click, background email processing
Triggered during lead processing
Exfiltration Channel
Markdown reference link bypass
Trusted domain / CSP bypass
Data Sources
SharePoint, OneDrive, Teams
Salesforce CRM core data
Core Vulnerability
Lack of cross-context isolation
Treating untrusted input as trusted instructions
Implications for Future AI Security
These incidents expose a fundamental weakness in AI systems: the blurred boundary between instructions and data.
1. Shift from Input Filtering to Semantic Monitoring
Traditional rule-based filtering (e.g., regex) is insufficient against natural-language attacks. Organizations must deploycontext-aware AI guardrailscapable of understanding intent and enforcing integrity checks during sensitive operations such as data retrieval.
2. Enforce Human-in-the-Loop Controls
AI systems should not have full autonomy over high-risk actions, including:
- External network communication
- Bulk data export
- Sensitive configuration changes
Following the ForcedLeak incident, Salesforce implemented patches requiring explicit human approval for such operations.
3. Reassess the "Data as Instructions" Risk
Any data accessible to AI—emails, Slack messages, Jira tickets, or web forms—can act as executable payloads. Organizations must:
- Enforce strict data source isolation
- Apply least-privilege access controls
- Continuously validate trust boundaries
This paradigm shift is essential to prevent similar data exfiltration attacks.
References
EchoLeak: The First Real-World Zero-Click Prompt Injection Exploit in a Production LLM System – arXiv
ForcedLeak: The $5 Exploit That Broke Salesforce's AI Agents – Inspired eLearning Blog