Cybersecurity experts have discovered a chilling new threat that’s causing ripples of shock in the community of artificial intelligence experts. This zero-click vulnerability doesn’t need users to download malware or click on malicious links โ it simply has the power to take control of artificial intelligence helpers without them realizing it. The new vulnerability shows that our most trusted digital helpers can now become unwitting facilitators of data robbery.
AI assistants become unwitting accomplices in sophisticated data theft
The glaring vulnerabilities of these current top-of-the-line AI solutions were showcased at the Black Hat USA events by security experts at Zenity Labs. The research team successfully breached the security of both ChatGPT and Microsoft Copilot Studio, as well as Salesforce Einstein and Google Gemini. These attacks leverage the underlying mechanisms of handling context by these artificial intelligence models, deceptively utilizing beneficial functionality as detrimental security vulnerabilities.
The remit affected by vulnerable systems includes 3,000 Microsoft Copilot Studio agents that were found to be at risk of internal tool and database data leakage. The OpenAI ChatGPT system suffered from email injection attacks that led to the attacker gaining access to the linked Google Drive accounts. The Salesforce Einstein system was compromised such that it redirected communications received from customers to email addresses controlled by the attacker.
Microsoft CoPilot Studio agents expose the full CRM database
Zenity Labs discovered that Microsoft’s customer-support agents could be manipulated to expose complete CRM databases, revealing sensitive customer information and internal business processes to unauthorized parties through carefully crafted prompt injection techniques.
EchoLeak vulnerability illustrates zero-click AI exploits
New zero-click cyberattack exposes vulnerabilities in AI agent systemsย because attackers can embed malicious prompts in seemingly harmless communications that activate later without any user awareness. The EchoLeak vulnerability (CVE-2025-32711) specifically targets Microsoft 365 Copilot’s contextual processing capabilities, allowing attackers to extract sensitive information through invisible HTML comments and white-on-white text.
The attack technique is quite simple yet deadly. The attackers send out spam emails that contain hidden queries in the form of HTML comments that the data retrieval and generation engine of Copilot responds to. These malicious queries get executed whenever users ask authentic queries of the AI assistant. As such, it exfiltrates sensitive information without being detected by security mechanisms.
“Manipulation of commands can change the actions of an agent altogether. The introduction of poisonous knowledge sources can change the actions of an agent very significantly. This may trigger sabotaging actions of an agent as well.”
Enterprise security frameworks struggle with AI-native attack vectors
The classic cybersecurity measures become useless against these new manipulation techniques used by such intelligent AI that evade classic detection mechanisms. The tech giants have released patches and security updates for these threats. However, the underlying problem here is that these intelligent agents process a tremendous amount of data that malicious actors can capitalize on. Microsoft released patches on the server side for EchoLeak. Others provided layered security against prompt injection attacks.
Industry collaboration addresses emerging AI security challenges
Defensive strategies include:
- Removing external email context from AI assistant configuration options
- Applying AI-oriented runtime boundaries at network layers.
- Restricting markdown rendering to reduce prompt injection risks
- Deploying specialized AI security solutions with behavioral analysis
The finding of zero-click vulnerabilities in AI systems represents a tipping point in the world of cybersecurity, as it opens the eyes of organizations to the possibilities that these vulnerabilities may pose to some of the most intelligent digital assistants they may be using. At a point when organizations have begun to adopt intelligent AI digital assistants for performing core business operations, the security risks that come with these vulnerabilities should not be limited to data compromise but should also cover disruptive operations.
