Critical Vulnerability in ChatGPT Connectors Exposes Google Drive Data

A ChatGPT Connectors vulnerability has been identified. Cybersecurity researchers disclosed a critical vulnerability in OpenAI’s ChatGPT Connectors at the Black Hat conference in Las Vegas. The attack, named “AgentFlayer,” allows attackers to exfiltrate sensitive data from connected Google Drive accounts with zero user interaction.

“This isn’t exclusively applicable to Google Drive,” the researchers noted. “Any resource connected to ChatGPT can be targeted for data exfiltration.”

How do ChatGPT Connectors Work?

Launched in early 2025, ChatGPT Connectors integrate AI with third-party applications like Google Drive, SharePoint, GitHub, and Microsoft 365. The feature enables users to search files, pull live data, and receive contextual answers based on enterprise data.

Exploiting ChatGPT Through Prompt Injection

Researchers exploited this integration using an indirect prompt injection attack. By embedding malicious instructions in documents with invisible text, attackers manipulated ChatGPT’s behaviour when processing the file.

“All the user needs to do for the attack to take place is to upload a naive looking file from an untrusted source to ChatGPT, something we all do on a daily basis,” Bargury explained. “Once the file is uploaded, it’s game over. There are no additional clicks required.”

How the Attack Unfolds

When victims upload a poisoned file or receive one in Google Drive, the hidden payload activates. Even a request like “summarize this document” can trigger data theft. ChatGPT may then search connected accounts for API keys, credentials, or confidential documents and leak them.

Can ChatGPT Embed Stolen Data into Image URLS?

The researchers demonstrated that ChatGPT can embed stolen data into image URLs. When rendered, these URLs make automatic requests to attacker-controlled servers. Initially, OpenAI tried blocking malicious URLs with its “url_safe” endpoint. But the researchers bypassed it using trusted Azure Blob Storage URLs and captured data with Azure Log Analytics.

Is There a Risk for Enterprise Environments?

This vulnerability poses severe risks for enterprise AI adoption. Companies using ChatGPT Connectors for SharePoint, HR manuals, financial data, or strategic plans face potential comprehensive data breaches. Employees are unlikely to detect the attack because the malicious document looks harmless. Traditional phishing awareness training does not protect against this attack vector.

OpenAI’s Response to the Vulnerability in Connectors

OpenAI was notified of the flaw and quickly deployed mitigations. However, the researchers warned:

“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately these mitigations aren’t enough. Even safe looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it.”

What are the Broader AI Security Challenges with ChatGPT

This vulnerability highlights industry-wide issues with AI-powered enterprise tools. Microsoft’s Copilot recently suffered from the “EchoLeak” vulnerability, and other assistants face similar prompt injection attacks. The OWASP Top 10 for LLM Applications (2025) ranks prompt injection as the number one security risk.

How to Avoid a Data Breach using ChatGPT

Security experts recommend several steps to reduce risk:

  • Implement strict least privilege access controls for AI connectors.
  • Monitor AI agent activities with dedicated security solutions.
  • Train staff on risks of uploading untrusted files to AI systems.
  • Use network-level monitoring to detect unusual data patterns.
  • Regularly audit permissions for connected services.

Stay informed with us!

You can subscribe to our monthly cybersecurity newsletter to receive updates about us and the industry

Blog

Check the latest updates on threats, stories, events and analysis.

Marks&Spencer Recovers After Cyber Incident | ZENDATA

Marks&Spencer Recovers After Cyber Incident

Police Can Only Use Spyware for Serious Crimes | ZENDATA

Germany’s Top Court Rules Police Can Only Use Spyware for Serious Crimes

China cuts itself off from internet | August 2025 | ZENDATA Security

China Cuts Itself Off From the Global Internet