Blog
White Laptop with a white shield and Lock on top of the keyboard implying it is safe

ShadowLeak Attack: What You Need to Know About the 0-Click ChatGPT Agent Gmail Breach

September 21, 2025

Earlier this week, researchers uncovered a zero-click vulnerability in ChatGPT’s Deep Research agent that could silently pull sensitive data from Gmail accounts.

Here’s what happened: attackers sent a carefully crafted email that looked harmless to humans but carried invisible instructions hidden in its HTML; tiny fonts, white-on-white text, tricks you’d never notice.

When a user asked the agent to analyze their inbox, the agent dutifully read that malicious email along with legitimate ones. The hidden prompts manipulated the agent into breaking its own safeguards.

Some of the tactics used:

  • Asserting authority - claiming it had “full authorization” to fetch external URLs.
  • Disguising malicious servers - framing them as “compliance validation systems.”
  • Mandating persistence - ordering retries if connections failed.
  • Creating urgency - pressuring the agent into compliance.
  • Falsely claiming security - telling the agent to encode stolen data in Base64, as if that were protective, while actually just hiding the exfiltration.

This allowed the agent to sift through Gmail for PII like names and addresses, encoded it, and sent it to the attacker’s server, all without the user clicking a thing.

OpenAI has since patched the flaw, but the bigger story is this:
"AI agents can be socially engineered the same way humans can. And unlike humans, they can act at machine speed, with direct access to sensitive systems and data."

Why this matters

This isn’t just an “AI bug.” It’s the start of a new supply chain risk: invisible instructions embedded in places we trust: emails, docs, and web pages that manipulate AI tools sitting inside enterprise workflows.

Organizations rushing to adopt AI-powered agents need to treat them like any other high-privilege account:

  • They must be monitored,
  • Their access must be limited,
  • And their behavior must be validated continuously.

How NetraScale thinks about this

At NetraScale, we see AI agents as the next link in the SaaS supply chain. Just like a vulnerable integration or a compromised package, they create a new attack surface, one that can be exploited without you even knowing.

That’s why our approach to SaaS and supply chain risk isn’t just about software inventories and compliance checklists. It’s about mapping hidden dependencies and monitoring for abnormal behavior across apps, agents, and vendorsbefore an attacker uses them against you. (Comment “Checklist” below and I will DM you a one-page SaaS supply chain checklist you can use immediately.)