March 27, 2026

Agentic Leak: When Your Employees Use AI and You Don’t Even Know

[Illustration: employees using AI tools with data streams leaking upward, representing Agentic Leak]


The invisible risk that 55% of companies are already facing

A few weeks ago, the CEO of a mid-size company made an unexpected confession: he didn’t know much about artificial intelligence, but he had ended up purchasing corporate licenses for Claude. The reason? His employees were already using it on their own.

No approval. No oversight. No idea from IT.

From Shadow IT to Shadow AI

If you’ve been in technology long enough, you know the concept of Shadow IT: when employees adopt tools without going through the IT department. Dropbox instead of the corporate file system. WhatsApp instead of Teams. Trello instead of the official project management tool.

Now imagine that same phenomenon — but with artificial intelligence.

Welcome to BYOAI: Bring Your Own AI.

Employees paying for their own licenses of ChatGPT, Claude, Copilot, or Gemini. Using them every day to draft reports, analyze data, prepare presentations — with corporate information. Customer data. Internal strategies. Financial figures.

All flowing to servers the company doesn’t even know exist.

The numbers that should worry you

55%+ of employees already use AI tools not approved by IT.

Source: Gartner 2025 Digital Worker Experience Survey

These aren’t isolated cases. This is a widespread pattern happening right now, in companies of every size and sector.

What’s most striking is that many of these employees act with good intentions: they want to be more productive, solve problems faster, deliver better work. The problem isn’t the intent. It’s the invisibility.

Why we call it “Agentic Leak”

At ZeroNet, we’ve coined the term Agentic Leak to describe this phenomenon. This goes far beyond someone using ChatGPT to write an email. We’re seeing:

  • 🤖 Autonomous agents running in the background on employee laptops
  • ⚙️ Automated workflows sending corporate data to AI APIs without supervision
  • 🔌 Browser extensions with embedded AI processing everything on screen
  • 🧠 Personal copilots connected to email, calendars, and code repositories

It’s a data leak that is continuous, automated, and, in many cases, completely invisible.

The dilemma: ban or channel?

Some companies have opted for a total ban. Garrigues, one of the largest law firms in Spain, does not allow the use of unauthorized AI. When you handle tax, legal, and sensitive client data, a free-for-all simply isn’t an option.

But banning has a cost: you lose the productivity that AI enables. And in many cases, employees simply find ways to work around the controls.

💡 The alternative is to channel it. Provide approved, secure AI tools with clear policies on what data can be processed and what cannot. But to channel, you first need to know what’s happening.

How do you detect what you can’t see?

This is where ZeroNet comes in. Our platform already monitors the energy and network behavior of IT infrastructure. And it turns out that traffic to AI APIs leaves a detectable fingerprint: connection patterns, data volumes, known endpoints.
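To illustrate what such a fingerprint looks like in practice, here is a minimal sketch that flags network log entries whose destination matches a known AI API hostname. The hostname list is deliberately incomplete and the `(user, dest_host, bytes_sent)` log format is an assumption for the example, not ZeroNet’s actual implementation:

```python
# Minimal sketch: flag log entries whose destination is a known AI API host.
# The hostname list and the log record format are illustrative assumptions.

KNOWN_AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def is_ai_api_host(hostname: str) -> bool:
    """True if hostname is (or is a subdomain of) a known AI API endpoint."""
    hostname = hostname.lower().rstrip(".")
    return any(hostname == h or hostname.endswith("." + h)
               for h in KNOWN_AI_API_HOSTS)

def flag_ai_traffic(log_entries):
    """Yield (user, dest_host, bytes_sent) entries that hit AI API endpoints."""
    for user, dest, bytes_sent in log_entries:
        if is_ai_api_host(dest):
            yield user, dest, bytes_sent
```

A real deployment would match against a maintained endpoint catalog and combine this with volume and timing patterns, but even this simple hostname check already separates AI-bound traffic from the rest of a proxy or DNS log.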

With our new Agentic Leak capability, we can:

  • Detect unauthorized use of AI tools across the corporate network
  • Quantify how many employees are doing it and how often
  • Alert IT and security teams about potentially dangerous data flows
  • Report so leadership can make decisions based on real data
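As a sketch of the “quantify” step, assuming flagged connections are available as simple `(user, dest_host)` pairs (a simplified format chosen for illustration, not an actual schema), per-user usage could be aggregated like this:

```python
from collections import Counter

def summarize_ai_usage(flagged_records):
    """Summarize flagged AI API connections per user.

    flagged_records: iterable of (user, dest_host) pairs — an assumed,
    simplified record format for illustration purposes.
    """
    per_user = Counter(user for user, _ in flagged_records)
    return {
        "distinct_users": len(per_user),           # how many employees
        "total_connections": sum(per_user.values()),  # how often
        "top_users": per_user.most_common(3),      # heaviest users first
    }
```

A summary like this is what lets leadership see the scale of the phenomenon (“how many employees, how often”) without inspecting the content of anyone’s prompts.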

This isn’t about spying on anyone. It’s about giving the organization the visibility it needs to manage a phenomenon that is already happening — with or without its permission.

What you should do on Monday

If you’re a CEO, CTO, or CISO, ask yourself these three questions:

  1. Do you know how many of your employees use unapproved AI today? If the answer is “no,” you have an Agentic Leak.
  2. Does your IT team have visibility into traffic to AI services? If not, you’re flying blind.
  3. Do you have a clear AI usage policy? And more importantly: is anyone following it?

BYOAI can’t be stopped. It can be governed.

And to govern, you first need to see.