IT leaders have spent years building security frameworks, enforcing compliance policies, and managing risk across the enterprise. Then generative AI arrived, and employees started solving problems on their own.
The result is what's now called shadow AI: unapproved consumer tools used across the organization without IT oversight, security review, or governance controls. According to Nitro's Enterprise AI report, the problem is far more widespread than most organizations realize, and the risks extend well beyond policy violations.
The true scope of shadow AI
The survey of over 1,000 professionals found that 68% of executives and 50% of employees use unapproved AI tools at work. These aren't isolated incidents or edge cases. Unapproved AI use has become a default behavior across roles and seniority levels.
More concerning is what employees are doing with these tools. One in three employees has processed confidential data through unvetted AI platforms. That means customer information, financial records, legal documents, and intellectual property flowing through tools with unknown data handling practices, no audit trail, and no visibility for IT or compliance teams. For organizations in regulated industries, this creates immediate compliance exposure. For all organizations, it creates data security risks that are difficult to assess and nearly impossible to remediate after the fact.
Why do employees turn to unapproved AI tools?
The instinct is to treat shadow AI as a compliance failure that requires stricter policies and stronger enforcement. But the survey data tells a different story. Employees aren't using unapproved tools because they want to circumvent IT or ignore security protocols. They're using them because the approved alternatives don't meet their needs. The sanctioned tools are slow to access, difficult to use, or simply don't exist for the tasks employees need to accomplish.
When someone needs to summarize a 50-page contract before a meeting in 30 minutes, they'll use whatever gets the job done. When the approved workflow requires submitting a ticket, waiting for access, and completing training before using an AI tool, employees will find a faster path. Shadow AI is a symptom of unmet needs and a signal that enterprise AI tools aren't keeping pace with what employees actually require.
Uncovering the usability gap driving shadow AI risk
The same survey found that 68% of executives feel pressure to deliver better AI tools to their teams, while only 57% of employees feel pressure to use the AI tools already provided. That gap reveals the core issue: organizations are investing in AI capabilities, but those capabilities aren't reaching employees in ways that fit their actual workflows.
Additional training isn't the answer. The survey data shows that when a tool is intuitive and useful, people adopt it without extensive enablement. When it requires significant effort to access or learn, they find alternatives. This creates a difficult bind for IT leaders: they're responsible for security and governance, but they can't secure tools they don't control, and they can't control tools employees don't use.
Addressing usability vs. governance in shadow AI
Closing the shadow AI gap requires working on governance and usability simultaneously. Focusing on one without the other will produce incomplete results.
On the governance side, organizations need clear policies about which AI tools are approved, how data should be handled, and what types of content should never be processed through AI. Employees need to understand the risks of using unvetted tools, particularly when handling sensitive information. And IT needs visibility into AI usage patterns to identify gaps and emerging risks.
But policies alone won't change behavior if the approved tools aren't as easy to use as the consumer alternatives employees are already reaching for. On the usability side, IT leaders need to evaluate AI tools the way employees evaluate them. Does this help me do my job faster? Can I access it without jumping through hoops? Does it work within the tools I'm already using? If the answer to any of these questions is no, adoption will suffer regardless of how strong the security features are.
How to build AI tools employees will actually use
The most effective approach to reducing shadow AI is making the secure choice the easy choice. That means building AI capabilities directly into the document workflows teams rely on every day, rather than asking employees to adopt separate tools or change their established processes.
Nitro's approach reflects this principle. Tools like Document Assistant, Table Extract, and Smart Redact are designed to solve specific, high-frequency tasks within the PDF and document workflows employees already use. There's no separate application to access, no additional login, and no learning curve that creates barriers to adoption.
Security and compliance are built into the design rather than added as constraints that limit usability. Data is processed in temporary sessions with no retention. All AI tools meet SOC 2, ISO 27001, and HIPAA standards. And because the tools operate within a governed environment, IT maintains visibility and control without creating obstacles for end users. When employees can accomplish their AI-assisted tasks through approved tools that work as well as the consumer alternatives, shadow AI becomes unnecessary.
The path forward to governed AI use in enterprises
Shadow AI represents a gap between what employees need and what organizations currently provide. IT leaders who treat it solely as a governance problem will keep fighting the same battles as new consumer tools emerge and employees continue finding workarounds.
Those who address the underlying usability gap will find that adoption follows naturally. When the sanctioned tools are genuinely easier and more effective than the alternatives, employees will use them. The security risks decrease, compliance exposure shrinks, and IT regains visibility into how AI is being used across the organization. The survey data makes the stakes clear. With two-thirds of executives and half of employees already using unapproved tools, the window for addressing shadow AI is now.
For the full findings on shadow AI, enterprise adoption, and what separates successful AI implementations from the rest, read the report: Enterprise AI: The Reality Behind the Hype.