The Proliferation of Custom AI and the Rise of "Shadow AI"
- Dean Charlton

- Aug 15, 2025
In the rapidly evolving landscape of enterprise technology, a new trend is emerging with both immense potential and significant risk: the unsanctioned development of custom AI applications by employees. According to recent findings from Netskope, the use of generative AI (GenAI) platforms by enterprise end-users has surged by 50%, a clear indicator of a growing appetite among employees to create their own AI tools and agents. This employee-driven innovation, while fostering productivity, is also fueling the growth of "shadow AI," which now accounts for over 50% of all current AI app adoption.
The rise of GenAI platforms is at the heart of this shift. These foundational tools enable employees to connect custom AI applications directly to internal data stores, offering unprecedented flexibility and speed. This has made them the fastest-growing category of shadow AI. The popularity of these platforms is creating new data security risks, underscoring the critical need for robust data loss prevention (DLP) and continuous monitoring. In the last three months, network traffic associated with GenAI platforms increased by 73%. By May, 41% of organisations were already using at least one such platform, with Microsoft Azure OpenAI leading at 29% adoption, followed by Amazon Bedrock (22%) and Google Vertex AI (7.2%).

The trend extends beyond cloud-based solutions to on-premises deployments. Organisations are increasingly turning to on-premises LLM interfaces to innovate quickly. A recent study shows that 34% of organisations are using these interfaces, with Ollama the current leader in adoption. This shift places the full burden of security on the organisation itself, highlighting the need for internal security teams to be vigilant.
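To make the on-premises pattern concrete, the sketch below builds a request against Ollama's default local HTTP endpoint (`http://localhost:11434/api/generate`). The model name and prompt are illustrative, and the code only constructs the request rather than sending it, since a running Ollama server is an assumption here:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a POST request for a local Ollama generate call."""
    payload = json.dumps({
        "model": model,          # illustrative model name
        "prompt": prompt,
        "stream": False,         # ask for one complete JSON response
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_request("llama3", "Summarise our Q2 incident reports.")
print(req.full_url)
```

Because traffic like this never leaves the building, cloud-provider monitoring does not apply, which is exactly why the security burden falls entirely on the organisation running the interface.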
Employee experimentation with AI is also evident in the widespread use of AI marketplaces. For instance, 67% of organisations have users downloading resources from Hugging Face. The demand for AI agents—autonomous systems that can perform complex tasks—is a key driver of this behavior. GitHub Copilot is now used in 39% of organisations, and 5.5% have employees running on-premises agents built from popular frameworks.
The sheer volume of new applications is staggering. Netskope now tracks over 1,550 distinct GenAI SaaS applications, a significant jump from just 317 in February. The average number of GenAI apps used per organisation has increased from 13 to 15 in a single quarter. This proliferation of tools has also seen a consolidation around purpose-built solutions like Gemini and Copilot, which are better integrated into enterprise productivity suites. Notably, ChatGPT has experienced its first-ever decrease in enterprise popularity since 2023, while other popular apps like Anthropic Claude, Perplexity AI, Grammarly, and Gamma have seen gains. Additionally, Grok has entered the top 10 most-used applications.
The challenge for security teams is to manage this explosion of AI usage without stifling innovation. As Ray Canzanese, Director of Netskope Threat Labs, explains, "Security teams don’t want to hamper employee end users’ innovation aspirations, but AI usage is only going to increase." To safeguard this innovation, organisations must evolve their security strategies, focusing on overhauling AI application controls and updating DLP policies to include real-time user coaching and continuous monitoring.
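As a minimal illustration of the "real-time user coaching" idea, the sketch below checks a GenAI prompt against a couple of hypothetical DLP patterns and returns a coaching message instead of silently blocking. The pattern set and wording are assumptions; a real DLP policy would be far broader:

```python
import re

# Hypothetical detection patterns; real DLP policies cover many more data types.
DLP_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def coach_user(prompt: str) -> list[str]:
    """Return coaching messages for any sensitive data found in a GenAI prompt."""
    warnings = []
    for label, pattern in DLP_PATTERNS.items():
        if pattern.search(prompt):
            warnings.append(
                f"Heads up: your prompt appears to contain an {label}. "
                "Please remove it before sending it to an external AI service."
            )
    return warnings

msgs = coach_user("Debug this: AKIAABCDEFGHIJKLMNOP fails for bob@example.com")
for m in msgs:
    print(m)
```

Coaching at the moment of submission, rather than blocking outright, is one way to enforce DLP policy without stifling the employee innovation the article describes.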