Shadow AI: The Growing Risk IT Leaders Must Address

Darryl K. Taft, Steve Croce | 2025-10-30

As long as there have been guardrails, there have been people trying to find ways around them, from the Garden of Eden to that guy in finance downloading open-weight models from Hugging Face. As long as there has been IT, there has been shadow IT: the phenomenon in which employees use third-party technology without IT's approval or oversight. Some degree of shadow IT is inevitable, of course. Unfortunately, with the rise of AI, the risks have skyrocketed. Here's what IT leaders need to know about shadow AI and how to prepare their companies for safe and secure AI usage moving forward. For a data-backed look at these trends, download Anaconda's 2025 "State of Data Science and AI Report."

From Shadow IT to Shadow AI

By nature, IT is highly controlled. A company's cybersecurity depends on it. Every day, managers walk the tightrope of providing employees with the best possible tools without risking the company or exceeding the budget. They do their best to stay ahead of new technology and requests, to ensure there is a real business need and that any new technology aligns with existing policies. The more regulated the industry (finance, healthcare, public sector and so on), the more steps required to deploy something new.

This is IT's remit, and it's incredibly important. But to the rest of the company, they're the gatekeepers, the naysayers stopping everyone from finishing their work faster with the latest technology. IT doesn't enjoy saying no, but they're the ones managing the chaos of current systems, regular upgrades and new technology, all while keeping the company secure.

However, the rest of the business is on its own tightrope. Each department has pressing deadlines, budgets and jobs to get done, so it's natural for employees to circumvent standard protocol and adopt technology outside approved networks, devices and accounts, leading to runaway risk. Sometimes, the tighter the controls, the farther employees will go out of bounds to get things done.

How Did We Get Here?

As long as the concept of IT has existed, shadow IT has existed to work around it. As far back as the early '80s, when employers would pay only for large mainframe computers, employees at BankAmerica Corp. bought new computers and expensed them as office supplies. As personal computers and the internet took off, so did shadow IT. The next big shift came with cloud computing, when the risks and costs of shadow IT ballooned: suddenly, anybody with a corporate credit card could buy unsecured, unmonitored, internet-connected infrastructure.

Fast-forward another 20 or so years, and IT has mostly gotten its arms around BYOD (bring your own device), cloud computing and web services. However, generative AI is creating a new risk factor: shadow AI.

Just as personal computers, the internet and cloud computing did in the past, AI offers massive potential for efficiency and innovation in business. Still, it's evolving faster than IT can keep up. Cyberhaven's 2024 analysis of 3 million employees found that 73.8% of workplace ChatGPT accounts were personal, not corporate. That means three out of every four AI interactions happen where IT teams can't see them. And use of shadow AI is only expected to increase: a recent VentureBeat investigation found that shadow AI applications could more than double by mid-2026.

Why are large language models (LLMs) and AI agents so different from past technologies? In some ways, they're not. You've got rapidly evolving tools outside your control that expose you to security breaches, data and IP leaks, and poor cost controls. What's different is the broad applicability of AI tools. Take cloud computing, for example. It was an incredible technology, but creating cloud infrastructure required technical expertise and a technical-enough problem that the cloud could solve. There was a barrier to entry. With LLMs like Claude and ChatGPT, the only prerequisites for AI use are a web browser and a question. LLMs can be, and are being, used by every role in the company.

Then there's the blurring of lines between work and personal life. The bulky computers of the '80s were relegated to the office. Cloud infrastructure wasn't particularly suited to most people's lives outside of work. Conversational AI tools live in our pockets, and we use them just as much outside of work as we do in the office. Even more worrying, many of our personal devices have access to company data and servers.

The Risks of Shadow AI

The risks are real if you don't get ahead of them. Before we get into how to address shadow AI in your organization, let's quickly cover the risk profile.

Data Exposure

This is the big one. When you're working with certain LLMs, you run the risk of sending data that these models can train on. That means sensitive company or customer data can end up in a kind of primordial AI stew and surface later in uncontrollable ways. This is especially true with the free AI services that many users opt for outside of the office.
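One common mitigation is to scrub obvious secrets and personal data from prompts before they ever leave your network. Below is a minimal, hypothetical sketch in Python; the regex patterns and the sanitize_prompt helper are illustrative assumptions, not a complete data-loss-prevention layer, which would use far more robust detection.

```python
import re

# Illustrative patterns only; real DLP tooling uses stronger detection
# (named-entity recognition, dedicated secret scanners, etc.).
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),                      # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),                 # card-like digit runs
    (re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"), "[API_KEY]"),   # key-like tokens
]

def sanitize_prompt(text: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the network."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    raw = "Summarize this: contact jane.doe@example.com, card 4111 1111 1111 1111"
    print(sanitize_prompt(raw))
    # -> Summarize this: contact [EMAIL], card [CARD_NUMBER]
```

A wrapper like this sits naturally in front of whatever approved LLM client you offer employees, so redaction happens by default rather than by memory.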
Hallucinations

Use AI for any length of time and you've likely received incorrect or fabricated answers. Deloitte recently partially refunded the Australian government for a $290,000 report riddled with factual errors and fabricated references. The problem is that employees can take this incorrect information and act on it, or push damaging code into production without understanding how or whether it works.

Compliance

New mandates from the EU AI Act, and any future legislation, will take effect over the next few years. Companies will need greater transparency, traceability and auditability across their AI systems. The longer companies delay implementing stronger governance and greater visibility into their AI usage, the greater the risk of noncompliance.

Malicious Models and Agents

Models and agents from untrusted sources can silently siphon data off to bad actors, take malicious actions on your behalf and even serve as a delivery mechanism for traditional malware. In addition, agents are composed of many different tools and APIs working in concert, so when any of these external systems shifts, an agent's behavior is prone to change as well. This is a unique risk because AI agents can take action on an employee's or an organization's behalf.
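One practical guardrail against malicious models is to refuse weight files in formats that can execute code on load (such as Python pickle) and to pin approved artifacts to known checksums. The sketch below is a hypothetical illustration: the APPROVED_WEIGHTS registry, the verify_model_file helper and the placeholder hash are all assumptions, not a standard API.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: file name -> SHA-256 of the vetted artifact.
# In practice this would live in a signed, centrally managed registry.
APPROVED_WEIGHTS = {
    "example-model.safetensors": "<pinned sha256 of the vetted file>",  # placeholder
}

# Pickle-based formats can execute arbitrary code when deserialized.
RISKY_SUFFIXES = {".bin", ".pt", ".pkl", ".ckpt"}

def verify_model_file(path: Path) -> None:
    """Reject unapproved or pickle-based model artifacts before loading."""
    if path.suffix in RISKY_SUFFIXES:
        raise ValueError(f"{path.name}: pickle-based format; use safetensors instead")
    expected = APPROVED_WEIGHTS.get(path.name)
    if expected is None:
        raise ValueError(f"{path.name}: not on the approved-model list")
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise ValueError(f"{path.name}: checksum mismatch; artifact may be tampered with")
```

Gating every model download through a check like this doesn't inspect behavior, but it does stop the cheapest attacks: swapped weights and code-executing file formats.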
Shedding Light on Shadow AI

As an IT leader, how do you navigate these growing risks and ensure your organization is using AI safely and responsibly? One option is the absolutist approach: block employees from using or doing anything AI-related. But the more elaborate your controls, the riskier the methods people will use to get around them, and this is a never-ending battle. Also, we're talking about a transformational tool that is critical to users' success. Here's a better way.

Give Employees as Much Runway as Possible

The best rule of thumb is to provide as much sanctioned AI usage as possible. If you give people approved tools (for example, access to enterprise LLM plans that don't train on your data) and an approved way of doing things, they will generally use them. You'll get better compliance and run less risk of people going off and doing their own thing.

Create a Safe Place to Test Things

At Anaconda, we have an AWS sandbox available to everyone. You can't deploy code to production, and the sandbox gets wiped regularly, but it gives employees a place to test, prototype and try new things in a secure environment. Give employees a similar area that is locked down or erased regularly so they can try new things safely, whether that's testing AI agents, building models or running new code.

Educate Employees About the Risks

The biggest risk of shadow AI comes from a lack of awareness. It's not enough to tell employees not to use this technology in a certain way. You have to ensure they know the potential consequences of their actions, so educate them, train them and give them ways to increase their AI literacy, which is LinkedIn's fastest-growing in-demand skill of 2025.

Block What You Need To

Restrict access to what's simply too dangerous, and provide alternatives where possible. While you don't want to block everything, you will need some definitive, clear guardrails in place, and that's OK. In fact, it's necessary. The secret is finding the right balance, and that starts with knowing which AI endpoints employees are actually reaching, as sketched below.
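As a hypothetical illustration (the log format, file path and both domain lists are assumptions for your own environment), a small script can sweep egress-proxy logs for known AI services that aren't on the approved list:

```python
from collections import Counter
from pathlib import Path

# Assumed lists for illustration; maintain these centrally in practice.
KNOWN_AI_HOSTS = {"api.openai.com", "api.anthropic.com", "claude.ai", "gemini.google.com"}
APPROVED_HOSTS = {"api.openai.com"}  # e.g., covered by an enterprise plan

def flag_unapproved(log_path: Path) -> Counter:
    """Count hits to known AI hosts that are not on the approved list.

    Assumes one whitespace-separated log line per request, with the
    destination host in the third column (adjust for your proxy's format).
    """
    hits = Counter()
    for line in log_path.read_text().splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        host = fields[2]
        if host in KNOWN_AI_HOSTS and host not in APPROVED_HOSTS:
            hits[host] += 1
    return hits

if __name__ == "__main__":
    for host, count in flag_unapproved(Path("proxy.log")).most_common():
        print(f"{count:6d}  {host}")
```

A report like this gives you grounds for a conversation, and an approved alternative, before you reach for an outright block.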
Get Enterprise Support

Finally, sometimes you just need outside help. A majority of organizations (92%, according to Anaconda's 2025 "State of Data Science and AI Report") are using open source AI tools and models. These tools are powerful and necessary for innovation, but it's important to understand how the definition of open source has changed with AI. It used to be that you could see the code behind any piece of open source software; you could scan it, see how it behaved and determine whether it was a risk. With AI, training data and training processes aren't visible. You can only see a model's weights: billions of numerical parameters that don't reveal enough about how or why a model behaves the way it does. As a result, your organization must learn how to build the right guardrails around these open source AI tools. In these instances, it can help to have an outside partner specializing in enterprise AI deployment to secure your environment and mitigate downstream consequences.

Turn AI Into a Strategic Advantage

AI is here to stay. IT leaders must get ahead of it and put AI tools into production in low-stakes environments and situations. You don't want to be the one who sat on the sidelines and waited until AI was completely safe and secure; years down the road, you'd be building your AI governance and implementation plans from scratch, that much further behind your competition. The future is here, and it needs you to lead it.