Shadow IT Is Growing in the Age of ChatGPT

Publication date: Jul 11, 2023

In the exciting new era of generative artificial intelligence (AI) and tools like ChatGPT, a long-standing problem is reemerging with renewed force. This issue, known as shadow IT, has persisted and grown for decades: it's the hidden side of workplace technology, where employees, in their quest for efficiency or convenience, turn to unapproved software, apps, and devices.

As AI tools become more capable and accessible, they create fertile ground for the expansion of shadow IT. Employees are often tempted to adopt these AI solutions without waiting for a green light from their IT departments, unknowingly introducing new risks and complications into the organization's tech ecosystem.

Employees Are Flocking to AI Tools

There’s no denying that AI tools like ChatGPT are becoming increasingly popular.

Their ability to streamline tasks, improve productivity, and generate novel solutions has seen them embraced by millions. According to Statista, ChatGPT's monthly active users worldwide rose from an estimated 57 million in December 2022 to roughly 100 million by January 2023, a staggering increase that underscores the appeal of these AI tools.

While AI tools like ChatGPT have proven to be invaluable assets, their adoption has largely sidestepped IT approval processes in many organizations. Fishbowl data shows that nearly 7 out of 10 employees who have adopted AI tools for work-related tasks have done so without informing their superiors.

This widespread, unsanctioned use of AI tools can expose organizations to a range of security and privacy risks, because employees turn to them for everything from drafting sales and marketing emails to analyzing sensitive financial data.

Unsanctioned AI Tools Are a Major Threat

As beneficial as AI tools can be, their unsanctioned use presents a multitude of security and privacy threats.

Here are some of the key concerns around the security of AI tools:

  • Use of submitted data: When employees use AI tools without approval, they may not be aware of how the data they submit is used. For consumer (non-API) users of tools like ChatGPT, prompts and responses are used to improve and train the models. This means that any information entered could potentially resurface as output to another user's prompt, and from there end up shared publicly for the whole world to see (see the API sketch after this list).
  • Bugs and vulnerabilities: AI tools, like any other software, are not immune to bugs and vulnerabilities. For example, a bug in an open-source library used by ChatGPT allowed a small percentage of users to see the titles of other users' conversation histories. While individual consumers might shrug off such hiccups, organizations can't afford the same nonchalance. If sensitive internal discussions, proprietary data, or confidential client information were exposed by a similar glitch, the fallout could be severe.
  • Sensitive information in AI-generated content: AI tools excel at creating content based on the inputs they're given. When employees use these tools to draft documents or analyze data, they may not realize that the resulting AI-generated content can contain the same sensitive information that was included in the prompt. Given that employees at an average 100,000-person company entered confidential business data into ChatGPT 199 times in a single week in early 2023, the risk of exposing sensitive information through AI-generated content is clearly significant.
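
To illustrate the consumer-versus-API distinction from the first bullet above, here is a minimal sketch of how an organization might route employee prompts through OpenAI's API rather than the consumer web app; at the time of writing, OpenAI's stated policy was that data submitted via the API is not used for model training by default, unlike consumer ChatGPT conversations. The sketch assumes the openai Python package (the 0.x ChatCompletion interface) and an organization-managed API key; the ask_model helper is a hypothetical name, not part of any library.

```python
# Minimal sketch: routing prompts through the OpenAI API instead of the
# consumer ChatGPT app. At the time of writing, API requests were not
# used for model training by default, unlike consumer conversations.
import os

import openai

# The key is managed centrally by IT; never hard-code credentials.
openai.api_key = os.environ["OPENAI_API_KEY"]


def ask_model(prompt: str) -> str:
    """Send a single prompt through the API and return the reply text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_model("Rephrase this sentence: 'Our Q3 numbers was strong.'"))
```

Beyond the training-data question, an approved integration along these lines gives the IT department a single place to log, monitor, and restrict what leaves the organization.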

To mitigate these risks effectively, it's imperative that organizations implement the right strategies, and do so sooner rather than later. The rapid evolution of AI tools and their growing popularity in the workplace mean that these issues will only become more prevalent.

Generative AI Policies to Combat Shadow IT

With a staggering 72% of businesses anticipated to adopt generative AI to enhance productivity, according to an Insight survey, establishing effective AI governance is crucial.

The good news is that most businesses are not oblivious to this need: 81% say their company has either established or implemented policies or strategies around generative AI, or is in the process of doing so. Those who have yet to address the threat posed by the unsanctioned use of AI tools can start by creating a basic generative AI policy that includes the following:

  • Acceptable uses: The policy should clearly define the permissible uses of AI tools such as ChatGPT. This might involve tasks like grammar and spell checking, sentence rephrasing, report outlining, blog post structuring, and help with code snippets or Excel formulas.
  • Forbidden uses: The policy should outline the restricted uses of AI tools. These include signing up for ChatGPT using company credentials, divulging sensitive information like passwords or addresses, drafting contracts or business-sensitive documents, analyzing job applicants' CVs, typing out or pasting proprietary information, and so on.
  • Allowed users: The policy should establish who within the organization is allowed to use generative AI tools and which departments or job roles can access the technology. Access parameters should be clearly defined, and the tool should be used only by those with proper authorization.
  • Data privacy: The policy should specify the measures taken to protect sensitive information and maintain compliance with relevant data protection laws while using generative AI tools. Especially important are protocols for handling sensitive information, such as personal data and confidential or proprietary data (a minimal example of one such safeguard follows this list).
  • Liability and accountability: The policy should designate who is responsible for the consequences of AI-generated output, such as errors or misinformation. It's also a good idea to outline the procedures for rectifying any issues.
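
As a concrete illustration of the data privacy item above, the sketch below shows a simple pre-submission filter that redacts obvious personal data from a prompt before it is sent to any external AI tool. The redact helper and the regular expressions are illustrative assumptions, not an exhaustive or production-ready safeguard.

```python
# Minimal sketch of a pre-submission filter that redacts obvious personal
# data from a prompt before it reaches an external AI tool.
# The patterns below are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[REDACTED PHONE]"),
]


def redact(prompt: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt


print(redact("Contact Jane at jane.doe@example.com or +1 (555) 123-4567."))
# -> Contact Jane at [REDACTED EMAIL] or [REDACTED PHONE].
```

A filter like this could live inside whatever approved integration the organization provides, so that compliance with the policy doesn't depend on each employee remembering the rules.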

For organizations that want to take their defenses against AI-related risks to the next level, there are already frameworks like NIST's AI Risk Management Framework. This use-case-agnostic framework is designed as a robust tool to guide organizations of all sizes, sectors, and scopes in the responsible design, development, deployment, and use of AI systems.

Conclusion on Shadow IT

The surge of AI tools like ChatGPT within workplaces has sparked a new wave of shadow IT, presenting significant security and privacy risks for organizations of all sizes. The swift development and implementation of comprehensive AI policies and strategies are the first lines of defense against these risks. This way, organizations can harness the full power of AI while safeguarding their interests.

Have more questions about managed IT and cybersecurity for your business? Schedule a call with our team for more info about how you can protect yourself from these risks.
