theNet by CLOUDFLARE


Securing the AI revolution

It’s time to talk about the explosion of artificial intelligence (AI) tools and how to address them from a cybersecurity point of view within your organization.

AI tools such as Midjourney, Stable Diffusion, Dall·E, ChatGPT, Jasper, LLaMa, Rewind, and others have gone from niche to mainstream to near-ubiquitous. You have probably marveled at the imaginative use cases for research, creativity, and productivity. And if you’ve used any of the tools yourself, you know it’s hypnotic to watch them interpret prompts, generate responses, and — with only a few user actions — refine and add depth to their outputs.

The speed and ease of these tools for users of all skill levels is a real breakthrough. The outputs from large language models might be less compelling than their visual counterparts; but across the board the interactive, “generative” process is remarkable. Compared with existing open resources such as Google, StackOverflow, and Wikipedia, generative AI tools represent an incredible leap forward by delivering use-based outcomes that build on interactions, as opposed to merely providing general intelligence. Their interactivity, speed, and agility outperform traditional search, retrieval, and synthesis methods, surfacing relevant information far more directly.

Take for instance this image below, which was “imagined” with Midjourney: “An ancient civilization wall painting where the hieroglyphics appear to show a civilization using a computer and artificial intelligence, lighting from a fire.”


Image source: generated with Midjourney by the author

There are incredible opportunities for using AI tools in the enterprise. Organizations can use them to generate computer code, develop user documentation and consumer-facing content, ease the burdens of customer service, or produce more useful knowledge bases for new-hire onboarding and cross-organizational knowledge sharing. These and other use cases could ultimately generate billions of dollars in new commerce and business value.


Alongside their awesome potential, these tools present obvious security and legal issues that make some people nervous. Technology leaders should be asking, “Am I in sync with the entirety of my organization about the potential risks and opportunities presented by AI technologies?” and, if so, “How do I ensure that our use of these tools does not cause serious harm?”

Among cybersecurity leaders, there is a 50/50 split between those who are outright blocking access to these technologies and those who are embracing them. Outright blocking merely sidesteps the security risk: security leaders end up out of step with the business, and colleagues invariably find ways to work around the block. Now is the time to address this division. Rather than playing the Luddite, the C-suite should accept that people within their organizations are already using these tools and, through leadership, permit that use in a manner that reduces potential risks.

Where should technology leaders start? Setting a few key guidelines for the organization can help reduce risk without stifling the potential that AI tools offer.

1. Keep IP out of open systems. It might seem obvious, but organizations cannot feed highly regulated information, controlled data, or source code into an open AI model, or one outside of enterprise control. Similarly, organizations must avoid inputting customer data, employee information, or other private data into open AI tools. Unfortunately, there are examples of such actions every single day. (See, for example, the code leak incident at Samsung.) The issue is that most generative AI tools do not guarantee that this information will stay private. In fact, they are explicit that they use input to conduct research and ultimately improve future results. So, at the very least, organizations might need to update confidentiality policies and train employees to help ensure sensitive information stays out of these AI tools.
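One lightweight safeguard, even before formal policy tooling is in place, is to scan outbound prompts for obvious sensitive patterns and redact them. The sketch below is purely illustrative: the `redact_prompt` helper and its handful of patterns are assumptions for demonstration, not a real data-loss-prevention engine, which would need far broader coverage.

```python
import re

# Illustrative patterns only -- a real deployment would rely on a
# dedicated DLP engine with much broader and better-tested coverage.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace matches with [REDACTED:<label>] and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt, findings

clean, found = redact_prompt(
    "Email jane@example.com, key sk-abcdef1234567890AB"
)
```

A redactor like this would sit between employees and the external AI tool, so the model provider never receives the raw sensitive values and the findings list can feed an audit log.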

2. Ensure veracity to avoid liability. If you’ve ever interacted with an AI tool, you might find that answers to prompts are often presented without context. And you also might discover that these answers are not always accurate. Some answers might not be based on current information.

Out-of-date responses don’t diminish the power of these tools, but they do help us understand the current risks more clearly, as evidenced by the case in which attorneys who used AI-generated content in court filings discovered that the content referenced completely fictitious cases.

3. Prepare for “offensive AI.” As is the case with many new technologies, cyber attackers have been quick to exploit AI tools for criminal aims. How are attackers using AI for evil? They might find an image of you or a recording of your voice on the Internet, and then create a “deepfake” using AI tools. They could use the fake version of you to embezzle company funds or phish colleagues, leaving fraudulent voicemails that ask for login credentials. AI tools could also be used to generate malicious code that rapidly learns and improves its ability to carry out its goals.

Combating the malicious use of AI might require…AI. Security tools that incorporate AI and machine learning (ML) capabilities can offer some good defenses against the speed and intelligence of offensive AI attacks. Additional security capabilities can help organizations monitor the use of AI tools, restrict access to particular tools, or limit the ability to upload sensitive information.
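In practice, that kind of monitoring and restriction is usually enforced at a secure web gateway. As a minimal sketch of the idea (the domain list, action names, and size threshold below are assumptions for illustration, not a real product ruleset), a per-request policy check might look like:

```python
# Minimal sketch of a gateway-style policy: each AI tool domain maps to
# an action, and bulk uploads are blocked even for allowed tools.
POLICY = {
    "approved-ai.example": "allow",
    "unreviewed-ai.example": "isolate",  # e.g. open via remote browser isolation
    "banned-ai.example": "block",
}
MAX_UPLOAD_BYTES = 1_000_000  # cap on data sent to any external AI tool

def evaluate(domain: str, upload_bytes: int = 0) -> str:
    """Return the gateway action for a request to an AI tool."""
    action = POLICY.get(domain, "block")  # default-deny unknown AI tools
    if action == "allow" and upload_bytes > MAX_UPLOAD_BYTES:
        return "block"
    return action
```

The default-deny lookup and the upload cap mirror the two controls described above: restricting which tools employees can reach, and limiting how much data can flow into the ones that are permitted.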

So, how do you protect against the risks that AI tools might present? The first step is to recognize that AI is already here — and it’s here to stay. The generative AI tools available today show the tremendous potential of AI for helping businesses enhance efficiency, boost productivity, and even spark creativity. Still, cybersecurity leaders must be cognizant of the potential security challenges that these tools can present. By implementing the right guidelines, augmenting internal training, and in some cases, deploying new security solutions, you can reduce the likelihood that these potentially powerful AI tools will become a liability.

Cloudflare has taken a zero trust approach to enabling and securing its network and employees relative to AI. Learn more about how Cloudflare equips organizations to do the same.

This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.


Dive deeper into this topic.

Learn ways to reduce AI-related risks while still supporting new AI tools in your organization in the Ensuring safe AI practices ebook.




Key takeaways

After reading this article you will understand:

  • How AI tools present new organizational security challenges

  • Where your organization falls in the AI revolution

  • 3 ways to reduce risk without stifling the potential that AI tools offer


