メインコンテンツにスキップ
記事

Making sense of AI’s contradictions: Security

By Susan Coleman, with insights from John Brooke, Dirk Eden, Melissa Smith, Sandra Ferrer-Nett, and Jack McCush
two coworkers security

Can GenAI help improve security around itself?

Mixed in with the excitement around generative AI are some questions about its risks. Evanta, a branch of the analyst firm Gartner, recently surveyed C-suite executives about their outlook for GenAI. The biggest concerns for these leaders—including CIOs—were data privacy and security. But the real challenges are often not what people think.

“One of the common concerns we hear,” explains Dirk Eden, senior principal at Slalom, “is the fear of sensitive data leakage and loss of intellectual property when using generative AI models. However, it’s important to understand the technology. Enterprise GenAI models typically reside within your own secure infrastructure and are isolated within your domain. In most cases, even the GenAI solution provider doesn’t have direct access to the model or its training data.”

GenAI tools also come with extensive sets of guardrails and controls for things like securing your conversations, providing data signals so prompts can’t be manipulated, and other layers of data protection throughout the GenAI model lifecycle. So, if data leakage isn’t the real issue, what are GenAI’s security challenges, and how can organizations address them? 

Slalom’s experts call out two vital areas of focus to greatly improve the security of your GenAI program:  

  • Understand your security baseline to ensure the parameters are suited to safely running GenAI.
  • Align IT and the business to adopt the right pace for your GenAI program.

As we look at these areas in more depth, we’ll also explore how GenAI itself can aid your efforts to minimize security risk.


Understand your security baseline

Adding GenAI to your existing landscape is like adding any other tool or solution. It will be governed by the same security measures you already have in place for your other technologies. It’s therefore imperative you have a thorough understanding of that baseline to ensure it can meet GenAI’s requirements.

This understanding starts with your data. “It’s critical that you know your data,” says John Brooke, Microsoft security principal at Slalom. “You need to know the sensitivity levels of the contents of your data, the identities, employees, and their devices—it’s not just people when we talk about identities; it’s people, devices, services, et cetera—and how those identities use the data.”

This becomes even more important when a GenAI tool is being used by people from different departments and in different roles that require broader or narrower access to certain types of data. “Metadata needs to exist in all of the data that the model has access to,” Brooke points out, “so that when analysts ask questions, for example, the AI gives answers with data that’s appropriate to their access levels.” When both the data and the identities accessing it are classified according to sensitivity levels, you have the ingredients for a strong baseline, and it becomes much easier to apply that to GenAI and put up the necessary security guardrails.

Establishing those guardrails requires clear protection policies regarding how GenAI tools can be used. When formulating those policies, Slalom recommends asking questions such as:

  • Is this tool contained within our security posture?
  • What data can and cannot be used in conjunction with the tool?
  • How do we either adapt our policies or adopt the right architectures to meet our current policies?

Once policies and architectures are in place, the next step is testing how well your guardrails hold up to possible incursions. Conducting penetration testing and evaluating your models’ resilience against hijacking or jailbreaking attempts will provide an added level of insight into the effectiveness of your security efforts.

With the staggering amount of data flowing in and out of organizations, however, keeping users and data safe and secure when using GenAI is no small feat. Traditional security analytics and security operations (SecOps) tools may no longer be the best options. According to a recent S&P Global article, “Security analytics and SecOps tools are purpose-built to enable security teams to detect and respond to threats with greater agility, but the ability of generative AI to comb through such volumes of data, extract valuable insight, and present it in easily consumable human terms should help alleviate this load.” In this context, the article goes on to state, GenAI will help analysts “to spend less time on data collection, correlation and triage, and to focus instead where they can be most effective.”

This sentiment is echoed by Melissa Smith, Microsoft security principal at Slalom. “Administration of the tools and the stacks is becoming more and more complex as we’re looking at threat detection and the different areas that need to be managed,” notes Smith. “When business users are generating content and data at increased speeds and volumes, we need to start thinking about AI-powered IT and AI-powered security groups.”

Security professionals are recognizing the urgency around identity and access management as it relates to GenAI. “We’re at a security inflection point right now,” observes Smith, “where if there were shortcuts taken earlier with the idea of ‘Oh, we’ll get to that later,’ well … right now is later.”



test ai

Have questions about your AI journey? 


Align IT and the business

There’s also a human element to GenAI security. Gartner may have said it best when it claimed that generative AI has democratized access to knowledge and skills. Nontechnical people now have a direct path to information that was previously unavailable to them—or only available by going through IT gatekeepers. It’s understandable then that business leaders want to act quickly and get GenAI into the hands of as many users as possible.

But this can create unnecessary risk. “A lot of times it’ll be the business that drives the need for a GenAI tool,” says Brooke, “and they’ll go down the path of trying to implement something without a full review by the IT organization.” 

And it’s no wonder. After seeing demos of what GenAI can do, businesses are clamoring for more. But the reality doesn’t always live up to the demo, which can result in problems for IT. “When reporting tools were first readily available,” says Smith, “you’d see these presentations that showed how you could present the data in beautiful charts. But when people tried to replicate that, their data looked nothing like what was presented. The same thing is happening with GenAI.” The issue with GenAI, however, is that when results don’t live up to expectations, a common reaction is to feed the model more data to increase the quality and accuracy of the output, which can result in increased risk, especially if you have issues with data and identity classification like those discussed earlier. “If your identities and information aren’t organized in such a way that you know how to protect it,” adds Smith, “you probably won’t get what you want out of the AI tools in general, and you might not be secure.”

The challenge then for security teams is finding ways to keep the business secure while still moving forward at pace and remaining competitive. Addressing this issue, according to Brooke, involves a mindset shift. “Security professionals—by and large—don't like change because it introduces new risk. They also generally don’t like things they don’t understand. They want to understand in detail. One issue with AI is that it does things you don’t understand all the time, and that’s kind of how it’s designed.” Because of this, security-based AI tools have lagged behind business-based AI tools, which creates a sort of push and pull as the business propels forward while IT attempts to rein in risky behavior. 

“What happens today may be completely different from what happens tomorrow,” says Dirk Eden. “Once we can illustrate to people how fluid this is, the realization sets in—this is beyond what we can control manually. So how do we do it?”

The answer is with GenAI tools. Whereas previously an individual would monitor a log, with GenAI you can pull typical user details such as IP addresses and locations and use them as access considerations. Instead of a reactive approach, where a human receives an alert and must then determine whether the activity is risky, with GenAI you can prevent such activities from happening in the first place. This solution gives the business the speed and agility it wants while also providing IT with the controls it needs.


Understanding your use case and what you want to accomplish at every step of the way with your GenAI application is important in helping you mitigate risk, whether it’s assessing the type of data being used, protecting that data, or managing how your employees and customers will interact with the data. We’ll help make the risk as transparent as possible so we can find the best solution for your needs.

Sarah Ferrer-Nett

Senior Principal, Sales Engineering & AI, Slalom


Learn more about our AI solutions and services.







Let’s solve together.