Amplified and Emerging Security Risks with AI

Executives worldwide are advocating for the rapid adoption of generative AI (GenAI) within their organizations. Some recognize its potential to enhance productivity and drive innovation, while others simply want to stay current with technological advancements that feel as monumental as this one. Nonetheless, adopting new technology comes with challenges, which can be technical in nature or create interoperability issues at scale. GenAI adoption reveals that once the initial enthusiasm about its capabilities subsides, fundamental security considerations bubble up to the surface. Security leaders within complex enterprise organizations around the world are concerned that GenAI adoption may amplify existing security risks and introduce new ones they have not previously encountered. This blog post explores some of the top amplified and emerging risks with generative AI, along with some ideas on how to address them.

What Are Security Leaders’ Top Concerns About GenAI?

From what I can gather across a number of qualitative and quantitative findings reported by Gartner, Microsoft Security, and others, the top concerns for security leaders whose organizations are adopting GenAI are:

  1. Leakage of sensitive data and information.
  2. Oversharing of sensitive data and information.
  3. Inappropriate use or exposure of personal data and information.

As you’re probably aware, virtually every item in the list above has long kept security leaders awake at night. These are not new risks. However, they can be amplified when GenAI adoption leans too far toward productivity benefits and novel use-case innovation, and too far away from ensuring its use is secure and responsible.

I’m going to explore the first two of these amplified security risks in greater detail and discuss some ways you might address them in your organization. I’ll fold the third concern into both, as it overlaps with each.

Leakage of Sensitive Data and Information

What is it? Data leakage (or data breach) is the unintentional or unauthorized exposure of sensitive information. For individuals this can lead to things like identity theft, whereas for organizations it can lead to reputational or financial losses.

How to address this amplified risk? Simply put, you first need to know what your sensitive information is and, through tools like Microsoft Purview Information Protection, ensure that the sensitive information you want to safeguard has the appropriate sensitivity labels applied to it.
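To make the idea of classification concrete, here is a minimal sketch (deliberately not the Purview API) of a regex-based scanner that flags sensitive patterns in text and suggests a label before the content is exposed to a GenAI tool. The pattern names, patterns, and label names are all hypothetical and far simpler than real sensitive-information-type classifiers.

```python
import re

# Hypothetical detection patterns for illustration only; production
# classifiers (e.g. Purview sensitive information types) are far richer.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive-data categories detected in `text`."""
    return {name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)}

def suggested_label(text: str) -> str:
    """Map detected categories to a (hypothetical) sensitivity label."""
    hits = classify(text)
    if {"credit_card", "us_ssn"} & hits:
        return "Highly Confidential"
    if hits:
        return "Confidential"
    return "General"
```

The point of the sketch is the workflow, not the patterns: know what "sensitive" means for your organization, detect it, and label it so downstream controls can act on the label.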

Oversharing of Sensitive Data and Information

What is it? Data oversharing occurs when users inadvertently gain access to sensitive information through AI applications. This often happens because of insufficient labeling policies or inadequate access controls or permissions management in the locations where your digital information is stored (repositories such as Microsoft Teams, SharePoint Online, or OneDrive). This might lead to unauthorized exposure of sensitive or even confidential information, posing significant individual risk to your users, and risks to your organization. Without appropriate user training, the rapid proliferation of AI tools can also create environments in which users share or use data without fully understanding its sensitivity, further risking violations to regulatory compliance and even data breaches.

How to address this amplified risk? User awareness training remains key to informing your users about the responsible use of the GenAI systems and tools your organization is adopting or exploring. System-wide safety measures continue to be developed and will provide greater assurance, but until their maturity helps you sleep better at night, a well-informed user base is likely the most realistic way to address this amplified risk. You also have to consider any Bring Your Own AI (BYOAI) scenarios at your organization and how they amplify the risk further. To a certain extent, this risk can be mitigated with firewall policies that block users from accessing GenAI tools your organization has not approved for use.
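As a rough illustration of that last point, a proxy or egress filter can consult an allowlist of approved GenAI endpoints. The sketch below shows the core check; the domain names are hypothetical placeholders, and a real deployment would enforce this at the firewall or secure web gateway rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of GenAI endpoints your organization has approved.
APPROVED_GENAI_DOMAINS = {
    "copilot.example-tenant.com",
    "approved-llm.internal.example.org",
}

def is_request_allowed(url: str) -> bool:
    """Allow the request only if the destination host matches an approved
    GenAI domain (exactly, or as a subdomain of one)."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in APPROVED_GENAI_DOMAINS
    )
```

An allowlist (deny by default) is generally preferable to a blocklist here, because new BYOAI tools appear faster than any blocklist can track them.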

In addition to the amplified risks above, there are also emerging security risks with AI adoption. I want to touch on the top 3 I feel are going to remain credible concerns in the near term.

  1. Hallucinations: occur when an AI model your organization is adopting generates false or misleading information. They can pose risks to your org’s reputation, and in high-stakes sectors like health care, finance, or legal services they can lead to significant challenges. Hallucinations also create ethical and trust challenges: your users must be able to trust that AI systems will provide accurate and reliable information, and hallucinations undermine this trust.
  2. Prompt Injections: occur when malicious input is disguised as a legitimate prompt to exploit system vulnerabilities, elicit unauthorized behavior from a GenAI model, or deliberately subvert safety and security filters, causing unintended actions by a GenAI system. By crafting deceptive prompts, users (or threat actors) can trick an AI model into generating outputs that include confidential information, and such threats are challenging to detect and mitigate. Direct injections can overwrite system prompts, while indirect ones manipulate inputs drawn from external sources.
  3. Excessive Agency: occurs when a GenAI-based system is able to perform harmful actions due to misinterpretations or unexpected errors in its decision-making. This vulnerability can compromise sensitive information, disrupt business operations, and result in security breaches, primarily when the model is granted too much decision-making power and autonomy. Fortunately, for now, GenAI adoption is mostly restricted to users prompting GenAI tools, verifying (hopefully!) the accuracy of the responses generated, and intervening to correct as needed. This way of working still provides the assurance of a human in the loop, but excessive agency will become a bigger concern as these systems are built upon to execute parts of, or entire, business processes at organizations where GenAI tools are deployed specifically for that purpose.
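One common mitigation for excessive agency is to keep a human approval gate in front of any high-risk action an agent can take. The sketch below is a hypothetical illustration of that pattern; the action names and the `approver` callback are made up for the example.

```python
# Hypothetical set of actions considered high-risk for an AI agent to take
# autonomously. Everything here is illustrative, not a real agent framework.
HIGH_RISK_ACTIONS = {"delete_records", "send_external_email", "transfer_funds"}

def execute_action(action: str, payload: dict, approver=None) -> str:
    """Run low-risk actions directly; require explicit human sign-off
    (via the `approver` callback) for anything in HIGH_RISK_ACTIONS."""
    if action in HIGH_RISK_ACTIONS:
        if approver is None or not approver(action, payload):
            return f"blocked: '{action}' requires human approval"
        return f"executed with approval: {action}"
    return f"executed: {action}"
```

The design choice worth noting is that the gate fails closed: if no approver is wired in, high-risk actions are blocked rather than silently executed.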

Overall, the adoption of GenAI tools within your organization should continue, because these capabilities have tremendous upside wherever repetitive or boilerplate work consumes an inordinate amount of time. One way to ensure your organization proceeds with sound security principles in place is to avoid rushed deployments. Remaining measured and methodical about your AI deployment will allow your organization to adequately test and vet the security of the solutions you are adopting. It also gives your Organizational Change Management teams a runway to support more robust implementation planning. This helps address any concerns your users may have (“AI will replace my job” is a real concern across roles in many sectors) while ensuring they are trained and capable of deriving the intended benefits from the tools you are adopting.

Additional resources:

  • A top 10 list of AI risks has been compiled by the Open Worldwide Application Security Project (OWASP).
  • The threat landscape for AI systems is covered in greater detail at MITRE ATLAS.

Thanks for reading, and please reach out if you have a question or just want to chat more!