AI adoption is transforming business practices across industries—from financial analysis to software engineering. Enterprises that strategically embrace this technology will gain a significant competitive advantage.
“The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get healthcare, and communicate with each other. Entire industries will reorient around it.” – Bill Gates
As always, security and risk management leaders must assess the impact of introducing any new technology on their organisation’s security and risk posture.
At Integrity360, we evaluate the transformative impact of AI on cyber security through four main pillars:
- Utilising AI for cyber defence and security operations. For example, using natural language for threat hunting queries and employing GenAI to create offensive scenarios for red team and tabletop exercises.
- Defending against AI-powered threats, such as deepfake detection or advanced anti-phishing.
- Securing corporate-built AI applications.
- Securing business users’ consumption of AI applications, which we will cover in this blog.
The Use of AI Applications for Business Users
AI applications, particularly GenAI, are powerful tools for businesses. They are used for market research, content creation, code assessment, and a myriad of other use cases that significantly boost productivity and unlock new potential. There are two main ways for business users to consume AI apps:
- Web Mode: In this mode, AI apps are accessed directly via web browsers or API calls. ChatGPT and Claude are widely used examples. While these apps don’t have direct access to corporate data, users can upload files or enter information that may expose sensitive business data (see the sketch after this list).
- SaaS Mode: Many SaaS providers incorporate GenAI capabilities as built-in tools within their apps. Notable examples include MS365 Copilot in Microsoft 365, Gemini in Google Workspace, and Einstein Copilot in Salesforce. In this mode, both the rewards and the risks increase, as the GenAI tool can access any enterprise data to which the user has rights.
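To make the web-mode exposure concrete, here is a minimal sketch showing how text a user pastes into a prompt leaves the corporate boundary. It assumes an OpenAI-style chat completions endpoint and a personal API key; the prompt content and key handling are purely illustrative.

```python
import os
import requests

# OpenAI-style chat completions endpoint; any text placed in the prompt
# is transmitted to a third-party service outside the corporate boundary.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # a personal key, unmanaged by IT

# Illustrative only: a user pastes internal figures into the prompt.
prompt = "Summarise our Q3 revenue figures: ..."

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```

Nothing in this flow touches corporate identity or logging: the data, the account, and the conversation history all live outside the organisation’s control.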
The Risk
In web mode, the risk is similar to that of other unsanctioned cloud apps or websites, revolving primarily around data leakage. Business users may sign in to GenAI apps with personal accounts on corporate devices, or use them from personal devices altogether. Files and text uploaded to these apps can then be accessed by the same user on a personal device or, if the account is compromised, by a threat actor. This risk is compounded by the fact that these apps may not enforce multi-factor authentication (MFA) and might not align with your organisation’s security policies.
In SaaS mode, the risk can be even higher, considering the following scenarios:
- The AI tool may use the user’s entitlements, giving it access to sensitive data that the user might not easily access manually, especially if the user has excessive permissions.
- AI-generated documents containing sensitive information can be mislabelled or over-shared.
- Malicious insiders could use AI to scan millions of files and extract credentials, personal information, financial data, and other sensitive information.
The Solution
First, your organisation should establish a clear policy for AI usage as part of the Acceptable Use Policy (AUP). This should be paired with AI security awareness training that covers the ethical, legal, and secure use of AI. Training requirements should differ based on user roles, particularly for those with access to sensitive data or privileged access, such as IT and cyber security roles.
From a technical standpoint, web mode and SaaS mode require different controls.
Web Mode: Similar to other unsanctioned cloud apps, the most effective way to enable your business to adopt web AI tools is by using a Secure Web Gateway (SWG) with robust data protection controls, usually as part of a Security Service Edge (SSE) solution. SSE provides security teams with full visibility into app usage across the organisation, allowing them to implement appropriate controls. Consider these scenarios:
- Real-time coaching: A user tries to access ChatGPT and receives a customised pop-up explaining the safe use of ChatGPT and offering guidance to follow organisational policy.
- Data Loss Prevention (DLP) controls: A user attempts to upload a file containing sensitive data or pastes sensitive information into ChatGPT. The action is blocked, with a custom alert explaining the reason or requesting justification.
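Vendor consoles express these controls in their own policy languages, but the underlying decision logic is similar. The Python sketch below is a hypothetical illustration, not any specific SSE product’s API: the patterns, actions, and thresholds are all assumptions.

```python
import re

# Hypothetical DLP rules: each pattern maps to an enforcement action.
# Real SSE/SWG products ship far richer detectors (exact data matching,
# OCR, ML classifiers) than these two illustrative regexes.
RULES = [
    (re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"), "BLOCK"),  # card-like numbers
    (re.compile(r"(?i)\b(confidential|internal only)\b"), "BLOCK"),     # classification markings
]

def evaluate_upload(text: str) -> str:
    """Return the enforcement action for text a user tries to send to a GenAI app."""
    for pattern, action in RULES:
        if pattern.search(text):
            return action  # block, and show a custom alert explaining why
    # No sensitive content detected: allow, but show the real-time coaching
    # pop-up that points the user at the organisational AI usage policy.
    return "ALLOW_WITH_COACHING"

print(evaluate_upload("Please review this draft marketing copy"))  # ALLOW_WITH_COACHING
print(evaluate_upload("Customer card 4111 1111 1111 1111"))        # BLOCK
```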
SaaS Mode: Securing SaaS AI tools requires foundational controls around corporate SaaS security and data security. Corporate SaaS security involves monitoring and controlling access to and usage of the SaaS application. This can be achieved using a Cloud Access Security Broker (CASB), another component of SSE solutions. A CASB gives security teams a comprehensive view of how AI tools access data in cloud apps, highlighting risky behaviour and enabling both data and threat protection.
Foundational data security controls include the ability to discover, classify, and label data, review data access entitlements, and revoke excessive permissions. A Data Security Platform (DSP) provides a data-centric approach to security, allowing organisations to lock down sensitive data hosted across IaaS, PaaS, and SaaS environments from a single location.
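As a simplified illustration of the discover-and-classify step, the sketch below walks a file share and assigns sensitivity labels. The mount point, patterns, and label names are hypothetical; a production DSP relies on trained classifiers and exact-data matching rather than a couple of regexes.

```python
import re
from pathlib import Path

# Hypothetical classification patterns; label names are illustrative.
CLASSIFIERS = {
    "PII":        re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I),  # email addresses
    "Credential": re.compile(r"(?i)\b(password|api[_-]?key)\s*[:=]"),
}

def classify_file(path: Path) -> set[str]:
    """Return the set of sensitivity labels found in a file's text."""
    text = path.read_text(errors="ignore")
    return {label for label, pattern in CLASSIFIERS.items() if pattern.search(text)}

# Walk a share, label every text file, and flag candidates for access review.
for path in Path("/mnt/corp-share").rglob("*.txt"):  # hypothetical mount point
    labels = classify_file(path)
    if labels:
        print(f"{path}: {sorted(labels)} -> review entitlements, revoke excess access")
```

Once files carry labels like these, entitlement reviews and permission revocation can be driven from the labels rather than from guesswork.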
Depending on your organisation’s needs, use cases, and risk appetite, the ideal solution for securing SaaS AI tool adoption could involve one or both of these technologies. In summary, we recommend that you:
- Use SWG/SSE to securely adopt web AI tools for business.
- Assess your organisation’s cloud app security and data security capabilities before adopting SaaS AI tools.
- Implement AI security awareness training based on user roles.
If you’re planning to adopt SaaS AI tools like MS365 Copilot or Google Gemini, or are concerned about the current use of web AI tools such as ChatGPT, we can help you with a complimentary risk assessment and recommend the most suitable solutions for your business requirements.