Published by Dushan on 23rd May 2024

Navigating AI Data Security Risks in Business: Why Traditional Methods Fall Short

In the rapidly evolving digital landscape, businesses are increasingly adopting AI technologies to streamline operations and gain a competitive edge. However, with this adoption comes significant data security risks that traditional methods struggle to address. This article delves into the limitations of current data security practices and the urgent need for innovative solutions.

In March 2024, the Subrosa team surveyed 269 InfoSec, Compliance and IT practitioners to understand how businesses currently measure and manage AI data security. The results were telling:

Most Businesses Rely Solely on Written Policies and Training

According to our research, most companies place their trust in written policies and employee training to safeguard their data. These measures, while essential, make it nearly impossible to quantify how solid a company's data boundary actually is, or whether employees actually comply with the policies. Without enforcement and monitoring, ensuring compliance and measuring effectiveness is challenging and time-consuming, leaving organisations vulnerable to data breaches and AI misuse.

Traditional DLP Tools Fall Short

The majority of respondents rely on traditional Data Loss Prevention (DLP) tools. However, these tools typically sit as proxies inside a corporate network, so they cannot protect a distributed workforce without ugly workarounds such as backhauling traffic over VPNs into the corporate network (which many companies saw collapse under load during the COVID-19 pandemic). This "hub-and-spoke" security model is also incompatible with the frontier of secure network design, which relies on mutual authentication of all entities (commonly known as "Zero Trust"). We're pleased to see this trend growing in the industry, as the notion of a "trusted" core network is fundamentally flawed.
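
For readers unfamiliar with the term, mutual authentication simply means both ends of a connection must prove their identity before any data flows. As a minimal sketch of the idea in Python (the certificate file names and port are illustrative, and assume an internal certificate authority):

    import socket
    import ssl

    # Both sides present certificates: the server verifies the client and the
    # client verifies the server, so no network segment is implicitly "trusted".
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.verify_mode = ssl.CERT_REQUIRED               # reject clients without a valid certificate
    context.load_cert_chain("server.crt", "server.key")   # this service's own identity
    context.load_verify_locations("internal-ca.pem")      # CA that issues employee/device certificates

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with context.wrap_socket(server, server_side=True) as tls_server:
            conn, addr = tls_server.accept()  # handshake fails unless both identities verify

Because the handshake itself verifies both parties, no network segment needs to be treated as implicitly trusted.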

Some tools also lack the sophistication to monitor and control data flows between business IT systems and external AI integrations; some are not even application-layer aware, dealing only in "good" and "bad" IP addresses (the sketch below makes this distinction concrete). This leads to potential data leaks and non-compliance with regulations. A staggering 83% of respondents expressed the need for new solutions to govern AI adoption and enhance data security at their workplaces.
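
Here is a minimal, hypothetical sketch of application-layer classification in Python: outbound requests are identified by their destination hostname rather than by IP allow/deny lists (the domain list is illustrative, not exhaustive):

    from urllib.parse import urlparse

    # Hypothetical, non-exhaustive list of hostnames belonging to external AI services.
    AI_SERVICE_DOMAINS = {"api.openai.com", "generativelanguage.googleapis.com"}

    def is_ai_bound(url: str) -> bool:
        """Classify an outbound request by destination hostname, not IP address.

        IP-based filtering breaks down when thousands of services share cloud
        IP ranges; the application-layer hostname names the actual service.
        """
        host = urlparse(url).hostname or ""
        return host in AI_SERVICE_DOMAINS or any(
            host.endswith("." + domain) for domain in AI_SERVICE_DOMAINS
        )

    print(is_ai_bound("https://api.openai.com/v1/chat/completions"))  # True
    print(is_ai_bound("https://203.0.113.7/upload"))                  # False: an IP alone says nothing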

AI Privacy Impact on Company Data Security in Australia

In Australia, the Privacy Act 1988 mandates that businesses protect customers' personal information by implementing robust data security measures. The introduction of AI complicates this task: AI systems often consume vast amounts of sensitive data to produce the expected outcome, and AI tools (especially those used under a free tier) typically train on the data they ingest.

There are a number of documented cases of public models being trained on data they were never meant to see and later returning it to other users, the Samsung leak being a noteworthy example.

According to the Australian Privacy Principles (APPs), companies must ensure that their AI applications comply with privacy laws, safeguarding personal information against unauthorised access and misuse. Failure to comply can result in significant penalties, damage to a company's reputation, and the risk of civil litigation.

Australian businesses, business leaders and InfoSec teams should consider conducting Privacy Impact Assessments to evaluate the potential risks associated with AI deployment. These assessments help companies identify Shadow AI (unsanctioned AI use) and plan data controls to mitigate data leak risks.

Global Privacy Risks

Globally, businesses adopting AI or building AI capability must navigate complex regulatory environments such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States. These regulations impose strict requirements on how companies collect, process, and store personal data.

Non-compliance can lead to fines and legal consequences. The GDPR mandates transparency, data minimisation, and the right to be forgotten, while the CCPA grants consumers rights over their data, including access and deletion. Businesses using AI must ensure that their data practices align with these regulations to avoid legal repercussions and to maintain consumer trust.

The Path Forward: Embracing Innovative Solutions

To effectively manage AI data security and privacy risks, businesses must look for methods and tools that provide:

  • Deep understanding of AI and data boundaries: Obtain granular insights into data usage and boundaries within AI applications.
  • Comprehensive AI Governance: Collect evidence of employee AI usage to prove alignment with organisational policies, compliance frameworks and industry regulatory requirements.
  • Policy Enforcement: Take proactive measures to protect data before it leaves a company's data boundary, automatically mitigating risks before they occur (see the sketch after this list).
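
As a rough illustration of the third point, a policy-enforcement layer might scan and redact sensitive patterns before a prompt is forwarded to an external AI service. A minimal Python sketch (the patterns are simplified examples, not production-grade detection):

    import re

    # Simplified example patterns only; a real deployment would use broader,
    # validated detectors (names, addresses, API keys, source code, etc.).
    POLICIES = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def enforce(prompt: str) -> str:
        """Redact policy-matched data before the prompt leaves the data boundary."""
        for label, pattern in POLICIES.items():
            prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
        return prompt

    print(enforce("Contact jane@example.com, card 4111 1111 1111 1111"))
    # Contact [REDACTED-EMAIL], card [REDACTED-CARD]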

Conclusion

As AI technology continues to transform business operations, robust data security and privacy measures are critical. Companies should continuously re-evaluate their security posture and data boundaries, especially given how rapidly AI is moving.

Do you know how your employees are using AI in your company, and what data has been sent to AI services? We can help: get in touch with us for an assessment of AI use and gain visibility across your company today.

Protect Your Data Today

Subrosa uncovers AI usage risks across your organisation and protects your business from shadow AI and data leaks.