Unchecked AI, unseen dangers: What the DeepSeek breach means for SA companies and POPIA compliance
At a glance
- DeepSeek, a prominent competitor in the artificial intelligence (AI) marketplace, recently faced a significant security incident when an unsecured database exposed over a million lines of sensitive information, including user chat histories and secret keys.
- The DeepSeek incident illustrates the risk of AI innovation outpacing legal regulation in most jurisdictions.
- By instituting clear AI policies and responsible usage guidelines, organisations can harness AI’s potential while mitigating preventable compliance risks.
The breach underscores the substantial security risks that arise when AI companies process large volumes of user-submitted data, including sensitive content, particularly where users have limited control or oversight over how their information is handled and secured.
Global breach, local lessons
The DeepSeek incident illustrates the risk of AI innovation outpacing legal regulation in most jurisdictions. While South Africa has yet to adopt AI-specific laws, businesses remain accountable under existing legislation, including the Protection of Personal Information Act 4 of 2013 (POPIA), which governs personal data protection and security.
Internationally, regulators are taking decisive action. Both Ireland’s Data Protection Commission and Italy’s Garante have launched investigations into DeepSeek’s security failures. These authorities have a track record of issuing substantial penalties for data protection breaches, reinforcing that while AI operates across borders, legal accountability remains jurisdiction-specific.
For South African businesses, this underscores the importance of complying with data protection laws, particularly as employees increasingly rely on AI tools in the workplace.
POPIA implications for South African employers
The DeepSeek breach highlights a growing concern: how employees interact with AI models in the workplace, particularly when using publicly available tools like ChatGPT for work-related tasks.
POPIA mandates that organisations prevent the unauthorised disclosure of personal information to third parties, which includes AI platforms. Because POPIA was enacted before the accelerated adoption of AI in the workplace, these tools introduce novel vulnerabilities that the Act does not expressly address, requiring specific consideration and guidance.
A single instance of sensitive data being input into a public AI model by an employee could breach POPIA, potentially resulting in financial, reputational and legal consequences.
Essential steps for employers
AI offers significant opportunities but introduces knowledge gaps and compliance challenges. South African employers can proactively implement several measures to protect data while maintaining compliance:
- Establish a comprehensive AI policy: Define permissible tools and outline usage guidelines that align with POPIA’s conditions, including data minimisation or redaction, valid consent, relevant declarations on AI use and secure data transfers (the first sketch after this list illustrates a simple redaction step).
- Implement regular training programmes: Conduct ongoing training on the risks of using AI platforms and of sharing sensitive data with AI models, ensuring that employees, contractors and service providers understand POPIA principles and their legal implications.
- Create incident response protocols: Develop clear procedures for identifying, containing and reporting data breaches, emphasising prompt and transparent reporting and action.
- Maintain regular AI usage audits: Monitor organisational practices to identify unauthorised AI tool adoption, mitigate risks and ensure compliance with organisational policies (the second sketch after this list shows one simple monitoring approach).
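To make the data minimisation and redaction point concrete, the following is a minimal Python sketch of a pre-submission redaction step, assuming a hypothetical helper that sits between employees and any public AI model. The patterns shown (South African identity numbers, email addresses, phone numbers) are illustrative only and far from an exhaustive definition of personal information under POPIA.

```python
import re

# Illustrative patterns only; a production policy would need a far more
# comprehensive and tested rule set (names, addresses, account numbers, etc.).
REDACTION_PATTERNS = {
    "SA_ID": re.compile(r"\b\d{13}\b"),                    # 13-digit SA identity number
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
    "PHONE": re.compile(r"(?:\+27|0)\d{9}\b"),             # SA phone numbers (simplified)
}

def redact(text: str) -> str:
    """Replace likely personal information with labelled placeholders
    before the text leaves the organisation's control."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

if __name__ == "__main__":
    prompt = ("Draft a letter to Jane (jane.doe@example.com, +27821234567, "
              "ID 9001015009087) about her claim.")
    print(redact(prompt))
    # -> Draft a letter to Jane ([REDACTED-EMAIL], [REDACTED-PHONE],
    #    ID [REDACTED-SA_ID]) about her claim.
```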
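Similarly, the audit recommendation can be supported with simple technical monitoring. The sketch below assumes hypothetical web-proxy or DNS logs in a space-separated "timestamp user domain" format and a non-exhaustive list of public AI service domains; an organisation would substitute its own log sources and an up-to-date domain list.

```python
from collections import defaultdict

# Hypothetical, non-exhaustive list of public AI service domains to flag.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "deepseek.com", "gemini.google.com"}

def flag_ai_usage(log_lines):
    """Return a mapping of user -> set of flagged AI domains,
    based on simple 'timestamp user domain' log lines."""
    usage = defaultdict(set)
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        _, user, domain = parts[0], parts[1], parts[2]
        if domain in AI_DOMAINS:
            usage[user].add(domain)
    return usage

if __name__ == "__main__":
    sample = [
        "2025-02-01T09:14:02 alice chat.openai.com",
        "2025-02-01T09:15:47 bob intranet.example.co.za",
        "2025-02-01T10:02:11 alice deepseek.com",
    ]
    for user, domains in flag_ai_usage(sample).items():
        print(f"{user}: {', '.join(sorted(domains))}")
```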
Employee responsibilities
Employees play a crucial role in preventing AI-related data breaches. Beyond organisational exposure, employees should be aware that negligence in handling sensitive data could result in reputational damage, liability and disciplinary action. Essential precautions include:
- Strict policy adherence: Follow organisational AI usage guidelines meticulously, treating all tools as restricted unless verified.
- Consultation with management: Obtain approval before using or implementing any AI tools, including (and especially) widely available public models, for workplace tasks.
- Data protection vigilance: Never input company, client or personal information into unauthorised platforms, or into authorised platforms where usage restrictions apply.
- Proactive security reporting: Immediately notify management or IT teams of suspected AI-related vulnerabilities.
Staying ahead
The DeepSeek breach is a stark reminder that AI’s benefits come with significant risks if security and compliance are neglected. While South African businesses stand to gain from AI-driven efficiencies, data protection and appropriate usage must remain a priority.
By instituting clear AI policies and responsible usage guidelines, organisations can harness AI’s potential while mitigating preventable compliance risks.