OpenAI and the Data Sovereignty Quagmire in the EU

  • 5 Dec 2025
  • 3 Mins Read
  • by Anne Ndungu

Last week, OpenAI expanded its data residency options worldwide, letting eligible customers choose where their data is stored ‘at rest’. Customers using its API or setting up new ChatGPT Enterprise or ChatGPT Edu workspaces can now keep their data at rest on servers physically located in Europe. The move aims to help OpenAI comply with European data sovereignty and privacy regulations, including the General Data Protection Regulation (GDPR), and OpenAI also guarantees that processing for these endpoints takes place within the EU, further assisting with legal compliance.
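In practice, API customers who enable the EU residency option direct their traffic to a regional endpoint. The sketch below assumes OpenAI's published pattern of a regional base URL (`https://eu.api.openai.com/v1`) alongside the default global one; the exact URL and project-level setup should be verified against OpenAI's current data-residency documentation.

```python
# Minimal sketch: picking an API base URL by data-residency region.
# The EU URL follows OpenAI's documented regional-endpoint pattern
# (an assumption here; confirm against current OpenAI docs).

ENDPOINTS = {
    "eu": "https://eu.api.openai.com/v1",   # data at rest and processing in Europe
    "global": "https://api.openai.com/v1",  # default global infrastructure
}


def choose_base_url(region: str) -> str:
    """Return the API base URL for a residency region, falling back to global."""
    return ENDPOINTS.get(region, ENDPOINTS["global"])


if __name__ == "__main__":
    # An EU-based organisation would configure its API client with this URL.
    print(choose_base_url("eu"))
```

A client library would then be initialised with the chosen base URL so that all requests stay on the EU-resident endpoint.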

Previously, data for ChatGPT Enterprise / Edu or API users was processed and stored on OpenAI’s global infrastructure, which could include servers outside Europe, and users had no way to guarantee that their data would stay within the European Union (EU). This was a concern for companies handling sensitive customer or employee data, for government and public-sector organisations bound by strict European or local privacy laws, and for any organisation needing to meet EU data-sovereignty requirements.

The EU enforces strict data protection laws (GDPR) and is implementing the EU AI Act. By enabling data residency in Europe, OpenAI lowers the legal risk of cross-border data transfers that could breach the GDPR. Its signing of the EU Code of Practice for General-Purpose AI demonstrates proactive engagement with EU regulators, indicating that OpenAI seeks to align with European standards before being compelled to do so; it also helps OpenAI maintain legitimacy and a leadership position in a market increasingly focused on “digital sovereignty”. Past regulatory actions show that the EU actively enforces privacy laws against AI providers: in December 2024, Italy’s privacy watchdog fined OpenAI €15 million for allegedly processing personal data to train its models without an adequate legal basis and for failing to meet transparency and age-verification requirements.

Even with data residency enabled, experts caution that it does not fully address all “sovereignty” issues. Concerns still exist regarding metadata, backend processing, transparency, data retention periods, and the use of personal data for model training under broader regulations. At the same time, there is a growing European movement towards “digital sovereignty”, as governments and companies in Europe seek solutions – including local AI infrastructure – that reduce reliance on non-EU providers and infrastructure. 

Furthermore, the EU AI Act, initially proposed by the European Commission in April 2021 as part of its effort to regulate artificial intelligence throughout the European Union, was adopted by the European Parliament and the Council of the EU in 2024 and has since entered into force. Its obligations apply in stages over the coming years, with transitional periods allowing companies to comply. For OpenAI, the risks include that certain uses of its products, such as GPT models deployed in recruitment, credit scoring, or law enforcement contexts, could be classified as high-risk under the Act. OpenAI may need to disclose more about how its models are trained, the data used, and how outputs are generated. It might also have to modify its product features, contracts, and operational practices for European customers, or face fines and restricted access in Europe.

Ultimately, OpenAI’s data residency initiative is a positive step, but it is only part of a larger compliance challenge. For EU-based organisations, choosing European data storage does not automatically address all regulatory or governance issues. Companies will still need to meticulously manage metadata, audit trails, and model-training practices to comply with EU standards.

Looking ahead, the evolution of the EU AI Act, alongside the broader push for digital sovereignty, indicates that AI providers such as OpenAI will face growing pressure to localise their infrastructure, show greater transparency, and implement stricter governance practices. For businesses in Kenya and other countries working with EU clients, these changes highlight the importance of understanding cross-border data rules and planning AI adoption strategies accordingly.