Global AI policy landscape: Countries race to regulate and harness AI’s potential

  • 21 Apr 2024
  • 3 Mins Read
  • by Kennedy Osore


The dawn of the 21st century witnessed the emergence of Artificial Intelligence (AI) as a transformative force shaping societies, economies, and geopolitics. As AI’s capabilities burgeon, policymakers across the globe are increasingly recognising the imperative to regulate its deployment while harnessing its potential for national development and global competitiveness. The year 2023 marked a pivotal juncture in the trajectory of AI policy, with major jurisdictions introducing a wave of legislative and regulatory measures aimed at governing AI’s proliferation.

China, a frontrunner in AI development, took proactive steps to address the security challenges posed by “deep synthesis” technology. In January 2023, it introduced regulations governing the creation of realistic virtual entities and synthetic multimodal media, commonly known as “deepfakes.” The regulations imposed stringent obligations on content providers and users to ensure legal compliance, data security, and content moderation.

Furthermore, China updated its cyberspace administration measures in August 2023, adopting a more nuanced regulatory approach towards generative AI. Unlike the earlier punitive directives, the revised regulations emphasised enhancing the quality of training data while encouraging the development of generative AI technologies. These measures reflect China’s commitment to balancing technological innovation with regulatory oversight to safeguard national security and societal stability.

In parallel, the United States, cognisant of AI’s significance for national security and economic competitiveness, introduced several legislative initiatives to bolster its AI capabilities. In March 2023, US legislators proposed the AI for National Security Act, aiming to enhance the Department of Defense’s (DoD) cyber-defence capabilities through AI-based endpoint security tools. This bipartisan initiative underscored the need to leverage AI for automatic threat detection and mitigation, aligning the DoD’s capabilities with evolving cyber threats.

Moreover, US policymakers prioritised AI literacy among federal leaders with the introduction of the AI Leadership Training Act in May 2023. This legislation mandated the creation of an AI training program for federal employees to promote responsible and ethical AI usage within government agencies. Concurrently, the proposal for the National AI Commission Act emphasised expert input in crafting a comprehensive AI regulatory framework, signalling a proactive approach towards mitigating risks and preserving US leadership in AI research and development.

In Europe, policymakers deliberated on regulatory frameworks to govern AI’s ethical and responsible deployment. The European Union (EU) reached a tentative deal on the AI Act in December 2023 and passed it in March 2024, establishing a risk-based regulatory framework to prohibit AI systems with unacceptable risks while mandating transparency standards for generative AI. Similarly, the UK proposed principles to guide competitive AI markets and protect consumer interests, ensuring accountability, continuous access to essential inputs, and fair competition.

Additionally, both the UK and the EU prioritised AI safety and governance through institutional mechanisms. The UK inaugurated the world’s first AI Safety Institute in November 2023, dedicated to advancing AI safety research and governance in the public interest. Similarly, the EU’s AI Act reflected a concerted effort to harmonise AI regulations across member states, balancing innovation with safeguards against potential risks to democracy, privacy, and consumer rights.

Outside the traditional power centres, the African Union (AU) formulated a comprehensive AI policy framework to guide the responsible deployment of AI across the continent. The AU’s draft policy, published in February 2024, outlined recommendations for industry-specific codes, regulatory sandboxes, and national AI councils to monitor AI deployment. While awaiting formal endorsement by African governments in 2025, the AU’s AI strategy is poised to catalyse continental integration and accelerate Africa’s digital transformation.

Kenya, emblematic of Africa’s fast-growing tech ecosystem, embarked on a national AI strategy to harness AI’s transformative potential for sustainable development. The FAIR Forward initiative, launched in collaboration with international partners, aims to drive Kenya’s digital transformation by leveraging AI across various sectors. By convening key stakeholders and defining strategic priorities, Kenya seeks to position itself as a hub for AI innovation and inclusive growth in the region.

The evolution of AI policy across diverse jurisdictions highlights the global imperative to balance innovation with responsible governance. From China’s regulatory measures to the US’s emphasis on national security and the EU’s commitment to consumer protection, policymakers are navigating complex trade-offs to harness AI’s potential while mitigating its risks. As nations converge on common principles and regulatory frameworks, collaborative efforts such as the AU’s continental strategy and bilateral partnerships will be instrumental in shaping a cohesive global framework for AI governance in the 21st century.