Putting a leash on AI – Tech industry leaders endorse regulation in US Congress meeting

  • 25 Sep 2023
  • 3 Mins Read
  • by Anne Ndungu

Artificial intelligence is developing at a rapid rate across the globe and finding a variety of innovative applications. ChatGPT, for example, has already found many uses that make work easier.

That is why a meeting at the US Senate comprising AI advocates and leaders of big tech companies drawn mainly from Silicon Valley, such as Elon Musk, Sam Altman, Jensen Huang, Bill Gates, Satya Nadella, Sundar Pichai and Mark Zuckerberg, to name a few, drew the attention of AI developers, users and enthusiasts worldwide. The meeting was held on Wednesday, September 13th 2023, and its conclusion was that government intervention is needed.

How that intervention should look was a contentious matter that took six hours of debate. Suggestions included open-sourcing models and code, though even this was disputed. This is understandable, since AI is a new technological frontier without legal precedents to guide how it should be regulated. Perhaps the main problem stems from the fact that, even within the field, a full understanding of the uses of AI and its potential ramifications is still taking shape.


The meeting, hosted by Senate Majority Leader Chuck Schumer, acknowledged the difficulties inherent both in AI itself and in attempting to regulate such a fast-changing industry. Regulation has so far failed to catch up with a tech industry racing to deploy AI the fastest. Suggestions to implement a moratorium on AI development have failed, despite the acknowledgement that no one fully understands what impact the AI models developed so far could have. However, these suggestions have managed to awaken global interest.


Many questions abound about AI that make regulation very difficult. Is it intelligent? How intelligent can it be, and what is the possibility that it will one day surpass human intelligence and work autonomously? Is it ethical? There are also questions about AI-controlled smart drones, which could be used for spying. There have been news stories of a simulation in which an AI drone decided to kill its operator in order to achieve its mission. There is also the question of AI bias. Conversations around AI must therefore go hand in hand with its ethics, and there is a role for government in regulating AI, especially where security and human rights are concerned.


AI development in African countries has taken time because AI technology is not easily transferable in a copy-and-paste manner. AI has to be 'contextualised' to different environments, and that requires simulating the context in which AI is to be applied before deployment.

In Africa, barriers such as language and culture have made it difficult to adapt AI models quickly. Another problem is that data sets are few or limited. So while the technology is not new to the continent, there are challenges to its quick deployment.


The African Union has started conversations on AI with African experts in the field, and countries like Kenya, South Africa, Nigeria, Egypt and Ghana already have policies in place to either introduce AI learning or develop AI-related products.


While the US debates how to regulate AI, the European Union will soon have an AI Act, and China already has rules in place. Meanwhile, AI regulatory bodies look set to become a new feature in many countries, and conversations around AI issues are only just beginning.