In the ever-evolving landscape of artificial intelligence (AI), it is crucial to scrutinise its impact on societies, especially those in the Global South. While AI systems are often perceived as neutral and efficient decision-makers, they are not immune to the biases ingrained in the data that fuels their training. The repercussions of AI bias extend far beyond technical flaws, reaching into the domains of economic development, social justice, and human rights.
AI systems, equipped with vast amounts of data and computational power, play a pivotal role in shaping decisions, predictions, and classifications across diverse sectors. However, the belief in AI’s neutrality is challenged by its inherent socio-technical nature and susceptibility to flaws. One key factor contributing to these flaws is the biased nature of the data used for training. The very world from which this data originates is often discriminatory and unjust, leading the algorithms to learn problematic ground “truths”.
Additionally, the human element in building AI systems introduces its own biases. Developers, consciously or unconsciously, may embed their prejudices into the training process, perpetuating societal stereotypes. Some AI algorithms also latch onto spurious correlations, making their decisions difficult to interpret and potentially reinforcing harmful biases.
Crucially, the global power dynamics in AI development contribute significantly to the flaws in these systems. The design, development, deployment, and deliberation around AI are inherently political processes. The voices of a few dominate the conversation, leading to systems that amplify certain perspectives at the expense of others. AI, therefore, becomes a tool of power wielded by a select group of individuals, impacting societies across the globe.
More than 60 years after the term AI was coined, its influence is deeply embedded in both public and private spheres, affecting everything from flagging problematic content online to diagnosis in healthcare. The speed and scale of AI systems surpass human capabilities, garnering attention and investment from governments, companies, academia, and civil society.
However, the narratives surrounding AI often lack nuance, and global deliberations tend to concentrate on certain jurisdictions, primarily the US, the UK, and Europe. This concentration of thought leadership can lead to a narrow understanding of the global implications of AI development. Recent events, such as the AI Safety Summit at Bletchley Park, highlight the international efforts to address AI risks and establish guidelines for safety. President Biden’s Executive Order emphasises the US government’s commitment to collaborating with international partners on AI standards. Similarly, the EU is actively working on the EU AI Act, the world’s first comprehensive AI law, aligning with the principles outlined in the Bletchley Declaration.
Yet the politics of AI extend beyond regulations and policies. They shape how we perceive, critique, and build AI systems. The assumptions guiding the design and deployment of these systems are context-specific but often applied globally, flowing from the "Global North" to the "Global South." This one-directional application of AI reinforces existing inequalities and perpetuates the dominance of a select few private actors, as pointed out by Amba Kak of the AI Now Institute.
The impact of AI bias on businesses in the Global South is profound, influencing economic development, social justice, and human rights. Acknowledging and addressing these biases is critical to ensure that AI systems contribute positively to a more equitable and just global society.