UNESCO drafts guidelines and opens consultations on AI use in judicial systems

  • 13 Sep 2024
  • by Anne Ndungu

Artificial Intelligence (AI) has many potential uses in the judiciary, which is often swamped with work. Kenyan courts are known for their case backlogs: in FY 2021/22, 521,823 cases were pending before the Magistrates' Courts, 233,177 of which had been in the court system for over a year. The situation has not improved three years later. Continuing with the old, unorganised system, without an army of additional staff, means that some cases may never see the light of day.


Justice Martha Koome spoke this year about adopting AI for transcription services as part of the judiciary's digital transformation. Ideally, this should give judges and magistrates prompt access to records of court proceedings, allowing them to deliver rulings faster. This is commendable. In other countries, AI has already been rolled out in more advanced ways than in Kenya; Singapore, for example, uses speech recognition and language translation services. A 2022 study by Lawyers Hub tracked the digitalisation of various African judiciaries and showed that African countries still have a long way to go to fully digitise their systems, but in the past two years there has been a leap towards embracing AI, as can be seen in Kenya.


While AI offers numerous benefits, its use in the judiciary raises ethical concerns that must be carefully considered. These concerns have prompted UNESCO to open consultations on draft guidelines for the use of AI in Courts and Tribunals, a signal that the potential risks of the technology are being taken seriously.


The drafting process began a year ago, and the guidelines provide an overview of AI tools used in judicial systems worldwide. They address the ethical issues these tools raise, particularly the safeguarding of human rights. The chief concern is that AI should not take over the decision-making aspects of judicial work: it cannot replace the human dimensions of justice, only assist them.


The guidelines propose principles for AI adoption in judicial contexts, focusing on justice, the rule of law, and human rights. Key principles include ensuring that AI tools do not replace human judgment, maintaining transparency, and implementing oversight mechanisms to manage the risks associated with AI use. While these concerns apply to the judicial system, they are not confined to it; similar worries have been raised in other fields, such as the use of AI in warfare.


The guidelines recommend that AI tools undergo a human rights impact assessment by experts, that they assist rather than replace human legal analysis, and that they reflect the values and goals of justice administration without compromising human rights. Parties should be able to challenge decisions influenced by AI. There should also be proactive harm reduction: an AI system should be withdrawn from use if it negatively affects human rights or malfunctions.


The guidelines also caution against using general-purpose AI models for legal research or analysis because of potential inaccuracies and biases. Instead, generative AI is recommended for tasks such as drafting, summarising, and translating. Outputs from such systems should always be verified, however, and used in a manner that protects personal data. If adopted, the guidelines will guide countries across the world in taking up these tools.


Submissions on the draft guidelines can be made on the UNESCO website until 25 September 2024.

