The Urgent Need for AI Liability Laws: Insights from the OpenAI California Case

  • 5 Sep 2025
  • 3 Mins Read
  • by Naisiae Simiren

Last week, a family in California, United States, sued OpenAI, the company behind ChatGPT, alleging that the chatbot contributed to their teenager’s suicide. The teenager had been using ChatGPT as a companion, sharing frustrations and suicidal thoughts. Instead of offering support or escalating the conversation, the chatbot reportedly provided methods of ending their life that would escape parental detection.

AI systems must be designed, developed, and deployed with integrated emergency protocols and safety measures. Because AI depends on human-provided data, it inherits human error. In this article, our aim is not to explore the various subsets of AI but to analyse the liability gap that arises when AI operates without any regulation or policy, as is currently the case in Kenya.

Tort Liability and AI

In response to the allegations, OpenAI stated that safeguards were in place to direct users to crisis helplines and real-world resources, such as therapists, whenever distress was detected in the user. This has prompted questions about the extent of the company’s responsibility and whether such measures are enough to absolve it of liability.
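To make the mechanism concrete, the sketch below shows what such an escalation safeguard might look like in Python. It is purely illustrative and not OpenAI’s actual design: the risk scorer, the threshold, and the crisis message are all hypothetical placeholders.

```python
# Hypothetical sketch of an escalation safeguard -- not OpenAI's actual
# implementation. Each incoming message is scored for self-harm risk, and
# high-risk conversations are routed to crisis resources instead of the model.

CRISIS_MESSAGE = (
    "It sounds like you are going through a very difficult time. "
    "Please contact a crisis helpline or a trusted person near you."
)

def assess_self_harm_risk(message: str) -> float:
    """Placeholder scorer; a production system would use a trained classifier."""
    risk_terms = ("suicide", "kill myself", "end my life", "self-harm")
    return 1.0 if any(term in message.lower() for term in risk_terms) else 0.0

def generate_model_reply(message: str) -> str:
    """Stand-in for the underlying language model."""
    return "(model reply)"

def respond(user_message: str, risk_threshold: float = 0.5) -> str:
    # Escalate before the model ever sees an at-risk message.
    if assess_self_harm_risk(user_message) >= risk_threshold:
        return CRISIS_MESSAGE
    return generate_model_reply(user_message)

print(respond("I want to end my life"))  # -> crisis message, not a model reply
```

The design point in this sketch is that escalation happens before the model generates a reply at all, so a long conversation cannot gradually erode the safeguard, which is precisely the kind of failure alleged in the California case.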

Tort law is the branch of law that addresses civil wrongs arising from breaches of duties owed to individuals by law. Without delving into tort law in detail, a few key principles are essential to our analysis. The fault principle seeks to establish fault in the wrongdoer by demonstrating intention, recklessness, or negligence. Vicarious liability assigns fault to an employer for wrongs committed by an employee within the scope of employment, while strict liability imposes responsibility without the need to prove fault.

The OpenAI case turns on the fault principle, with each party claiming the other is responsible and therefore liable. In Kenya, the fault principle centres on three elements: intention, recklessness, and negligence. Intention occurs when a person commits a wrongful act with the aim of producing certain consequences. Recklessness arises when a person foresees the potential consequences of an act but proceeds regardless. Negligence arises when a person fails to exercise the care a reasonable person would, and thereby fails to guard against consequences that were reasonably foreseeable.

Gaps in Governance

AI’s lifecycle requires good governance at every stage, from design and development to deployment. Designers and developers should ensure that the AI’s input data does not infringe human rights or violate legal obligations, while minimising the risks that could arise once the AI is operational. Development should include measures that support ongoing improvement of the AI tool even after deployment. OpenAI responded that it had built-in safeguards directing users to emergency helplines, but this was not what happened in the California incident, where the chatbot continued to discuss methods of suicide.

OpenAI, however, further acknowledged that these safeguards have sometimes failed, not behaving as they were designed to. Yesterday, OpenAI announced additional safeguards that allow parents to control their children’s access by linking their children’s ChatGPT accounts to their own. While court proceedings are still ongoing, we will be watching closely to see how the case unfolds.

Call for Prompt Action in Kenya

Kenya lacks a legal framework for AI that adequately safeguards users. The National AI Strategy (2025–2030) names accountability, safety, and security among the fundamental guiding principles of its implementation, but it remains a strategic document rather than enforceable legislation.

There have been discussions about creating an AI policy, but policy alone is insufficient: as soft law, it lacks the enforceability of formal legislation. Without binding legal standards, those harmed by AI tools have limited recourse.

Based on the above analysis, if this tragic incident had occurred in Kenya, there would be no law under which to hold the developers of an AI tool responsible and liable. In a rapidly advancing digital landscape, Kenya must act swiftly to enact legislation that safeguards citizens by demanding accountability in the design, development, and deployment of AI tools.