Navigating generative AI and intellectual property rights: How key stakeholders can mitigate risks

  • 25 Sep 2023
  • 4 Mins Read
  • by Waceera Kabando

In this era of dynamic technological advancement, artificial intelligence is rapidly proving its effectiveness as a tool for invention. Beyond that, we now have prompt-based generation of novel content, simply labelled generative artificial intelligence. The European Union defines generative AI systems as systems intended, with varying levels of autonomy, to generate content such as complex text, images, audio and video. This technology is trained on massive amounts of pre-existing data to produce new outputs. It was first introduced in the 1960s through chatbots, but it was not until the 2010s that it could create convincingly authentic images, videos and audio of real people.

This advancement has several implications, particularly for ownership of such creations. These may be grouped as: the introduction of new actors into the intellectual property system by lowering the barrier to creating novelty; the impact on diverse business models cutting across all intellectual property rights; and the establishment of policies and regulations around artificial intelligence.

The shift in AI can be categorised into three criteria:

  1.   Evolving from a tool for invention to the inventor of creations;
  2.   Establishing an IP rights system that is tailored to the specific context in which an AI-generated invention is created;
  3.   Overcoming the constraints on innovation by fostering increased novelty, involving multiple actors and enhancing capacity.

Interesting examples include The Museum of Modern Art in New York hosting an AI-generated installation created from its own collection, and the Mauritshuis in The Hague hanging an AI variant of Vermeer’s Girl with a Pearl Earring while the original was away on loan.

Existing laws have significant implications for the use of generative AI, and courts and regulators in various capacities are working to understand how they apply. Areas of concern so far include infringement and rights-of-use issues, uncertainty about ownership of AI-generated works, and questions around unlicensed content in training data. Several claims have already been litigated, and the legal system has sought to clarify what a ‘derivative work’ is under intellectual property laws. Much turns on the interpretation of the fair use doctrine, as illustrated in:

  •         Andersen v Stability AI et al – a class action against multiple generative AI platforms that used three artists’ original works, without a licence, to train models in the artists’ styles. The court ruled that the AI’s works were unauthorised and derivative, so infringement penalties apply.
  •         Getty Images filed a lawsuit against Stability AI, the maker of Stable Diffusion, on the grounds of improper use of its photos, thereby violating copyright and trademark rights in its watermarked photograph collection.
  •         Google successfully defended itself against a lawsuit by contending that transformative use permits the scraping of text from books for the creation of its search engine.
  •         In the matter before the US Supreme Court brought by Lynn Goldsmith against the Andy Warhol Foundation, copyright law would be refined on the question of whether a court may consider the meaning of a derivative work when evaluating its transformation.

An interesting concept to note is wilful infringement, where a business user is aware that training data might include unlicensed works, or that an AI can generate unauthorised derivative works not covered by fair use. A related risk is the accidental sharing of confidential trade secrets or business information by inputting such data into generative AI tools.

So what’s the way forward?

How can the key stakeholders mitigate the risks and forge a way forward? As the old saying goes, an ounce of prevention is worth a pound of cure: companies need to protect themselves from suits that could be financially draining, to say the least. This can be done in the following ways:

  •         Artificial intelligence developers must ensure they comply with regulations on the acquisition of the data used to train their models. This means compensating the owners of the IP used to enhance their training data, either by licensing it or by sharing in the revenue generated by the AI tool.
  •         Investors need to know the origin of the data being used. The onus is on content creators to actively protect their IP, rather than on AI developers being required to secure rights to the work before using it.
  •         Recording the platform used to develop the content, the settings employed, and the seed data’s metadata and tags to facilitate AI reporting, including the generative seed and the specific prompt used to create the content.
  •         Insurance companies may in the near future require audit trails to extend traditional insurance coverages to business users whose assets include AI-generated works.
  •         Monitoring trademarks and trade dress to examine the style of derivative works that may have emerged from training on a specific set of a brand’s images.
  •         Demanding terms of service from generative AI platforms that confirm proper licensing of the training data feeding the company’s AI, broad indemnification for potential IP infringement caused by the AI company’s failure to properly license data inputs, and self-reporting by the AI of its outputs to flag potential infringement.
  •         Businesses should also add disclosures to their vendor and customer agreements if either party is using generative AI, so that IP rights are understood and protected on both sides, including how each party will support registration of authorship and ownership of those works.
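The record-keeping recommendation above — logging the platform, settings, seed and prompt behind each generated asset — can be sketched as a minimal provenance record. This is an illustrative sketch only: the field names and example values are assumptions, not a standard schema or any platform’s actual API.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class GenerationRecord:
    """Illustrative audit-trail entry for one AI-generated asset."""
    platform: str                 # generative AI platform used
    settings: dict                # settings employed (e.g. steps, guidance)
    seed: int                     # generative seed
    prompt: str                   # the specific prompt used
    training_data_tags: list = field(default_factory=list)  # seed-data metadata/tags, if known

    def to_audit_json(self) -> str:
        # Stable key ordering makes records easy to diff and archive.
        return json.dumps(asdict(self), sort_keys=True)

# Hypothetical example entry
record = GenerationRecord(
    platform="example-image-model",
    settings={"steps": 30, "guidance": 7.5},
    seed=42,
    prompt="a still life of a pearl earring",
    training_data_tags=["licensed-stock-set-v1"],
)
print(record.to_audit_json())
```

Kept as plain JSON, such records could later back the audit trails that insurers or trademark monitors might request.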

Microsoft Copilot copyright commitment

Interestingly, Microsoft’s AI-powered Copilots, which are transformative tools, raise questions about the risk of IP infringement claims. For this reason, Microsoft announced the Copilot Copyright Commitment: if a customer is challenged on copyright grounds, the big tech company will assume responsibility for the potential legal risks involved.

This extends the company’s existing intellectual property indemnity support to commercial Copilot services, as long as the customer uses the guardrails and content filters (classifiers, metaprompts, content filtering, operational monitoring and abuse detection) built into the products. This is meant to safeguard digital safety and security.