Era of Deepfakes: Risks and the need for protective laws: An Explainer

  • 17 Feb 2023
  • 2 Mins Read
  • ~ by Annette Muindi

Deepfakes are synthetic media in which one person’s likeness is replaced with another’s in recorded video using Artificial Intelligence. The term “deepfake” is a blend of “deep learning,” the branch of AI behind the technique, and “fake.” Deep learning algorithms, which learn how to solve problems when trained on large sets of data, are used to swap faces in video and digital content to make realistic-looking fake media.
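
For readers curious about the mechanics, below is a minimal, illustrative sketch (in Python, using PyTorch) of the shared-encoder, two-decoder autoencoder design popularised by early face-swap tools. The layer sizes, variable names and training notes are simplifying assumptions for illustration, not a description of any particular deepfake system.

```python
# Minimal sketch of an autoencoder-style face swap (illustrative only).
# Assumes 64x64 RGB face crops; sizes and training details are simplified.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses any face into a common latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face in one person's likeness."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()   # would be trained only on faces of person A
decoder_b = Decoder()   # would be trained only on faces of person B

# Training (sketch): each decoder learns to rebuild its own person's faces
# from the shared latent space, e.g. by minimising reconstruction loss:
#   loss_a = mse(decoder_a(encoder(faces_a)), faces_a)
#   loss_b = mse(decoder_b(encoder(faces_b)), faces_b)

# The "swap": encode a frame of person A, then decode it with person B's
# decoder, producing B's likeness with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a real face crop
fake_b = decoder_b(encoder(frame_of_a))
print(fake_b.shape)                     # torch.Size([1, 3, 64, 64])
```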

The creation of deepfakes has been steadily increasing.  Bad actors have taken advantage of this growing technology, misusing the image rights of celebrities and public figures to generate and sell pornography. In 2017, a Reddit user named “deepfakes” created a forum for porn that featured face-swapped actors. Since then, deepfake pornography of popular celebrities has continued to be created.

Deepfake video has also been used in politics. In 2018, for example, a Belgian political party released a video of Donald Trump giving a speech calling on Belgium to withdraw from the Paris climate agreement. Trump never gave that speech, however – it was a deepfake.  In 2022, a deepfake video of Ukrainian President Volodymyr Zelensky telling the Ukrainian Army and people to lay down their weapons emerged. While it was soon discovered to be a deepfake, the video could have had a deadly impact on the ongoing conflict between Russia and Ukraine.

Audio can also be deepfaked to create “voice skins” or “voice clones” of public figures. In March 2019, the chief executive of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.

How to spot a deepfake

Early deepfakes had trouble realistically animating faces, and the result was video in which the subject never blinks, or blinks far too often or unnaturally. However, after researchers at the University at Albany published a study detecting the blinking abnormality, new deepfakes were released that no longer had this problem.
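
As an illustration of how such a blink check can work in practice, the Python sketch below counts blinks from per-frame eye landmarks using a simple eye-aspect-ratio heuristic. It is not the method used in the University at Albany study; the landmark layout, thresholds and example numbers are assumptions chosen for clarity.

```python
# Simple blink-rate check (illustrative; not the method from the cited study).
# Assumes per-frame eye landmark coordinates from any facial-landmark detector
# (e.g. six points per eye, as in common 68-point layouts).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmark (x, y) points around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blink_rate(ears: list[float], fps: float, closed_thresh: float = 0.2) -> float:
    """Count closed->open transitions and return blinks per minute."""
    closed = [e < closed_thresh for e in ears]
    blinks = sum(1 for prev, cur in zip(closed, closed[1:]) if prev and not cur)
    minutes = len(ears) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# People typically blink roughly 15-20 times a minute; a clip whose subject
# blinks far less (or far more) than that is worth a closer look.
per_frame_ears = [0.3] * 300 + [0.1] * 5 + [0.3] * 295  # toy 20-second clip at 30 fps
rate = blink_rate(per_frame_ears, fps=30.0)
print(f"estimated blink rate: {rate:.1f} per minute")
```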

Look for problems with skin or hair, or faces that seem to be blurrier than the environment in which they’re positioned. The focus might look unnaturally soft. 
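
One rough way to quantify that softness is to compare the sharpness of the face region against the rest of the frame. The Python sketch below does this with the variance of the Laplacian, a common blur measure in OpenCV; the file name and bounding box are hypothetical, and the result is a hint rather than proof.

```python
# Rough focus check (illustrative): compare sharpness inside the face region
# with the sharpness of the rest of the frame. The face bounding box is
# assumed to come from any face detector.
import cv2
import numpy as np

def face_vs_background_sharpness(frame_bgr: np.ndarray,
                                 box: tuple[int, int, int, int]) -> tuple[float, float]:
    """box = (x, y, w, h). Returns (face_sharpness, background_sharpness)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)   # high variance = sharp, low = blurry
    x, y, w, h = box
    mask = np.zeros(gray.shape, dtype=bool)
    mask[y:y + h, x:x + w] = True
    return float(lap[mask].var()), float(lap[~mask].var())

frame = cv2.imread("suspect_frame.png")   # hypothetical still taken from the video
if frame is not None:
    face_sharp, bg_sharp = face_vs_background_sharpness(frame, (100, 80, 120, 150))
    # A face markedly softer than its surroundings is one of the tell-tale
    # signs described above, though lighting and compression can also cause it.
    print(f"face: {face_sharp:.1f}, background: {bg_sharp:.1f}")
```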

Does the lighting look unnatural? Often, deepfake algorithms will retain the lighting of the clips that were used as models for the fake video, which is a poor match for the lighting in the target video. 

The audio might not appear to match the person, especially if the video was faked but the original audio was not as carefully manipulated.

The risks in Kenya

Deepfakes may pose a threat to Kenya in the future. They could be created to incite violence, manipulate existing tensions for political ends or spread misinformation. Unfortunately, there is no law in Kenya specifically formulated to address this risk.

However, existing rights to privacy as guaranteed in the Constitution protect persons from having information about them shared without their consent. The Data Protection Act ensures that a data subject is notified of, and consents to, the manner in which their data is used. The Copyright Act also creates image rights such that a person’s image and likeness cannot be used for monetary gain without their consent.

These laws alone cannot fully protect the public from deepfakes. Existing laws must be amended, or new legislation enacted, to cater to the emerging risks occasioned by the growth of AI.