The Council for Responsible Social Media is a recently established, non-partisan and voluntary group of five members drawn from civil society organisations and the data and technology, peace and security, and media sectors. The Council aims to minimise online harms and make social media platforms safe for all by holding the platforms to higher standards regarding harms to the health and safety of Kenyans, and by guiding conversations towards sensible solutions.
The Council has identified priority social media platforms that it aims to monitor for content moderation practices that contravene the platforms' own standards and policies. The Council faults the social media companies for prioritising content moderation in English while neglecting African languages. This, the Council argues, is tantamount to social media platforms operating unregulated in Kenya, and in Africa generally, to the harm of the people. The Council has called upon the Ministry of ICT to encourage the companies to publicly sign a Self-regulatory Code of Practice on Disinformation. The Code would contain various public commitments, for which the companies would be accountable to the public and the Ministry, to take down illegal, malicious and hateful content and to actively mitigate the risks of disinformation.
While the Council is a voluntary group established for a good purpose and committed to protecting digital democracy, decency and dignity, the question that lingers is how the Code would be implemented. Most of the social media companies operating in Kenya and Africa already have public standards and policies that are regularly updated. For instance, Meta, a leading social media company, has strengthened its existing policies and launched various policies and products aimed at increasing transparency, especially during this electioneering period. It uses a combination of artificial intelligence, human review and user reports to detect and take down misinformation, disinformation and hate speech, among other harmful content.
TikTok has likewise redoubled its efforts to curb misinformation, especially during election periods, by assigning a dedicated team and rolling out products that offer voters authoritative information. It has also partnered with fact-checkers who review content, much as Twitter uses a combination of technology and human reviewers to detect misinformation and disinformation.
This notwithstanding, several digital rights groups have called out these platforms for being slow to take down posts that spread hate speech, misinformation or disinformation, which arguably is the very harm the Council seeks to prevent by requiring the development of the Code. The Council's proposed mandate is not novel: in 2019, a legislative proposal, the Kenya Information and Communication (Amendment) Bill 2019, sought to regulate social media platforms. The Bill proposed requiring social media platforms operating in Kenya to obtain licences from the Communications Authority of Kenya (CA).
The Bill further sought the establishment of physical offices in Kenya for social media platforms and the registration of all social media users using legal documents, a requirement that posed logistical and practical difficulties. Additionally, the Bill required the CA to develop a code of conduct for bloggers. The proposal was vehemently opposed by various stakeholders, such as Article 19, which stated that it was inimical to citizens' right to freedom of expression. The CA, in its memorandum to the National Assembly, stated that the proposal to licence social media platforms contravened the constitutionally enshrined rights to privacy and freedom of the media, and that the Data Protection Act already covered the proposed amendments. The Bill failed on the grounds that all of its proposed provisions were unconstitutional.
Nonetheless, the Council has its work cut out for it in ensuring that the Self-regulatory Code of Practice on Disinformation is developed by the different social media platforms operating in Kenya under the direction of the Ministry of ICT. As it stands, these platforms have continuously worked to improve their policies and products governing information sharing and user conduct. Requiring a further code of practice risks over-regulating the industry and violating constitutional rights.

However, Kenya cannot entirely disregard the possibility of a future legislative proposal. In Europe, the Digital Services Act (DSA) was recently adopted to tackle the spread of illegal content, online disinformation and other societal risks. The DSA increases transparency and accountability by requiring clear information on content moderation and on the use of algorithms for recommending content, and it gives users an opportunity to challenge content moderation decisions. Its companion regulation, the Digital Markets Act (DMA), regulates how large online platforms conduct business, for instance by requiring designated gatekeepers not to prevent users from easily uninstalling any preloaded software or apps, on pain of fines of up to 10% of total worldwide turnover in the preceding financial year, or up to 20% in the case of repeated non-compliance; a platform that fails to observe the provisions of the DSA itself faces fines of up to 6% of its total worldwide annual turnover.

The DSA will be directly applicable across the EU and will apply 15 months after its entry into force or from 1 January 2024, whichever comes later. For very large online platforms, the DSA will apply earlier: four months after they have been designated as such by the European Commission. It would not be a surprise if the 13th Parliament followed suit and sought to reintroduce such a legislative proposal.