Who Is Responsible? Platforms or Users: Navigating Responsibility Against the Backdrop of Increased Calls for Social Media Regulation

  • 17 Jan 2025
  • 5 Mins Read
  • by Brian Otieno

The growing discourse on platform responsibility has reignited debates about who should be accountable for content posted on social media platforms: the users or the platforms themselves. This debate, already robust in jurisdictions like the United States and European Union (EU), is now taking centre stage in Kenya and other Global South nations. With legal battles, regulatory shifts, and high-stakes decisions underway, the issue demands nuanced analysis, especially in light of its implications for the digital economy, freedom of expression, and societal values.

Social media platforms have become vital components of modern life, providing spaces for connection, innovation, and economic opportunities. However, they have also facilitated the spread of harmful content, misinformation, and hate speech. Balancing these dual realities has become a key challenge for policymakers and tech companies alike.

Approaches in the US and EU

In the United States, the debate over platform responsibility is shaped by Section 230 of the Communications Decency Act. This provision shields platforms from liability for user-generated content while allowing them to moderate it voluntarily and in good faith. The Supreme Court left this hands-off approach intact in Gonzalez v. Google, declining to narrow Section 230’s protections and leaving platforms broadly shielded from liability for content created by third parties. Critics argue that this arrangement fosters a lack of accountability, enabling platforms to profit from harmful content without consequence.

In contrast, the EU has adopted a stricter regulatory stance through the Digital Services Act (DSA). The DSA mandates robust content moderation, transparency in algorithms, and the swift removal of illegal material. Platforms must also provide annual reports on their moderation efforts, making them more accountable to regulators and the public. This regulatory rigour reflects the EU’s prioritisation of user safety and societal harmony over the absolute freedom of digital platforms.

Adding complexity to this landscape is Meta’s recent decision to abandon independent fact-checkers on Facebook and Instagram, replacing them with X-style “community notes” in which assessing the accuracy of posts is left to users themselves. Historically, Meta relied on a mix of in-house teams and contracted moderators to enforce its community standards. However, outsourcing and a growing reliance on artificial intelligence (AI) signal a shift in strategy, one driven by cost considerations but fraught with ethical and operational risks.

Outsourced moderation often leads to inconsistencies, with decisions influenced by cultural and regional biases. Additionally, AI tools, while efficient, struggle with nuanced content, such as sarcasm or context-specific humour. This raises questions about whether platforms are abdicating responsibility under the guise of operational efficiency, leaving users vulnerable to harmful content and its societal repercussions.

Which way for Kenya: Ban or regulate?

For Kenya, these global trends intersect with unique national concerns. Two ongoing legal challenges highlight the urgency of addressing platform responsibility. The first involves a Supreme Court petition seeking to ban TikTok over concerns about explicit content and its perceived erosion of cultural values. The second case, recently filed in the High Court, targets X (formerly Twitter), challenging the platform’s permissive approach to content moderation under Elon Musk.

On Thursday, the Ministry of Interior and National Administration met with telcos and platform operators and passed resolutions emphasising the shared responsibility of platforms and users in addressing harmful content. Platforms have been urged to reassess their content access and use models, including by implementing robust user identification mechanisms. Telcos and platform owners are now expected to address criminal activities online more firmly. These resolutions come against the backdrop of increasing misuse of social media, where some individuals disregard the limits of free speech.

The Ministry’s approach also includes a push for a more pronounced physical presence of enforcement agencies and digital platforms within the country. Public sensitisation campaigns will be prioritised to promote responsible internet use, while a centralised hub for reporting and sharing information on cyber threats will be established. A national framework for content moderation and filtering is also under consideration, aiming to ensure responsible access to digital content while protecting users from harmful material.

TikTok, which is popular among Kenya’s youth, has become a double-edged sword. On one hand, it fosters creativity, entrepreneurship, and community. On the other, critics argue that its algorithm amplifies explicit and harmful content, undermining societal norms. A ban could address immediate concerns but risks alienating the youth demographic and tarnishing Kenya’s reputation as a tech-friendly hub.

X’s approach presents another challenge. By rolling back content moderation efforts, the platform has allowed a resurgence of adult content, misinformation, and hate speech. This shift is particularly concerning in Kenya, where social media plays a pivotal role in shaping political discourse and public opinion. Striking a balance between free expression and harm prevention is critical yet elusive.

For countries in the Global South, the stakes in this debate are higher than in developed economies. Social media platforms are not just communication tools but also engines of economic growth and societal transformation. From enabling e-commerce to facilitating digital activism, these platforms are integral to development. However, the regulatory frameworks in many Global South nations, including Kenya, are underdeveloped. This creates a vacuum where harmful content can thrive unchecked. At the same time, overregulation or outright bans risk stifling innovation and limiting access to digital opportunities. The challenge lies in crafting policies that protect users while fostering a conducive environment for tech growth.

Mark Zuckerberg, Meta’s CEO, succinctly articulated the industry’s preference during US Senate hearings: “Regulate, don’t ban us.” This sentiment reflects a pragmatic approach to addressing the challenges posed by social media platforms. Bans, while appealing as a quick fix, often create more problems than they solve. They drive users to alternative platforms or dark web spaces, making harmful content harder to track. Moreover, bans can erode public trust in government and stymie innovation.

Regulation offers a more sustainable solution. By establishing clear rules, governments can ensure accountability without curbing creativity or access. Effective regulation can encourage transparency by requiring platforms to disclose their content moderation policies, algorithmic processes, and enforcement outcomes. It can promote digital literacy by educating users on responsible online behaviour and critical consumption of information, thereby mitigating the spread of harmful content. Enforcement can be strengthened through independent oversight bodies that monitor compliance and adjudicate disputes impartially. Regional collaboration can also foster harmonised regulations across blocs like the East African Community, preventing enforcement gaps and amplifying collective bargaining power with tech giants.

Kenya’s regulatory response should draw lessons from both the US and the EU while tailoring solutions to local realities. Updating the country’s ICT laws to incorporate provisions on content moderation, platform accountability, user responsibility, and user safety is essential. Policymakers should engage stakeholders, including civil society, academia, and the private sector, to design inclusive regulations. Leveraging technology through investments in AI and machine learning can enhance local moderation efforts while respecting cultural contexts. Additionally, regulations should strike a balance, ensuring that efforts to combat harmful content do not infringe on constitutional rights, particularly the freedom of expression.

Striking the balance: Not just legal or technical, but also societal!

The debate over platform responsibility is not just a legal or technical issue—it is a societal one. It challenges governments, tech companies, and citizens to define the values that should govern digital spaces. For Kenya, this debate offers an opportunity to lead by example, crafting policies that uphold safety, accountability, and innovation.

As the world watches the outcomes of Kenya’s legal battles over TikTok and X, the country’s approach could set a precedent for the Global South. Pragmatic, adaptive regulation—grounded in global best practices but responsive to local realities—is the way forward. By navigating this complex terrain thoughtfully, Kenya can protect its citizens while preserving the promise of a thriving digital economy.