Social media as a catalyst for misinformation

  • 20 Sep 2024
  • 3 Mins Read
  • by Maria Goretti

In recent years, social media use has risen sharply, with many Kenyans preferring platforms such as TikTok, Twitter, and Facebook, among others, as sources of information.

For the most part, the consumer does not verify whether the information is true but takes the opinion of the content creator as gospel truth. The importance of truth cannot be overstated. However, with the rapid pace of digitalisation globally and the proliferation of social media, disinformation has been on the rise.

In the tense and polarised atmosphere of Kenyan politics, hateful messaging and destabilising anecdotes cloud consumers’ ability to differentiate between truth and falsehood. Automated disinformation erodes the role of truth in public discourse and undermines the general public’s trust. Automation on social media platforms, such as recommendation algorithms, has exacerbated rampant disinformation about the government’s stalled projects. The government, in turn, reacted by discrediting all opposition through State-backed bloggers who spread just as much false information themselves.

More recently, Kenya Breweries Limited (KBL) was the target of a smear campaign over its logistics tendering, in which well-known media personalities alleged that bribes had been taken to lock out local transporters. Before KBL had a chance to respond, it received major backlash from Kenyans online. While this information was inaccurate, many Kenyans took it as truth, owing to limited digital literacy and weak verification caused by the limitations of content moderation and fact-checking avenues.

Through disinformation-spreading tactics such as astroturfing (the deceptive practice of hiding the sponsors of an orchestrated message, or making it appear to originate from and be supported by unsolicited grassroots participants), social media use has shifted from voluntary to involuntary, as algorithms and bots erode human agency online.

It is, however, important to note that when dealing with misinformation, one must strike a balance between curbing hate and preserving social media users’ freedom of expression, that is, the freedom to seek, receive, and impart information and ideas, as well as academic and scientific research freedom.

For affected parties, the first step is to report the accounts spreading the misinformation. This goes some way towards correcting the situation. However, many platforms use automated bots to flag any account cited for misinformation, without oversight to investigate the claims. This has, in turn, led to the removal of even accurate information from social media platforms.

As much as social media plays a key role in disseminating information, especially civic-education material, and in raising awareness, it is also used to spread false information. The platforms allow users to share content with their followers in real time. This can be beneficial for spreading important news, but it also means that false information can go viral almost instantly. Once misinformation is shared, it can spread across networks and reach millions before any fact-checking or correction can be made.

Further, many users curate their feeds to reflect their interests and beliefs. Algorithms prioritise content that generates high engagement, often leading to the reinforcement of existing views. This creates echo chambers where users are less likely to encounter opposing viewpoints or fact-based corrections, allowing misinformation to proliferate unchallenged.
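To make the mechanism concrete, here is a minimal, illustrative Python sketch of how an engagement-driven feed ranker can reinforce an echo chamber. The post fields, weights, and interest boost are hypothetical simplifications for illustration, not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topic: str      # hypothetical single-topic label, for simplicity
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    """Weight raw engagement signals; shares count most because they push content furthest."""
    return post.likes + 2 * post.comments + 3 * post.shares

def rank_feed(posts: list[Post], user_interests: set[str]) -> list[Post]:
    """Rank posts by engagement, boosting topics the user already engages with.
    That boost is what narrows the feed into an echo chamber over time."""
    def score(post: Post) -> float:
        boost = 2.0 if post.topic in user_interests else 1.0
        return boost * engagement_score(post)
    return sorted(posts, key=score, reverse=True)

feed = rank_feed(
    [
        Post("Shocking claim about a stalled project!", "politics", likes=900, shares=400, comments=250),
        Post("Fact-check: the claim above is false", "politics", likes=120, shares=30, comments=40),
        Post("Local sports roundup", "sports", likes=300, shares=80, comments=60),
    ],
    user_interests={"politics"},
)
for post in feed:
    print(post.text)
```

In this toy example, the sensational, already-viral political post ranks first and the sober fact-check on the same topic comes last, because the ranking rewards engagement and familiar topics rather than accuracy.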

The ease of creating and sharing content on social media means that anyone can publish information, regardless of its accuracy. Unlike traditional media, which typically undergoes rigorous editorial processes, social media lacks such safeguards, allowing unverified claims to spread rapidly.

The platforms often reward sensational or emotionally charged content with greater visibility, as it tends to generate more likes, shares, and comments. This prioritisation can lead to the promotion of misleading information that captures attention but lacks factual basis, overshadowing more accurate but less engaging content.

Misinformation can spread exponentially through shares, retweets, and re-posts, creating a viral effect. Once misinformation has gained traction, it can be challenging to counteract. Corrections or clarifications often do not reach the same audience or achieve the same level of engagement, leading to the persistent presence of false information.
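The arithmetic behind that viral effect is easy to sketch. The short Python example below assumes, purely for illustration, that each sharer prompts an average of three onward shares; the figures are hypothetical, not measured data.

```python
def viral_reach(initial_sharers: int, avg_reshares: float, rounds: int) -> int:
    """Cumulative number of accounts that have shared a post after a given number
    of resharing rounds, assuming each sharer prompts avg_reshares new sharers."""
    total = sharers = initial_sharers
    for _ in range(rounds):
        sharers = round(sharers * avg_reshares)
        total += sharers
    return total

# If 10 accounts share a false claim and each share prompts three more on average,
# nearly 900,000 accounts have passed it on within ten rounds of resharing.
print(viral_reach(initial_sharers=10, avg_reshares=3.0, rounds=10))
```

A correction issued even a few rounds later is competing with a head start of that size, which is why it rarely catches up.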

With this in mind, several solutions are possible. The first is increasing public awareness of how to critically assess information, which would help users become more discerning consumers. Educational initiatives could teach skills such as fact-checking, recognising credible sources, and understanding media biases.

Platforms should also make their algorithms more transparent, ensuring users can understand how content is prioritised. This could involve explaining how engagement metrics influence what users see, which may help mitigate the spread of sensational misinformation.

Lastly, encouraging the development of better tools to detect and counteract bot-driven misinformation can help mitigate its spread. This may include identifying automated accounts that amplify false narratives and acting against them.
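As an illustration only, the sketch below shows what a simple rule-based heuristic for spotting amplification bots might look like. The account fields and thresholds are hypothetical; real detection systems rely on far richer behavioural and network signals, and flagged accounts should go to human review rather than automatic removal.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    age_days: int         # how long the account has existed
    posts_per_day: float  # average posting rate
    repost_ratio: float   # fraction of activity that reposts others' content
    followers: int
    following: int

def looks_like_amplifier_bot(account: Account) -> bool:
    """Flag accounts that post at inhuman rates, mostly repost others' content,
    and follow far more accounts than follow them back. Thresholds are illustrative."""
    very_new = account.age_days < 30
    hyperactive = account.posts_per_day > 100
    mostly_reposts = account.repost_ratio > 0.9
    lopsided_follows = account.following > 10 * max(account.followers, 1)
    # Require several signals together to reduce false positives on genuine users.
    return sum([very_new, hyperactive, mostly_reposts, lopsided_follows]) >= 3

suspect = Account("news_amplifier_254", age_days=12, posts_per_day=350,
                  repost_ratio=0.97, followers=40, following=4100)
print(looks_like_amplifier_bot(suspect))  # True: flag for human review, not automatic removal
```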