Neutral Intermediary or Duty-Holder? What the Meta Child Protection Decision Means for Tech Platforms

  • 27 Mar 2026
  • by Agatha Gichana

In a landmark development in the United States, Meta was found liable for harm caused to children through its social media platforms, with a jury awarding approximately USD 375 million in damages to the affected family. The verdict signals a major shift in how courts view big tech’s duty of care: it moves away from the traditional idea of platforms as neutral intermediaries in child protection and towards recognising them as duty-holders with proactive responsibilities to safeguard users, especially children.

Although the full judgment is not yet publicly available, early reporting suggests that the reasoning engages duty-of-care-type principles, particularly regarding the foreseeability of harm and the adequacy of safeguards for minors. In this respect, the developments are reminiscent of the locus classicus, Donoghue v Stevenson, which established that a duty of care may arise even in the absence of a direct contractual relationship where harm is reasonably foreseeable. Similarly, the claims raise the question of whether digital platforms owe a duty of care to users, especially minors, in relation to harms arising from the design and operation of their services.

In the multistate legal action filed in the Northern District of California, a coalition of 33 states accused Meta of engaging in a multi-year scheme to exploit and “ensnare” young users for profit while repeatedly misleading the public about the safety of its platforms.

The states argued that Meta’s business model is designed to maximise the time and attention young users spend on Facebook and Instagram, targeting youth because they are more impressionable, more likely to become long-term customers, and better placed to set trends. The complaint further asserted that Meta intentionally implemented psychologically manipulative features to encourage addictive and compulsive use.

Among the features cited were recommendation algorithms that trigger dopamine responses akin to slot machines; social comparison tools such as “Likes” and follower counts, which were known to affect teen mental health; infinite scroll and autoplay functions that discourage disengagement; persistent audiovisual and haptic notifications that disrupt sleep and education; and visual filters linked to body dysmorphia and eating disorders.

Meta was also accused of violating the Children’s Online Privacy Protection Act (COPPA) by monetising personal data from millions of users under 13 without obtaining verifiable parental consent.
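
For context, COPPA’s core mechanism can be summarised as a consent gate: a platform may not collect or monetise the personal data of a user under 13 unless verifiable parental consent is on record. The sketch below is a minimal illustration of that rule; the names and fields are hypothetical assumptions, not drawn from Meta’s systems or from the statute’s text.

```python
from dataclasses import dataclass
from typing import Optional

COPPA_AGE_THRESHOLD = 13  # COPPA applies to children under 13

@dataclass
class Account:
    account_id: str
    age: Optional[int]               # None if the platform has not determined age
    parental_consent_verified: bool  # e.g. signed form or payment-card check on record

def may_process_personal_data(account: Account) -> bool:
    """Illustrative COPPA-style gate: personal data of a user under 13 may
    only be processed once verifiable parental consent is on record.
    Unknown ages are treated conservatively, as if the user were a child."""
    if account.age is None or account.age < COPPA_AGE_THRESHOLD:
        return account.parental_consent_verified
    return True

# An under-13 account without verified parental consent is blocked from
# data processing; the same account with consent on record is allowed.
print(may_process_personal_data(Account("a1", 12, False)))  # False
print(may_process_personal_data(Account("a1", 12, True)))   # True
```

The design choice the statute forces is visible in the sketch: the burden sits on the platform to hold affirmative proof of consent before processing, rather than on the child to opt out.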

In its defence, Meta argued that the scientific evidence linking social media to mental health harm is inconclusive, contending that no causal relationship has been established between platform usage and adolescent mental health outcomes. Meta cited studies from the National Academies of Sciences, Engineering, and Medicine indicating that current research does not demonstrate harmful effects at the population level. The company also highlighted that its investments in safety and security measures have exceeded USD 20 billion since 2016, as evidence of its commitment to user protection.

Meta’s Ripple Effect: Global Shifts in Child Protection and Platform Accountability

The Meta child-protection litigation in the United States has triggered immediate ripple effects among global regulators and industry actors. In the United Kingdom, Apple introduced device-level age verification shortly after the litigation gained attention, requiring users to confirm they are adults to access certain services, while automatically enabling content filters for underage or unverified users. This response underscores how high-profile legal decisions can reshape expectations for tech companies.

Complementing this, Ofcom has released Highly Effective Age Assurance (HEAA) standards for platforms likely to be accessed by children, requiring platforms to assume that minors are present wherever effective age checks or blocking are not in place. The trend is mirrored in Australia, where legislation now prohibits children under 16 from accessing major social media platforms, including Facebook, Instagram, TikTok, Snapchat, YouTube, Reddit, and X, with non-compliance punishable by fines of up to A$49.5 million.
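
Both Apple’s device-level checks and Ofcom’s HEAA expectation imply the same default-to-minor pattern: unless a user’s age has been verified through a highly effective method, the account is treated as a child’s and protective settings apply. The sketch below is a simplified illustration of that logic under those assumptions; the function names, statuses, and settings are hypothetical, not any platform’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class AgeStatus(Enum):
    VERIFIED_ADULT = "verified_adult"  # passed a highly effective age check
    VERIFIED_MINOR = "verified_minor"  # verified as under 18
    UNVERIFIED = "unverified"          # no effective age assurance performed

@dataclass
class User:
    user_id: str
    age_status: AgeStatus

def apply_child_safety_defaults(user: User) -> dict:
    """Illustrative default-to-minor gating: unless the user is a verified
    adult, treat the account as a child's and enable protective defaults."""
    treat_as_minor = user.age_status != AgeStatus.VERIFIED_ADULT
    return {
        "content_filters_enabled": treat_as_minor,
        "algorithmic_feed_restricted": treat_as_minor,
        "adult_content_blocked": treat_as_minor,
    }

# An unverified account receives the same protections as a verified minor;
# only a verified adult sees the unrestricted defaults.
print(apply_child_safety_defaults(User("u1", AgeStatus.UNVERIFIED)))
print(apply_child_safety_defaults(User("u2", AgeStatus.VERIFIED_ADULT)))
```

The key point regulators are converging on is that "unverified" is no longer a neutral state: absent proof of adulthood, the safest configuration becomes the default.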

Conclusion

The Meta verdict has profound implications for digital platforms worldwide, as courts and regulators increasingly recognise that platforms may owe children a duty of care for foreseeable harms caused by design features and algorithmic recommendations. Regulators are already drawing on the ruling, turning its legal reasoning into enforceable policy. Platforms will need to implement effective age-assurance systems and restrict children’s exposure to harmful content. More broadly, the decision points to a growing trend: courts are applying traditional duty-of-care principles to digital spaces, and regulators stand ready to formalise these obligations.