Policing the Digital Self: Who is Really in Control?  

  • 17 Apr 2026
  • 4 Mins Read
  • by Brian Otieno

The digital age once promised liberation: democratised access to information, amplified voices, and the erosion of geographic boundaries. That promise, however, has matured into a far more complex reality. The same infrastructure that enables opportunity has also facilitated harm at a scale and speed policymakers never anticipated.

Digital platforms do not merely host content; they accelerate it, optimise it, and frequently reward the most extreme or sensational forms of expression. Abuse, exploitation, and privacy violations now travel faster than regulation, exposing individuals in ways that traditional legal frameworks were never designed to address.

The Crime and Policing Bill: The UK’s Regulatory Pivot

At the centre of the United Kingdom’s response sits the Crime and Policing Bill, 2025, currently progressing through Parliament at a critical juncture. The House of Commons has considered amendments introduced by the House of Lords and returned the Bill with formal reasons for disagreement, triggering further scrutiny in the upper chamber.

This iterative exchange reflects more than procedural rigour. It reveals an institutional effort to reconcile competing priorities: individual rights, technological realities, and the expanding role of the state in digital spaces. The Bill has become a test case for how far democratic systems can stretch existing legal principles to confront emerging forms of harm.

Notably, legislative focus has shifted decisively from content moderation toward accountability for consequences. Policymakers are increasingly targeting harms that exploit both technological advancement and regulatory lag, including manipulated intimate imagery, non‑consensual content, and other deeply invasive forms of digital abuse.

These harms do not exist in isolation. They thrive in ecosystems shaped by anonymity, algorithmic amplification, and the global reach of online platforms. Advances in artificial intelligence have further lowered the threshold for producing convincing harmful content, expanding both the pool of perpetrators and the scale of potential victims. Existing regulatory frameworks have struggled to draw clear lines of liability, frequently leaving affected individuals without effective remedies.

Privacy as a Regulatory Battleground

These dynamics position privacy not as a peripheral concern, but as a central question of public policy. Once understood as an individual right exercised within clearly defined personal spaces, privacy is now inseparable from platform design and data‑driven business models. Digital platforms derive value from the continuous extraction and monetisation of personal data, creating structural incentives that often conflict with user protection.

In such environments, harmful content does not simply appear and disappear. It circulates, resurfaces, and persists over time, eroding individuals’ ability to control their own identity and narrative. The UK’s legislative approach reflects an acknowledgement that privacy sits at the core of societal trust, economic participation, and democratic stability, requiring a more assertive regulatory posture attuned to platform‑driven ecosystems.

Drawing the Regulatory Line

Against this backdrop, the Crime and Policing Bill seeks to redraw the boundaries of acceptable digital conduct. It expands criminal liability for the creation and dissemination of abusive online content while strengthening enforcement mechanisms available to authorities. This signals a broader transformation in UK policymaking, where the state increasingly adopts an interventionist stance in response to perceived regulatory gaps.

While this approach builds on earlier efforts to govern online harms, it moves further by emphasising deterrence and accountability. The resulting clarity affirms that digital conduct carries real-world consequences. At the same time, it raises complex questions about proportionality, the feasibility of enforcement, and the potential for unintended constraints on legitimate expression.

Regulation Travels: The Copy-Paste Risk

Regulatory frameworks developed in jurisdictions such as the United Kingdom rarely remain confined to national borders. They influence global discourse, shape multilateral engagements, and often serve as templates for governments seeking to address comparable challenges.

This dynamic places countries in the Global South, including Kenya, in a position where external policy developments increasingly shape domestic regulatory trajectories. Kenya has already taken significant steps through legislation such as the Computer Misuse and Cybercrimes Act, 2018, which was further amended in 2025, reflecting a growing awareness of the risks posed by unregulated digital spaces. The UK’s latest legislative push introduces a more expansive model, one that prioritises privacy and individual dignity while embracing robust enforcement.

However, the temptation to replicate such frameworks without sufficient adaptation presents tangible risks. The UK operates within a context of relatively strong institutions, established jurisprudence, and substantial enforcement capacity. Kenya and many of its peers navigate different conditions, marked by resource constraints, institutional fragmentation, and uneven levels of digital literacy. Transplanting complex regulatory models without contextual calibration risks producing laws that appear robust on paper but prove ineffective in practice, or worse, vulnerable to misuse in ways that undermine civil liberties and public trust.

The Need for Strategic Adaptation

Digital regulation has entered a decisive phase. Governments are increasingly moving beyond cautious incrementalism as the costs of inaction become politically, socially, and economically untenable. Digital harms are no longer marginal; they are embedded in everyday life, evolving in scale and sophistication beyond the reach of traditional legal responses. The UK's Crime and Policing Bill captures this shift clearly, signalling a willingness to confront the gravest risks of the digital ecosystem with institutional resolve.

For countries across the Global South, including Kenya, however, this moment demands more than reactive lawmaking or the uncritical adoption of foreign regulatory models. Effective regulation must be context-aware. Protection cannot come at the expense of freedom, nor can accountability be pursued without regard to local institutional capacity. The question has therefore shifted from whether to regulate digital spaces to how to design frameworks that meaningfully safeguard rights while preserving the openness that sustains digital growth and participation.

Strategic adaptation offers the most credible path forward. Rather than reproducing external legislative blueprints wholesale, policymakers should extract and translate their underlying principles. The UK’s emphasis on individual dignity, clear accountability for digital harm, and recognition of emerging technological threats provides a valuable foundation. Successful application, however, requires careful calibration to local enforcement realities, institutional strength, and societal norms, alongside flexibility to respond to rapid technological change.

This task extends beyond statutory drafting. It demands sustained investment in regulatory capacity, technical expertise, public awareness of digital rights, and collaboration among government, industry, and civil society. The choices made now will shape not only the effectiveness of individual laws but also the long-term architecture of trust, accountability, and participation in digital societies.