AI and Professional Integrity: Tools, Guardrails and Human Accountability

  • 13 Mar 2026
  • 3 Mins Read
  • ~ by Agatha Gichana

This week has been one for the books on the ethical use of artificial intelligence (AI). At the Kenya National Drama and Film Festival, adjudicators flagged alleged “cheating” through AI in scriptwriting. Around the same time, a judge struck out an application after finding that the submissions had been entirely AI-generated.

This is not a purely local phenomenon. Not long ago, this publication examined the Deloitte case, raising questions about how professional service providers should balance the efficiency gains of AI with the need to maintain professional integrity, from avoiding plagiarism to remaining accountable for the services they deliver.

However, the renewed attention to this topic invites us to return to the fundamental question: should AI be used in the first place, and under what conditions? To answer this, it is instructive to look back at earlier moments when disruptive technologies sparked similar anxieties.

When Google emerged in the late 1990s and rapidly became the dominant search engine in the early 2000s, it changed how we accessed information. Suddenly, vast amounts of knowledge became available with a simple search query. This convenience sparked immediate debate in academic and professional circles about whether reliance on search engines would weaken critical thinking, memory, and independent research skills.

Researchers at Harvard University and other institutions began examining the cognitive and behavioural impact of search engines.

Early academic commentary warned that easy access to information could reduce deep engagement with primary sources and encourage superficial research habits. Over time, scholars also explored what later became known as the “Google effect”: the tendency for people to remember where information can be found rather than the information itself.

Despite these early concerns, search engines ultimately became accepted as legitimate research tools, provided they were used responsibly and supplemented with proper verification and citation.

In many ways, the current debate around artificial intelligence echoes that earlier transition. The issue is not simply the existence of technology, but how institutions, professionals and students establish norms that preserve accuracy, originality, and accountability while still benefiting from technological efficiency.

The lesson from the Google era is instructive: disruptive technologies initially provoke fear about declining standards, yet over time, societies develop guardrails and professional norms that allow the technology to be used productively without undermining integrity. The same governance conversation is now unfolding for artificial intelligence.

This is the approach that media giant Nation Media Group adopted when it recently launched an AI framework. The framework permits AI at several stages of the newsroom workflow. Reporters may use AI tools to accelerate early drafting; beyond drafting, AI may be used for polishing and editing, and for analysing large datasets by condensing complex information to support data-driven reporting.

Despite these permitted uses, the framework establishes several mandatory safeguards to preserve journalistic integrity. Human oversight remains compulsory throughout the editorial process. AI is treated strictly as a support tool, while editorial judgement and decision-making must remain with journalists and editors.

In the legal profession, Singapore's Ministry of Law published a Guide for Using Generative AI in the Legal Sector. At its core, the guide emphasises three principles governing the use of generative AI. First is professional ethics. Lawyers remain fully responsible for the accuracy and integrity of their work product, regardless of whether AI tools are used. Generative AI is therefore treated strictly as a supporting tool, with practitioners required to apply their own legal expertise to verify and refine its outputs.

Second is confidentiality. Legal professionals must implement safeguards to ensure that client information remains protected. This includes selecting tools appropriate to the sensitivity of the data involved, such as enterprise-level platforms rather than open or free tools, and reviewing provider terms to confirm that confidential information will not be used to train AI models. Third is transparency. Practitioners are encouraged to consider disclosure where the use of AI could materially affect client interests.

These guidelines in both journalism and the legal profession highlight a common principle in the adoption of artificial intelligence: AI is intended to function as a supporting tool, with human oversight remaining central to decision-making and accountability. They recognise an important reality that applies to any technological tool: while AI can enhance efficiency and productivity, it also carries risks, including inaccurate outputs and algorithmic bias. As a result, professional judgement and verification remain essential.

The next phase in responsible adoption is therefore neither sensationalism nor blanket prohibition, but the establishment of structured guardrails against indiscriminate use. This requires organisations to adopt a deliberate implementation approach, accompanied by targeted training that builds AI literacy among professionals.