Double-Edged Sword: How AI Can Become a Risk for Businesses

  • 24 Oct 2025
  • 4 Mins Read
  • 〜 by Samuel Kiambi

“AI will be a force for good most likely, but the probability of it going bad is not zero per cent.”

– Elon Musk, 2023

Artificial Intelligence (AI) is no longer a futuristic concept; it’s a current reality shaping industries, workflows, and customer experiences. However, as businesses race to integrate AI into their operations, a sobering truth surfaces: its risks are as profound as its promises. From reputational damage to regulatory scrutiny, the hidden dangers of AI require strategic foresight and strong governance.

 AI is not just a tool – it is a strategic force. But without rigorous governance, it can become a liability. Ultimately, what you need to be aware of in this area is vast, but hopefully manageable. The risks are real, but so are the solutions, if business leaders are willing to confront the hard truths of its adoption and usage. Some pitfalls to watch out for include:

Reputational Risk: When AI Goes Rogue

AI’s ability to make decisions at scale is both its strength and its Achilles’ heel. A single biased or incorrect output can lead to a public relations disaster. For example, Google Search’s “AI Overview” confidently cited a satirical article about “microscopic bees powering computers” as fact in February 2025, even though it was not true. This incident demonstrated how AI hallucinations can mislead users and damage a company’s reputation (Kidman, 2025). More recently, Deloitte Australia’s report on the Targeted Compliance Framework (TCF) included fake quotes and references to non-existent research, costing the firm financially and reputational damage.

In some industry sectors, such as automotive or healthcare, where consumer trust is crucial, “casual” missteps like this can lead to disaster. As Harvard Business School’s Prof. Karim Lakhani states, “AI doesn’t have a built-in, fact-checking mechanism…even with careful prompting, generative AI models can still produce inaccurate or misleading information.”

Furthermore, corporate use of AI requires fact-checking, guided reiterative prompt engineering, and asking it for its sources, among other measures, to counter such risks.

Misinformation Risk: Confidently Wrong

AI hallucinations are “inaccurate outputs generated by tools, such as ChatGPT, Gemini, and Claude, that appear plausible but contain fabricated or inaccurate information” (Augenstein et al., 2024). Such hallucinations pose a distinct challenge. Unlike human misinformation, which often arises from intent, AI-generated errors are statistical artefacts. This means hallucinations exist and persist because AI models generate language (and thus responses) by predicting the next most likely word(s) based on statistical patterns in training data. In business, such hallucinations can inadvertently mislead strategic planning, customer communication, or even legal decisions.

A 2025 Harvard Kennedy School misinformation study revealed that AI systems, including ChatGPT and Gemini, often generate misleading content without any intention to deceive, rendering traditional fact-checking methods inadequate. To mitigate these risks, organisations must implement “human-in-the-loop” oversight to verify AI outputs, particularly in high-stakes settings.

 

Ethical and Regulatory Risk: Bias Amplified

AI systems trained on (biased) historical data risk reinforcing existing biases. From hiring algorithms that discriminate against women to facial recognition tools misidentifying minorities, the ethical concerns are substantial. Without responsible innovation, AI could worsen inequalities and expose organisations to legal liabilities under frameworks such as the European Union’s General Data Protection Regulation (GDPR). This might also affect local regulations, such as the Kenya Data Protection Act, and other national laws.

Currently, Kenya does not have specific AI-only legislation, but it is actively developing its regulatory framework through initiatives like the National AI Strategy (2025-2030). Tanzania, which also lacks a formal AI regulatory framework, is developing one. Moreover, the EU and the U.S. have somewhat divergent regulatory approaches, with the EU advocating for a broad yet comprehensive unitary policy that promotes stricter transparency and accountability in AI systems. At the same time, the U.S. is exploring the implementation of sector-specific and highly distributed policies across several federal agencies. These geographic complexities require businesses operating across borders to navigate the evolving landscape with a fine-tooth comb.

 

Security and Data Privacy: The Prompt Injection Threat

AI systems require vast datasets, making them attractive targets for cybercriminals. Besides traditional breaches, a new threat arises: prompt injection. This occurs when attackers manipulate AI models to reveal confidential data or perform unintended and unauthorised actions. This threat could be widespread for businesses worldwide. A 2025 TechCrunch report highlights apps like Perplexity’s AI browser, which requests extensive access to user data, including calendars, emails, and contacts, raising serious privacy concerns. In August 2023, Zoom faced criticism for allowing the company to use customer data to train its AI models without consent. As illustrated by this example, businesses require AI governance that incorporates AI audits into their daily operations. They must regard corporate AI deployment as a board-level concern, ensuring proper oversight, transparency, and ethical safeguards.

 The Black Box Problem: Decisions Without Explanation

One of the most concerning characteristics of AI is its lack of transparency in decision-making. Advanced models often operate as “black boxes,” making decisions without clear explanations. This absence of transparency makes auditing, compliance, and accountability more difficult. At the very least, black-box access should be provided to auditors to enable proper oversight. Without explainability for AI outputs, companies face legal risks and decreased stakeholder trust.

Without proper AI governance and audits, the risk of adverse consequences from their usage increases, especially when there is no appropriate accountability structure in place. One best practice to address these concerns is to require AI to explain its reasoning, particularly for all critical, high-risk, or sensitive business cases.