Blurred Lines: Lessons from Deloitte Australia on the Use of Generative AI in the Workplace
The role of generative artificial intelligence (AI) in the modern workplace took centre stage this week, after Deloitte Australia faced criticism over its use of AI in an assurance review of the country’s Targeted Compliance Framework (TCF), part of the system that manages welfare and benefits payments. The report was later found to contain clear errors.
The company, one of the Big Four consulting firms, agreed to partially refund the government after the report was found to include fabricated quotes and references to non-existent research. Deloitte had been paid A$440,000 (about USD 290,000) for the seven-month project.
Australian welfare academic Chris Rudge identified the errors, prompting Deloitte to quietly amend and reissue the report. In the revised version, Deloitte revealed that it had used a generative AI model (Azure OpenAI GPT-4o) to help prepare the report.
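For readers unfamiliar with the tooling involved, the sketch below shows roughly what a report-drafting request to GPT-4o on Azure looks like through the standard `openai` Python SDK. The endpoint, deployment name, and prompt are illustrative assumptions, not a reconstruction of Deloitte’s actual workflow; the point is that the model returns fluent prose without verifying its own quotes or citations.

```python
# A minimal sketch of an Azure OpenAI GPT-4o call via the standard
# `openai` Python SDK. Endpoint, deployment name, and prompt are
# hypothetical; nothing here reflects Deloitte's actual workflow.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o",  # the Azure deployment name, not the raw model ID
    messages=[
        {"role": "system", "content": "You are drafting sections of an assurance review."},
        {"role": "user", "content": "Summarise the compliance findings below..."},
    ],
)

# The model produces fluent prose but does not verify quotes or citations:
# every reference it generates must be checked against the primary source.
print(response.choices[0].message.content)
```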
Drawing the Line: Ethical AI in the Workplace
The Deloitte Australia incident prompts us to reassess where legitimate use of generative AI ends and a breach of professional standards begins. Where does the line get crossed? In this case, the line was clearly crossed: the report included a fabricated quote from a Federal Court judgment and multiple incorrect academic citations.
However, the question of whether using AI for consulting work, or for any work at all, is ethical remains open. The debate takes place against the backdrop of an open secret: AI tools are increasingly being integrated into professional tasks, from drafting reports to analysing complex data.
Ironically, the revelation of errors in Deloitte’s report coincided with the firm’s announcement of a landmark partnership with AI company Anthropic to deploy its Claude chatbot to nearly half a million employees worldwide. The initiative aims to enhance productivity and develop AI-driven compliance tools for sectors including finance, healthcare, and public services.
About one in five United States workers now use AI in their jobs, up from one in six last year, according to a 2025 Pew Research Center survey. Although similar Africa- or region-specific data is not yet available, there is little doubt that generative AI is increasingly being adopted in formal workplaces across the continent.
The use of AI in the workplace, then, is a ship that has already sailed. What remains to be clarified is what constitutes responsible and ethical AI integration, and where the boundaries should sit. This is especially relevant for institutions and service providers where high standards of practice and confidentiality are crucial, and where lapses such as the Deloitte case can erode credibility and public trust.
The European Union AI Act and Generative AI
Some jurisdictions, such as the European Union (EU), have attempted to legislate on this. The EU AI Act, however, does not regulate use itself, but rather the risk associated with it. In essence, while the Act exempts personal, non-professional use from its scope, professional users remain legally and ethically responsible for how AI is deployed in the workplace.
The Act categorises AI systems by risk: prohibited (unacceptable-risk) practices, high-risk systems, limited-risk systems subject to transparency requirements, and minimal-risk systems, with a separate regime for general-purpose AI models. Within the EU, anyone using a tool such as ChatGPT for work must comply with strict requirements where the AI affects individuals’ rights, opportunities, or access to services, since such uses are regarded as high-risk.
Had Deloitte been governed by the EU AI Act, the firm would have needed to be transparent, informing relevant parties that AI was used to generate outputs that could influence decisions. It would also have been expected to exercise responsible oversight of those outputs, avoid misleading or harmful results, and ensure that any data shared with the system complied with the General Data Protection Regulation (GDPR) and other relevant privacy rules.
Deloitte, however, operates under Australian jurisdiction, so these EU provisions do not directly apply. Recourse instead lies in self-regulatory measures adopted by private actors and industry bodies.
Guided Usage: Columbia University GenAI Policy
Columbia University’s guidelines for using generative AI emphasise responsible, secure, and transparent practices across all professional, research, and academic activities. Users are prohibited from inputting confidential or personal information into AI tools unless such use is explicitly authorised through validated contracts and approved security measures. Accuracy is crucial; all AI-generated content must be verified before being relied upon, as outputs can be factually incorrect or biased. Users are also encouraged to stay vigilant for potential biases in AI outputs that may lead to discriminatory or unfair outcomes.

Transparency is required in all work products: any use of generative AI should be clearly disclosed, including in written materials. Intellectual property rights must be respected, ensuring that AI-generated content does not infringe upon the rights of third parties. The guidelines also prohibit the creation of malicious content, such as malware or phishing attempts, using AI tools. Finally, where possible, users are encouraged to opt out of allowing their inputs to be used for AI training, protecting both privacy and data integrity.
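Policies like Columbia’s can be partially enforced in tooling. The sketch below is a minimal, hypothetical pre-submission check that blocks prompts containing obvious personal identifiers before they reach an external AI tool; the patterns and the `check_prompt` helper are illustrative assumptions, not Columbia’s actual tooling.

```python
# A minimal sketch of enforcing a "no confidential inputs" rule before a
# prompt is forwarded to an external AI tool. The patterns and the
# check_prompt helper are hypothetical, not Columbia's tooling.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked data types found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

violations = check_prompt("Please summarise the case of jane.doe@example.com ...")
if violations:
    # Refuse to forward the prompt and tell the user why it was blocked.
    raise ValueError(f"Prompt blocked: contains {', '.join(violations)}")
```

Checks like this are a backstop, not a substitute for the policy itself: they catch obvious identifiers, while the duty to verify outputs and disclose AI use still rests with the user.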
Tailored In-House Solutions: The JPMorgan Chase Case
JPMorgan Chase demonstrates how a major financial institution is incorporating generative AI into its operations. Initially, the bank adopted AI cautiously, emphasising data security and strict regulatory compliance and restricting employees from using consumer AI tools like ChatGPT. Over time, JPMorgan has invested in internal AI platforms such as LLM Suite, now accessible to approximately 50,000 employees in its asset and wealth management division. The platform supports tasks such as writing, summarising documents, and generating ideas, acting as an internal research assistant. Specialised tools, such as Connect Coach, have also been deployed to support private bank advisors.
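LLM Suite’s interface is not public, so the sketch below illustrates only the general design pattern just described: staff reach approved models through an internal gateway that can log usage and restrict tasks, rather than through consumer tools. All names here (`InternalGateway`, `APPROVED_TASKS`, `model_client.complete`) are hypothetical.

```python
# Purely illustrative: the gateway pattern behind internal AI platforms,
# where employees reach approved models only through a wrapper that can
# log and restrict use. All names here are hypothetical.
import logging

APPROVED_TASKS = {"summarise", "draft", "ideate"}

class InternalGateway:
    """Hypothetical internal wrapper standing between staff and a model."""

    def __init__(self, model_client):
        self.model_client = model_client  # assumed to expose a complete() method
        self.log = logging.getLogger("ai_gateway")

    def run(self, user_id: str, task: str, text: str) -> str:
        if task not in APPROVED_TASKS:
            raise PermissionError(f"Task '{task}' is not approved for AI use")
        # Audit trail: who used the model, for what, and on how much text.
        self.log.info("user=%s task=%s chars=%d", user_id, task, len(text))
        return self.model_client.complete(f"{task}: {text}")
```

The design choice is the point: by owning the gateway, the institution keeps data inside its perimeter and can audit every use, which is what distinguishes this approach from ad-hoc use of consumer chatbots.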
The bank encourages experimentation through initiatives such as internal AI competitions. Leadership is also considering client-facing applications of AI, including a potential public version of Connect Coach. JPMorgan estimates that these initiatives already contribute between USD 1 billion and USD 1.5 billion in value.
Conclusion
The Deloitte case has highlighted a vital and timely topic: the role of generative AI in the workplace. While many regulators remain hesitant to impose strict rules on AI, lessons from the EU and the self-regulatory measures adopted by institutions such as universities and financial organisations offer a constructive way forward. None of these frameworks calls for an outright ban on AI in professional settings. Instead, they emphasise responsible, transparent, and guided use, ensuring that AI’s integration into workplace processes enhances productivity and decision-making without compromising ethics, accountability, or public trust.
