AI-driven errors in Deloitte reports spark trust concerns for public policy

Tuesday 09 December 2025 - 16:50
By: Dakir Madiha
What started as a minor embarrassment for an Australian government agency has evolved into a cautionary tale about the risks of artificial intelligence in professional consulting. Deloitte, a global leader in advisory services, is at the center of two high-profile controversies that reveal the growing challenges of integrating AI into intellectual labor.

Fabricated citations in Australia

Earlier this year, Deloitte disclosed that a report it prepared for Australia’s Department of Employment and Workplace Relations (DEWR) contained fabricated citations, nonexistent academic references, and even an invented court ruling. The 237-page document, which guided welfare policies, was quietly replaced in September with a revised version, acknowledging the use of Azure OpenAI in its preparation.

The original report, valued at A$440,000 (US$292,000), has prompted Deloitte to issue a partial refund of approximately A$98,000 (US$65,000). Meanwhile, the DEWR has halted key welfare compliance decisions, citing legal concerns over its automated processes.

Initially, Deloitte portrayed the incident as an isolated case of AI “hallucination” that slipped past human oversight. However, this explanation began to unravel when a similar issue emerged in Canada.

Similar flaws in Canada’s healthcare report

Weeks after the Australian scandal, Deloitte faced parallel accusations over a CA$1.6 million (US$1.155 million) healthcare transformation report for the Government of Newfoundland and Labrador. Researchers uncovered AI-generated errors, including references to nonexistent studies, misattributed citations, and unverifiable supporting evidence.

The provincial government has since demanded a comprehensive review and verification of every citation in the 526-page report. While Deloitte maintained that AI was used sparingly for literature searches and that corrections would not alter the report’s findings, the incident has raised alarm over the reliability of AI-assisted consultancy.

Broader implications for knowledge work

The twin scandals underscore a systemic issue with AI in industries dependent on precision and trust, such as consulting, law, and policymaking. Generative AI's ability to process information and draft reports is impressive, but its tendency to fabricate sources with confidence poses significant risks.

These incidents go beyond questions of accuracy; they challenge the foundation of evidence-based policymaking. Governments and public institutions rely on consultants to deliver rigorous analysis that internal teams cannot provide. When reports shaping welfare policies or healthcare reforms include fabricated data, the credibility of these institutions is undermined.

An industry at an inflection point

Deloitte insists the core conclusions of its reports remain intact. However, the Australia and Canada cases highlight how easily AI-generated misinformation can infiltrate critical public-sector analysis when oversight mechanisms fail.

As the Big Four consulting firms invest billions in AI tools, the focus must shift from innovation to risk management. The rapid adoption of generative AI has outpaced the implementation of safeguards, leaving industries vulnerable to errors that could erode public trust.

These incidents serve as a stark reminder: while AI offers extraordinary potential for productivity, it also introduces new vulnerabilities. Institutions must adapt quickly to ensure that the benefits of AI do not come at the expense of accountability and trust.
