
Deloitte to refund part of $440,000 fee after AI-generated errors found in Australian govt report

@indiablooms | Oct 07, 2025, at 04:59 pm

Canberra: Global consulting firm Deloitte has agreed to refund part of its A$440,000 (about US$290,000) fee to the Australian government after admitting that generative AI tools were used in preparing a report assessing the government’s “Future Made in Australia” initiative.

The Department of Employment and Workplace Relations had commissioned the firm in 2024 to review the compliance framework and IT system that automatically penalises job seekers who fail to meet mutual obligation requirements, The Guardian reported.

However, the final report—released in July—contained serious inaccuracies, including academic citations referring to non-existent individuals and a fabricated quote from a Federal Court ruling, according to the Australian Financial Review.

The department published an updated version of the report on its website on Friday, removing more than a dozen fake references and footnotes, correcting typographical errors, and revising the reference list.

Australian welfare academic Dr Christopher Rudge, who first identified the discrepancies, said the report exhibited AI “hallucinations”—where AI systems generate false or misleading information by filling gaps or misinterpreting data.

“Rather than simply replacing a single fake reference with a real one, they've removed the hallucinated citations and, in the updated version, added five, six, even seven or eight new ones in their place. So what that suggests is that the original claim made in the body of the report wasn't based on any one particular evidentiary source,” he said.

Deloitte’s response

The firm admitted to using AI but said it was only employed in the early drafting stages, with the final document reviewed and refined by human experts.

Deloitte maintained that AI usage did not affect the “substantive content, findings or recommendations” of the report.

While acknowledging that generative AI tools were used, Deloitte did not directly link the errors to artificial intelligence.

In the revised version, the company disclosed that its research methodology had involved a large language model—specifically, Azure OpenAI GPT-4o.

A Deloitte spokesperson confirmed that “the matter has been resolved directly with the client.” The department said the refund process is underway and that future consultancy contracts could include stricter rules regarding AI-generated material.

Ethical concerns

The episode has triggered broader discussion about the ethical and financial accountability of using artificial intelligence in consultancy work, particularly in government-funded projects.

As consulting firms increasingly rely on AI for efficiency, questions are being raised about the extent of human oversight and whether clients receive genuine value.

Notably, Deloitte recently entered a partnership with Anthropic to give nearly 500,000 employees worldwide access to the Claude chatbot, underscoring the growing integration of AI into professional services.

The case represents one of the first significant instances in Australia where a private firm has faced repercussions for undisclosed AI use in a government project.
