Pitfalls of using AI for advice-seekers and advisors - a case of caveat utilitor (user-beware)
- Geraldine Chan
- Oct 28, 2025
- 5 min read
Updated: Nov 11, 2025

I must admit that I have embraced Artificial Intelligence (AI) with gusto since generative AI tools first became widely available, and it is now my go-to rather than Google search. But what are the pitfalls of using AI to deal with issues outside your area of expertise, and the pitfalls of professionals using AI to deliver advice to clients?
According to Built In, an online community for startups and tech companies, AI can be separated into two main groups: capability-based AI and functional-based AI.
Capability-based AI includes Narrow AI (also known as Artificial Narrow Intelligence or ANI), Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).
Functional-based AI includes Reactive Machine AI, Limited Memory AI, Theory of Mind AI and Self-Aware AI.
What? Didn’t know there were so many types of AI!
Presently, the most common term we hear when it comes to AI is Generative AI. What is Generative AI? According to the Oxford Dictionary:
“artificial intelligence designed to produce output, especially text or images, normally requiring human intelligence, typically by applying machine learning techniques to large collections of data.”
So where does Generative AI sit in the scheme of things? When I type “which type of AI does Generative AI fall under” into my Google search field, the AI Overview provides this answer:
“Generative AI is a type of narrow AI that focuses on creating new content, but it is built upon more fundamental AI technologies like deep learning and machine learning. While it operates within the specific task of content creation, it represents an advanced subset of narrow AI because of its ability to produce novel and creative outputs rather than just completing predefined tasks.”
Narrow AI? I was shocked to learn that because Generative AI tools like ChatGPT seem to have an answer for everything. How is it Narrow AI?
According to Built In, “Narrow AI, also known as artificial narrow intelligence (ANI) or weak AI, describes AI tools designed to carry out very specific actions or commands. They are built to serve and excel in one cognitive capability, and cannot independently learn skills beyond their design. All AI systems used today fall under the category of narrow AI. …. Some examples of narrow AI include self-driving cars and AI virtual assistants.”
Self-driving cars? Really? No way is that narrow AI. Shouldn’t it be AGI or ASI? Well, apparently AGI is still a theoretical form of AI and a work in progress. AGI is a form of AI that can learn, think and perform a wide range of tasks at a human level. ASI is even further away in the future. Built In says ASI is “the stuff of science fiction” and that “once AI has reached the general intelligence level, it will soon learn at such a fast rate that its knowledge and capabilities will become stronger than that of even humankind.” That is exciting but also rather scary.
In the context of seeking or providing tax advice, there are many AI tools on the market that can help you answer tax questions and even provide references. But these tools are Generative AI, and Generative AI is prone to something called hallucinations.
According to IBM, “AI hallucination is a phenomenon wherein a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”
Because Generative AI’s sole purpose in ‘life’ is to generate content, it will do exactly that to ‘please’ you, even if what it generates is nonsense or inaccurate. Generative AI will never say, “erm…I’m not sure, sorry.”
Not only that, but Generative AI’s responses are shaped by the data it was trained on. If that data is biased, its responses will be tainted by that bias.
Of course, these flaws are well known to the creators of AI tools, who do everything they can to minimise hallucinations. But as an advisor, you should never trust the tool 100%; you must always scrutinise and cross-check its answers. Because if you don’t, this could happen:
1. Deloitte Australia caught out using AI in $440,000 report
A report prepared by Deloitte Australia for the Australian government was littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to non-existent academic research papers.
2. Handa v Mallick [2024] FedCFamC2F 957 (Australia)
A Victorian lawyer submitted a list of legal authorities generated by AI — without verifying their authenticity. The list included fictitious case citations, and the lawyer admitted to the court that they were AI-generated. The judge ordered the lawyer to explain why they should not be referred to the Victorian Legal Services Board and Commissioner (VLSBC) for professional misconduct.
3. Valu v Minister for Immigration [2025] FedCFamC2G 95 (Australia)
In this case, a NSW lawyer filed submissions containing fabricated quotes and cases generated by AI. The lawyer cited poor health and time pressure as reasons for relying on AI, but the court found the conduct unacceptable. The matter was referred to the Office of the NSW Legal Services Commissioner (OLSC) for investigation.
4. Roberto Mata v Avianca Airlines (USA)
A lawyer from a prominent firm used ChatGPT to draft a legal brief and included six non-existent cases. The lawyer admitted they were unaware AI could fabricate citations. The court fined the lawyer and issued a public reprimand, stating that reliance on unverified AI outputs breached professional standards.
To summarise: check, check and double-check that your AI tool’s responses are accurate. Critical thinking is essential when reviewing a response from your AI tool. Note also that when you feed the tool more questions after discovering it has hallucinated, you may be contributing to its training, which can raise issues around confidentiality and intellectual property rights. There is also a cost-benefit question: if the time you spend verifying your AI tool’s output exceeds the time you would have spent doing the research yourself to begin with, the tool may not be saving you anything at all.
In tax, my view is that junior staff should be trained to research the traditional, human-intelligence way first: read the legislation (so they know at least the construct of our Tax Act and Tax Administration Act), then refer to the Master Tax Guide for clarification, rather than diving straight into an AI tool. This approach will develop the critical thinking capability needed to use AI effectively and, most importantly, safely later on.
For the general public, you may be tempted to find answers to your questions using AI. But as you may not be an expert in the area, you are unlikely to know what questions to ask to verify the output. In other words, you are unlikely to have the critical thinking capability needed to use AI safely. My view, therefore, is that you should consult your advisor. Perhaps the research you have carried out using AI could help you think critically about your advisor’s advice!
Caveat utilitor!





