AI in legal practice
The Australian legal profession has always been central to upholding the rule of law, protecting rights, and supporting access to justice. Solicitors have adapted to changing client needs, including digital communication, remote work, and the growing use of artificial intelligence in practice.
Given these changes, the legal profession has begun integrating AI into everyday practice. This has created new opportunities, as well as new risks arising from the use of these tools.
How AI is helping – and where it can go wrong
When used carefully, AI has the potential to support rather than replace legal judgment and professional advice. AI platforms can simplify legal jargon, condense documents, and sift through large volumes of information, provided the material is not subject to client confidentiality or privilege.
Similarly, specialist legal research platforms that incorporate AI can make large bodies of case law, legislation, and legal commentary easier to digest and to search. This is particularly useful for issue spotting and preliminary research. Courts and some judges are also exploring AI for summarising material and managing workloads, provided any output is independently verified. Used in this way, AI can help ease the heavy workloads inherent in legal work.
However, despite these benefits, AI poses real risks. Some AI tools are prone to “hallucinations,” where realistic‑sounding but non‑existent cases and statutory provisions are generated. In litigation, where accuracy and authority are critical, this is a serious problem. Free, open-web tools that draw on a broad range of internet sources may produce incomplete legal analysis, cite legislation from the wrong jurisdiction, or use terminology associated with other legal systems. Even AI platforms dedicated to legal research are only as accurate as the databases that underpin them. Any AI output must therefore be used carefully and routinely checked for accuracy.
Courts have made it clear that professional responsibilities do not change simply because AI has been used. The onus remains on lawyers to ensure that everything they file is accurate. Case law demonstrates that the failure to verify AI-generated sources has led to wasted court resources, indemnity costs orders, and referrals to regulators on numerous occasions.
There are also issues concerning privilege. Privilege is a cornerstone of the solicitor–client relationship and is reflected in duties of confidentiality under instruments such as the Legal Profession Uniform Law and the Australian Solicitors’ Conduct Rules. Because many open‑web AI tools retain the information uploaded to them, entering confidential material into these platforms can raise serious concerns about compliance with professional obligations and the potential waiver of legal professional privilege.
Courts’ emerging guidance on AI
Courts and regulators have begun to set guidelines and expectations around the responsible use of generative AI in litigation. In New South Wales, the courts prohibit the use of AI platforms to draft affidavits, witness statements, and expert reports unless leave is granted by the court. Recent judicial guidance in Victoria notes that while AI is not prohibited, “particular caution” is required, and existing expert and ethical standards continue to apply.
Guidance directed at judges in certain jurisdictions recommends that generative AI not be used to draft reasons for judgment or to analyse evidence. These restrictions highlight broader concerns about transparency and accountability when generative tools contribute to legal decision making, as noted by the Australian Institute of Judicial Administration.
Lessons from recent cases
Recent judgments in Australia and the United Kingdom underscore the importance of verifying any material generated or assisted by AI.
In several Australian matters, lawyers and litigants have filed submissions containing non‑existent cases or false quotations generated by AI.
- In LJY v Occupational Therapy Board of Australia [2025] QCAT 096, a practitioner filed a written submission that quoted a fictional Court of Appeal judgment generated by ChatGPT. The Tribunal stated that ‘the case simply does not exist’ and that the error ‘caused a significant waste of public resources.’
- In Murray (on behalf of the Wamba Wemba Native Title Claim Group) v Victoria [2025] FCA 731, a Victorian firm used Google Scholar to generate citations in native title documents; many of the references were incorrect or non‑existent. The Federal Court ordered the firm to personally pay the respondents’ costs on an indemnity basis, describing a “growing problem regarding false citations in documents prepared using AI.”
- In Mertz & Mertz (No 3) [2025] FedCFamC1A 222, a solicitor used AI in the preparation of the originally filed Summary of Argument and List of Authorities. While she denied using AI herself, she conceded that a paralegal had used AI to prepare the original documents without her knowledge. The Federal Circuit and Family Court of Australia rejected this explanation, reiterating that practitioners remain accountable for accuracy regardless of whether they delegate tasks.
Similarly, in the United Kingdom, practitioners who relied on fabricated authorities have faced personal costs orders and referrals to professional regulators.
- In Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383, the Divisional Court examined the conduct of legal practitioners who filed documents containing AI-generated content. A solicitor had relied on case citations provided by their client, which were later found to be fabricated or inaccurate material sourced from generative AI tools. Although the client was the original source of the material, the solicitor was held accountable for failing to verify it. The court imposed a personal costs order on the solicitor, reinforcing that legal practitioners must uphold their professional duties to both the court and their clients, regardless of how the information was obtained.
Taken together, these cases deliver a clear message. AI may support legal practitioners, but it can never replace a lawyer’s independent judgment and duty to check the accuracy and reliability of what is brought before the court.
What this means for practitioners
For lawyers, including those practising in family law, some important principles are emerging.
AI should only ever be a secondary tool: it can assist with brainstorming, but authorities and quotations should always be verified against primary and trusted databases before they are relied on for advice or court documents. It is also wise to ask clients whether they have used AI in preparing any documents they provide. Problems have arisen where lawyers adopted client‑supplied material that turned out to be AI‑generated and incorrect, so anything that appears overly polished or citation‑heavy should be reviewed with particular care.
Practitioners should refrain from entering confidential client information into public AI tools that retain the information uploaded to them. Open‑web AI platforms should not be used for confidential work; secure, authoritative legal databases are likely to be safer options.
Nicholes Family Lawyers has expertise in advising clients across a wide range of family law matters, including complex parenting and property proceedings.
Should you require any advice about your situation, please contact our office on (03) 9670 4122 to arrange an initial consultation.