Use of GenAI noted by Court of Appeal
After taking note of a memorandum dated 12 September 2024 from the appellant (W), the Court of Appeal commented at [199] n 187:
“We note [W’s] original memorandum in response dated 10 September 2024 which [W] withdrew after the apparent use of generative artificial intelligence in its drafting was drawn to our attention by respondent counsel. The use of generative artificial intelligence was not initially disclosed by [W], but was evident from the references to apparently non-existent cases. No further comment is necessary except to note the relevant guidance recently issued by the judiciary: Guidelines for use of generative artificial intelligence in Courts and Tribunals: Non-lawyers (Artificial Intelligence Advisory Group, 7 December 2023).”
Those guidelines, along with corresponding guidance for lawyers and for judicial officers/support staff, are available at Courts of New Zealand “Guidelines for use of generative artificial intelligence in Courts and Tribunals” <www.courtsofnz.govt.nz>. Their rationale is explained in Courts of New Zealand “Judiciary publishes guidelines for use of generative artificial intelligence in Courts and Tribunals” (media release, 7 December 2023) <www.courtsofnz.govt.nz>, which includes the following two bullet points:
- Generative AI tools offer significant potential benefits to the courts and court users, including enhancing access to justice by making legal knowledge and information more accessible to non-lawyers.
- It is important, therefore, that early adopters of these new technologies understand their risks and limitations and are given some practical guidance as to how to use such tools in a responsible way.
In summary, the guidelines for both lawyers and non-lawyers are:
- Before using GenAI chatbots, ensure you have a basic understanding of their capabilities and limitations.
- Generally, you should not enter any information into an AI chatbot that is not already in the public domain. Do not enter any information that is private, confidential, suppressed or legally privileged.
- You are responsible for ensuring that all information you provide to the court/tribunal is accurate. You must check the accuracy of any information you get from a GenAI chatbot before using that information in court/tribunal proceedings.
- Consider ethical issues – particularly biases and the need to address them.
- You are not required to disclose use of a GenAI chatbot unless asked by the court or tribunal.
For judicial officers/support staff, the guidelines are (in summary):
- Before using GenAI chatbots, ensure you have a basic understanding of their capabilities and limitations.
- Generally, you should not enter any information into an AI chatbot that is not already in the public domain. Do not enter any information that is private, confidential, suppressed or legally privileged.
- You must check the accuracy of any information provided by a GenAI chatbot before it is relied upon.
- Have regard to ethical issues – particularly biases and the need to address them.
- Follow best practices for maintaining your own and the court/tribunal’s security.
- Disclosing GenAI use:
  - Judges/judicial officers/tribunal members: You do not need to disclose use of a GenAI chatbot.
  - Clerks/research counsel and judicial support staff: Discuss with your supervising judge/judicial officer/tribunal member how you are using GenAI chatbots (or any other GenAI tools) and the steps you are taking to mitigate any risks.
- Check the accuracy of information contained in submissions that show signs they were produced by a GenAI chatbot.
For more information about GenAI, see Thomson Reuters New Zealand “CoCounsel: Most advanced legal assistant” <www.thomsonreuters.co.nz>.