Federal Court provides guidance on use of AI in court proceedings

By David Bowden

Law360 Canada (March 28, 2024, 3:05 PM EDT) --
Notable developments in the field of artificial intelligence (AI) have led to the widespread availability of low- and no-cost generative language tools. These tools appear to offer irresistible benefits to consumers, including lawyers: with a simple prompt, a software program can generate a convincing written output in many different formats, including text that replicates common language found in legal documents.

The pitfalls of using AI without due care in a legal practice, however, have been highlighted over the past year by high-profile and embarrassing incidents where the undisclosed use of AI tools was implicated in the preparation and filing of misleading court documents. In this context, lawyers practising before the Federal Court of Canada (the court) will now need to declare when they have used AI to prepare documents — and must follow additional directions provided by the court regarding the use of AI.

On Dec. 20, 2023, the court published two documents addressing the use of AI:

  • a practice notice regarding the use of artificial intelligence in court proceedings by litigants (the Practice Notice); and
  • a document setting out interim principles and guidelines for its own use of artificial intelligence (the Interim Principles and Guidelines).

In both documents, the court recognizes that the adoption of this technology can provide substantial benefits in terms of efficiency while still emphasizing caution in light of its potential risks.

Generative AI vs. other types of AI

“Artificial intelligence” is currently used to describe a wide range of software tools. In the Practice Notice, however, the term refers specifically to generative AI, which the court describes as “a computer system capable of generating new content and independently creating or generating information or documents.” The court clearly states that the Practice Notice does not apply to other types of AI that do not generate new content.

In contrast to the Practice Notice, the court’s Interim Principles and Guidelines cover a wider range of AI-type technologies, including programs used in the analysis of raw data and the performance of administrative tasks. This broad scope reflects the court’s 2020-2025 Strategic Plan, which announced plans to explore the use of AI in streamlining the court’s processes (e.g., for online “smart forms”) and in aiding mediation and other forms of alternative dispute resolution (ADR).

Use of AI by litigants

The Practice Notice serves three main functions: (1) it mandates the inclusion of a declaration when AI has been used to generate content in a court document; (2) it sets out principles for the use of AI by litigants; and (3) it explains why the court has published the Practice Notice.

The Practice Notice was published at the end of a year marked by such failures, in which lawyers relied on generative AI without exercising proper oversight, including multiple instances where an AI tool was used to generate documents filed with a court.

On June 23, 2023, U.S. District Judge P. Kevin Castel issued an opinion and order sanctioning the lawyers involved and their law firm for the undisclosed use of an AI tool to draft an affidavit, for failing to properly review and verify its contents, and for refusing to withdraw the affidavit once it was questioned by the other party (Mata v. Avianca Inc., 2023 WL 4114965). The challenged affidavit included a number of citations to decisions that did not actually exist; they had been fabricated (or “hallucinated”) by the AI tool.

Soon after, the Canadian legal field experienced a similar incident when a B.C. lawyer used an AI tool to prepare a notice of application. The notice of application contained only two citations: references to two cases that did not actually exist. This mistake was quickly identified by the other side’s counsel. It resulted in significant negative publicity for the lawyer and an order of costs made against her personally (Zhang v. Chen, 2024 BCSC 285).

In awarding costs against her, the B.C. Supreme Court (in a decision that cited Mata v. Avianca) provided the following comments:

As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less.

The directions set out in the Practice Notice are consistent with the observations above and may help avoid similar unfortunate situations in Federal Court. Pursuant to the Practice Notice, if generative AI was used to prepare a document for the purposes of litigation that is submitted to the court by or on behalf of a party or intervener, the document must include a declaration disclosing that it contains AI-generated content.

This declaration must be made in the first paragraph of the document at issue. The court also provides an example of such a declaration:

Declaration

Artificial intelligence (AI) was used to generate content in this document.

Déclaration

L’intelligence artificielle (IA) a été utilisée pour générer au moins une partie du contenu de ce document.

In addition, the Practice Notice sets out guiding principles regarding the use of AI and discusses both the benefits and risks of using AI in the legal profession. In particular, the court notes that ethical and access-to-justice issues can arise when a lawyer uses AI in circumstances where the client is unfamiliar with the technology. The court encourages lawyers to provide “traditional, human services” to clients who are unfamiliar with AI or who do not wish to use it.

The court also cautions the profession about legal references and analysis generated by AI and emphasizes the importance of using “only well-recognized and reliable sources.” The court’s guidance is no doubt a response, in part, to the failures of counsel in Mata v. Avianca and Zhang v. Chen, in which lawyers generated documents containing fabricated case law and fake citations.

Further, the Practice Notice references the “human in the loop” principle, which requires that AI-generated documents and materials be checked by a human; the court notes that such a review is in keeping with the standards generally required of legal professionals.

Use of AI by the courts

The court’s Interim Principles and Guidelines primarily address the use of AI for administrative and procedural purposes. For instance, the court specifically states that it “will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultations.” For now, at least, the court will not use AI in determining issues between parties, as reflected in its Reasons for Judgment and Reasons for Order.

Through the Guidelines, the court attempts to balance the potential benefits of AI against the adverse impact its use may have on judicial independence and public confidence in the administration of justice. The document also sets out seven principles that will guide the court’s use of AI (note: these seven points have been paraphrased and summarized, in part, through the use of a generative AI tool; the output of that tool was then subject to human review and revised for inclusion in this article):

  • Accountability: The court will be fully transparent to the public about any potential use of AI in its decision-making functions;
  • Respect for fundamental rights: The court will ensure that its uses of AI do not undermine judicial independence, access to justice or fundamental rights;
  • Non-discrimination: The court will ensure that its use of AI does not reproduce or aggravate discrimination;
  • Accuracy: Certified or verified sources and data will be used for processing judicial decisions and data for administrative purposes;
  • Transparency: The court will authorize external audits of any of its AI-assisted data processing methods;
  • Cybersecurity: Data will be securely stored and managed to protect the confidentiality, privacy, provenance and purpose of the data; and
  • “Human in the loop”: Members of the court and their law clerks will verify the results of the AI-generated outputs used in their work.

Conclusion

AI presents opportunities for significant efficiencies in virtually every commercial field — and the practice of law is no exception. Implemented carelessly, however, the technology can lead to significant errors, causing inconvenience for the court and other litigants, and profound professional embarrassment for lawyers who fail to use it responsibly.

The technology itself, moreover, cannot be used as a scapegoat: the Practice Notice and Interim Principles and Guidelines clarify that humans still bear the ultimate responsibility for what is contained in their court documents.

Two predominant themes run through the court’s comments on AI: transparency, meaning that all parties are given notice when AI has been used in the preparation of a document or as part of an administrative process, and human review, meaning that AI outputs must always be verified by a human. These themes are helpful for all types of legal practice, whether or not before a court, as they create safeguards against the deceptive and misleading use of AI tools in the generation of legal documents.

While policies will likely develop in the near future as AI technology advances, the court has offered the profession a means of navigating the implementation of this technology — at least for now.

David Bowden is a lawyer at Clark Wilson LLP. He practises intellectual property law, regularly advising clients on matters related to copyright, domain names and trademarks. He is a registered trademark agent and has been called as a barrister and solicitor in both Ontario and British Columbia.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, LexisNexis Canada, Law360 Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
