R. v. Chand: A cautionary tale of generative AI and judicial intervention

By Oksana Romanov

Law360 Canada (October 8, 2025, 1:20 PM EDT) --
This is the third article in a series building on my earlier discussion of AI hallucinations in the legal context and their prevalence.

Are we ready for algorithmic justice?

Back in 2018, Carole Piovesan and Vivian Ntiri reviewed the applications of AI technologies in law, including risk-based assessments in criminal proceedings in the U.S. and emerging trends in Canada, in their article, “Adjudication by Algorithm: The Risks and Benefits of Artificial Intelligence in Judicial Decision-Making,” published in The Advocates’ Journal. The authors identified two important dilemmas involving the use of AI in judicial decision-making: (1) bias, and (2) algorithmic accountability, which is often shielded by trade secret protections or hindered by a lack of transparency.

The types of biases associated with the use of AI in judicial decisions in the criminal context, explored in their article, include data-driven bias, selection bias, emergent or similarity bias, and interaction bias. Popular culture offers an analogous cautionary tale about “the potential legality of an infallible prosecutor” in Minority Report, an old Hollywood sci-fi movie. An algorithm predicting future crimes implicated an honourable member of law enforcement as a suspect in a future murder case. It seems that no one would be immune to selection bias if AI-based adjudication lacked algorithmic accountability and transparency.

In their conclusion, Piovesan and Ntiri issue a call to action for AI programmers and the legal community “to develop a thorough understanding of the technology involved and the complex nature of the social issues resulting from bias and discrimination.”

R. v. Chand, 2025 ONCJ 282

While R. v. Chand, 2025 ONCJ 282, is not an example of algorithmic judicial decision-making under scrutiny, the case offers instructive insight into the problem of AI hallucinations. It takes the form of a judicial direction regarding defence final submissions, in which Justice Joseph F. Kenkel addresses the seriousness and scope of the errors found, here in the criminal law context.

According to the introductory paragraph, the accused in the underlying criminal matter was charged with aggravated assault and related offences. After the trial evidence was complete, Justice Kenkel requested that the parties make their final submissions in writing.

Upon reviewing the submissions of the Crown and the defence, Justice Kenkel identified serious issues with the defence submissions, including fictitious citations, case law that did not support the points for which it was cited, and citations to unrelated civil cases. Faced with the scope of these errors, he wrote that “[t]he errors are numerous and substantial”: R. v. Chand, 2025 ONCJ 282, at para. 3. In light of this finding, the court directed counsel to provide proper submissions.

As a result, at para. 5 of R. v. Chand, Justice Kenkel issued the following direction regarding defence final submissions, including on the use of generative AI:

  • the paragraphs must be numbered;
  • the pages must be numbered;
  • case citations must include a pinpoint cite to the paragraph that illustrates the point being made;
  • case citations must be checked and hyperlinked to CanLII or another site to ensure accuracy;
  • generative AI or commercial legal software that uses GenAI must not be used for legal research for these submissions.

To conclude, the court’s guidance on the use of AI tools in this particular case was categorical: no AI-assisted legal research was permitted. We are ready neither for algorithmic justice nor for the use of AI in the criminal law context without curated legal outputs and robust judicial oversight.

Guidance on the use of generative AI in the legal context

As lawyers and legal content curators, we must remain vigilant and follow the guidance from the Law Society of Ontario (LSO), which offers a robust list of resources on licensees’ use of generative AI in legal practice. Below are some of the LSO’s resources for your legal toolbox:


Practice directions

Finally, certain practice directions require counsel to hyperlink case law in electronically filed documents. Examples include the General Practice Direction Regarding All Proceedings in the Court of Appeal for Ontario, the Consolidated Practice Direction of the Federal Court of Appeal, and Tribunals Ontario’s Practice Direction on the Use of Artificial Intelligence (AI) in Tribunal Proceedings.

Oksana Romanov, BA (Hons), MA (Comm), JD with Distinction, is a sole practitioner practising criminal law as the Law Office of Oksana Romanov.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
