Daniel J. Escott
The catalyst? Not merely the submission of “hallucinated” case law generated by ChatGPT, a phenomenon that has unfortunately become a known quantity in our courts, but a deliberate, calculated lie told to a judge to cover up that technological incompetence.
This decision marks the first Canadian instance in which the misuse of generative AI has metastasized from a professional negligence issue to a criminal obstruction of justice. For the legal profession, this is the end of the “innocent adoption” phase. The court has signalled that the “black box” of AI will no longer serve as a shield for professional misconduct; indeed, attempting to hide behind it may now put counsel in the prisoner’s dock.
The original sin: The efficiency trap
The trajectory of this case serves as a grim parable for the “efficiency at all costs” narrative driving legal AI adoption.
When the hallucinated citations came to light and contempt proceedings began, Lee attempted to mitigate the damage by doubling down. She wrote to the court and stated on the record that she had “delegated the preparation of the factum … to a student” and that she only learned of the AI’s use afterward from an assistant. On the strength of this narrative, which essentially threw a subordinate under the bus to preserve her own professional veneer, the court found that her apology and “recommitment” to standards were sufficient to purge the contempt.
The system prioritized efficiency. The judge accepted the lawyer’s word, the “contempt” was purged and the machinery of justice moved on. But the efficiency was built on a fiction.
The cover-up: When the black box breaks
The facade collapsed on Sept. 30, when Lee sent an unsolicited letter to the court admitting the truth: there was no student.
Lee admitted she had used ChatGPT to draft the factum. She admitted that her previous statements, blaming staff and claiming ignorance, were lies born of “fear of the potential consequences and sheer embarrassment.”
Justice Myers’ response in Ko v. Li is a masterclass in distinguishing between technological incompetence and moral failing. The court noted that the initial contempt proceeding concerned a failure to verify citations, conduct that might be described as “indifference akin to recklessness.” However, the new disclosure fundamentally altered the landscape. As Justice Myers wrote, “The new disclosure raises issues of deception of the court in a contempt of court proceeding. The procedural context and the quality of the acts are very different.”
This distinction is critical. Using AI clumsily is a failure of practice management. Lying to a judge about how you used AI to evade liability for that failure is an attack on the administration of justice.
Procedural fairness over summary execution
What makes Ko v. Li legally fascinating, and emblematic of the tension between efficiency and justice, is how Lee attempted to resolve the matter.
Lee appeared at the Dec. 2 case conference unrepresented, attempting to “purge” this new contempt by simply confessing and apologizing again. She asked the court to resolve the matter without further penalty, citing her cooperation with the Law Society of Ontario.
In a system obsessed with institutional efficiency, the temptation might be to accept the confession, issue a severe reprimand and close the file. However, Justice Myers recognized that “efficiency” cannot override procedural fairness, even for a lawyer who has admitted to lying.
Recognizing the gravity of criminal contempt, which carries a potential penalty of imprisonment for up to five years, Justice Myers refused to accept Lee’s summary confession. He noted that he had “not seen any case law in which a lawyer … admits to deliberately misleading a court in a criminal contempt of court proceeding.”
Consequently, the court appointed amicus curiae Dean Embry to ensure Lee receives a fair process. This was a necessary intervention. As Justice Myers noted, the court must “ensure that the offender has a fair trial in accordance with the principles of fundamental justice,” including the presumption of innocence and the right to make full answer and defence.
This highlights a paradox often ignored in the legal tech discourse: true justice is rarely efficient. It requires safeguards, hearings and the rigorous testing of evidence, even when the accused is begging to plead guilty and move on.
The liability crisis for the legal profession
The implications of Ko v. Li extend far beyond one lawyer’s meltdown. It exposes the dangerous psychological reliance lawyers are placing on generative AI.
Lee’s conduct reveals a terrifying reality: lawyers are so seduced by the promise of AI-driven efficiency that they are willing to bypass their most basic duties of verification. And when caught, the “black box” nature of the technology offers a tempting, albeit false, alibi (“AI did it,” or “The student using AI did it”).
But the court has now drawn a line. The enterprise AI “risk management crisis” I have written about previously is no longer theoretical.
Fact: Lee used a tool she did not understand.
Fact: She relied on data she did not verify.
Result: When the hallucinations inevitably came to light, she compounded the error by deliberately deceiving the court.
The Crown’s position in this case is telling: it argues that Lee’s failure to check citations amounted to “indifference akin to recklessness,” and that her subsequent lies constituted a deliberate obstruction of justice.
Conclusion: A wake-up call for competence
The decision in Ko v. Li is a warning shot to every law firm and legal practitioner in Canada. The era of “move fast and break things” has collided with the Criminal Code.
If lawyers use AI, they must do so with a level of competence and transparency that has been sorely lacking. We need “citation, not creation.” We need tools that explain their reasoning (explainability) and professionals who remain strictly “in the loop.”
Lee’s tragedy is self-inflicted, but it was enabled by a culture that treats AI as a magic wand rather than a complex tool requiring rigorous oversight. As the Crown takes carriage of this prosecution, the message is clear: you can delegate drafting to an algorithm, but you cannot delegate your integrity.
Daniel J. Escott is a research fellow at the Artificial Intelligence Risk and Regulation Lab and the Access to Justice Centre for Excellence. He is currently pursuing a PhD in AI regulation at Osgoode Hall Law School, where he also earned an LLM, and holds a JD from the University of New Brunswick and a BBA from Memorial University of Newfoundland.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
