Expert Analysis

Fictitious case law a systemic problem in Canadian courts: 111 and counting

By Tom Macintosh Zheng

Law360 Canada (March 17, 2026, 1:08 PM EDT) --
In October 2025, a Federal Court associate judge ordered a lawyer to pay costs personally for submitting two AI-generated cases that did not exist. The decision drew attention for good reason. But it also raised a harder question: how often is this happening across the country?

To try to answer that question, we spent the past year tracking every publicly available Canadian court and tribunal decision that flagged fictitious case citations in a party’s legal submissions. The results suggest the problem is systemic across Canada.

What the data shows


Between Jan. 1, 2024, and March 10, 2026, Canadian courts and tribunals flagged at least 211 non-existent cases cited as real law across 111 decisions. Those decisions span 42 different courts and tribunals, from the Federal Court to provincial superior courts to administrative tribunals.

The trajectory is steep. In 2024, we identified seven decisions flagging fictitious citations. In 2025, that number rose to 80. In the first 10 weeks of 2026 alone, we have already identified 24, a pace that, if sustained, would put 2026 well past 2025’s total.

In 82 of the 111 decisions, the court found or presumed that AI tools generated the fictitious cases. In the remaining 29, courts did not conclusively establish the source.

Our methodology involved targeted keyword searches of publicly available decisions, using terms designed to capture judicial language indicating that a cited authority could not be verified, as well as decisions that explicitly discuss the use of AI in legal proceedings. The search covered decisions published between Jan. 1, 2024, and March 10, 2026, and we continue to run it weekly.
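To make that concrete, here is a minimal sketch, in Python, of how a search of this kind might work. The flag phrases and the sample decisions are illustrative assumptions, not our actual search terms or data, and any automated hit would still need human review before being counted.

    # A minimal sketch of the keyword-search approach described above.
    # The phrases and the sample corpus are illustrative, not the study's
    # actual search terms or data source.

    FLAG_PHRASES = [
        "does not exist",
        "could not be located",
        "unable to verify",
        "no such case",
        "artificial intelligence",
    ]

    def flags_fictitious_citation(decision_text: str) -> bool:
        """Return True if the decision contains language suggesting a cited
        authority could not be verified, or discusses AI use."""
        lowered = decision_text.lower()
        return any(phrase in lowered for phrase in FLAG_PHRASES)

    # Illustrative usage over a tiny corpus of decision texts.
    decisions = {
        "2025 FC 0000": "The Court was unable to verify the two cases cited...",
        "2025 ONSC 0000": "The motion is dismissed on the usual grounds...",
    }

    candidates = [cite for cite, text in decisions.items()
                  if flags_fictitious_citation(text)]
    print(candidates)  # ['2025 FC 0000'] -- a candidate for manual review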

The detection gap

Even at 211, the data almost certainly understates the problem.

In 54 of the 111 decisions, courts noted that a party had submitted fictitious cases but did not identify the specific citations in the decision. Each of those 54 decisions may involve several fictitious cases that never made it into the written reasons. The true number of fictitious citations underlying the 111 decisions is unknown and likely higher than 211.

Further, fictitious citations that slipped past the bench, past opposing counsel and past the parties themselves are invisible to this kind of research.

The data, in other words, reflects the floor. The ceiling is unknown.

Disclosure is not verification

In 87 of the 111 decisions, or 78.4 per cent, the fictitious cases appeared in the submissions of self-represented litigants. That number deserves context rather than blame.

Many self-represented litigants turn to AI tools for the same reason they represent themselves: they cannot afford a lawyer. When a generative AI tool produces a confident, well-formatted list of case citations, a person without legal training has no reason to suspect that those cases might not exist. For someone navigating an unfamiliar system without professional guidance, there is no obvious signal that anything has gone wrong.

This is why the current wave of institutional responses, though well-intentioned, is incomplete. Ontario, for example, now requires parties to certify in their facta that every cited case is “authentic.” The Federal Court requires disclosure of generative AI use in court filings. Both are reasonable steps. But neither solves the core problem.

A disclosure requirement tells the court that someone used AI; it does not tell the court whether the citations are real. A certification requirement asks the party to confirm authenticity, but a self-represented litigant who does not know that AI fabricates cases will sign it in good faith because they believe the cases exist.

And for judges and adjudicators on the receiving end, neither disclosure nor certification gives them a reliable way to confirm that the cases before them are real. The burden still falls on the bench to catch what the parties missed.

The problem, then, is not that self-represented litigants are careless. The problem is that AI tools produce fictitious law without warning users that the citations may not exist, and the legal system currently has no systematic checkpoint between that output and the courtroom.
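What would such a checkpoint look like? As a rough illustration only, the sketch below checks each citation in a draft submission against a database of real cases before filing. Everything in it is a stand-in: the lookup is stubbed with a tiny in-memory set, where a working tool would query an authoritative source such as a court registry or a public case-law database.

    def citation_exists(citation: str) -> bool:
        """Hypothetical lookup against an authoritative case-law index,
        stubbed here with a tiny in-memory set for illustration."""
        known_citations = {"2015 SCC 5", "2020 FC 112"}  # stand-in for a real index
        return citation in known_citations

    def verify_submission(citations: list[str]) -> list[str]:
        """Return the citations that could not be confirmed as real."""
        return [c for c in citations if not citation_exists(c)]

    for citation in verify_submission(["2015 SCC 5", "2024 ONSC 9999"]):
        print(f"WARNING: could not confirm {citation}; verify before filing.")

The point of the sketch is its placement, not its code: verification has to happen before a document reaches the court, not after.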

Looking ahead: Little room for optimism

The year-over-year trajectory leaves little room for optimism that this problem will correct itself. AI tools are becoming more accessible. The number of self-represented Canadians in courts and tribunals is growing. And as the data shows, fictitious citations have now surfaced across 42 different courts and tribunals, making this a nationwide concern rather than an isolated one.

The legal profession, the bench and policymakers now face a practical question: how do you build verification into a system that never anticipated fake cases being submitted as real law?

Tom Macintosh Zheng is a Toronto-based former commercial litigator. He is now building online tools to help Canadians access and understand our justice system, including CaseCheck, a legal citation verification tool. To see the full database, which is updated weekly, click here. Tom is this year’s recipient of the OBA Foundation Award.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the author’s firm, its clients, LexisNexis Canada, Law360 Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

Interested in writing for us? To learn more about how you can add your voice to Law360 Canada, contact Analysis Editor Richard Skinulis at Richard.Skinulis@lexisnexis.ca or call 437-828-6772.