A June 23 directive signed by King’s Bench Chief Justice Glenn Joyal points to concerns around the “reliability and accuracy” of AI-generated information being put forth to the court.
“With the still novel but rapid development of artificial intelligence, it is apparent that artificial intelligence might be used in court submissions,” it states. “While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence.”
(Traditional AI uses programming to perform specific tasks, while generative AI uses raw data to create new content.)
He was also quick to note that the AI issue is on the radar of other judges, including ones in Ontario and British Columbia.
Chief Justice Joyal points to the tone of the directive, which consists of nothing more than a single, spare paragraph. This, he said, reflects how nebulous and uncharted AI remains when it comes to its place in court matters.
The main objectives of the directive are, one, for the court to know that AI has been used and, two, to ensure the information generated is reliable and accurate.
But it also must be determined if it is even permissible, he said.
“The first question is, is the use of the [AI] in the context of what’s being done here permissible. That’s a normative question courts are going to have to grapple with. I can’t answer that question partly because of the technological fluidity of what we’re discussing, [and] partly because of the courts’ own limited imagination as to [how] AI might be used.”
To this, Chief Justice Joyal says he hopes there will eventually be “some consensus … across the country about what the normatively permissible uses are.”
Then there are all the other questions.
“Let’s assume that it is a permissible use; or a potentially permissible use; a normatively justifiable use. Is it a use that can be verified, relied upon, checked, and otherwise assessed by the court in a way that makes it consistent with the other types of … function courts have to play? Or is it verifiable on an ethical level? All of these things are questions that require … the disclosure of the use of AI.”
All of these unknowns are why the tone of the directive “is both cautionary and anticipatory,” he said.
“It’s trying to say there is so much we don’t know. But we do know that it’s coming — and in some respects it’s already come — and we have to start grappling with this. And so, in the context of all that uncertainty, what are the baby steps that allow us to be cautiously wise and proactive, but at the same time anticipating that there will probably be some uses that we will find permissible?”
He called it a “humbling endeavour” where courts “have to sort of project [themselves] into the future in a way that is trying to be open, but at the same time trying to be very cautious about something that we can’t completely know how it will evolve or develop.”
In terms of process, Chief Justice Joyal said that after the use of AI is disclosed, a judicial officer would weigh in as to whether it is permissible, verifiable and “reliable for its intended and court purposes.”
This, he added, should be put on the record in the name of transparency.
If you have any information, story ideas or news tips for Law360 Canada, please contact Terry Davidson at email@example.com or 905-415-5899.