Report calls for proactive regulation to address government AI accountability gap

Wednesday, April 28, 2021 @ 8:53 AM | By John Schofield


The warning is loud and clear: if governments at all levels in Canada don’t act now to regulate their own use of artificial intelligence, it could end up hurting vulnerable citizens, damaging public services and undermining public trust, according to a recent report by the Law Commission of Ontario.

In what the law commission calls an “extraordinary gap in public accountability,” the report notes that the federal government’s Directive on Automated Decision-Making, which took effect on April 1, 2019, and covers federal institutions, remains the only legislative or regulatory framework in Canada governing the use of artificial intelligence (AI) or automated decision-making (ADM). No Canadian province or public institution already deploying or developing significant AI or ADM systems has followed suit, says the April 14 report, titled Regulating AI: Critical Issues and Choices.

The federal directive was issued under the authority of s. 7 of the Financial Administration Act and under s. 4.4.2.4 of the Policy on Service and Digital.

Nye Thomas, executive director of the Law Commission of Ontario

“In Canada, these technologies haven’t been developed extensively,” said Nye Thomas, executive director of the Law Commission of Ontario, which bills itself as Ontario’s leading law reform agency.

“However,” he told The Lawyer’s Daily, “because we’ve seen their track record, the good and bad, in other jurisdictions, we can actually regulate this stuff proactively so as to maximize the benefits and minimize the risks.”

One of the well-documented dangers of unchecked AI is the risk of built-in systemic bias against diverse or vulnerable groups, said Thomas. He noted that another significant issue that has arisen in many jurisdictions, particularly in the United States, is the lack of public participation in the development and oversight of AI and ADM systems. And lawyers, he added, are especially concerned about something called the “black box problem.”

“It’s hard to figure out how these systems work,” said Thomas. “If you want to challenge the system, it’s very difficult to know and to have disclosure about how the system works, which is obviously a key issue for lawyers and the justice system.”

Rather than directives, the law commission report recommends that governments adopt regulations or legislation to ensure the ethical development and use of government AI and ADM systems, because those instruments set a higher legal standard and allow for public scrutiny through the legislative or regulatory process. But the federal directive, which was recently updated, offers a good model on which to build, said Thomas.

“As we point out in the report, it’s got strengths and it’s got some gaps,” he said. “There are gaps most notably in the criminal justice system. The federal directive doesn’t apply to AI tools in criminal justice, which are potentially very, very far reaching and potentially could have an extraordinary impact on rights.”

If governments want to promote AI, which has already provoked public concern and controversy, Thomas said they need to provide public and legal assurance that the systems can be trusted to be free of bias and backed by appropriate accountability and disclosure.

“So for good reasons, they’re looking at AI regulation,” he said. “Trustworthy AI is the direction we should be heading in. The question is what are the details to make sure that happens?”

From a legal standpoint especially, said Thomas, AI and ADM systems must be designed with ingrained due process, procedural fairness and legal accountability. He used the example of a government decision on benefits or a regulatory investigation aided by an AI system.

“If people want to challenge that,” he said, “then the regulation of these systems has to have appropriate due process and procedural fairness protections built in so that you can challenge these systems fairly. And that can get pretty technical.”

The report also recommends mandatory AI registers, which would require governments to post basic information online about the AI systems they have developed. The law commission is also calling for mandatory impact assessments, which would oblige governments to work through a list of questions about a system’s impact and publicly report the answers. Finally, he said, AI regulations should compel governments to disclose the data used by their AI systems.

“Some of the problems with these systems have been proven to be tied back to the data that’s used to train the system or calibrate the system,” he said, “so we recommend a strong disclosure of the data that’s used by these systems.”

Thomas said governments and publicly funded institutions typically use AI systems to determine government benefits or access to public services such as housing and education. They can also be used to inform regulatory investigations or to conduct fraud checks on a range of government programs. He said some law enforcement agencies in Canada are also beginning to use them for so-called predictive policing, a tool meant to help identify potential offenders that can involve mass surveillance and facial recognition technology.

“But because there isn’t a mandatory disclosure of these systems right now in Canada,” he added, “we actually don’t know how they’re being developed or who’s developing them or for what purpose. And that’s a public accountability issue.”

The Law Commission of Ontario is funded by The Law Foundation of Ontario, the Law Society of Ontario, Osgoode Hall Law School and York University.

If you have any information, story ideas or news tips for The Lawyer’s Daily please contact John Schofield at john.schofield@lexisnexis.ca or call 905-415-5891.