Autonomy through Cyberjustice Technologies (ACT), the latest brainchild of the Cyberjustice Laboratory, is the largest international multidisciplinary research initiative of its kind. It seeks to leverage AI to increase access to justice while providing justice stakeholders with a roadmap for developing technology better adapted to the justice system.

Karim Benyekhlef, Cyberjustice Laboratory
A class action launched in 2017 in Michigan reveals the perils of an AI-powered software program gone awry, added Benyekhlef. The suit was filed after more than 40,000 unemployment claimants were accused of benefits fraud based on the results of an AI system, and had nearly US$100 million seized through tax refunds or garnished wages, according to the plaintiffs. The state's auditor general found that 93 per cent of the charges were spurious. The consequences go beyond the class action: poorly conceived or implemented AI can erode confidence in the use of AI in the legal system, said Benyekhlef.
“Artificial intelligence that could be useful would be set aside because there would be little or no social acceptability,” remarked Benyekhlef. “These tools have to be reviewed before implementation, improved, and, as they are perfected, given built-in protections to ensure respect for fundamental rights. That’s why we’re working with our partners, both in government and the private sector, to build awareness of these kinds of problems.”
That’s where ACT comes into play. Over the next five years, ACT will take an inventory of existing technology and canvass situations where AI is used in the justice system, evaluate its impact through case studies, develop a body of best practices, and establish a governance framework to ensure the fair use of AI in the justice system.
The burgeoning venture has brought together 50 researchers from around the world and 42 partners from business, industry, institutions, and community and social organizations. The federal and Quebec justice ministries, the Courts Administration Service, Community Legal Education Ontario and the Quebec bar have all joined forces, as have giants like Microsoft and a handful of law firms. The work conducted by ACT researchers and partners is intended to provide a better understanding of the practical, ethical and sociolegal issues arising from the integration of AI tools within the judicial system. It will also devote attention to the design and simulation of technological tools for conflict prevention and resolution.
All told, there are 16 research projects that will examine a host of issues, many of which take direct aim at AI tools legal practitioners may be using or are considering using. One project will analyze algorithmic tools that claim to predict the probable outcome of a trial. Another will assemble an inventory of common practices around AI tools used or under development by legal authorities such as police as well as administrative and tax authorities. Yet another will analyze existing traditional and technology-based mechanisms for debt recovery in France, Belgium and Quebec and then design a smart contract, using blockchain technology, to make the procedure more efficient. There is also a project that will examine tools being developed to assist self-represented litigants and determine their effectiveness, relevance and satisfaction rates.
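To give a sense of what the predictive tools under study involve, the following is a minimal sketch in Python, not drawn from any ACT project, of how an outcome predictor is typically built: a classifier trained on features extracted from past decisions. The features, sample data and model choice are all illustrative assumptions.

# A minimal sketch, not an actual ACT tool, of the kind of predictive
# system the project will study. All features and data are hypothetical;
# real tools are trained on thousands of past decisions.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical features per past case:
# [claim amount (k$), years in dispute, prior rulings for plaintiff, documents filed]
past_cases = [
    [50, 1, 0, 12],
    [200, 3, 1, 40],
    [15, 0, 0, 5],
    [120, 2, 1, 30],
]
outcomes = [0, 1, 0, 1]  # 1 = plaintiff prevailed, 0 = did not

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(past_cases, outcomes)

# The tool outputs a probability, not a judgment, and it inherits
# whatever biases the historical data contains.
new_case = [[80, 2, 1, 20]]
print(model.predict_proba(new_case)[0][1])  # estimated P(plaintiff prevails)

The essential point for practitioners is in the last lines: such a tool produces an estimated probability from historical patterns, and its reliability depends entirely on the quality and representativeness of the past cases it was trained on.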
“We wanted to put in place projects that are closely aligned with market developments,” noted Gélinas, a law professor at McGill University who heads the Private Justice and the Rule of Law Research Group. “Things are moving rapidly, and rather than stay in the ivory tower, we really want to work with stakeholders from the private and public sector who know what’s taking place on the ground.”

Amy Salyzyn, University of Ottawa
Data is the fuel that drives AI. “It’s the essential ingredient,” said Benyekhlef. “If you don’t have data, you cannot develop algorithms or predictive AI tools.” The companies that have made the most advances in AI happen to be technological behemoths like Amazon, Facebook and Google, which are sitting on troves of data generated by their own applications and tools, said Benyekhlef. And since many AI technologies are proprietary, “understanding what they are and who is using them can be a challenge,” remarked Salyzyn.
Developing public policies around the use of AI will also be a challenge, according to Gélinas. Reconciling the development of AI technology with legal reasoning, the traditional conception of court decisions, and the use and role of judges is among “the biggest challenges of research,” said Gélinas. Perhaps more so because, to date, “we still don’t completely understand how AI tools produce results,” noted Gélinas. “We don’t know exactly how this tool works, and therefore the management of AI transparency and its link with legal reasoning is a great challenge.”
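The opacity Gélinas describes can be illustrated with a small hedged sketch, reusing the hypothetical case features from the earlier example: even with full access to a trained model, the closest thing to an explanation it natively offers is a ranking of which inputs mattered, not why they mattered or whether they should matter, which is a long way from legal reasoning.

# Sketch of the transparency problem Gélinas describes: even with full
# access to a trained model, its "reasoning" is just learned weights.
# The setup is illustrative, reusing the hypothetical features above.
from sklearn.ensemble import RandomForestClassifier

past_cases = [[50, 1, 0, 12], [200, 3, 1, 40], [15, 0, 0, 5], [120, 2, 1, 30]]
outcomes = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_cases, outcomes)

# The most "explanation" the model offers is a ranking of which inputs
# influenced its predictions, with no account of why, and no guarantee
# those influences would be legally relevant or permissible.
feature_names = ["claim_amount", "years_in_dispute", "prior_rulings", "documents_filed"]
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")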