Commons committee recommends ‘national pause’ on use of facial recognition technology

By Amanda Jerome

Law360 Canada (October 11, 2022, 10:50 AM EDT) -- A “national pause” on the use of facial recognition technology (FRT), “particularly for federal police services,” was recommended by the House of Commons Standing Committee on Access to Information, Privacy and Ethics, in a recently tabled report on the impact of FRT and the “growing power of artificial intelligence [AI].”

The report, tabled in the House of Commons on Oct. 4, examined the “use of facial recognition technology and algorithmic tools in the private and public sector, including its use by police forces, border authorities and other federal agencies,” a news release explained.

According to the committee’s report, its examination “confirmed that Canada’s current legislative framework does not adequately regulate FRT and AI. Without an appropriate framework, FRT and other AI tools could cause irreparable harm to some individuals.”

The committee was of the view that, “when FRT or other AI technology is used, they must be used responsibly, within a robust legislative framework that protects Canadians’ privacy rights and civil liberties” and “since such a legislative framework does not exist at this time, a national pause should be imposed on the use of FRT, particularly with respect to police services.”

The report issued 19 recommendations to “further strengthen Canadians’ privacy and ensure that there is an appropriate legal framework for the use of facial recognition technology and artificial intelligence in Canada,” the release explained.

The report’s conclusion “strongly” encouraged the government of Canada to “implement” these recommendations “as quickly as possible.”

In addition to a national pause, the recommendations propose amendments to the Privacy Act and the Personal Information Protection and Electronic Documents Act, as well as updates to the Canadian Human Rights Act.

The committee held nine “public meetings as part of this study and heard from 33 witnesses, including the privacy commissioner of Canada, representatives of the Royal Canadian Mounted Police, and other experts and stakeholders,” the press release noted.

According to the report, Carole Piovesan, a managing partner at INQ Law, appeared before the committee in March 2022 and noted that “FRT uses highly sensitive biometric facial data to identify and verify an individual,” while Brenda McPhail, director of the privacy, technology and surveillance program at the Canadian Civil Liberties Association (CCLA), said “FRT can be thought of as facial fingerprinting.”

Piovesan, the report noted, said that “while discussions about FRT tend to focus on security and surveillance, various other sectors are using this technology, including retail and e-commerce, telecommunications and information technology, and health care.” This, she said, “presents a growing economic opportunity for developers and users.”

The report emphasized that the “biggest concern with the use of FRT is the potential for misidentification.”

Cynthia Khoo, Georgetown Law School

Cynthia Khoo, a research fellow with the Citizen Lab at the University of Toronto and a senior associate at the Center on Privacy & Technology at Georgetown Law in Washington, D.C., appeared before the committee in March 2022 and advised that “researchers have found that FRT is up to 100 times more likely to misidentify Black and Asian individuals,” misidentifying “more than one in three darker-skinned women,” while being 99 per cent accurate for white men.

“According to Ms. Khoo,” the report explained, “one of the key problems with law enforcement use of FRT is the lack of transparency. The public often learns about the technology’s use through media, leaked documents and freedom of information requests.”

In an interview with The Lawyer’s Daily, Khoo said the most impactful recommendation in the report is the one calling for a “federal moratorium on the use of facial recognition technology by (federal) policing services and Canadian industries unless implemented in confirmed consultation with the Office of the Privacy Commissioner or through judicial authorization.”

Khoo stressed that this recommendation was made in a Citizen Lab report she co-authored with Kate Robertson and Yolanda Song, titled ‘To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada.’

She believes strongly in this recommendation as “it would be irresponsible” to allow FRT use to go on “without any sort of regulatory framework, without any sort of national research effort from a legal and equality perspective on what the harms are.”

Like Khoo, Piovesan also “raised concerns about accuracy and bias in system outputs, unlawful and indiscriminate surveillance and black box technology that is inaccessible to lawmakers, restricting freedom and putting at risk fundamental values as enshrined in the Canadian Charter of Rights and Freedoms.”

Before the committee, Tim McSorley, the national co-ordinator of the International Civil Liberties Monitoring Group (ICLMG), raised “three main concerns about FRT: biased and inaccurate algorithms reinforce systemic racism and racial profiling; facial recognition allows for indiscriminate and warrantless mass surveillance; and the lack of regulation, transparency and accountability from law enforcement and intelligence agencies in Canada.”

Concerns over FRT and AI have existed for some time. According to the report, a “group of 77 privacy, human rights and civil liberties advocates, including the ICLMG, wrote a letter to the Minister of Public Safety calling on him to ban all use of facial recognition surveillance by federal law enforcement and intelligence agencies” back in 2020.

McSorley, the report noted, stressed that “CSIS has refused to confirm whether or not it uses FRT in its work, stating that it has no obligation to do so.”

Also in 2020, the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) “released a report about FR and its use at the border.”

Tamir Israel, a lawyer with the CIPPIC, “outlined the report’s key findings” to the committee and noted that, “[F]acial recognition is being adopted at the border without due consideration for the harms it would cause, without much external oversight and often without regard to existing policies, such as the Treasury Board's policy on artificial intelligence, where you are supposed to bring in external guidance when adopting intrusive technologies like that.”

Dr. Petra Molnar, with the Refugee Law Lab, also shared her concerns with the committee about “the use of FRT by border authorities for the purpose of implementing biometric mass surveillance in migration and border management.”

“In her view, to fully understand the impacts of various migration management and border technologies (e.g., AI lie detectors, biometric mass surveillance and various automated decision-making tools), it is important to consider the broader ecosystem in which these technologies develop,” the report explained, noting that this is an “ecosystem that is increasingly replete with the criminalization of migration, anti-migrant sentiments, and border practices leading to thousands of deaths, not only in Europe but also at the U.S.-Mexico and U.S.-Canada borders.”

The report also highlighted the use of FRT in public spaces, with Khoo advising that “the use of FRT in public violates privacy preserved through anonymity in daily life.”

These violations, Khoo noted in the report, would “likely induce chilling effects on freedom of expression such as public protests about injustice” and “promises to exacerbate gender-based violence and abuse by facilitating the stalking of women.”

With regard to private industry, Dr. Elizabeth Anne Watkins, a postdoctoral research associate at Princeton University, expressed concerns about the use of “facial verification on workers.”

Watkins, the report noted, said that “[f]acial verification is increasingly being used in work contexts, in particular gig work or precarious labour.”

“According to Dr. Watkins, ‘these systems are often in place to guarantee worker privacy, to prevent fraud and to protect security’ but there need to be alternatives in place to give workers other options,” the report explained, noting that Watkins believes that “workers should be consulted to better understand what kinds of technology they would prefer to comply with and to provide them with alternatives so that they can opt out of technologies yet still access their means of livelihood.”

With regard to police procurement of technology, the report highlighted Khoo’s advice that “strict legal safeguards must be in place to ensure that police reliance on private sector companies does not create a way to go around people’s rights to liberty and protection from unreasonable search and seizure.”

As an example, the report noted that “software from companies like Clearview AI, Amazon Rekognition and NEC Corporation” is “typically protected by trade secret laws and procured on the basis of behind-the-scenes lobbying.”

Khoo said “this circumstance results in secretive public-private surveillance partnerships that strip defendants of their due process rights and subject the public to inscrutable layers of mass surveillance,” the report explained, noting that in order to address this situation, Khoo recommended that “any commercial technology vendor that collects personal data for law enforcement should be contractually bound or otherwise held to standards of privacy and disclosure.”

Given the “risks of FRT,” the report emphasized that “most stakeholders recommended a moratorium, particularly in law enforcement, until an appropriate regulatory framework is in place and more research and consultation on the use of the technology and its impacts have been done.”

As for regulating FRT and AI, most witnesses before the committee “agreed that, while the current legislative framework provides some protections, it is insufficient.”

As an example, Daniel Therrien, the former privacy commissioner of Canada, noted there is a “patchwork of laws that govern FR: the Canadian Charter of Rights and Freedoms, the common law and certain other laws, including privacy legislation.”

“The problem,” Therrien noted to the committee, is that this “patchwork of laws can be used in many ways” and the “current rules are too vague to give the necessary level of trust that citizens should have in the collection of information by the public and private sector.”

The committee acknowledged that “Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, introduced in the House of Commons in June 2022, if passed in its current form, could address certain gaps in the current privacy legislative framework that may apply to FRT and AI.”

“However, since the bill has not yet been passed, the committee is making its recommendations based on the legislative framework in place,” the report explained.

According to the report, to “shape a relatively comprehensive regulatory framework for FRT that mitigates the threat this technology poses and takes advantage of its real beneficial possibilities,” Piovesan advised Canada “should consider four AI principles that align with the Organisation for Economic Co-operation and Development (OECD) AI Principles and leading international guidance on responsible AI.”

Those four principles are “technical robustness, accountability, lawfulness and fairness,” the report added.

Khoo told The Lawyer’s Daily that at “first blush” the recommendations seem “significantly wide-ranging” as they impact the public sector.

Khoo also highlighted the recommendations focusing on “public transparency,” noting that the recommendations “aren’t just limited to facial recognition technologies,” as some touch on “algorithmic policing technologies in general.”

The other recommendation Khoo was excited about calls for “advance public notice” around FRT use, which she believes is key to implementing a “public comment period.” She stressed that “independent oversight, as well as consultation with impacted communities, has to be the North Star of all of these policies and regulations.”

With regard to lawyers, Khoo suggested that defence counsel and judges presiding over criminal law cases “have to pay attention to the presence of these technologies when it comes to evidence that relies on the use of facial recognition technology or other algorithmic policing technologies.”

She stressed that “predictive technologies are novel and emerging,” so they “should be considered novel sciences under the legal test for whether you can introduce ‘novel’ scientific evidence into the record.”

For tech or innovation lawyers, she noted that these recommendations are a signal that “people are catching on.”

“They're no longer going to just accept new technologies as either automatically state of the art, or automatically beneficial, or automatically inevitable, which I think is a narrative that often happens,” she explained, stressing that it matters if “your technology, or your product, or service is harming people, even if you think it’s innovative and engaging.”

“Perpetuating inequity across multiple spheres of society; these things do matter. So maybe that’s something that you want to take into account up front, rather than have the government or public interest researchers or NGOs try and clean up your mess five to 10 years after the fact,” she added.

The committee’s report was the result of a motion, adopted in 2021, to study the impacts of FRT and AI after a “joint investigation by the Office of the Privacy Commissioner of Canada (OPC) and its provincial counterparts in Alberta, British Columbia and Quebec in a case involving Clearview AI.”

According to the report, the OPC’s joint investigation concluded that “Clearview AI had failed to comply with the Personal Information Protection and Electronic Documents Act (PIPEDA) by engaging in the mass collection of images without consent and for inappropriate purposes.”

“The OPC also conducted an investigation into the use of Clearview AI technology by the Royal Canadian Mounted Police (RCMP),” releasing a special report to Parliament in June 2021, finding that the RCMP “failed to comply with the Privacy Act by collecting personal information from a third party (Clearview AI) who illegally collected it.”

“On behalf of the committee, I want to thank all the witnesses who appeared before the committee and shared their knowledge, time and expertise,” said committee chair Pat Kelly.

“The report’s recommendations are supported by members of all four parties on the committee and I hope that the government will respond quickly and decisively to implement them,” he added in a statement.

If you have any information, story ideas or news tips for The Lawyer’s Daily please contact Amanda Jerome at Amanda.Jerome@lexisnexis.ca or 416-524-2152.