Tara Sarvnaz Raissi
Italy's data protection authority, the Garante, recently fined OpenAI for violations of the EU's General Data Protection Regulation (GDPR). Among other findings, the authority held that OpenAI did not provide a legal basis for processing personal data to train its AI model and failed to implement adequate safeguards to protect minors on its platform. OpenAI was also ordered to conduct a six-month media campaign across Italy to raise awareness about its data collection practices.
While this decision is based on GDPR compliance, its implications extend beyond the European Union. With new Canadian AI legislation and privacy reforms, such as the Digital Charter Implementation Act (Bill C-27), stalled due to Parliament’s prorogation, international regulations like the GDPR complement existing Canadian jurisprudence on responsible AI practices. Canadian companies can use global standards to gauge what is expected of them in the marketplace until further domestic legislation is enacted. Integrating these evolving criteria into a comprehensive AI governance program within an organization is a practical and beneficial interim solution.
The fine against OpenAI signals increased regulatory scrutiny of companies leveraging AI-based services. The Garante initially investigated OpenAI’s chatbot service following reports that it had suffered a data breach. A temporary ban was imposed on ChatGPT in Italy and was lifted only after the Garante was satisfied that OpenAI had implemented corrective measures.
The Garante simultaneously launched an investigation into OpenAI’s GDPR compliance. In a decision dated Nov. 2, 2024, it found that the company had violated several GDPR provisions, including breach notification protocol (Article 33), lawful basis for processing personal data (Articles 5(1)(a), 6), privacy notice requirements (Articles 12, 13), age-appropriate safeguards (Articles 24, 25) and compliance with a Supervisory Authority’s order (Article 83(5)(e)).
The Garante’s findings were in part informed by an opinion from the European Data Protection Board (EDPB) published on Dec. 18, 2024. This opinion outlined considerations to determine whether a legitimate interest was a lawful basis for processing personal data in AI development and deployment.
Under the GDPR, a legitimate interest must be lawful, clear and specific, and must relate to a business, commercial or broader societal goal. The opinion also underscored the significance of the three-part balancing test, which ensures that the legitimate interests of data controllers do not encroach on the rights of data subjects. This test involves identifying a lawful interest, ensuring that processing is strictly necessary to achieve that purpose, and weighing the interest against the rights and reasonable expectations of data subjects.
The EDPB provided criteria to assess individuals’ reasonable expectations regarding their data, including the availability of personal data, the relationship between individuals and data controllers, the nature of the service provided, the data source, potential model uses and user awareness about their data being online. The Garante incorporated these criteria into its evaluation of OpenAI’s practices before issuing its final ruling against the company.
This article examines the history of this decision and its implications for Canadian organizations using AI-based services.
Investigation following data breach
On March 24, 2023, OpenAI confirmed its ChatGPT service had suffered a data breach. A technical bug exposed some users’ queries and personal information to other users. As a result, the Garante launched an investigation into ChatGPT and identified the following violations of the GDPR:
- Inadequate transparency about the collection and processing of user data;
- Lack of a lawful basis for processing data to train algorithms powering the chatbot;
- Discrepancies between the information generated by ChatGPT and actual data, resulting in the processing of inaccurate personal data; and
- Failure to implement appropriate age verification measures, exposing minors to risk.
The authority imposed a temporary suspension of the service throughout its investigation. OpenAI agreed to revise its privacy notice, enhance user controls to allow users to opt out of data processing and correct inaccuracies in personal data. In addition, the company undertook to implement an age gate to exclude underage users. Satisfied with these measures, the Garante lifted its suspension.
Broader investigation into GDPR compliance
The Garante’s parallel investigation into OpenAI’s GDPR compliance revealed multiple violations. OpenAI had failed to notify the Italian data protection authority of the March 20, 2023, data breach involving Italian users, as required under Article 33 of the GDPR. The Garante also determined that OpenAI lacked a lawful basis for processing personal data to train ChatGPT before its Nov. 30, 2022, launch and had failed to establish one until March 30, 2023, even though the GDPR applied to EU users (Articles 5(2) and 6). The company’s privacy notice was available only in English and was overly broad (Articles 12 and 13). OpenAI also did not verify users’ ages or secure parental consent for minors aged 13 to 18 (Articles 24 and 25).
Finally, as a condition of lifting its temporary ban on March 30, 2023, the Garante required OpenAI to conduct a public information campaign by May 15, 2023. While OpenAI complied with this mandate and ran the campaign, it did so without the requisite prior approval from the Garante (Article 83(5)(e)). The authority concluded that these violations were sufficient grounds to levy a fine against the company.
The global reach of GDPR and final remarks
The GDPR informs industry standards for data privacy across the globe. It complements existing Canadian jurisprudence on AI by signalling what organizations can expect as minimum standards evolve.
For Canadian organizations using AI-based services, this decision offers insight into the effectiveness of robust governance programs to mitigate regulatory risk. Comprehensive assessments are at the core of AI governance. They ensure that AI systems are developed and deployed responsibly, in alignment with organizational values and legal obligations.
The Garante’s decision incorporates EDPB guidance that relying on legitimate interest as a lawful basis requires a balancing test to safeguard data subjects’ rights. Understanding data subjects’ reasonable expectations informs part of that test. Organizations that integrate these considerations into their assessments can document them as part of their mitigation strategy. Proactively adopting these practices within an AI governance framework better positions companies to navigate the evolving regulatory landscape of AI.
Tara Sarvnaz Raissi (CIPP/C) is senior legal counsel (Ontario, Western and Atlantic Canada) at Beneva and is based out of Toronto. She has written extensively about AI use in legal practice.
The opinions expressed are those of the author and do not reflect the views of the author’s firm, its clients, Law360 Canada, LexisNexis Canada or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.