OpenAI has been fined €15 million by the Italian Data Protection Authority for privacy violations involving its ChatGPT service. The regulator cited inadequate safeguards for minors, poor data protection practices, and transparency failures.
Findings from the Investigation
In an investigation opened after a data breach in March 2023, the Italian regulator, known as the Garante, found numerous infractions. OpenAI failed to notify the authority of the breach. The company also processed users' personal data to train ChatGPT without an adequate legal basis and violated the GDPR's transparency principle by not providing users with sufficient information about how their data was used. In addition, ineffective age verification meant children under thirteen could be exposed to potentially inappropriate chatbot responses.
Transparency and User Awareness
To address these shortcomings, OpenAI must run a six-month public awareness campaign across media platforms. The campaign is meant to educate users about ChatGPT's data collection practices, their rights under the GDPR, and how to object to the use of their data for AI training. The initiative aims to improve public understanding of generative AI and its ethical implications.
Broader Implications for AI Regulation in Europe
OpenAI relocated its European headquarters to Ireland during the investigation, making the Irish Data Protection Commission the lead supervisory authority for ongoing probes. Beyond the fine, the case signals increasing regulatory scrutiny of generative AI and its societal impact. The Garante's findings align with the European Data Protection Board's opinion on the use of personal data for AI development.
The fine and the mandated awareness campaign underscore growing regulatory pressure on AI in Europe, with GDPR compliance and transparency positioned as preconditions for ethical innovation.