European Privacy Regulators Form Task Force to Address ChatGPT Privacy Concerns

The European Data Protection Board (EDPB) announced the establishment of a dedicated task force to address growing privacy concerns related to ChatGPT, the world’s most famous chatbot. The decision came during a meeting of the EDPB, a body comprising European data protection authorities responsible for coordinating and enforcing EU data protection rules.

The move follows the 31 March decision by Italy’s data protection authority, the Garante, to order a temporary suspension of ChatGPT’s processing of personal data belonging to Italian residents. The Italian privacy watchdog also launched an investigation into OpenAI, the provider of ChatGPT, for potential violations of the General Data Protection Regulation (GDPR).

The Garante’s decision has sparked international debate on the data protection implications of large language models like ChatGPT. Other European authorities, including France’s CNIL, have opened investigations into the artificial intelligence system in response to similar complaints.

The Spanish data protection agency requested that ChatGPT’s privacy concerns be placed on the EDPB’s meeting agenda to facilitate coordinated decision-making at the European level. As a result, the EDPB agreed to create a task force to foster cooperation and information exchange on possible enforcement actions, ensuring a harmonized application of data protection rules across Europe.

In addition to the task force’s establishment, the Garante has given OpenAI until 30 April to comply with Europe’s data protection rules by implementing a series of corrective measures. OpenAI is required to provide a privacy notice explaining how users’ data is processed to operate ChatGPT and to clarify the legal basis for processing personal data to train its algorithms.

OpenAI also faces the challenge of making tools available for individuals—whether users or not—who seek corrections or deletions of their personal data. Additionally, individuals must have the option to prevent their data from being fed into the AI model.

OpenAI has a deadline of 31 May to propose a plan to the Italian regulator for implementing age verification for its users by 30 September. The verification process is intended to prevent access by children under the age of 13 and by minors who lack parental consent.