30.09.2024 09:54:00
Publication date
Google, X (formerly Twitter) and Meta (the owner of Instagram, Facebook and WhatsApp) are in the spotlight of European regulators, underscoring how seriously the EU takes personal data protection amid the growing role of AI in modern society.
In mid-September 2024, the EU’s leading data protection regulator, the Irish Data Protection Commission (DPC), announced the launch of an investigation into Google.
The main objective of the investigation was to determine how carefully the company protects the personal data of European users when developing its new AI system, Pathways Language Model 2 (PaLM 2).
The regulator noted that this investigation is part of a broader effort to control the processing of personal data as part of the development of AI in Europe.
X, owned by Elon Musk, has also come under close scrutiny from the DPC. In August 2024, X agreed to temporarily stop using EU user data to train its AI systems, including the Grok chatbot.
The regulator raised concerns that the company had started processing the data before users were given the opportunity to opt out. Since July 16, 2024, users have been able to disable the data processing through their privacy settings.
Meanwhile, legal proceedings over the legality of X’s data use are ongoing, and the platform has temporarily stopped using data collected between May and August 2024 until a final decision is reached.
X representatives called the regulator’s actions unjustified and said the company complies with privacy laws. The case nonetheless highlights the tensions surrounding the use of user data in AI systems.
It is worth noting that X and Google are not the first companies to face such investigations. In June 2024, Meta postponed the launch of its AI models in Europe after consulting with the Irish Data Protection Commission.
The main instrument for protecting personal data in the EU is the General Data Protection Regulation (GDPR), adopted in 2016 and in force since 2018. The GDPR obliges companies to obtain explicit consent from users to process their data and gives users the right to withdraw that consent at any time.
Companies that violate the regulation can face fines of up to 20 million euros or 4% of global annual turnover, whichever is higher, making it one of the strictest data protection regimes in the world.
Data privacy in the context of AI has drawn additional attention since the EU AI Act came into force in August 2024. The act introduces strict, risk-based rules for the use of AI systems, aimed at minimizing the risks associated with the technology.
The European approach to regulating AI thus stands out for its rigor. Alongside existing mechanisms such as the GDPR, the new AI Act poses serious compliance challenges for companies, requiring significant changes to their business processes.