Elon Musk’s social media platform X (formerly Twitter) is at the center of a privacy storm that has sparked controversy across the European Union. The company has been hit with nine separate privacy complaints after allegedly using EU users’ data to train its AI chatbot, Grok, without obtaining proper consent.
The issue came to light in late July, when an observant social media user noticed a setting indicating that X had begun processing EU users’ post data for AI training purposes. The revelation caught the attention of privacy advocates and regulators alike, with the Irish Data Protection Commission (DPC) expressing “surprise” at the undisclosed data usage.
According to reports, X had been processing Europeans’ data for AI model training between May 7 and August 1, 2024, without notifying users or seeking their explicit consent. This action appears to violate the EU’s General Data Protection Regulation (GDPR), which requires a valid legal basis for all uses of personal data.
Max Schrems, chairman of the privacy rights nonprofit noyb, which is supporting the complaints, stated, “We have seen countless instances of inefficient and partial enforcement by the DPC in the past years. We want to ensure that Twitter fully complies with EU law, which — at a bare minimum — requires to ask users for consent in this case.”
The complaints, filed with data protection authorities in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland, and Spain, argue that X lacks a valid basis for using the data of approximately 60 million EU users without their consent. While X appears to be relying on “legitimate interest” as its legal justification, privacy experts contend that explicit user consent is necessary for such data processing.
The incident parallels a similar situation in June, when Meta paused its plans to process user data for AI training after facing GDPR complaints. X, by contrast, quietly implemented its data processing without notifying users, allowing the practice to go unnoticed for several weeks.
The controversy raises important questions about the balance between technological advancement and user privacy. As AI development accelerates, companies are increasingly looking to vast datasets to train their models. However, the X case highlights the need for transparency and user consent in these processes, especially when dealing with personal data.
As the complaints make their way through the regulatory process, X could face significant penalties if found in violation of the GDPR, with potential fines of up to 4% of global annual turnover. The outcome of this case may set a precedent for how social media platforms and tech companies approach AI training using user data in the future.
For now, X users in the EU have gained the ability to opt out of the AI training data processing via a setting added to the web version of the platform. However, questions remain about the data collected before this option was made available and whether users can have their “already ingested data” deleted.
As the story develops, it underscores the ongoing challenge of balancing technological innovation with individual privacy rights. The resolution of these complaints against X could have far-reaching implications for AI development and data protection regulation worldwide.