Meta Platforms Faces Backlash Over AI Data Use Without Consent

Meta Platforms came under fire after 11 complaints were lodged against it over proposed changes that would use personal data to train its artificial intelligence (AI) models without explicit consent, a move that could violate European Union privacy rules, Reuters reported.

Advocacy group NOYB (None of Your Business) called on national privacy watchdogs to intervene immediately, voicing concern over Meta's updated privacy policy, set to take effect on June 26. The changes could allow Meta to draw on years of personal posts, private images, and online tracking data for its AI technology.

NOYB has a history of filing complaints against Meta and other major tech companies for alleged breaches of the EU’s General Data Protection Regulation (GDPR), which carries penalties of up to 4 percent of a company’s global revenue for violations.

Meta defended its plans, citing a legitimate interest in using customer data to train and improve its AI models, tools it may also share with third parties.

Max Schrems, founder of NOYB, pointed to a 2021 ruling by the Court of Justice of the European Union (CJEU) that emphasized users' rights to data protection, particularly in the context of advertising. Schrems criticized Meta's attempt to extend those arguments to AI technology, accusing the company of disregarding CJEU rulings and making opt-out procedures needlessly complicated.

Schrems emphasized that Meta should seek opt-in consent from users rather than requiring them to navigate complex opt-out processes.

NOYB has urged data protection authorities in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland, and Spain to take urgent action in response to Meta's impending policy changes.
