0 sats \ 0 replies \ @Dkryptoenth 16 Apr \ on: Meta gets EU regulator nod to train AI with social media content AI
Big kudos to Meta, because not every social media platform could get that. The main reasons social-media platforms like X (formerly Twitter) and others have failed to secure EU approval for using user content to train AI models are rooted in the bloc's stringent data-protection and AI governance rules:
- Lack of a valid GDPR legal basis: Under the General Data Protection Regulation (GDPR), any processing of personal data (including "public" posts) must rest on a clear legal ground, e.g. informed user consent or a demonstrable legitimate interest. Ireland's Data Protection Commission (DPC) opened a formal investigation into X's Grok AI, finding that EU/EEA users' publicly accessible posts were being used without explicit consent or a sufficiently documented legitimate-interest assessment, and thus withheld approval for its AI training plans.
- Insufficient transparency and user-control mechanisms: GDPR also requires that data subjects be informed "in a concise, transparent, intelligible and easily accessible form" about how their data will be used, and be given a simple way to object. X initially provided neither clear in-app notifications nor an objection form. By contrast, Meta only won its EU nod after committing to notify users across Facebook and Instagram about the AI-training scope, to exclude private messages, and to honor opt-out requests via an easy objection form (see the filter sketch after this list).
- Non-compliance with the EU AI Act's data-governance requirements: The EU AI Act, set to become fully applicable in 2026, mandates that AI systems, especially those deemed "high-risk," be trained on datasets that are representative, bias-mitigated, and accompanied by thorough Data Protection Impact Assessments (DPIAs) and ongoing risk-management processes. Companies must document data provenance, perform regular risk assessments, and publish summaries of training data (see the provenance sketch after this list). Platforms that have not demonstrated such governance frameworks have been unable to satisfy regulators' demands.
- Failure to exclude sensitive or underage users' data: GDPR bars processing special-category data without explicit consent, tightly restricts the use of private communications, and imposes strict limits on collecting minors' personal data. Meta secured approval only after explicitly excluding private messages and all content from users under 18, whereas X and others had not initially built in these safeguards, contributing to regulators' refusal to grant permission.
In sum, EU regulators have set a high bar: clear legal grounds under GDPR, robust transparency and opt-out tools, stringent AI-Act data-governance processes, and strict exclusion of private and minors' data, all before green-lighting any training of AI models on social-media content. Platforms unable or unwilling to meet each of these requirements have so far failed to obtain the necessary approvals.