Members of the European Parliament (MEPs) have recently submitted formal written questions to the European Commission regarding GDPR compliance. The most prominent and timely example, reported just days ago, involves a cross-party group of MEPs from the S&D, Greens, The Left, and Renew groups across 17 countries.
They submitted a written parliamentary question to the Commission over privacy and data protection concerns related to Meta's Ray-Ban Meta smart glasses, citing reports that the glasses are allegedly recording people in intimate or private situations without their knowledge or consent.
The question also points to EU users' data being sent to Kenya for human review by a Meta contractor, and asks what actions the Commission will take, in coordination with national data protection authorities, to ensure Meta's compliance with the EU's General Data Protection Regulation (GDPR).
The MEPs also raised broader concerns linking the case to the Commission's Digital Omnibus package, whose proposals some critics argue could weaken GDPR protections, for example by easing rules on the use of personal data for AI training. They requested further impact assessments of these proposed changes.
This was covered by Euractiv, which obtained the written question, highlighting how the allegations "raise broader questions regarding the Commission's digital policy initiatives." Parliamentary questions like this are a standard tool for MEPs to formally demand answers from the Commission on the enforcement of EU law, including the application of the GDPR to tech companies.
No other major cluster of recent MEP inquiries on a different GDPR topic has surfaced, so this Meta-related question stands as the clearest current example of MEPs pressing the Commission on GDPR compliance in a high-profile tech and privacy context.
The GDPR focuses on protecting personal data and individual rights, while the AI Act regulates AI systems' safety, transparency, and impacts on fundamental rights, with the GDPR taking precedence where the two overlap, e.g., on personal data handling. AI tools must adhere to core GDPR principles (Articles 5–6).
Lawful basis for processing: the most common route is legitimate interests (Article 6(1)(f)), established via the EDPB's three-step test of identifying the interest, showing the processing is necessary, and balancing it against individuals' rights. Consent is possible but often impractical for large-scale training; explicit consent is required for special categories of data.
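For teams that want to make the three-step test auditable, here is a minimal sketch of recording it as structured data. The LegitimateInterestsAssessment class and every field name are hypothetical illustrations, not an official EDPB template:

```python
from dataclasses import dataclass, field

@dataclass
class LegitimateInterestsAssessment:
    """Hypothetical record of the three-step test for Article 6(1)(f)."""
    purpose: str                 # step 1: identify the legitimate interest
    necessity_rationale: str     # step 2: why the processing is necessary
    balancing_notes: str         # step 3: weighing against individuals' rights
    safeguards: list[str] = field(default_factory=list)
    passes: bool = False         # outcome of the balancing test

# Illustrative entry for an AI training use case:
lia = LegitimateInterestsAssessment(
    purpose="Improve model quality for a customer-support assistant",
    necessity_rationale="No less intrusive dataset achieves comparable accuracy",
    balancing_notes="Data pseudonymized before training; low risk of re-identification",
    safeguards=["pseudonymization", "opt-out channel", "retention limits"],
    passes=True,
)
```

Keeping each assessment as a structured record rather than free text makes it easier to review, update, and produce on request under the accountability principle.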
AI models trained on personal data are not automatically anonymous; assess whether individuals can be re-identified (generative models are rarely fully anonymous). Data subject rights: enable access, rectification, erasure, and objection. These rights are challenging to honor for trained models, but feasible via unlearning techniques or output restrictions.
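As an illustration of the output-restriction route only, the sketch below suppresses mentions of people who have objected without retraining the model. The OBJECTORS set and redact_objectors function are hypothetical, and a real deployment would need far more robust matching; this is not a substitute for actual erasure:

```python
# Hypothetical registry of people who exercised their right to object.
OBJECTORS = {"Jane Doe", "John Smith"}

def redact_objectors(model_output: str) -> str:
    """Replace mentions of registered objectors before output is returned."""
    for name in OBJECTORS:
        model_output = model_output.replace(name, "[redacted]")
    return model_output

print(redact_objectors("Ticket assigned to Jane Doe."))
# -> Ticket assigned to [redacted].
```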
Prohibited practices: the AI Act bans certain uses, such as social scoring and real-time remote biometric identification in public spaces, and these often overlap with GDPR restrictions on certain processing. Leverage existing GDPR frameworks, such as DPIAs and privacy by design under Article 25, to meet AI Act obligations like data governance and bias mitigation.
Conduct thorough legitimate interests assessments (LIAs) for AI uses. Perform DPIAs early in development. Implement privacy by design and by default. Ensure transparency in privacy notices about AI. Build in human oversight and explainability for significant decisions. Verify training data sources and lawful basis.
Document everything for accountability. Appoint or empower a Data Protection Officer (DPO) for AI oversight. Non-compliance risks GDPR fines of up to €20 million or 4% of global annual turnover, whichever is higher, plus separate AI Act penalties. Many organizations treat August 2026 as a firm deadline despite potential delays via proposals like the Digital Omnibus.
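To make that exposure concrete, here is a minimal arithmetic sketch of the Article 83(5) ceiling; the turnover figure is hypothetical:

```python
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR Article 83(5) fine:
    EUR 20 million or 4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion annual turnover:
# 4% of 2e9 = EUR 80 million, which exceeds the EUR 20 million floor.
print(f"Maximum fine: EUR {max_gdpr_fine(2_000_000_000):,.0f}")
# -> Maximum fine: EUR 80,000,000
```

For any company with worldwide turnover above €500 million, the 4% branch dominates, which is why the ceiling scales with company size rather than sitting at a flat €20 million.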



