Meta has quietly terminated its relationship with outsourcing firm Sama after reports emerged that contractors reviewing footage from Ray-Ban smart glasses were exposed to highly sensitive recordings. The material reportedly included private conversations, financial information, and explicit scenes involving individuals who allegedly did not know they were being filmed.
The decision is drawing renewed scrutiny to the hidden labor systems powering the generative AI race, while reigniting broader concerns about privacy, surveillance, and worker treatment in the global AI supply chain.
The controversy underlines a growing tension at the center of the artificial intelligence industry: while technology firms are aggressively marketing AI-powered wearable devices as the next computing platform, the systems behind them still rely heavily on armies of low-paid human reviewers tasked with sorting through deeply personal content to improve machine-learning models.
Sama, a California-headquartered outsourcing company with major operations in Nairobi, disclosed the termination of 1,108 employees after Meta ended its contract. Some workers alleged they faced retaliation after raising concerns internally about the nature of the material they were required to examine.
The dispute traces back to investigations published earlier this year by Swedish media outlets, which cited workers who said they were asked to label footage recorded through Meta’s Ray-Ban smart glasses. According to those accounts, the recordings included private conversations, banking details, nudity, and intimate encounters involving people who often appeared unaware they were being captured.
The revelations strike at one of the most sensitive questions surrounding AI wearables: whether consumers and bystanders fully understand how much data these devices collect and how that information is ultimately used.
Meta maintains that users must explicitly enable the glasses’ AI features and that its terms of service disclose that some recordings may be used to improve AI systems. Yet privacy advocates argue that disclosure buried in user agreements does little to address concerns about uninformed third parties who may appear in recordings without consent.
The incident also exposes the uncomfortable reality that the rapid progress of generative AI still depends heavily on human labor. While AI companies market their systems as increasingly autonomous, the technology often relies on thousands of contractors who manually categorize images, review audio, and flag problematic content.
Industry analysts say the demand for this kind of data annotation work has surged as companies race to develop multimodal AI systems capable of understanding video, audio, and real-world environments. Smart glasses intensify that challenge because they capture unfiltered moments from everyday life rather than curated online datasets.
The fallout has revived scrutiny of Sama itself, which has repeatedly surfaced at the center of debates about AI labor ethics.
The company previously worked with OpenAI to help filter toxic material for ChatGPT before its public launch in 2022. Investigations at the time revealed that Kenya-based workers were paid less than $2 an hour to review graphic and disturbing content, with several workers reporting psychological trauma from the assignments.
Sama and Meta also faced allegations in previous years tied to anti-union practices and misleading job descriptions. Labor groups argued that workers recruited for content moderation and AI labeling tasks were often not fully informed about the emotional intensity or sensitivity of the material they would encounter.
The latest controversy could intensify regulatory pressure on Meta at a time when governments across Europe and parts of the United States are already examining AI transparency, biometric surveillance, and workplace protections tied to generative AI systems.
It also lands as Meta pushes aggressively into AI-powered hardware under CEO Mark Zuckerberg’s broader strategy of building what the company sees as the next major computing ecosystem beyond smartphones. The Ray-Ban smart glasses have become one of Meta’s most commercially successful hardware experiments in years, helped by improvements in AI assistants and real-time multimodal capabilities.
But the privacy concerns surrounding wearable cameras are not new. More than a decade ago, Google Glass faced public backlash over fears that users could secretly record others in public spaces. That resistance helped doom the product commercially.
Meta’s newer generation of smart glasses has avoided some of that stigma through more discreet designs and mainstream fashion branding. Even so, reports have increasingly emerged of users wearing the devices in classrooms, courtrooms, police encounters, and other sensitive environments.
Privacy researchers warn that as AI wearables become more capable, the volume of sensitive real-world data collected by technology firms could increase exponentially. Unlike smartphones, which users intentionally point toward subjects, smart glasses continuously capture the user’s surroundings from a first-person perspective, raising the likelihood of incidental surveillance.
The controversy may also deepen questions about how AI companies balance automation with accountability. While Meta blamed Sama for failing to meet standards, critics argue the dispute highlights systemic issues across the AI industry, where companies outsource difficult moderation and training tasks to contractors operating far from Silicon Valley headquarters.
As AI systems move beyond text generation into always-on wearable devices embedded in daily life, the debate over who controls the data, who reviews it, and who bears the consequences when safeguards fail is becoming increasingly difficult for the industry to avoid.