Meta AI Glasses Under Fire: Workers Reviewing Intimate Footage
Meta is facing growing scrutiny over its Ray-Ban AI glasses. An investigation by the Swedish newspapers Svenska Dagbladet and Göteborgs-Posten reveals what happens to some of the recordings captured by the smart glasses.
- The investigation by Svenska Dagbladet and Göteborgs-Posten reports that recordings from Meta’s Ray-Ban AI glasses are reviewed by AI teams and contractor data workers, including staff based in Nairobi, Kenya.
- The audio and video clips are used for AI training and quality checks, and reports say they can include private conversations and highly personal everyday moments.
- Privacy advocates warn that bystanders and conversation partners are often unaware they may be recorded and have limited practical ways to prevent it.
According to the report, audio and video recordings from the devices are sent to external data workers employed by a Meta contractor in Nairobi, Kenya. Their job is to review clips so Meta can improve and train its artificial intelligence systems.
The troubling part: workers say the material often includes extremely private situations. According to the investigation, reviewers regularly encounter not only everyday scenes but also highly personal moments inside homes — including private conversations between partners, people using the bathroom, nudity, and even sexual situations.
Many of the people appearing in these recordings reportedly have no idea they were filmed.
Meta’s Ray-Ban glasses look like regular sunglasses. Users can take photos or record short videos and interact with built-in AI features using voice commands. Those features generate large volumes of visual and audio data that can be used to train AI systems.
Privacy advocates say this creates a major problem. In public spaces or during conversations, people often cannot tell whether someone nearby is recording. Even though the glasses include a small LED indicator meant to signal when recording is active, critics argue that many bystanders simply do not notice it.
Another key concern is how recordings are repurposed beyond their original context. Clips captured during ordinary daily life may end up in datasets used to train artificial intelligence, where they are viewed and analyzed by remote workers.
The revelations raise a broader question about the future of everyday privacy: What happens when cameras embedded in ordinary-looking glasses become commonplace?
For Meta, the issue could become particularly sensitive in Europe. The EU’s General Data Protection Regulation (GDPR) sets strict rules for handling personal data, and regulators may now examine whether the practices surrounding AI glasses comply with those standards.