The View from Behind the Lens: Why I Loved My Smart Glasses—And Why I’m Now Terrified
For the past few months, my Meta AI smart glasses have been my favorite piece of tech. They changed how I interact with the world. I used them to capture a first-person view of my toddler’s first steps without fumbling for a phone. I used the AI assistant to translate menus while traveling and to identify plants in my garden. They felt like the future—a seamless, hands-free extension of my own eyes.
But a recent investigation by Svenska Dagbladet has completely shattered that illusion of privacy.
The report, titled “We See Everything,” pulls back the curtain on what happens to the data captured by these glasses. It turns out that while I thought I was just asking a digital assistant for help, a human being halfway across the world might have been watching the most intimate moments of my life.
“We See Everything”
The most jarring takeaway from the SvD report is the testimony from contract workers in Kenya. These workers, employed by a subcontractor called Sama, are tasked with “annotating” data to train Meta’s AI. They aren’t just looking at public street scenes or pictures of landmarks. They reported seeing:
- Intimate sexual encounters filmed by users who likely had no idea they were being recorded.
- People undressing or using the bathroom, often because the glasses were left on a bedside table or shelf while still active.
- Sensitive financial information, including clear shots of bank cards and personal documents.
One worker’s quote haunts me: “We see everything—from living rooms to naked bodies. Meta has that type of content in its databases.”
The Myth of “Local” Processing
When I bought these glasses, the marketing jargon led me to believe my data was “designed for privacy.” In many retail stores, staff reportedly tell customers that data stays “locally in the app.”
The SvD investigation proves this is a lie. For the AI features to work—the very features I have used—the data must be sent to Meta’s servers. Once it’s there, it becomes fair game for human review. Meta claims it uses AI to blur faces and protect identities, but the workers themselves say these safeguards frequently fail, especially in low light or “difficult” conditions.
My face, my family’s faces, and the entire inside of my home may have been effectively unmasked.
The Trap of the “Hey Meta” Command
I’ve realized that the “convenience” of the “Hey Meta” voice command is actually a massive privacy hole. The glasses are always listening for that wake word, and the investigation suggests that recordings are often triggered accidentally. This explains why workers are seeing footage from bedrooms and bathrooms—places where no sane person would intentionally hit “record.”
Why This is Different
We’ve all heard about “data collection” before, but this feels different. It’s not just a list of my interests or my GPS coordinates. It is a literal point-of-view video of my private life. The SvD article highlights a “transparency problem” that Meta seems unwilling to fix. We are told we are in control, but the fine print says otherwise: if you want the AI to work, you must allow your data to be processed, and that processing can include manual human review. There is no middle ground.
Final Thoughts
I wanted to believe that smart glasses were the next step in human evolution. Instead, they feel like the ultimate surveillance tool—one that I paid for and put on my own face. But the next time I go to put these on, I won’t just see a cool gadget. I’ll see the hidden workforce in Nairobi watching my every move. I’ll see the bank cards I accidentally glanced at, and the private moments with my family that were never meant for a database.
Meta says these glasses are “built for privacy.” After reading what the workers have to say, I’ve realized that in the eyes of Big Tech, privacy is a luxury we can no longer afford.
Clarkson Law Firm, a prominent California-based public interest firm, filed a class action lawsuit against Meta in the U.S. District Court for the Northern District of California, San Francisco Division. The false-advertising suit, brought on behalf of Meta AI Glasses users individually and on behalf of the class, alleges that Meta is deliberately deceiving consumers about the privacy of its AI Glasses while covertly exposing their most intimate moments and personal data.
As the complaint details, when users activate the glasses’ AI and recording features, their footage is automatically transmitted to offshore contractors hired by Meta – exposing intimate moments including people undressing, engaging in sexual activity, and sharing personal financial information, all without their knowledge or consent. Seven million pairs of these glasses were sold in 2025 alone, meaning millions of unsuspecting consumers have been feeding footage into a data pipeline they cannot see, access, or stop. Once that footage leaves the device, users have no control over how it is used, who reviews it, or where it ends up.
“You cannot market a product as ‘built for privacy’ and then funnel footage of people’s intimate moments to contract workers without their knowledge,” said Yana Hart, partner at Clarkson Law Firm. “Meta made privacy the centerpiece of its marketing campaign because it knew consumers would never buy these glasses if they knew the truth.”
I’m selling my entire collection of Meta glasses. Cheap. Let me know if you’re interested in having your own personal privacy invaded.