
Writing for The Verge, Victoria Song tested Halo Glass, a smart eyewear prototype that listens to, transcribes, and offers relevant information during real-time conversations. Despite the bold pitch, the experience fell short: the system was awkward, distracting, and ethically fraught.
Halo is built by two ex-Harvard students on Even Realities’ G1 smart glasses hardware, with the aim of making an AI “second memory” ubiquitously accessible. As Song notes, this isn’t just a wearable; it’s a new category of always-on recording AI that tries to anticipate your informational needs. Because it’s always listening, ethical questions loom large: California law requires consent from all parties to record a conversation, which complicates everyday use.
In practice, Halo tries to surface facts and context while you speak. But its triggers are clumsy: you must tilt your head up to activate the display. The prototype showed puzzling trivia (e.g., definitions no one needed), served the same facts simultaneously to both people in a conversation, and sometimes interrupted at the worst possible moments. The effect was more distracting than helpful. Even when it offered useful information, the constant anticipation of an unwanted interruption consumed mental bandwidth.
Song reflects on how the promise clashes with human authenticity. If you know you’re being recorded, can you truly be yourself? And how do you protect the privacy of the people around you, who never agreed to the device? The project surfaces urgent dilemmas about transparency, consent, and the social contract of AI wearables.
Overall, Halo Glass is a provocative glimpse into wearable AI’s future, a path full of both promise and peril. It wants to make you smarter in the background, but it risks turning you into a spectacle in the name of assistance.