Lucid dreaming has fascinated the public and the neuroscience community alike for decades, spawning references across pop culture, from films like “The Matrix” and “Inception,” to a Reddit community (r/LucidDreaming) with more than 500,000 members. Neuroscientific studies on the subject date back to the 1970s, according to research indexed in the National Library of Medicine, but interest has increased with the expansion of the cognitive neuroscience field.
Wollberg had his first lucid dream at age 12, and though he doesn’t remember exactly what he did, he called it “just about the most profound experience I’ve ever had.” In college, he started lucid dreaming twice a week and realized he wanted to create a way to use the practice to explore consciousness on a deeper level.
Meanwhile, co-founder Berry had a background in neurotech prototyping — specifically, feeding electroencephalogram, or EEG, data into a transformer neural network, an AI architecture pioneered by Google, to explore what people may be seeing in their minds. That’s the kind of work he had been doing with Grimes.
“Eric came to me and he told me what he was working on, and I didn’t think the technology was there at that time — we can’t induce dreams, let alone lucid ones, so how could this be possible?” Berry told CNBC. “The defining moment for me was when I realized that you’re not inducing the dream state itself — someone is already dreaming normally, which happens for most people multiple times a week. You’re simply activating the prefrontal cortex, and it turns lucid.”
Wollberg and Berry are counting on the results of the Donders Institute’s yearlong study to provide enough training data for their AI to work on the Halo device. The golden-ticket brain data they’re looking for via the study is gamma-frequency activity — the fastest measurable “band” of brain waves. Gamma activity occurs in states of deep focus and is a hallmark of an active prefrontal cortex, which is itself believed to be a defining characteristic of lucid dreams.
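The kind of gamma detection the study is hunting for can be illustrated with a few lines of signal processing. The sketch below is not Prophetic’s pipeline; it simply estimates, via an FFT, what fraction of one EEG channel’s power falls in a commonly cited gamma range. The band edges, sampling rate, and synthetic test signal are all assumptions chosen for illustration.

```python
import numpy as np

def gamma_band_power(signal, fs, low=30.0, high=100.0):
    """Estimate the fraction of a signal's power in the gamma band.

    Band edges (~30-100 Hz) vary across the literature; these are
    illustrative defaults, not Prophetic's actual analysis.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].sum() / spectrum.sum()

# Synthetic check: a strong 40 Hz oscillation plus a weak 10 Hz
# (alpha-range) oscillation should register as mostly gamma.
fs = 256  # a typical consumer-EEG sampling rate
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 40 * t) + 0.1 * np.sin(2 * np.pi * 10 * t)
print(round(gamma_band_power(eeg, fs), 2))  # → 0.99
```

A real detector would work on short sliding windows of multichannel EEG, but the core question — how much power sits in the gamma band right now — is the same.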
While today’s leading transformer models that underpin tools like OpenAI’s ChatGPT deal in text inputs and outputs, Berry is aiming to do something different with Prophetic. His plan is to use a convolutional neural network to decode brain-imaging data into “tokens,” then feed those into the transformer model in a form it can understand.
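The shape of that encoder-plus-transformer idea can be sketched at toy scale. Nothing below reflects Prophetic’s actual models: random convolutional filters stand in for a trained encoder, turning raw EEG samples into a sequence of embedding “tokens,” and a single self-attention step stands in for the transformer that would consume them.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_tokenize(eeg, kernels, stride):
    """Slide filters over raw EEG, emitting one embedding ("token")
    per window. `kernels` has shape (embed_dim, window_len); here the
    filters are random, a stand-in for a trained convolutional encoder."""
    embed_dim, window = kernels.shape
    n_tokens = (len(eeg) - window) // stride + 1
    tokens = np.empty((n_tokens, embed_dim))
    for i in range(n_tokens):
        tokens[i] = kernels @ eeg[i * stride : i * stride + window]
    return tokens

def self_attention(tokens):
    """Single-head self-attention: the basic operation a transformer
    applies to the token sequence."""
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens

eeg = rng.standard_normal(1024)          # one channel of raw samples
kernels = rng.standard_normal((16, 64))  # 16-dim tokens, 64-sample windows
tokens = conv_tokenize(eeg, kernels, stride=32)
out = self_attention(tokens)
print(tokens.shape, out.shape)  # → (31, 16) (31, 16)
```

The payoff of tokenizing is that once brain data looks like a sequence of embeddings, the same transformer machinery built for text can, in principle, be trained on it.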
“You can create this closed loop where the model is learning and figuring out what sort of sequences of brain states need to occur, what sort of sequences of neuro-stimulation need to occur, in order to maximize the activation of the prefrontal cortex,” Berry said.
Prophetic’s goal with the prototype is to use focused ultrasound to stimulate a user’s prefrontal cortex while they dream. Research suggests that focused ultrasound stimulation can improve working memory, and Berry likens that, in a way, to the common dream experience of not knowing how you got somewhere. It’s part of why he believes there’s a “really, really, really good shot that this works.”
“My conviction strongly comes from how it feels like a quantum leap … when you’re using this focused ultrasound,” Berry said. “It’s quite a bit better than everything else that’s been done.”