From feelings to music to words
According to Duke University futurist Nita Farahany, the technology to decode our brainwaves already exists, and some companies are likely already testing it. That is what she claimed during her recent “The Battle for Your Brain” presentation at the World Economic Forum in Davos.
“You may be surprised to learn it is a future that has already arrived”, Farahany said in her talk. “Artificial intelligence has enabled advances in decoding brain activity in ways we never before thought possible. What you think, what you feel, it’s all just data, data that in large patterns can be decoded using artificial intelligence”.
Sensors in wearables such as hats, headbands, earbuds, or tattoo-like patches placed behind the ear can pick up EEG signals, and AI-powered software can decode everything from emotional states to concentration levels to basic shapes, and even your pre-conscious reactions to numbers (which could, for instance, be used to steal your bank card’s PIN without your knowledge).
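How might software turn raw EEG into a “concentration level”? One classic heuristic from the EEG literature is to compare power in the beta band (associated with focus) against the alpha and theta bands (associated with relaxed or drowsy states). The sketch below is purely illustrative, with made-up parameters; it is not the proprietary algorithm of any wearable.

```python
import numpy as np
from numpy.fft import rfft, rfftfreq

def band_power(signal, fs, low, high):
    """Total spectral power of an EEG trace within a frequency band."""
    spectrum = np.abs(rfft(signal)) ** 2
    freqs = rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].sum()

def engagement_index(signal, fs=256):
    """Crude attention proxy: beta power relative to alpha + theta.

    A textbook heuristic, not any vendor's actual decoder."""
    theta = band_power(signal, fs, 4, 8)
    alpha = band_power(signal, fs, 8, 13)
    beta = band_power(signal, fs, 13, 30)
    return beta / (alpha + theta + 1e-12)
```

A 20 Hz (beta-dominated) test signal yields a far higher index than a 10 Hz (alpha-dominated) one, which is the kind of contrast a “daydreaming detector” would rely on.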
In one dystopian, yet very plausible, scenario, an employer could use AI to keep tabs on a worker, check that they are wearing the correct equipment, and detect whether they are focused on their main task, distracted by something unrelated, or simply daydreaming. “When you combine brainwave activity with other forms of surveillance technology”, Farahany said, “the power becomes quite precise”.
Additional technology built into a watch can pick up electromyography signals as the brain sends commands down your arm to your hand. According to Farahany, combining these technologies will let us control our own electronics with our thoughts.
“In the coming future, and I mean the near-term future, these devices will become the common way to interact with all other devices. It is an exciting and promising future, but also a scary one. Surveillance of the human brain can be powerful, helpful, and useful, transform the workplace, and make our lives better. It also has a dystopian possibility of being used to exploit and bring to the surface our most secret selves”.
Farahany addressed the Davos meeting to push for a commitment to cognitive liberty, including mental privacy and freedom of thought. She argued that the technology can be beneficial when a person uses it to better understand their own mental health or well-being, and that it can even serve as an early-warning indicator for potential medical problems. Also, as more people monitor their brainwaves, the data sets grow, allowing companies to extract more information from the same signals.
But that has a flip side. “More and more of what is in the brain”, she said, “will become transparent”.
Decoding the music you’re listening to
Another study, published in the journal Scientific Reports, used a combination of two non-invasive techniques to track a person’s brain activity while listening to music: electroencephalography (EEG), which records what is happening in the brain in real time, and functional magnetic resonance imaging (fMRI), which measures blood flow throughout the entire brain. The combined data were then fed into a deep-learning neural network model to reconstruct and identify the piece of music.
Since natural language and music both consist of complex acoustic signals, the model could be adapted to decode speech. This line of research ultimately aims to translate thought, which could one day be a significant help for people who struggle to communicate, such as those with locked-in syndrome.
Dr. Daly from Essex’s School of Computer Science and Electronic Engineering, who led the research, said: “One application is brain-computer interfacing (BCI), which provides a communication channel directly between the brain and a computer. Obviously, this is a long way off but eventually, we hope that if we can successfully decode language, we can use this to build communication aids, which is another important step towards the ultimate aim of BCI research and could, one day, provide a lifeline for people with severe communication disabilities”.
The study reused fMRI and EEG data from participants listening to a set of 36 pieces of simple piano music that varied in tempo, pitch, harmony, and rhythm; each piece was played for 40 seconds at a time. The 36 pieces had been selected as part of a previous project at the University of Reading. Using these combined data sets, the model identified the correct piece of music 71.8% of the time.
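The core task here is a 36-way classification: given a fused EEG+fMRI feature vector, pick which piece was playing. The paper used a deep network; the sketch below substitutes a much simpler nearest-prototype classifier on synthetic data, with made-up feature dimensions, just to make the pipeline concrete.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sizes; the study's real feature dimensions differ.
N_PIECES, FEAT_EEG, FEAT_FMRI = 36, 64, 200

def fuse(eeg, fmri):
    """Fuse the two modalities by simple concatenation (one common choice)."""
    return np.concatenate([eeg, fmri])

# Synthetic "recordings": each piece gets a characteristic neural pattern
# plus trial-to-trial noise, standing in for real EEG+fMRI responses.
prototypes = rng.normal(size=(N_PIECES, FEAT_EEG + FEAT_FMRI))
trials = prototypes + 0.3 * rng.normal(size=prototypes.shape)

def identify(trial):
    """Nearest-prototype classifier: a toy stand-in for the deep network."""
    dists = np.linalg.norm(prototypes - trial, axis=1)
    return int(np.argmin(dists))

accuracy = np.mean([identify(trials[i]) == i for i in range(N_PIECES)])
```

With clean synthetic data the toy classifier is near-perfect; the study’s 71.8% on real brain recordings reflects how much noisier genuine neural signals are.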
Guessing words from your brain
Researchers at Meta’s AI research division, meanwhile, decided to investigate whether they could decode complete sentences from someone’s neural activity without requiring dangerous brain surgery. In a paper posted on the pre-print server arXiv, they described an AI system that can predict which words someone is listening to from brain activity captured using non-invasive brain-computer interfaces.
“It’s obviously extremely invasive to put an electrode inside someone’s brain”, Jean Remi King, a research scientist at Facebook Artificial Intelligence Research (FAIR) Lab, explained. “So we wanted to try using noninvasive recordings of brain activity. And the goal was to build an AI system that can decode brain responses to spoken stories”.
The researchers made use of four pre-existing datasets recording the brain activity of 169 individuals as they listened to spoken-word recordings. Each volunteer was recorded with magnetoencephalography or electroencephalography (MEG and EEG), which use different types of sensors to pick up the brain’s activity from outside the skull.
In their approach, the brain and audio data were divided into three-second segments and fed into a neural network, which then searched for patterns that could link the two. After training the AI for many hours on this data, they tested it on held-out data.
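The matching step can be pictured as two encoders projecting brain and audio segments into a shared embedding space, where the correct pairing should be the most similar. The sketch below fakes the “already trained” encoders by making the brain embedding a noisy copy of the audio embedding; the dimensions and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding size; the real system learns deep encoders.
D = 128
N_SEGMENTS = 10  # three-second audio clips and their brain-recording windows

# Pretend training already aligned the two modalities: the brain
# embedding is a noisy view of the matching audio embedding.
audio = rng.normal(size=(N_SEGMENTS, D))
brain = audio + 0.1 * rng.normal(size=audio.shape)

def match(brain_seg, audio_bank):
    """Rank all audio segments by cosine similarity to one brain segment."""
    a = audio_bank / np.linalg.norm(audio_bank, axis=1, keepdims=True)
    b = brain_seg / np.linalg.norm(brain_seg)
    return np.argsort(a @ b)[::-1]  # best match first
```

Calling `match(brain[3], audio)` returns a ranking of all ten clips; with these well-aligned synthetic embeddings the true clip comes out on top, whereas real MEG/EEG noise is what drives the accuracies reported below.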
The system performed best on one of the MEG datasets: the correct word appeared 72.5% of the time among the 10 words the model judged most likely to match the brain-wave segment.
That may not sound impressive, but keep in mind that the word was chosen from a vocabulary of 793 words. On the other MEG dataset the system scored 67.2%, and it performed worse on the EEG datasets, with top-10 accuracies of just 31.4% and 19.1%.
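“Top-10 accuracy” simply counts how often the true word lands among the model’s 10 highest-scoring candidates out of the full 793-word vocabulary. A minimal implementation of the metric (not Meta’s code, just the standard definition):

```python
import numpy as np

def top_k_accuracy(scores, targets, k=10):
    """Fraction of trials where the true word index is among the k
    highest-scoring candidates.

    scores  : (n_trials, vocab_size) array of model scores
    targets : true word index for each trial
    """
    topk = np.argsort(scores, axis=1)[:, -k:]  # k best candidates per trial
    hits = [t in row for t, row in zip(targets, topk)]
    return float(np.mean(hits))
```

Note that random guessing over 793 words would put the right word in the top 10 only about 1.3% of the time, so even the weaker EEG scores are far above chance.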
Although this technology is clearly still far from practical use, it represents an important advance on a hard problem. Decoding brain activity this way is difficult because non-invasive BCIs have a much worse signal-to-noise ratio than implanted electrodes. If it succeeds, however, this approach could lead to a technology that is far more widely usable.
But not everyone is convinced the problem can be solved. Thomas Knopfel of Imperial College London said that using these non-invasive methods to listen to someone’s thoughts was like “trying to stream an HD movie over old-fashioned analog telephone modems”, and he questioned whether they will ever be accurate enough for practical use.
The Meta team’s study is still in its very early phases, so there is plenty of room for improvement. And given the commercial prospects that await, anyone who can master non-invasive brain scanning will have plenty of incentive to try.
Our mind is the most private place we have. If someone can know what we think, we can no longer be truly free, and we could be completely controlled. Reading what is in someone’s mind could be useful for treating certain diseases and could help people who cannot communicate to improve their lives; but if this technology were used to monitor our actions, it could lead to a society more dystopian than anything AI has produced so far.