It could one day help people who are unable to communicate

Artificial intelligence can help in many fields, including neuroscience: a recent study found that, thanks to this technology, brain activity can be decoded into words and sentences.

Although it’s not perfect, given just a few seconds of data on a person’s brain activity, the A.I. can estimate what they have heard. In a preliminary analysis, researchers found that it listed the right answer among its top ten alternatives 73% of the time.

According to Giovanni Di Liberto, a computer scientist at Trinity College Dublin, the A.I.’s performance is far above what many people thought possible.

One day, people who are unable to communicate verbally or through body language may benefit from this A.I., which was developed by Meta, Facebook’s parent company.

Most of the technologies currently available to these people require risky brain surgery to implant electrodes. But Jean-Rémi King, a neuroscientist and Meta A.I. researcher at the École Normale Supérieure in Paris, thinks this novel technology could offer a viable way to help people with communication deficits without invasive procedures.

King and his colleagues built a computational program to recognize words and phrases from 56,000 hours of audio recordings in 53 languages. This program, commonly known as a language model, learns to recognize linguistic elements at both fine and coarse levels, from letters and syllables up to words and sentences.
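As a concrete illustration: the numbers above match Meta’s publicly released wav2vec 2.0 “XLSR-53” checkpoint, which was pretrained on 56,000 hours of speech spanning 53 languages. Below is a minimal sketch of pulling multi-level features from it with torchaudio; whether the study used this exact checkpoint is an assumption, and speech_clip.wav is a hypothetical file name.

```python
# Sketch: multi-level speech features from a self-supervised model.
# XLSR-53 is a public wav2vec 2.0 checkpoint trained on 56,000 hours
# of audio in 53 languages (the figures quoted in the article);
# it is an assumption, not confirmed, that the study used this one.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_XLSR53
model = bundle.get_model().eval()

waveform, sr = torchaudio.load("speech_clip.wav")  # hypothetical clip
waveform = waveform.mean(dim=0, keepdim=True)      # downmix to mono
waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.no_grad():
    # One tensor per transformer layer: early layers track short,
    # sound-like units; deeper layers track longer, word- and
    # phrase-like structure.
    features, _ = model.extract_features(waveform)

for i, layer in enumerate(features):
    print(f"layer {i}: {tuple(layer.shape)}")  # (batch, time, channels)
```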

Using magnetoencephalography, or MEG, the researchers found that the correct response was among the A.I.’s top ten choices up to 73% of the time.
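For the curious, “top ten choices” corresponds to the standard top-10 accuracy metric: a trial counts as a hit if the segment actually heard appears anywhere among the model’s ten best-ranked candidates. A small illustrative sketch (the tensor shapes and function name are mine, not the paper’s):

```python
import torch

def top10_accuracy(similarities: torch.Tensor, true_idx: torch.Tensor) -> float:
    # similarities: (n_trials, n_candidates) score matrix;
    # true_idx: (n_trials,) index of the segment actually heard.
    top10 = similarities.topk(10, dim=-1).indices         # (n_trials, 10)
    hits = (top10 == true_idx.unsqueeze(-1)).any(dim=-1)  # (n_trials,)
    return hits.float().mean().item()                     # e.g. 0.73
```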

When electroencephalography was used instead, that number was no greater than 30%, Di Liberto notes. Even so, he is less optimistic about the technology’s practical applications.

He attributes this to MEG’s need for expensive, bulky equipment. Bringing the technique into clinics will therefore require technological advances that make the machines cheaper and simpler to operate.

That said, the word “decoding” is frequently used to describe the process of obtaining information directly from a source, in this case speech from brain activity. But the A.I. could only accomplish this because it drew its guesses from a small pool of candidate answers, whereas language itself is effectively infinite.

The team trained the A.I. using this language model together with databases from four institutions containing the brain activity of 169 participants. In those databases, participants had listened to various passages and stories, including fragments of Ernest Hemingway’s The Old Man and the Sea and Lewis Carroll’s Alice in Wonderland, while undergoing magnetoencephalography or electroencephalography scans, which record the magnetic and electrical components of brain signals, respectively.

Using just three seconds of each participant’s brain activity, along with a computational technique that helps account for physical differences among individual brains, the scientists then tried to determine what the participants had heard. They instructed the A.I. to match speech sounds from the story recordings to the patterns of brain activity it judged consistent with what people were hearing. It then predicted what the listener had heard during that brief window, choosing from more than 1,000 possibilities.
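A rough sketch of that matching step, assuming a contrastive setup in which a brain encoder maps a three-second M/EEG segment into the same space as the speech features, so that decoding reduces to ranking candidates by similarity. BrainEncoder, its per-subject layer (a stand-in for the technique that accounts for differences among brains), and top_k_candidates are hypothetical, not the study’s actual architecture:

```python
import torch
import torch.nn.functional as F

class BrainEncoder(torch.nn.Module):
    def __init__(self, n_sensors: int, n_subjects: int, dim: int = 768):
        super().__init__()
        # Per-subject sensor-remixing matrix: a simple way to absorb
        # anatomical differences between individual brains.
        self.subject = torch.nn.Embedding(n_subjects, n_sensors * n_sensors)
        self.conv = torch.nn.Conv1d(n_sensors, dim, kernel_size=3, padding=1)
        self.n_sensors = n_sensors

    def forward(self, meg: torch.Tensor, subject_id: torch.Tensor) -> torch.Tensor:
        # meg: (batch, sensors, time) -- a 3-second window of M/EEG.
        W = self.subject(subject_id).view(-1, self.n_sensors, self.n_sensors)
        x = torch.bmm(W, meg)  # subject-specific remixing of sensors
        x = self.conv(x)       # project into the speech-feature space
        return x.mean(dim=-1)  # pool over time -> (batch, dim)

def top_k_candidates(brain_vec: torch.Tensor, speech_vecs: torch.Tensor, k: int = 10):
    # Rank ~1,000 candidate speech segments by cosine similarity to one
    # encoded brain segment; return the indices of the k best matches.
    sims = F.cosine_similarity(brain_vec.unsqueeze(0), speech_vecs, dim=-1)
    return sims.topk(k).indices
```

In training, such an encoder would be fit with a contrastive loss so that each brain segment lands nearest the features of the speech it was recorded with; at test time, the true segment then tends to appear among the top-ranked candidates.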

Di Liberto also points out that the A.I. decoded data from participants who were merely passively listening to audio, which is not directly applicable to nonverbal patients. For the A.I. to become a helpful communication tool, researchers will need to work out how to read what patients want to say from their brain activity, starting with something as simple as “yes,” “no,” or a signal of hunger or pain.

These new experiments demonstrate how powerful A.I. can be and how useful it could become for people who are unable to communicate. In the future, however, this technology could also become a dangerous weapon that invades the privacy of our minds. Our thoughts are the most intimate part of ourselves, and if technology could violate them, we would be left helpless and open to manipulation.