Brain-machine interfaces, also known as mind-machine interfaces, have seen major leaps over the past few years. While the world still marvels at creations like virtual assistants, scientists at the University of California, San Francisco have succeeded in translating brain signals into text with up to 97% accuracy.
The study was led by neurosurgeon Edward Chang of UCSF's Chang Lab, whose team used a new method to decode brain signals recorded by electrodes.
The study involved four epilepsy patients who already had electrodes in place to monitor seizures caused by their condition. The UCSF team asked them to read and repeat a number of set sentences aloud while the electrodes recorded their brain activity.
This collected data was then fed to a neural network that analyzed the patterns of brain activity corresponding to particular speech features, including vowels, consonants, and mouth movements. That output was then decoded into text by a second neural network, trained on 30–50 spoken sentences repeated by each participant.
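The two-network pipeline described above is commonly built as an encoder-decoder ("sequence-to-sequence") model: one network compresses the neural-activity time series into a summary, and a second unrolls that summary into a word sequence. Below is a minimal sketch in PyTorch of that general architecture; the layer sizes, vocabulary size, channel count, and random input are illustrative assumptions, not the study's actual configuration.

```python
import torch
import torch.nn as nn

class Seq2SeqBrainDecoder(nn.Module):
    """Toy encoder-decoder: brain-signal time series -> word logits."""

    def __init__(self, n_channels=64, hidden=128, vocab_size=250):
        super().__init__()
        # Encoder: summarizes the multi-channel recording into one state.
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        # Decoder: unrolls that state into a sequence of word predictions.
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_vocab = nn.Linear(hidden, vocab_size)

    def forward(self, signals, max_words=10):
        # signals: (batch, time_steps, channels)
        _, state = self.encoder(signals)       # (1, batch, hidden)
        step = state.transpose(0, 1)           # (batch, 1, hidden)
        logits = []
        for _ in range(max_words):
            step, state = self.decoder(step, state)
            logits.append(self.to_vocab(step))
        return torch.cat(logits, dim=1)        # (batch, max_words, vocab)

model = Seq2SeqBrainDecoder()
fake_recording = torch.randn(2, 100, 64)  # 2 trials, 100 steps, 64 channels
out = model(fake_recording)
print(out.shape)  # torch.Size([2, 10, 250])
```

In a real system the logits at each step would be trained (e.g. with cross-entropy against the transcript) and the most likely word taken at each position; this sketch only shows the data flow.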
According to the study, the system achieved a word error rate as low as 3%. This is as close to reading the human mind as AI has ever gotten, even though it was under strictly defined experimental conditions.
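Word error rate is the standard metric here: the word-level edit distance between the decoded sentence and the true transcript, divided by the transcript's length. The study's own evaluation code is not reproduced in this article, so the following is just the textbook Levenshtein-distance definition:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("she had your dark suit", "she had your dark suit"))  # 0.0
print(wer("she had your dark suit", "she had the dark suit"))   # 0.2
```

A 3% word error rate thus means roughly one wrong, missing, or extra word in every 33 words decoded.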
In some cases, the errors bore virtually no relation to the words actually spoken. Despite that, the overall test and its highly accurate results mark a new benchmark for AI-based decoding of brain activity. Although there are numerous hurdles to overcome, the researchers suggest this might one day form the basis of a speech prosthesis for patients who have lost the ability to speak.