Experimental brain-controlled hearing aid decodes, identifies who you want to hear

Science Daily, May 15, 2019
When two people talk to each other, the brain waves of the speaker begin to resemble the brain waves of the listener. Using this knowledge, a team of researchers in the US (Columbia University, Hofstra-Northwell School of Medicine, and Feinstein Institute for Medical Research) combined powerful speech-separation algorithms with neural networks (complex mathematical models that imitate the brain’s natural computational abilities) to create a system that first separates the voices of individual speakers in a group, then compares each speaker’s voice to the brain waves of the listener. The speaker whose voice pattern most closely matches the listener’s brain waves is then amplified over the rest. In tests, when a subject focused on one speaker, the system automatically amplified that voice. The work could lead to technological improvements that enable the hundreds of millions of hearing-impaired people…read more. Open Access TECHNICAL ARTICLE
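The matching step the summary describes can be illustrated with a short sketch: given the already-separated speaker signals and an audio envelope reconstructed from the listener’s neural recordings, correlate each speaker’s envelope with the neural one and boost the best match. This is a minimal illustration, not the authors’ actual pipeline; the function names, the correlation-based comparison, and the 9 dB gain are assumptions for demonstration.

```python
import numpy as np

def select_attended_speaker(speaker_envelopes, neural_envelope):
    """Return the index of the speaker whose speech envelope best
    matches the envelope reconstructed from the listener's brain
    signals. (Illustrative sketch; the real system's comparison
    method may differ.)"""
    correlations = [
        np.corrcoef(env, neural_envelope)[0, 1]
        for env in speaker_envelopes
    ]
    return int(np.argmax(correlations))

def remix(speaker_tracks, attended_idx, gain_db=9.0):
    """Amplify the attended speaker's separated track relative to
    the others and return the remixed audio. The 9 dB gain is an
    assumed value, not taken from the article."""
    gain = 10 ** (gain_db / 20)
    mix = np.zeros_like(speaker_tracks[0])
    for i, track in enumerate(speaker_tracks):
        mix += track * (gain if i == attended_idx else 1.0)
    return mix
```

In use, the separation network would supply `speaker_tracks` (and their envelopes) frame by frame, so the amplified voice can switch automatically whenever the listener’s attention, as reflected in their brain waves, shifts to a different speaker.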
