Machine learning algorithm to improve performance of hearing aids and cochlear implants in reverberant environments
Unmet Need
In the US, nearly 30 million adults require assistive hearing technology such as hearing aids and cochlear implants. Even with assistive devices, hearing-impaired individuals have particular trouble with speech recognition in reverberant environments, such as airports, concert halls, and museums, where significant echo can cause traditional devices to function poorly. Despite efforts to improve the technology used in these devices, current approaches have not successfully addressed poor sound quality in reverberant environments. There is a need for assistive hearing technologies that perform well in reverberant environments.
Technology
Duke inventors have developed a machine learning algorithm that filters out reverberant echo so that hearing assistance devices can clearly detect spoken words. It is intended to be deployed on devices such as cochlear implants and hearing aids to improve their performance in reverberant environments. Specifically, the software identifies spoken phonemes within a short time frame and then applies a time-frequency mask that reduces the acoustic distortions prevalent in these environments. The software has been demonstrated both with simulated sound and in real listening environments with a variety of reverberation parameters.
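For illustration only, the sketch below shows the general shape of a time-frequency masking pipeline of the kind described above, not the inventors' actual algorithm: the signal is transformed to the time-frequency domain, a per-frame gain mask is estimated (here by a placeholder `estimate_mask` function standing in for a trained, phoneme-aware model), and the masked spectrum is resynthesized. The sample rate, frame sizes, and the mask heuristic are assumptions.

```python
# Minimal sketch of a time-frequency masking pipeline (illustrative only;
# NOT the Duke inventors' algorithm). A trained model would replace the
# `estimate_mask` stub below.
import numpy as np
from scipy.signal import stft, istft

FS = 16_000          # sample rate (Hz), assumed
N_FFT = 512          # 32 ms analysis window at 16 kHz, assumed
HOP = 128            # 8 ms hop, assumed

def estimate_mask(log_mag_frame: np.ndarray) -> np.ndarray:
    """Placeholder for a trained, phoneme-aware mask estimator.

    In practice this would be a neural network mapping the current (and
    recent past) log-magnitude frames to a per-bin gain in [0, 1]. Here a
    crude spectral floor is used only so the sketch runs end to end.
    """
    mag = np.exp(log_mag_frame)
    floor = 0.1 * mag.max() + 1e-8
    return np.clip((mag - floor) / (mag + 1e-8), 0.0, 1.0)

def dereverberate(x: np.ndarray) -> np.ndarray:
    """Apply a frame-wise time-frequency mask to a reverberant signal."""
    _, _, X = stft(x, fs=FS, nperseg=N_FFT, noverlap=N_FFT - HOP)
    masked = np.empty_like(X)
    for t in range(X.shape[1]):                      # frame by frame, no lookahead
        frame = X[:, t]
        gain = estimate_mask(np.log(np.abs(frame) + 1e-8))
        masked[:, t] = gain * frame                  # attenuate reverberant bins
    _, y = istft(masked, fs=FS, nperseg=N_FFT, noverlap=N_FFT - HOP)
    return y

if __name__ == "__main__":
    # Synthetic reverberant input: a clean tone convolved with a decaying tail.
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 440 * np.arange(FS) / FS)
    tail = np.exp(-np.arange(int(0.3 * FS)) / (0.05 * FS)) * rng.standard_normal(int(0.3 * FS))
    reverberant = np.convolve(clean, np.concatenate(([1.0], 0.3 * tail)))[: len(clean)]
    enhanced = dereverberate(reverberant)
    print(reverberant.shape, enhanced.shape)
```

Because the mask in this structure is estimated from the current and past frames only, the same pipeline can in principle run causally with low latency, which is the property highlighted in the advantages below.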
Advantages
- Causal signal processing with the potential to function in real time (minimal processing delay; see the sketch after this list)
- Software-based for compatibility with multiple devices
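The first advantage refers to causal operation: each output frame depends only on the current input frame and state carried over from past frames, so algorithmic delay stays within one analysis frame. The sketch below illustrates that structure under assumed frame sizes; `enhance_frame` is a hypothetical stand-in for the trained mask-based enhancer, not part of the described technology.

```python
# Minimal sketch of causal, frame-by-frame streaming (illustrative only).
# Frame size and the pass-through enhancer are assumptions.
import numpy as np

FS = 16_000          # sample rate (Hz), assumed
FRAME = 256          # 16 ms at 16 kHz, assumed

def enhance_frame(frame: np.ndarray, state: dict) -> np.ndarray:
    """Hypothetical per-frame enhancer; a trained mask estimator would go here."""
    state["prev"] = frame                 # only past context is retained
    return frame                          # pass-through placeholder

def stream(samples: np.ndarray) -> np.ndarray:
    """Process audio causally, one non-overlapping frame at a time."""
    out = np.zeros_like(samples)
    state: dict = {}
    for start in range(0, len(samples) - FRAME + 1, FRAME):
        out[start:start + FRAME] = enhance_frame(samples[start:start + FRAME], state)
    return out

if __name__ == "__main__":
    audio = np.random.default_rng(0).standard_normal(FS)   # 1 s of noise
    print(stream(audio).shape)
```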