A hearing aid that reads minds: Speaker-independent auditory attention decoding without access to clean speech sources

Han, C., O'Sullivan, J., Luo, Y., Herrero, J., Mehta, A.D., & Mesgarani, N. | 2019 |
Speaker-independent auditory attention decoding without access to clean speech sources | Science Advances | eaav6134 | DOI: 10.1126/sciadv.aav6134
New research uses a novel speech separation algorithm to automatically separate speakers in mixed audio, and has the potential to solve the 'cocktail party' problem, in which modern hearing aids amplify all sound rather than raising the volume of an individual voice. Although the technology behind this study is in its early stages, it is a significant step toward better hearing aids that would enable wearers to converse seamlessly and efficiently with the people around them.
Image source: science.fas.columbia.edu

Speech perception in crowded environments is challenging for hearing-impaired listeners. Assistive hearing devices cannot lower interfering speakers without knowing which speaker the listener is focusing on. One possible solution is auditory attention decoding in which the brainwaves of listeners are compared with sound sources to determine the attended source, which can then be amplified to facilitate hearing. In realistic situations, however, only mixed audio is available. We utilize a novel speech separation algorithm to automatically separate speakers in mixed audio, with no need for the speakers to have prior training. Our results show that auditory attention decoding with automatically separated speakers is as accurate and fast as using clean speech sounds. The proposed method significantly improves the subjective and objective quality of the attended speaker. Our study addresses a major obstacle in actualization of auditory attention decoding that can assist hearing-impaired listeners and reduce listening effort for normal-hearing subjects (Source: Columbia University).
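The decoding step described above, comparing the listener's brainwaves with each candidate sound source to find the attended one, can be illustrated with a minimal sketch. This is not the authors' implementation; the envelopes below are toy arrays, and the comparison is a simple Pearson correlation between a neural envelope (assumed already decoded from brain activity) and each automatically separated speaker's envelope:

```python
# Illustrative sketch only: pick the separated speaker whose envelope best
# matches the envelope decoded from the listener's neural recordings.

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def attended_speaker(neural_envelope, speaker_envelopes):
    """Index of the separated speaker most correlated with the neural signal;
    that speaker would then be amplified relative to the others."""
    scores = [pearson(neural_envelope, env) for env in speaker_envelopes]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example: the neural envelope tracks speaker 0, not speaker 1.
neural = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
speakers = [[0.1, 1.1, 2.0, 2.9, 2.1, 0.9],
            [3.0, 2.0, 1.0, 0.0, 1.0, 2.0]]
print(attended_speaker(neural, speakers))  # prints 0
```

In the study this comparison runs continuously, so the amplified source can switch as the listener's attention moves between talkers.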

See also:

[News story] Columbia University | A Voice in the Crowd: Experimental Brain-Controlled Hearing Aid Automatically Decodes, Identifies Who You Want to Hear

In the news:

The Guardian | Scientists create mind-controlled hearing aid



App can detect middle ear fluid, a marker of acute otitis media (AOM) and otitis media with effusion (OME)

Chan et al. developed a smartphone system to detect middle ear fluid, using the phone's microphone and speaker to emit sound and analyse its reflection (echo) from the eardrum. The smartphone system outperformed a commercial acoustic reflectometry system in detecting middle ear fluid across 98 pediatric patient ears, and could be easily operated by patients' parents without formal medical training. This proof-of-concept screening tool could aid the diagnosis of ear infections. The full article is published in Science Translational Medicine.



The presence of middle ear fluid is a key diagnostic marker for two of the most common pediatric ear diseases: acute otitis media and otitis media with effusion. We present an accessible solution that uses speakers and microphones within existing smartphones to detect middle ear fluid by assessing eardrum mobility. We conducted a clinical study on 98 patient ears at a pediatric surgical center. Using leave-one-out cross-validation to estimate performance on unseen data, we obtained an area under the curve (AUC) of 0.898 for the smartphone-based machine learning algorithm. In comparison, commercial acoustic reflectometry, which requires custom hardware, achieved an AUC of 0.776. Furthermore, we achieved 85% sensitivity and 82% specificity, comparable to published performance measures for tympanometry and pneumatic otoscopy. Similar results were obtained when testing across multiple smartphone platforms. Parents of pediatric patients (n = 25 ears) demonstrated similar performance to trained clinicians when using the smartphone-based system. These results demonstrate the potential for a smartphone to be a low-barrier and effective screening tool for detecting the presence of middle ear fluid.
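For readers curious how the figures above (AUC, sensitivity, specificity) are derived from a classifier's output, here is a minimal, self-contained sketch. The data are made up for illustration and are not from the study; the AUC is computed via the rank (Mann-Whitney) formulation, i.e. the probability that a randomly chosen positive case scores higher than a randomly chosen negative case:

```python
# Illustrative only: compute sensitivity, specificity and AUC from
# toy labels (1 = fluid present, 0 = no fluid) and classifier scores.

def sensitivity_specificity(labels, preds):
    """Fraction of true positives caught, and true negatives correctly cleared."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """AUC as the probability a random positive outscores a random negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2]
print(auc(labels, scores))  # prints 0.75
```

In the study these metrics were estimated with leave-one-out cross-validation, i.e. the model is repeatedly trained on all ears but one and scored on the held-out ear, so each prediction is made on data unseen during training.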

A copy of this article is available to Rotherham NHS staff, contact the Library 

In the news:

OnMedica | Smartphone app can detect fluid in middle ear

New trial to investigate how tinnitus can affect concentration

University of Nottingham | April 2019 | New trial to investigate how tinnitus can affect concentration

Researchers at the University of Nottingham are recruiting participants to a new trial which will assess the impact of tinnitus on the cognitive well-being of people who experience it.


Experts at the University of Nottingham at the School of Medicine and NIHR Nottingham Biomedical Research Centre (BRC) want to find out which types of cognition may be different in people with tinnitus compared to people without tinnitus.

They are recruiting 144 participants, who will be divided into three groups: those with severe or bothersome tinnitus, those who experience tinnitus but are not affected by it, and those who do not have tinnitus. To assess how tinnitus can affect concentration, volunteers will complete computer-based puzzles that test concentration, clear thinking and the ability to multi-task. There is evidence that the condition can affect concentration, and people with tinnitus may perform differently on computer-based puzzles that measure different types of cognition. (Source: University of Nottingham)

The study, Investigation of executive functioning in adults with and without tinnitus, is funded by the Medical Research Council (MRC) and NIHR Nottingham Biomedical Research Centre.

More information is available from the University of Nottingham

890 more children and adults eligible for cochlear implants on the NHS each year

Hundreds more people with severe to profound deafness will be eligible for cochlear implants each year, due to updated NICE guidance. The update comes after a review of the definition of severe to profound deafness, which is used to identify whether a cochlear implant might be appropriate.

Severe to profound deafness is now defined as hearing only sounds louder than 80 dB HL at two or more frequencies without hearing aids. A cochlear implant works by picking up sounds, turning them into electrical signals and sending them to the brain. This provides a sensation of hearing but does not restore hearing.


Currently around 1,260 people in England receive cochlear implants each year. These updated recommendations could lead to a 70% increase in that number, to 2,150 people, once a steady state is reached in 2024/25.

Full detail: Cochlear implants for children and adults with severe to profound deafness
Technology appraisal guidance [TA566]

Kids with cochlear implants since infancy more likely to speak, not sign

Science Daily | March 2019 | Kids with cochlear implants since infancy more likely to speak, not sign

A US study from researchers at a Chicago hospital reports that deaf children who received cochlear implants (implanted electronic hearing devices) before 12 months of age learn to understand spoken language more rapidly and are more likely to develop spoken language as their exclusive form of communication. This was true even for children with additional conditions often associated with language delay, such as significantly premature birth.

In their findings, published in Otology & Neurotology, the researchers also showed that implantation surgery and anesthesia were safe in young children, including infants (via Science Daily).

Full news story from Science Daily

Full reference: Hoff, S. et al. | 2019 | Safety and Effectiveness of Cochlear Implantation of Young Children, Including Those With Complicating Conditions | Otology & Neurotology | Publish Ahead of Print, March 2019 | DOI: 10.1097/MAO.0000000000002156

Objective: Determine safety and effectiveness of cochlear implantation of children under age 37 months, including below age 12 months.

Study Design: Retrospective review.

Setting: Tertiary care children's medical center.

Patients: 219 children implanted before age 37 mos; 39 implanted below age 12 mos and 180 aged 12–36 mos. Mean age at CI = 20.9 mos overall; 9.4 mos (5.9–11.8) and 23.4 mos (12.1–36.8) for the two age groups, respectively. All but two of those implanted below 12 mos (94.9%) received bilateral implants, as did 70.5% of the older group. Mean follow-up = 5.8 yrs; age at last follow-up = 7.5 yrs, with no difference between groups.

Interventions: Cochlear implantation.

Main outcome measures: Surgical and anesthesia complications, measurable open-set speech discrimination, primary communication mode(s).

Results: Few surgical complications occurred, with no difference by age group. No major anesthetic morbidity occurred, with no critical events requiring intervention in the younger group, while 4 older children experienced desaturations or bradycardia/hypotension. Children implanted under 12 mos developed open-set speech discrimination earlier (3.3 yrs vs 4.3 yrs, p < 0.001) and were more likely to develop oral-only communication (88.2% vs 48.8%, p ≤ 0.001). A significant decline in the rate of oral-only communication was present if implanted over 24 months, especially when comparing children with and without additional conditions associated with language delay (8.3% and 35%, respectively).

Conclusions: Implantation of children under 37 months of age can be done safely, including those below age 12 mos. Implantation below 12 mos is positively associated with earlier open-set ability and oral-only communication. Children implanted after age 24 months were much less likely to use oral communication exclusively, especially those with a complex medical history or additional conditions associated with language delay.