AI headphones let users choose which sounds to boost or block

Noise reduction technology has been around for decades, helping users suppress background noise while staying focused on the sounds they want to hear.

A phone or set of headphones can reduce the sound of a dog barking or background chatter, for example, while allowing the user to concentrate on a conversation, podcast or music.

What traditional noise-cancelling technology has not been able to do – until now – is distinguish between different types of background and foreground sound and select which to screen out and which to amplify.

Now, researchers led by Professor Shyam Gollakota at the University of Washington have used artificial intelligence (AI) to create an intelligent sound filter that allows users to pick and choose the sounds they want to listen to.

Unveiling a working prototype at a joint conference of the Acoustical Society of America and the Canadian Acoustical Association last week, Gollakota gave the example of sitting in a park, enjoying the sound of the birds.

When you are suddenly interrupted by the chatter of a group of loud people “who just can’t stop talking”, you can filter out their conversation in real time while continuing to listen to the birdsong.

System could filter out background noise while letting alarms and voices through

Another potential use is in industrial settings, where workers could safely cancel the background noise of machines and industrial processes while allowing essential sounds, such as alarms or the voice of a fellow worker, to cut through.

The technology can make conversations much clearer in noisy environments, and Gollakota believes that it could also improve the quality of life for people with hearing impairments.

The system uses a sophisticated neural network, initially trained on massive datasets, to recognise 20 sound categories, including alarms, car horns, crying babies, birdsong and wind.
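The researchers have not published the architecture details alongside this announcement, but the general idea of a label-conditioned sound extractor can be illustrated with a minimal PyTorch sketch. Everything here – the class list, layer sizes and masking scheme – is an illustrative assumption, not the team’s actual design: a multi-hot vector of the user’s chosen categories conditions a soft mask that is applied to the incoming audio.

```python
import torch
import torch.nn as nn

# Hypothetical subset of the 20 sound categories the real system recognises.
SOUND_CLASSES = ["alarm", "car_horn", "crying_baby", "birdsong", "wind"]

class TargetSoundExtractor(nn.Module):
    """Toy label-conditioned extractor: a multi-hot vector of the chosen
    categories gates the encoded audio, and a decoded soft mask keeps
    only the selected sounds. Sizes and layers are illustrative only."""

    def __init__(self, n_classes: int, frame_size: int = 256, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Linear(frame_size, hidden)
        self.label_proj = nn.Linear(n_classes, hidden)
        self.decoder = nn.Linear(hidden, frame_size)

    def forward(self, frames: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # frames: (batch, frame_size) audio; labels: (batch, n_classes) multi-hot
        h = torch.relu(self.encoder(frames)) * torch.sigmoid(self.label_proj(labels))
        mask = torch.sigmoid(self.decoder(h))  # per-sample soft mask in [0, 1]
        return frames * mask                   # suppress everything not selected

model = TargetSoundExtractor(n_classes=len(SOUND_CLASSES))
labels = torch.zeros(1, len(SOUND_CLASSES))
labels[0, SOUND_CLASSES.index("birdsong")] = 1.0   # user picks "birdsong"
extracted = model(torch.randn(1, 256), labels)     # filtered audio frame
```

A trained version of such a model would learn, from labelled mixtures, to pass through only the sounds named in the label vector – which is what lets the same network serve any combination of categories the user selects.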

The headphone-format prototype allows users to choose the sounds they wish to hear through a smartphone app or voice commands.

The algorithms must then analyse the audio stream, identify the chosen categories, and isolate those sounds from the rest of the audio in close to real time – processing the sounds in less than a hundredth of a second.
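To give a feel for that real-time constraint, the sketch below processes audio in short chunks and checks each one against a 10-millisecond budget. The sample rate, chunk size and placeholder processing step are assumptions for illustration; the point is that the entire identify-and-isolate step must finish before the next chunk of live audio arrives.

```python
import time
import numpy as np

SAMPLE_RATE = 16_000      # assumed sample rate (Hz)
CHUNK = 128               # 8 ms of audio per chunk at 16 kHz
LATENCY_BUDGET_S = 0.01   # "less than a hundredth of a second"

def process_chunk(chunk: np.ndarray) -> np.ndarray:
    """Placeholder for the neural extraction step on one audio chunk."""
    return chunk  # a real system would run the conditioned model here

def stream(audio: np.ndarray):
    """Yield processed chunks, asserting each fits the latency budget."""
    for start in range(0, len(audio) - CHUNK + 1, CHUNK):
        t0 = time.perf_counter()
        out = process_chunk(audio[start:start + CHUNK])
        elapsed = time.perf_counter() - t0
        # If processing overruns the budget, output falls behind live audio.
        assert elapsed < LATENCY_BUDGET_S, "missed the real-time deadline"
        yield out

for _ in stream(np.zeros(SAMPLE_RATE)):  # one second of silence as a demo
    pass
```

Meeting this deadline on battery-powered earbuds, rather than on a workstation GPU, is a large part of what makes the engineering challenge hard.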

The researchers are now looking to commercialise the technology, and Gollakota believes that it will soon be part of billions of headsets and earbuds that people use every day.
