
Are You Listening? We’ve Found How Our Brains Pick Voices Out of a Crowd


The Art of Hearing One in Many: The Brain’s Remarkable Mechanism Unveiled

Led by Dr. Nima Mesgarani of Columbia University, USA, a new study reveals how the brain interprets speech in noisy environments.

According to the findings, published today in the open-access journal PLOS Biology, our brains process speech differently depending on the level of background noise and on where our attention is directed. These insights, based on neural recordings and computational models, could improve hearing aids by helping them isolate the speech a listener intends to hear.

Interpreting speech in a noisy setting can be challenging, particularly when surrounding voices overshadow the desired speech. Merely increasing the volume of all sounds isn’t an effective strategy to isolate these challenging voices. Current hearing aids, which aim to amplify only the intended speech, lack the precision required for practical applications.

The Columbia University team set out to better understand how speech is processed in such challenging circumstances. They recorded neural activity from electrodes implanted in the brains of epilepsy patients undergoing surgery. Patients were asked to attend to a single voice, which at times was louder than a competing voice (“glimpsed”) and at times quieter than it (“masked”).

Using the collected neural data, the researchers built predictive models of brain activity. These models showed that both the primary and secondary auditory cortices encoded the phonetic details of “glimpsed” speech, and that this encoding was stronger in the secondary cortex for the attended voice.
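The study's actual models are not reproduced here, but the general approach of a forward encoding model, predicting neural activity from stimulus features, can be sketched as a ridge regression. All data shapes, names, and values below are illustrative assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a matrix of stimulus features over time (e.g. phonetic
# features) and one recorded neural response channel. Purely synthetic.
n_samples, n_features = 500, 8
stimulus = rng.normal(size=(n_samples, n_features))
true_weights = rng.normal(size=n_features)
neural = stimulus @ true_weights + 0.1 * rng.normal(size=n_samples)

def fit_encoding_model(X, y, alpha=1.0):
    """Ridge regression: weights mapping stimulus features to the response."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

w = fit_encoding_model(stimulus, neural)
predicted = stimulus @ w

# Model fit is typically scored as the correlation between the predicted
# and the recorded response.
r = np.corrcoef(predicted, neural)[0, 1]
print(r > 0.9)
```

In practice such models also include time lags between stimulus and response, which is how a later encoding of “masked” speech, as reported below, can be detected.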

However, “masked” speech was only encoded if it was the voice the listener was focusing on. Interestingly, the encoding of “masked” speech took place later than that of “glimpsed” speech.

These findings suggest that decoding the “masked” segment of intended speech separately could enhance the effectiveness of auditory attention-decoding systems in brain-controlled hearing aids.
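One common form of auditory attention decoding compares a speech envelope reconstructed from neural activity against each candidate talker and picks the best match. The following toy sketch illustrates only that selection step; the simulated signals and the 0.8/0.2 mixing weights are assumptions for demonstration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical envelopes for two competing talkers. The "reconstructed"
# signal is simulated to track talker A more strongly, as if A were attended.
n = 1000
talker_a = np.abs(rng.normal(size=n))
talker_b = np.abs(rng.normal(size=n))
reconstruction = 0.8 * talker_a + 0.2 * talker_b + 0.1 * rng.normal(size=n)

def decode_attention(reconstruction, candidates):
    """Pick the talker whose envelope best correlates with the
    envelope reconstructed from neural activity."""
    scores = [np.corrcoef(reconstruction, c)[0, 1] for c in candidates]
    return int(np.argmax(scores)), scores

attended, scores = decode_attention(reconstruction, [talker_a, talker_b])
print(attended)  # index of the talker judged to be attended
```

A brain-controlled hearing aid would use such a decision to amplify only the selected talker; the study's suggestion is that scoring the “masked” segments separately could make this decision more reliable.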

“When listening to someone in a noisy place,” explains lead author Vinay Raghavan, “your brain recovers what you missed when the background noise is too loud.

“Your brain can also catch bits of speech you aren’t focused on, but only when the person you’re listening to is quiet in comparison.”

Image Credit: Getty
