News Story
How the Brain Builds Meaning from Sound

When someone loses the ability to speak or struggles to understand words after a stroke, how might we know what their brain still comprehends?
A newly published study from University of Maryland researchers sheds new light on how the brain processes spoken language, from raw sound to complete meaning. The findings may pave the way for noninvasive diagnostic tools that help assess language comprehension in individuals who are unable to communicate, such as infants, people with aphasia, or those with cognitive impairments.
The article, “Neural Dynamics of the Processing of Speech Features: Evidence for a Progression of Features from Acoustic to Sentential Processing,” was published in the Journal of Neuroscience. The work was authored by I. M. Dushyanthi Karunathilake (ECE Ph.D. 2023); Christian Brodbeck (former ISR postdoctoral researcher, now faculty at McMaster University); Shohini Bhattasali (faculty at the University of Toronto); Philip Resnik (Linguistics/UMIACS); and Professor Jonathan Z. Simon (ECE/ISR/Biology), who advised Karunathilake.
“It’s really amazing to see the hierarchical nature of speech and language processing unfold, millisecond by millisecond, as each level of the hierarchy is processed in its own region of the brain,” said Simon, professor of electrical and computer engineering.
Hear It for Yourself: From Noise to Narrative
Want to experience how your brain moves from hearing to understanding?
Listen to audio examples that illustrate the research findings.
These audio demos reflect the actual stimuli used in the experiment, which progressively increase in linguistic structure. First, you’ll hear a sample of speech-shaped modulated noise, followed by a “narrative” made of non-words, then real words taken from a narrative but scrambled, and finally, a coherent spoken narrative. Each step adds a new layer of linguistic information.
As you listen, notice how your brain starts to anticipate structure and meaning. This mirrors how successive levels of speech features (acoustic, phonemic, lexical, and sentential) emerge in the brain’s responses during real-time speech comprehension.
This “layered” demo not only brings the science to life, but also shows how the brain transforms raw auditory input into meaning through a cascade of processing stages.
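One rough way to see how each condition layers structure on the one before it is the minimal sketch below. The condition names and layer labels are paraphrased from the description above for illustration; they are not the paper’s exact terms.

```python
# Illustrative summary of the four listening conditions and the layers of
# linguistic structure each one adds (labels are paraphrased, not official).
CONDITIONS = [
    ("speech-shaped modulated noise", {"acoustic"}),
    ("non-word 'narrative'",          {"acoustic", "phonemic"}),
    ("scrambled real words",          {"acoustic", "phonemic", "lexical"}),
    ("coherent spoken narrative",     {"acoustic", "phonemic", "lexical", "sentential"}),
]

for name, layers in CONDITIONS:
    print(f"{name:32s} -> {', '.join(sorted(layers))}")
```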
“What excites me most,” said Simon, “is that this work allows us to trace the path from sound all the way to meaning—all in one experiment, in one lab, in one thought process.”
A Pathway Through the Brain
Using magnetoencephalography (MEG), a neuroimaging technique that captures millisecond-level changes in brain activity, the research team mapped how different brain regions respond to various features of speech: its acoustic properties, word-level information, and sentential meaning. The results reveal a progression of activity, starting in auditory regions and moving forward to areas responsible for high-level comprehension.
In essence, the brain doesn’t just decode sound—it assembles it into meaning through a coordinated sequence of processing stages.
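Analyses of this kind are often implemented as linear encoding models, in which the measured MEG signal is predicted from time-lagged copies of each speech feature (a temporal response function, or TRF). The sketch below is a generic illustration of that idea using simulated data and ridge regression; it is not the authors’ actual pipeline, and all names and parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated data -------------------------------------------------------
fs = 100                     # sampling rate (Hz) of the downsampled signals
n = 60 * fs                  # one minute of simulated "recording"
lags = np.arange(0, 40)      # 0-400 ms of TRF lags

# Hypothetical speech-feature time series at different levels of the hierarchy
features = {
    "acoustic_envelope": rng.random(n),
    "word_onsets": (rng.random(n) < 0.03).astype(float),     # sparse impulses
    "sentence_onsets": (rng.random(n) < 0.005).astype(float),
}

# Build a lagged design matrix: one column per (feature, lag) pair
def lagged_design(x, lags):
    cols = [np.roll(x, lag) for lag in lags]
    for c, lag in zip(cols, lags):
        c[:lag] = 0.0        # zero out samples that wrapped around
    return np.column_stack(cols)

X = np.hstack([lagged_design(x, lags) for x in features.values()])

# Simulate a "brain response": each feature drives the signal through its own
# (unknown) response function, plus noise.
true_trf = rng.normal(size=X.shape[1])
y = X @ true_trf + rng.normal(scale=5.0, size=n)

# --- Estimate TRFs with ridge regression ----------------------------------
lam = 1e2
XtX = X.T @ X + lam * np.eye(X.shape[1])
trf_hat = np.linalg.solve(XtX, X.T @ y)

# How well do the estimated TRFs predict the response? (in-sample, for brevity)
r = np.corrcoef(X @ trf_hat, y)[0, 1]
print(f"prediction accuracy r = {r:.2f}")
```

In a real MEG analysis, prediction accuracy would be evaluated with cross-validation and computed separately for each source location in the brain, which is what makes it possible to attribute different feature levels to different regions.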
The researchers identified distinct neural markers for different processing levels, confirming the idea that comprehension involves more than just hearing. This is a critical insight for clinicians working with patients who can no longer communicate clearly but may still understand spoken language.
Building on Prior Discoveries
This new work follows on the heels of two earlier ISR studies. One demonstrated how the brain distinguishes between math and language streams in real time—even when both are spoken at once. That study revealed distinct, task-specific neural networks for arithmetic and linguistic processing.
Another recent study explored the role of priming, showing that listening to a clear version of a speech sample can help the brain make sense of an otherwise unintelligible version of the same recording. It highlighted the brain’s remarkable ability to extract meaning through top-down processing, even when the acoustics are poor.
Together, these studies form a cohesive body of work that explores how the brain navigates the complexity of human speech and thought.
Future Applications: From Clinics to Classrooms
One of the most exciting implications of the research is its potential to inform future hearing technologies. Rather than relying on guesswork, hearing aids or cochlear implants could one day be tuned based on real-time brain feedback, making it possible to objectively determine whether a person is truly understanding what they hear.
The research may also inform how language learning is measured in nonverbal populations, and how clinicians assess developmental delays or speech comprehension disorders.
This latest study not only deepens our understanding of the brain’s inner workings but also points toward real-world tools for improving communication, comprehension, and care.
Published April 10, 2025