A new study reveals that cognitive ability, not just hearing, plays a key role in how well people process speech in noisy environments.
You are sitting in a crowded café, straining to follow a friend across the table while conversations, clattering dishes, and music create a competing soundscape. Conventional thinking often assumes a hearing problem is to blame when someone struggles in these settings. New research from the University of Washington School of Medicine challenges that assumption, showing that measured cognitive ability strongly predicts how well people with clinically normal hearing can understand speech in noisy, multi-speaker environments.
Study design and participant groups
The research team recruited three distinct groups to broaden the range of cognitive profiles: individuals diagnosed with autism spectrum disorder, individuals with fetal alcohol spectrum disorder, and an age- and sex-matched neurotypical control group. In total, the sample included 49 participants, ranging in age from early adolescence to mid-adulthood.
All participants passed an audiology screening confirming that their hearing thresholds on conventional audiometry fell within the normal clinical range. This allowed the investigators to isolate suprathreshold processes such as attention, working memory, and perceptual reasoning from the effects of peripheral hearing loss.
To probe real-world listening challenges, the researchers used a computerized multitalker task. Participants were first introduced to a target speaker and instructed to attend to that voice while two additional background speakers spoke simultaneously. Each voice delivered a short directive that began with a call sign followed by a color and a number, for example: "Ready Eagle, go to green five now." The target voice remained male, while the competing voices were varied combinations of male and female talkers.
As the task progressed, the background voices were gradually increased in level while participants identified the color and number spoken by the target speaker and selected the matching option on the screen. This allowed the team to determine speech perception thresholds under increasing interference, a robust measure of multitalker speech perception often referred to as speech-in-noise performance.
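To make the threshold logic concrete, here is a minimal Python sketch of an adaptive multitalker procedure of this general kind. It assumes a simple one-up, one-down staircase on the target-to-masker ratio and a logistic listener model; the function names, step sizes, and psychometric parameters are illustrative assumptions, not the study's actual protocol.

```python
import math
import random

def simulate_trial(tmr_db, listener_shift):
    """Simulate whether a listener correctly reports the target color-number pair.

    A logistic psychometric function is assumed purely for illustration:
    listeners with a larger `listener_shift` (a stand-in for stronger cognitive
    resources) keep responding correctly at lower target-to-masker ratios.
    """
    true_threshold = -6.0 - listener_shift            # assumed 50%-correct point, in dB TMR
    p_correct = 1.0 / (1.0 + math.exp(-(tmr_db - true_threshold)))
    return random.random() < p_correct

def estimate_threshold(listener_shift, start_tmr=10.0, step_db=2.0, n_trials=40):
    """One-up, one-down adaptive staircase on the target-to-masker ratio.

    After each correct response the maskers are effectively raised (TMR drops);
    after each error the task is made easier (TMR rises). Averaging the TMR at
    the reversal points approximates the 50%-correct speech perception threshold.
    """
    tmr, reversals, last_step = start_tmr, [], None
    for _ in range(n_trials):
        step = -step_db if simulate_trial(tmr, listener_shift) else step_db
        if last_step is not None and step != last_step:
            reversals.append(tmr)                      # direction changed: record a reversal
        tmr += step
        last_step = step
    return sum(reversals) / len(reversals) if reversals else tmr

if __name__ == "__main__":
    random.seed(1)
    for shift in (0.0, 3.0, 6.0):
        print(f"listener shift {shift:+.1f} dB -> threshold ~ {estimate_threshold(shift):.1f} dB TMR")
```

In a staircase like this, a lower final threshold means the listener could still pull the target voice out of the mixture when the competing talkers were relatively louder.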

Measuring cognition and linking it to listening
After the listening task, participants completed brief standardized assessments of intellectual ability, including measures of verbal and nonverbal reasoning and perceptual organization. Those scores were then correlated with each individual's performance on the multitalker challenge.
The analysis revealed a consistent, statistically significant relationship between intellectual ability and speech perception in competing talker environments across all three groups. In other words, higher scores on standardized cognitive tests were associated with better ability to extract target speech from overlapping voices, even though everyone had clinically normal audiometric hearing.
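For readers curious about the shape of such an analysis, the sketch below correlates cognitive scores with speech perception thresholds within each group using a Pearson correlation. The data are randomly generated for illustration only and are not the study's; group names and sample sizes are placeholders, and a negative correlation is expected here simply because lower thresholds indicate better performance.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic illustration only: cognitive scores and thresholds are randomly
# generated, not the study's data. Lower thresholds (dB TMR) mean better
# multitalker performance, so a negative r means higher cognitive scores
# go with better speech-in-noise perception.
groups = {}
for name in ("autism spectrum", "fetal alcohol spectrum", "neurotypical"):
    scores = rng.normal(100, 15, size=16)                        # standardized cognitive scores
    thresholds = 8.0 - 0.12 * scores + rng.normal(0, 2.0, 16)    # assumed linear trend plus noise
    groups[name] = (scores, thresholds)

for name, (scores, thresholds) in groups.items():
    r, p = pearsonr(scores, thresholds)
    print(f"{name:>22}: r = {r:+.2f}, p = {p:.3f} (n = {len(scores)})")
```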
Lead investigator Bonnie Lau, a research assistant professor in otolaryngology-head and neck surgery at the University of Washington School of Medicine who directs studies of auditory brain development, notes that the relationship between cognition and listening performance transcended diagnostic categories. The effect was present among neurotypical participants as well as those with autism or fetal alcohol spectrum disorder, suggesting that cognitive functions such as attention, working memory, and perceptual reasoning play a core role in speech-in-noise understanding.
Why cognition matters for listening in noisy environments
Successful communication in complex acoustic settings draws on multiple neural processes beyond the ear's ability to detect sound. The brain must segregate concurrent streams of speech, select the relevant speaker, suppress competing signals, and then rapidly decode acoustic details to map phonemes into words and meaning. This sequence places demands on attention, inhibition, working memory, and language processing, increasing overall cognitive load.
From a neuroscience perspective, auditory stream segregation and selective attention rely on interactions between auditory cortex and frontal executive networks. Efficient top-down control helps listeners prioritize the target speaker and filter out distractors, while bottom-up perceptual encoding provides the acoustic detail needed for lexical access. Limitations or variability in these interacting systems can reduce speech-in-noise performance even when peripheral hearing thresholds appear normal.
The study underscores a common clinical misconception: difficulty understanding speech in noisy settings does not necessarily indicate peripheral hearing loss. Instead, suprathreshold central processes and general cognitive ability can determine how well someone copes with background noise.
Implications for education, clinical practice, and assistive technology
The findings have practical implications for clinicians, educators, and audiologists. For neurodivergent individuals and others whose cognitive test scores are lower than average, targeted environmental or technological adjustments could improve communication outcomes. Examples include seating arrangements such as placing a student in the front row, optimizing classroom acoustics, using remote microphone systems or other assistive-listening devices, and reducing background noise where feasible.
Clinicians should consider including measures of speech-in-noise perception and cognitive screening as part of comprehensive assessments when patients report real-world listening difficulties. Audiological evaluation that focuses exclusively on pure-tone thresholds risks missing central auditory or cognitive contributors to communication challenges.
For hearing technology development, the results suggest that devices and algorithms designed to support selective attention and reduce cognitive load could be particularly valuable. Signal processing that enhances target voice features or improves spatial separation between speakers may complement cognitive strategies, and training programs that strengthen auditory attention and working memory could also help.
Limitations and directions for future research
The study sample was modest in size, under 50 participants, which the authors acknowledge limits the generalizability of the results. Lau and colleagues recommend larger, longitudinal studies to confirm and extend the findings, and to disentangle which specific cognitive domains most strongly predict multitalker performance.
Future research could also examine neurophysiological correlates of speech-in-noise ability, using measures such as neural tracking of attended speech, event-related potentials, or functional connectivity to map how central auditory processing and executive control networks interact during multitalker listening. Such multimodal investigations would help translate behavioral correlations into mechanistic models useful for diagnostics and interventions.
Expert Insight
Dr. Maria Chen, a cognitive neuroscientist and science communicator, commented on the study's relevance: "The work highlights that listening is an active cognitive act. Understanding speech in noise requires rapid allocation of attention and memory resources. For clinicians and educators, the take-home is clear: if someone struggles in a noisy room, check their cognitive workload and environment before assuming a peripheral hearing problem."
Dr. Chen added that combining behavioral assessments with classroom accommodations and targeted auditory training could substantially improve day-to-day communication for many people, particularly those with neurodevelopmental differences.
Conclusion
This study from the University of Washington adds important evidence that cognitive ability is a major determinant of how well people with normal audiograms understand speech in noisy, multitalker settings. While pure-tone hearing tests remain essential, they do not capture the full picture of everyday listening. Clinicians, educators, and device developers should account for cognitive factors when assessing difficulties in real-world acoustic environments and when designing interventions to support effective communication.
Source: scitechdaily