Hearing and Donuts (Brain and Bagels) Seminar

Christian Stilp, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am

Objects and events in the sensory environment are never perceived in isolation, but relative to surrounding stimuli. This is especially true in speech perception, where the acoustic characteristics of surrounding sounds exert powerful contextual influences on the perception of speech sounds. In this talk, I will discuss two classic effects of surrounding spectral context on auditory perception: spectral contrast effects and auditory enhancement effects. I will show that speech sound categorization is exquisitely sensitive to both of these effects, and that the two are related to each other at the level of individual differences. I will review the neural mechanisms thought to underlie these effects and introduce data that seek to clarify where in the auditory system they occur. Finally, I will examine how these context effects shape speech perception for listeners with hearing impairment, again investigating their interrelationship at the level of individual differences. Together, this work points to promising future directions that may ultimately improve speech perception in context for listeners with impaired hearing.
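For readers unfamiliar with how spectral-context stimuli are typically built, the minimal Python sketch below filters a context sound to emphasize one frequency band; by spectral contrast, listeners then tend to hear a following ambiguous target as if it had less energy in that band. The band edges and gain are invented for illustration and are not parameters from the speaker's studies.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def emphasize_band(x, fs, lo_hz, hi_hz, gain_db=20.0):
    """Boost one frequency band of a context sound by adding a
    band-pass-filtered copy back onto the original signal."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, x)
    extra = 10.0 ** (gain_db / 20.0) - 1.0   # linear gain beyond the original
    return x + extra * band

# Hypothetical usage: emphasize a low-frequency (e.g., first-formant)
# region of a context sentence. By spectral contrast, a following
# ambiguous vowel tends to be categorized as if it had a higher F1.
fs = 16000
rng = np.random.default_rng(0)
context = rng.standard_normal(fs * 2)        # stand-in for a recorded sentence
context_low_emphasis = emphasize_band(context, fs, 100, 400)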

Hearing and Donuts (Brain and Bagels) Seminar

Kevin Sitek, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am

The central auditory system comprises a number of subcortical and cortical brain structures. While non-invasive methods like MRI and EEG have enabled detailed study of the human auditory cortex, imaging the deep, small subcortical auditory nuclei remains challenging. Fortunately, advances in non-invasive imaging are facilitating research into the entire human auditory system and how it's involved in critical behaviors like speech communication. In this talk I'll present my contributions to human auditory neuroimaging, including publicly available anatomical atlases of the subcortical auditory structures, along with characterizations of their function and of the connectivity of the auditory pathway. I'll then discuss recent work using these methods to probe auditory–striatal connectivity and its role in sound category learning. Finally, I'll show how the EEG frequency-following response can provide insights into auditory–motor integration, with implications for speech production and feedback processing. Overall, this research advances our understanding of the human auditory system as a distributed network supporting complex behaviors, highlighting the value of multimodal neuroimaging in bridging brain structure and function.
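As a concrete, hypothetical illustration of how a probabilistic subcortical atlas of this kind might be used, the sketch below extracts the mean fMRI time series from one atlas-defined region. The file names are invented, and it is assumed the atlas and functional images share the same space.

import numpy as np
import nibabel as nib

# Hypothetical file names; both images assumed to be in the same space.
atlas = nib.load("atlas_inferior-colliculus_probseg.nii.gz")
bold = nib.load("sub-01_task-listening_bold.nii.gz")

prob = atlas.get_fdata()    # voxelwise probability of belonging to the region
data = bold.get_fdata()     # 4-D array: x, y, z, time

mask = prob > 0.5                          # threshold the probabilistic label
roi_timeseries = data[mask].mean(axis=0)   # average over region voxels per volume
print(roi_timeseries.shape)                # (n_timepoints,)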

Hearing and Donuts (Brain and Bagels) Seminar

Xin Huang, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am

Natural scenes are often complex and contain multiple entities. Visual segmentation refers to the processes of partitioning visual scenes into distinct objects and surfaces, as well as segregating figural objects from their background. Segmentation is crucial for scene interpretation, object recognition, and visually guided action. However, it is still unclear how the brain represents and segregates multiple stimuli. We hypothesize that the visual system exploits statistical regularities in natural scenes to represent multiple stimuli and facilitate segmentation. To test this hypothesis, we characterized the natural scene statistics of motion and depth, two cues that are potent for segmentation. We found that the figural region tended to move faster and more coherently, and tended to be nearer in depth, than the background region. In neurophysiological experiments, we recorded the activity of neurons in cortical area MT, a crucial hub for processing visual motion and depth information. We found that the responses of MT neurons to multiple stimuli within their receptive fields tended to be biased toward the stimulus component that moved at a faster speed, more coherently, and at a nearer depth. Previous theoretical studies suggest that mixing multiple stimuli with different weights (rather than equal weights) can enhance the ability to encode multiple stimuli in neuronal populations. Our neural results revealed that MT neurons indeed incorporate this strategy, but in an interesting way: the response biases, and hence the response weights for different stimuli, reflect optimization for performing behavioral tasks such as figure-ground segregation, given our measured natural scene statistics of the figure and ground regions. Together, these results enrich our understanding of the neural representation and segregation of multiple visual stimuli, and demonstrate that neural coding can be optimized for essential behavioral tasks rather than solely for preserving information and using resources efficiently.
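The weighted-mixing idea in the preceding paragraph can be captured in a toy model. The sketch below is an illustration only, not the speaker's model; the weights and firing rates are invented. The response to two stimuli is a weighted average of the responses to each alone, with the weight biased toward the faster, nearer, figure-like component.

def mixed_response(r_figure, r_ground, w_figure=0.7):
    """Weighted average of single-stimulus responses; w_figure > 0.5
    encodes a bias toward the figure-like stimulus component."""
    return w_figure * r_figure + (1.0 - w_figure) * r_ground

r_fast_near = 40.0   # spikes/s to the faster, nearer stimulus alone
r_slow_far = 10.0    # spikes/s to the slower, farther stimulus alone

print(mixed_response(r_fast_near, r_slow_far))         # biased mixing: 31.0
print(mixed_response(r_fast_near, r_slow_far, 0.5))    # equal weights: 25.0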

Hearing and Donuts (Brain and Bagels) Seminar

Karen B. Schloss, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am

Visual communication is fundamental to how humans share information, from weather patterns to disease prevalence to the latest scientific discoveries. When people attempt to interpret information visualizations, such as graphs, maps, diagrams, and signage, they are faced with the task of ascribing meaning to perceptual features (perceptual semantics). Sometimes, visualization designs include legends, labels, or captions that help determine perceptual semantics in the context of the visualization. However, people also have expectations, called inferred mappings, about how perceptual features will map to concepts, and they find it more difficult to interpret visualizations that violate those expectations. Traditionally, studies on inferred mappings distinguished factors relevant for visualizations of categorical vs. continuous information. In this talk, I will discuss recent work that unites these two domains within a single framework of assignment inference: the process by which people infer mappings between perceptual features and concepts represented in encoding systems. I will begin by presenting evidence that observers infer globally optimal assignments by maximizing the “merit,” or “goodness,” of assignments between perceptual features and concepts, with an emphasis on color semantics. I will then discuss factors that contribute to merit in assignment inference and explain how we can model the combination of multiple (sometimes competing) sources of merit to predict human judgments. This work has increased our understanding of how people ascribe meaning to perceptual features, which can be used to make visual communication more effective and efficient.
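Globally optimal assignment of the kind described above can be computed exactly with the Hungarian algorithm. The sketch below uses invented merit values purely for illustration; it is not the speaker's model, only a demonstration that maximizing total merit over one-to-one assignments is a standard, tractable computation.

import numpy as np
from scipy.optimize import linear_sum_assignment

concepts = ["banana", "blueberry", "grape"]   # hypothetical concepts
colors = ["yellow", "blue", "purple"]         # hypothetical color options

# merit[i, j]: goodness of assigning colors[j] to concepts[i] (invented values)
merit = np.array([
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.5],
    [0.3, 0.4, 0.7],
])

rows, cols = linear_sum_assignment(merit, maximize=True)  # Hungarian algorithm
for i, j in zip(rows, cols):
    print(f"{concepts[i]} -> {colors[j]} (merit {merit[i, j]:.1f})")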

Hearing and Donuts (Brain and Bagels) Seminar

G. Nike Gnanateja, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am

This talk will explore the use of naturalistic, continuous-speech paradigms in understanding speech, language, and hearing processes. Unlike traditional isolated-word or sentence tasks, continuous speech provides a more ecologically valid approach to studying how humans process spoken language. I will discuss how this methodology reveals the dynamic interplay between acoustic, linguistic, and cognitive processes during real-world communication. Recent advances in neuroimaging and computational techniques have enabled researchers to track neural responses to continuous speech with high temporal precision. This has led to new insights into how the brain segments and integrates information across multiple timescales, from phonemes to discourse-level structures. I will present evidence showing how continuous speech paradigms have enhanced our understanding of speech perception in both normal-hearing listeners and clinical populations, highlighting specific applications in 1) hearing loss, 2) auditory and language development, and 3) acquired language disorders. I will conclude by discussing future directions and methodological considerations for implementing continuous speech paradigms in research and clinical settings. This approach promises to bridge the gap between laboratory findings and real-world speech processing.
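One widely used computational approach for such data (offered here as background, not necessarily the speaker's method) is the temporal response function: the continuous neural signal is modeled as a convolution of a speech feature, such as the envelope, with an unknown kernel estimated by ridge regression over time-lagged copies of the stimulus. A minimal self-contained sketch on simulated data:

import numpy as np

def lagged_design(stim, n_lags):
    """Stack time-lagged copies of a 1-D stimulus into a design matrix."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[: n - lag]
    return X

def fit_trf(stim, neural, n_lags=32, ridge=1.0):
    """Closed-form ridge regression: w = (X'X + aI)^(-1) X'y."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ neural)

# Synthetic check: recover a known kernel from simulated data.
rng = np.random.default_rng(0)
envelope = rng.standard_normal(5000)              # stand-in speech envelope
true_trf = np.exp(-np.arange(32) / 8.0)           # invented kernel
eeg = lagged_design(envelope, 32) @ true_trf + 0.1 * rng.standard_normal(5000)
print(np.corrcoef(true_trf, fit_trf(envelope, eeg))[0, 1])  # close to 1.0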

Hearing and Donuts (Brain and Bagels) Seminar

Monita Chatterjee, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am

Cochlear implant technology provides many children with severe-to-profound hearing loss the ability to hear sounds and acquire spoken language. However, some aspects of sound, such as the pitch of voices or musical instruments, are not conveyed well through the device. This leads to deficits in the communication of pitch-dominant information in speech, such as question/statement contrasts, speaker identification, lexical tones, and emotional information. In this presentation, I will describe our research team’s work on how school-age children with cochlear implants perceive spoken emotions, on the predictors of individual variability in their outcomes, and on the links between their perception and production of emotions in speech.
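Why fine pitch information survives the device so poorly can be seen in the standard noise-vocoder simulation of cochlear implant processing, sketched below: the signal is split into a few bands, each band's slow amplitude envelope is kept, and the temporal fine structure that carries voice pitch is replaced with noise. All parameters here are illustrative, not values from the speaker's studies.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Keep each band's envelope, discard its fine structure, and use
    the envelope to modulate band-limited noise (a CI simulation)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    env_sos = butter(2, 300.0, btype="lowpass", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        envelope = sosfiltfilt(env_sos, np.abs(hilbert(band)))   # slow contour
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))  # band noise
        out += envelope * carrier
    return out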

Hearing and Donuts (Brain and Bagels) Seminar

Joseph Roche, M.D.
Waisman Center
@ 8:30 am - 9:30 am

Congenital cytomegalovirus (cCMV) is a long-established etiology of congenital hearing loss with a wide spectrum of expression, ranging from typical or near-typical hearing abilities to profound hearing loss. This presentation will review the current state of cCMV evaluation and treatment, including work at UW-Madison and UW Health investigating the natural history of how hearing ability evolves and the implications of that natural history for treatment.

Hearing and Donuts (Brain and Bagels) Seminar

Dhatri Devaraju, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am

Speech production errors exhibited by individuals who stutter may result from underlying deficits in auditory feedback monitoring, as posited by internal models of sensorimotor processing. These deficits can lead to faulty cortical representations of auditory speech as the disorder progresses through its developmental course. Individuals who stutter also exhibit deficits in temporal processing, which is vital for speech perception. Temporal fine structure is especially important for speech perception in the presence of noise, because noise smears the temporal envelope of speech. Thus, deficits in temporal processing can manifest as impaired speech perception in noise in these individuals. In this talk, I will discuss a series of behavioral and electrophysiological (frequency-following response) studies conducted to understand temporal processing in adults who stutter. These findings highlight how temporal processing and speech perception in noise are affected in this population, warranting further exploration of these processes essential to communication.
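The envelope/fine-structure distinction above is commonly made precise with the Hilbert transform: the magnitude of the analytic signal is the temporal envelope, and the cosine of its phase is the temporal fine structure. A minimal sketch on a synthetic signal (the signal itself is invented for illustration):

import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(fs) / fs
# Stand-in signal: a 1 kHz carrier with a slow 4 Hz amplitude modulation.
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

analytic = hilbert(x)
envelope = np.abs(analytic)        # slow amplitude contour (smeared by noise)
tfs = np.cos(np.angle(analytic))   # rapid carrier fluctuations
print(envelope.min(), envelope.max())  # tracks the 4 Hz modulation depth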

Hearing and Donuts (Brain and Bagels) Seminar

Erik Jorgensen, Au.D., Ph.D.
Waisman Center
@ 8:30 am - 9:30 am

Hearing loss is associated with negative social-emotional health outcomes, particularly increased loneliness and depression. The reasons for this are unclear. A popular theory is that hearing loss leads to avoidance of acoustically challenging environments as well as disengagement within those environments; over time, this social isolation can lead to increased feelings of loneliness and depression. Recent work in our lab has focused on empirically testing this theory and developing a general framework around hearing-related behaviors and their specific connections to social-emotional health outcomes. In this talk, I will first present results from a study that supports a moderated mediation model linking speech-in-noise exposure, speech perception in noise, loneliness, and depression among young adults with audiometrically normal hearing. Then, I will discuss recent evidence that hearing aid use among older adults with hearing loss may be associated with increased social isolation and poorer social-emotional health outcomes. The talk will conclude by discussing clinical implications for auditory rehabilitation and the need for a broader conceptualization of audiologic intervention outcomes.
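For readers unfamiliar with the statistical structure of a moderated mediation model, the sketch below fits one plausible specification on simulated data. The variable roles follow the abstract, but which path carries the moderation, and every coefficient, are assumptions made purely for illustration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
exposure = rng.standard_normal(n)      # speech-in-noise exposure
perception = rng.standard_normal(n)    # speech perception in noise
loneliness = (0.5 * exposure - 0.4 * perception
              - 0.3 * exposure * perception + rng.standard_normal(n))
depression = 0.6 * loneliness + rng.standard_normal(n)

df = pd.DataFrame(dict(exposure=exposure, perception=perception,
                       loneliness=loneliness, depression=depression))

# Mediator model: perception moderates the exposure -> loneliness path.
mediator = smf.ols("loneliness ~ exposure * perception", data=df).fit()
# Outcome model: loneliness carries the indirect effect to depression.
outcome = smf.ols("depression ~ loneliness + exposure", data=df).fit()
print(mediator.params, outcome.params, sep="\n\n")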
