Hearing and Donuts (Brain and Bagels) Seminar

Sara Misurelli, Ph.D., Au.D., CCC-A
Waisman Center
@ 8:30 am - 9:30 am
Learn more about the Hearing and Donuts Seminar Series

Hearing loss (HL) affects countless individuals, resulting in significant adverse outcomes, including social isolation, diminished ability to work, and increased anxiety and depression. Many of these individuals do not speak English as their primary language, and for them, access to equitable healthcare can be extremely challenging. It is well established that treating HL, such as through surgical interventions or amplification (e.g., hearing aids), benefits both the individual and the healthcare system. In fact, the World Health Organization estimates that unaddressed HL carries a global economic cost of approximately $750 billion annually. Audiologic hearing evaluations, which include word recognition tests (WRTs), are essential for the diagnosis and treatment of HL. However, WRTs are available primarily in English, leaving many non-English speakers without the tools needed to fully understand, treat, and manage their hearing health. This presentation will cover past, current, and future projects aimed at making hearing health care more accessible for all.

Hearing and Donuts (Brain and Bagels) Seminar

Bobby Gibbs, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am
Learn more about the Hearing and Donuts Seminar Series

Understanding speech in noise remains a top concern when listening through a cochlear implant (CI). Speech enhancement algorithms have shown limited success in noise. Improved noise mitigation requires a better understanding of what acoustic information is prioritized when listening through a CI in noise, and how acoustic utilization in noise is affected by the fidelity of initial neural encoding (the electrode-to-neural interface). The “bubble noise” paradigm provides an opportunity to test the hypothesis that acoustic utilization in noise depends on the electrode-to-neural interface. Bubbles are random regions of attenuation in an otherwise unintelligible masker that provide random glimpses of the phonemes. This talk will present analyses of bubble noise data collected while listeners identified consonants in noise. Test conditions involved vocoded stimuli with broad spread of excitation, vocoded stimuli with narrow spread of excitation, and unprocessed stimuli. I will present analyses of time-frequency importance functions (derived by correlating bubble regions with correct responses), error patterns, and acoustic intelligibility prediction metrics. The converging evidence from these analyses provides an initial roadmap for how CI speech enhancement in noise might be better tailored.
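
The correlation step behind a time-frequency importance function lends itself to a compact illustration. Below is a minimal Python sketch of the general idea: bubble masks are correlated, point by point in time-frequency, with whether the listener responded correctly. The array shapes and the simulated trial data are illustrative assumptions, not the speaker's actual pipeline or data.

```python
import numpy as np

# Minimal sketch of a time-frequency importance function (TFIF) in the
# "bubble noise" paradigm. All shapes and data here are simulated.
rng = np.random.default_rng(0)
n_trials, n_freq, n_time = 500, 64, 100

# masks[k] is the attenuation pattern ("bubbles") on trial k:
# True where the masker was attenuated (a glimpse), False elsewhere.
masks = rng.random((n_trials, n_freq, n_time)) < 0.15

# correct[k] is 1 if the listener identified the consonant on trial k.
correct = rng.integers(0, 2, n_trials)

# Point-biserial correlation between "glimpse present" and "response
# correct" at each time-frequency point: positive values mark regions
# whose audibility predicts correct identification.
m = masks.reshape(n_trials, -1).astype(float)
c = (correct - correct.mean()) / correct.std()
m = (m - m.mean(axis=0)) / (m.std(axis=0) + 1e-12)
tfif = (m * c[:, None]).mean(axis=0).reshape(n_freq, n_time)
```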

Hearing and Donuts (Brain and Bagels) Seminar

Bikalpa Ghimire
Waisman Center
@ 8:30 am - 9:30 am
Learn more about the Hearing and Donuts Seminar Series

The visual perception of our natural environment is intricately structured, comprising meaningful objects, surfaces, and the relationships among them. At the early stages of visual processing, the neural representation of the visual world is local. Through processes known as perceptual organization, local elements associated with the same object or surface are integrated, while distinct entities are segregated from one another. Motion provides an important cue for such grouping and segmentation. One particularly difficult problem in perceptual organization is how spatially overlapping stimuli moving in different directions are segregated to give rise to the perception of overlapping surfaces moving transparently against each other, a phenomenon referred to as motion transparency. In this talk, I will delve into the neural mechanisms underlying motion transparency, emphasizing the interplay between two visual cortical areas important for visual motion processing, the primary visual cortex (V1) and the middle temporal area (MT), which operate at different spatial scales.
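
As a concrete illustration of the stimulus class at issue, here is a minimal Python sketch that generates two spatially interleaved random-dot fields drifting in different directions, the classic way to evoke transparent motion. Dot counts, speeds, and directions are illustrative assumptions, not the parameters of the experiments discussed in the talk.

```python
import numpy as np

# Toy transparent-motion stimulus: two overlapping random-dot fields.
rng = np.random.default_rng(1)
n_dots, n_frames, speed = 200, 60, 0.02

# Start both fields at random positions in a unit-square aperture.
dots_a = rng.random((n_dots, 2))
dots_b = rng.random((n_dots, 2))

# Field A drifts rightward, field B drifts upward; because the dots
# overlap in space, segregating the two surfaces must rely on the
# motion cue alone.
vel_a = np.array([speed, 0.0])
vel_b = np.array([0.0, speed])

frames = []
for _ in range(n_frames):
    dots_a = (dots_a + vel_a) % 1.0  # wrap around the aperture
    dots_b = (dots_b + vel_b) % 1.0
    frames.append(np.vstack([dots_a, dots_b]))
```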

Hearing and Donuts (Brain and Bagels) Seminar

Didulani Dantanarayana, M.Sc. Audiology
Waisman Center
@ 8:30 am - 9:30 am
Learn more about the Hearing and Donuts Seminar Series

Children show significant variability in outcome measures, including speech understanding in quiet and in noise. Children with hearing loss show even greater variability, and numerous factors can contribute to it, including auditory experience prior to the onset of deafness and implantation, as well as the downstream effects of deafness on neurocognitive abilities, neural health, and the integrity of the auditory system. Much of the research to date on speech understanding in children with and without hearing loss has used standardized tests that are high in semantic context. However, the semantic context of speech may influence speech understanding in complex listening situations. Therefore, to fully understand how children use semantic context to recognize speech in complex auditory environments, the sentence materials used in this study were either semantically coherent or semantically anomalous. To assess the extent to which children benefit from spatial separation of target speech from background noise, spatial release from masking (SRM) was also measured.
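
For readers unfamiliar with the measure, SRM is conventionally quantified as the improvement in speech reception threshold (SRT) when target and masker are spatially separated rather than co-located. The short Python sketch below shows that arithmetic with made-up SRT values; it is illustrative only and not drawn from this study's data.

```python
# Spatial release from masking (SRM): the dB improvement in speech
# reception threshold (SRT) gained by spatially separating the target
# speech from the background noise.

def spatial_release_from_masking(srt_colocated_db: float,
                                 srt_separated_db: float) -> float:
    """Positive SRM means the listener benefited from spatial separation."""
    return srt_colocated_db - srt_separated_db

# Hypothetical example: SRT of -2 dB co-located vs -8 dB separated
# yields 6 dB of spatial release.
print(spatial_release_from_masking(-2.0, -8.0))  # 6.0
```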

Hearing and Donuts (Brain and Bagels) Seminar

Agudemu Borjigin, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am
Learn more about the Hearing and Donuts Seminar Series

Cochlear implants (CIs) are the most successful neural prostheses, restoring hearing for over a million individuals with severe to profound sensorineural hearing loss and enabling most recipients to achieve satisfactory speech intelligibility in quiet settings. However, CI users continue to face significant challenges in understanding speech in noisy, everyday environments. My primary research focuses on identifying the factors contributing to these listening difficulties and developing solutions to address them. Specifically, I have been investigating the potential benefits of incorporating temporal fine structure (TFS) encoding into CI sound coding strategies. TFS, a fundamental component of all sounds, is currently absent from most CI sound coding strategies. Beyond improving sound coding strategies by introducing TFS encoding, I have also explored deep learning-based approaches to eliminate noise interference before it reaches the sound coding stage of CIs. My work on enhancing both the sound coding and pre-processing stages has demonstrated significant improvements in the auditory capabilities of CI users. These advancements hold promise for informing the next generation of CI technology, aiming to provide CI users with a better auditory experience, particularly in complex and noisy listening environments.
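
To make the envelope/TFS distinction concrete, the following Python sketch decomposes a synthetic amplitude-modulated tone into its Hilbert envelope (the slow modulation that most CI strategies transmit) and its temporal fine structure (the rapid carrier that most strategies discard). Real CI processing applies this within multiple band-pass channels; the single-band synthetic signal here is an illustrative assumption.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic signal: a 1 kHz tone with a 4 Hz amplitude modulation.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))

# Hilbert decomposition into envelope and temporal fine structure (TFS).
analytic = hilbert(x)
envelope = np.abs(analytic)        # slow amplitude modulation: kept by most CIs
tfs = np.cos(np.angle(analytic))   # rapid carrier structure: typically discarded
```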

Hearing and Donuts (Brain and Bagels) Seminar

Michael Roberts, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am
Learn more about the Hearing and Donuts Seminar Series

The inferior colliculus (IC) is the midbrain hub of the central auditory system and an important site for computations related to speech processing and sound localization. With more than five times as many neurons as the lower auditory brainstem, the computational potential of the IC is immense, but the cellular and synaptic mechanisms underlying computations in the IC have remained largely unknown. Using a multifaceted approach, we have discovered several of the first molecularly identifiable neuron types in the IC. In addition, the ability to identify and manipulate specific IC neuron types using genetic tools has enabled us to uncover several new mechanisms for how local IC circuits shape the processing of ascending auditory inputs. This seminar will address the challenge of identifying neuron types in the IC and our most recent discoveries of candidate markers for neuron types. It will then focus on several mechanisms that contribute to circuit operations in the IC, with an emphasis on the prevalence of feedforward and recurrent connections. Together, the results will provide new insights into the varied ways that IC circuits shape auditory processing.
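
To illustrate the abstract distinction between the feedforward and recurrent motifs the talk emphasizes, here is a toy rate-model sketch in Python. The units, weights, and dynamics are invented for illustration and do not model actual IC circuitry.

```python
import numpy as np

# Toy rate model: one ascending input unit drives one IC-like unit
# with feedforward weight w_ff; the IC-like unit also feeds back onto
# itself with recurrent weight w_rec.

def simulate(w_ff: float, w_rec: float, steps: int = 50) -> np.ndarray:
    ascending = np.ones(steps)  # sustained ascending input
    rate = np.zeros(steps)
    for t in range(1, steps):
        drive = w_ff * ascending[t] + w_rec * rate[t - 1]
        rate[t] = max(0.0, np.tanh(drive))  # rectified, saturating firing rate
    return rate

purely_feedforward = simulate(w_ff=1.0, w_rec=0.0)
with_recurrence = simulate(w_ff=1.0, w_rec=0.5)  # recurrence boosts and prolongs the response
```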

Hearing and Donuts (Brain and Bagels) Seminar

Matthew Banks, Ph.D.
Waisman Center
@ 8:30 am - 9:30 am
Learn more about the Hearing and Donuts Seminar Series

The sense of self is a dynamic and flexible multiplicity of interacting processes, including, for example, a core self (“I am”), an embodied self (“I am an agent with a body that feels and senses”), and a narrative self (“I have traits and an identity that persist through time”). How these processes are integrated during typical waking consciousness, and how this integration is disrupted during sleep, under anesthesia, and in psychiatric and neurodegenerative disorders, remains unclear. We are investigating the neural correlates of the self in neurosurgical patients using electrophysiological recordings combined with behavioral tasks. We will use these data to test models of how the self is integrated during loss and recovery of consciousness in sleep and anesthesia.
