Hearing and Donuts (Brain and Bagels) Seminar
March 19 @ 8:30 am - 9:30 am
Presenter: Bob McMurray, Ph.D., Director, Mechanisms of Audio-Visual Categorization Lab, Department of Psychological & Brain Sciences, University of Iowa
Topic: Decoding the Neural Dynamics of Spoken Word Recognition
There is unparalleled consensus on the mechanisms of spoken word recognition: from the earliest moments of the input, listeners activate multiple words that compete for recognition. This has been shown with psycholinguistic measures such as the Visual World Paradigm (VWP). However, we have little understanding of the neural basis of lexical competition. No ERP component directly reflects this competition process, and while fMRI and work with brain-damaged populations have revealed a network of structures involved in word recognition, they have not answered the fundamental question of where competition takes place.

Here, I present new work applying machine learning to EEG to decode the strength with which lexical candidates compete during real-time word recognition. We build on a recent electrocorticography (ECoG) paradigm, recording from the surface of the brain in awake humans undergoing treatment for epilepsy. We trained a support vector machine to identify which word was heard on each trial in successive 25 msec increments and analyzed the patterns of confusion over time. Results mirrored empirical findings (from the VWP) and computational models: early on, the decoder was equally likely to report the target (e.g., dinosaur) or a similar-sounding competitor (dynamite), but by around 500 msec competitors were suppressed. This was seen only in auditory and phonological areas, not in higher-level language areas, though even in auditory cortex we see evidence for memory-like processes.

We build from this to an EEG analogue that can potentially be run on typical individuals, children, or people with communicative impairments. We used a similar set of stimuli, this time including both words and non-words, while recording 64-channel EEG from 16 listeners. A similar classification scheme was applied, and results mirrored the ECoG, tracking the smooth timecourse of competition and integration.
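The time-windowed decoding approach described above can be sketched in outline. The following is a minimal, hypothetical illustration using synthetic data: a simple nearest-centroid classifier stands in for the study's support vector machine, and all dimensions, time windows, and the simulated "lexical" signal are invented for demonstration, not taken from the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 8, 24       # 24 windows of 25 msec = 600 msec epoch
labels = np.repeat([0, 1], n_trials // 2)       # two hypothetical "words"

# Synthetic recordings: noise everywhere, plus a class-specific spatial
# pattern that emerges only after ~300 msec (window index 12).
X = rng.normal(size=(n_trials, n_channels, n_times))
pattern = rng.normal(size=n_channels)
X[labels == 1, :, 12:] += 3.0 * pattern[:, None]

def decode_window(t, n_train=60):
    """Decode word identity from one 25 msec window via nearest centroid."""
    order = rng.permutation(n_trials)
    tr, te = order[:n_train], order[n_train:]
    c0 = X[tr][labels[tr] == 0, :, t].mean(axis=0)   # class-0 training centroid
    c1 = X[tr][labels[tr] == 1, :, t].mean(axis=0)   # class-1 training centroid
    d0 = np.linalg.norm(X[te, :, t] - c0, axis=1)
    d1 = np.linalg.norm(X[te, :, t] - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return (pred == labels[te]).mean()

early_acc = decode_window(2)    # ~50 msec: signal absent, near chance
late_acc = decode_window(18)    # ~450 msec: signal present, well above chance
```

Running the decoder at each window and plotting accuracy (or the full confusion matrix) over time yields the kind of competition timecourse the abstract describes, with target and competitor initially confusable and the competitor suppressed later in the epoch.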
This demonstrates the value of dynamic machine-learning approaches applied to electrophysiology for understanding the dynamics of language processing.