Computational Architecture of Speech Comprehension with Laura Gwilliams
Join us for our first Listen. Engage. Connect. event of winter quarter, "Computational Architecture of Speech Comprehension" with Laura Gwilliams on Thursday, January 23 at 3pm in Sequoia 207.
Abstract:
How does the human brain transform sounds into meaning with such remarkable speed and accuracy? My research aims to uncover this process by examining how the brain represents and processes speech, using advanced brain recording tools with high temporal and spatial resolution. I will discuss how the brain encodes auditory and linguistic information, how it reconciles the rapid pace of speech with neural processing times, and how information is organized across different timescales and brain regions to enable efficient processing. By integrating cognitive science, machine learning, and neuroscience, this work contributes to the development of computational models that are both biologically informed and capable of mimicking human language understanding.
Bio:
Laura Gwilliams is jointly appointed across Stanford Psychology, the Wu Tsai Neurosciences Institute, and Stanford Data Science. Her work focuses on understanding the neural representations and operations that give rise to speech comprehension in the human brain. To do so, she brings together insights from neuroscience, linguistics, and machine learning, and takes advantage of recording techniques that operate at distinct spatial scales (MEG, ECoG, and Neuropixels).