Gašper Beguš is an Assistant Professor in the Department of Linguistics at UC Berkeley. Before coming to Berkeley, he was an Assistant Professor at the University of Washington, and before that he earned his Ph.D. from Harvard. His research focuses on developing deep learning models for speech data. More specifically, he trains models to learn representations of spoken words from raw audio inputs. He combines machine learning and statistical modeling with neuroimaging and behavioral experiments to better understand how neural networks learn internal representations of speech and how humans learn to speak. He has worked and published on the sound systems of various language families, including Indo-European, Caucasian, and Austronesian.
In a recent set of papers (here and here), he proposes that language acquisition can be modeled with Generative Adversarial Networks and introduces a technique for exploring the relationship between learned representations and the latent space in deep convolutional networks.
Beguš directs the Berkeley Speech and Computation Lab.