From Acoustics To Phonology
When hearing speech, humans rapidly and robustly map the acoustic signal onto the sounds of their language, yet extracting the features needed to understand speech from the raw signal is not an easy task. By building computational models and comparing them to brain data, we investigate the neural mechanisms underlying this system. Starting from a basic sparse coding model, we are developing evaluation methods and metrics to benchmark the performance of these models. This project is the main focus of my PhD work.
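As a rough illustration of the kind of model involved, a sparse coding model represents an input (e.g., a frame of the speech signal) as a sparse combination of dictionary elements. The sketch below is a generic, minimal example, not the project's actual model: it infers a sparse code with ISTA (iterative soft thresholding) on a random synthetic dictionary, with all names and parameters chosen for illustration.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.01, n_iter=200):
    """Infer a sparse code a minimizing ||x - D @ a||^2 / 2 + lam * ||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the quadratic term's gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the reconstruction term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold (promotes sparsity)
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.0, -0.5, 0.8]     # a 3-sparse ground-truth code
x = D @ a_true                             # synthetic "signal" to encode
a_hat = ista_sparse_code(x, D)
```

In a speech setting, `x` would be an acoustic feature vector and the dictionary `D` would itself be learned from data rather than drawn at random.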
The Language Familiarity Effect In Infancy
Human listeners are better at telling apart speakers of their native language than speakers of other languages, a phenomenon known as the language familiarity effect. While most accounts of this effect in adults require abstract phonological knowledge or comprehension of the speech itself, the effect has also been observed in infants as young as 4.5 months of age, who are unlikely to have such sophisticated knowledge. Using algorithms from unsupervised machine learning and automatic speech recognition, we are building models that demonstrate how children could show this effect without any sophisticated linguistic knowledge.
Thorburn, C., Feldman, N. H., & Schatz, T. (2019). "A quantitative model of the language familiarity effect in infancy." Proceedings of the Conference on Cognitive Computational Neuroscience.