Over the first year of life, the vocalizations infants produce change remarkably, as do the ways that infants use those vocalizations to communicate. My lab’s research seeks to document how this dramatic vocal learning unfolds and to understand the neural, social, and physical mechanisms involved. On the human side, we focus on using long-form audio recordings of children’s vocalizations and auditory environments, collected “in the wild”. Currently we are conducting a project that views infants and the adults in their environment as foraging agents, searching acoustic space for social responses. I’ll describe the approach we are taking, our results so far, and some of the main challenges for future work.

On the computational modeling side, my collaborators and I simulate how neural, mechanical, and social mechanisms jointly contribute to infant vocal learning. A main finding from this line of work is that reward-modulated Hebbian learning in motor cortex may play an important role in speech development. Moreover, spiking neural network models exhibit multiscale dynamics characteristic of complex systems more generally. At short timescales, oscillations in neural activity allow for the production of dynamic behaviors, including movements of the vocal tract that can produce syllabic babbling. A future question is whether these models generate clustered behavior at longer timescales that is similar to what we observe over the course of a day in our human data.
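For readers unfamiliar with the term, reward-modulated Hebbian learning is a "three-factor" rule: synaptic change depends on presynaptic activity, postsynaptic activity, and a global reward signal (here, analogous to a caregiver's response to a vocalization). The sketch below is purely illustrative and is not the spiking model from the talk; all names, values, and the toy reward signal are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
weights = rng.normal(0.0, 0.1, size=(n_out, n_in))
reward_baseline = 0.0  # running estimate of expected reward


def step(pre, reward, baseline, w, lr=0.01, decay=0.9):
    """One reward-modulated Hebbian update (illustrative three-factor rule)."""
    post = np.tanh(w @ pre)                 # postsynaptic activity
    rpe = reward - baseline                 # reward prediction error
    w = w + lr * rpe * np.outer(post, pre)  # pre x post x reward update
    baseline = decay * baseline + (1 - decay) * reward
    return w, baseline


for _ in range(100):
    pre = rng.random(n_in)
    # Hypothetical stand-in for a social response to the vocalization:
    reward = float(pre.mean() > 0.5)
    weights, reward_baseline = step(pre, reward, reward_baseline, weights)
```

Because the reward term gates the Hebbian product, correlations between motor activity and its outcomes are only consolidated when they are followed by (better-than-expected) social feedback, which is the intuition behind applying such rules to infant vocal learning.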
Phonetics Seminar: Anne Warlaumont, UCLA Department of Communication, “The discovery of speech vocalizations by human infants and computational models”
October 16, 2017 @ 4:00 pm - 6:00 pm