Ph.D. 2009, Northwestern Univ.
I am an Assistant Professor of Phonology in the UCLA Department of Linguistics.
How do we learn a language? This is the question that lights my soul on fire. To answer it, we must ask several more specific questions:
- What do we know when we know a language?
- How can we characterize this knowledge?
- How do we learn things in general?
- How is linguistic knowledge deployed when speaking and listening?
Most of my work has addressed these questions in a focused domain called phonotactic acquisition. Phonotactics is the study of possible sequences of speech sounds, and of how that knowledge is deployed during speech perception and production; phonotactic acquisition is the study of how infants and adults learn the phonotactics of their language.
For example, my dissertation proposed a computational learning model of phonotactic word segmentation. Word segmentation is the perceptual process by which we hear speech as a sequence of word-like objects (acoustically, speech does not contain breaks between words, as can easily be verified by listening to speech in an unfamiliar language). The basic idea is that infants figure out that certain sequences (like pd) do not occur within words, and so can infer a word boundary when these sequences occur.
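This basic idea can be sketched in a few lines of code. The sketch below is a toy illustration of the general principle, not the dissertation's actual model: it collects the two-sound sequences attested inside known words, then posits a word boundary wherever an adjacent pair never occurs word-internally. The lexicon and utterance are made-up examples.

```python
# Toy sketch of phonotactic word segmentation (not the dissertation model):
# unattested within-word sound sequences signal word boundaries.

def within_word_bigrams(words):
    """Collect every two-sound sequence that occurs inside a known word."""
    bigrams = set()
    for w in words:
        for a, b in zip(w, w[1:]):
            bigrams.add((a, b))
    return bigrams

def segment(utterance, bigrams):
    """Insert a boundary wherever an adjacent pair never occurs word-internally."""
    words, current = [], utterance[0]
    for a, b in zip(utterance, utterance[1:]):
        if (a, b) in bigrams:
            current += b
        else:
            words.append(current)  # unattested pair: infer a word boundary
            current = b
    words.append(current)
    return words

lexicon = ["lamp", "dog", "cat"]  # hypothetical known words
bigrams = within_word_bigrams(lexicon)
# "pd" never occurs inside a word above, so a boundary is inferred:
print(segment("lampdog", bigrams))  # → ['lamp', 'dog']
```

A real learner, of course, works from probabilistic evidence rather than a clean lexicon, but the categorical version above captures the core inference.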
Another recent project, with Ingrid Normann, focused on the tendency of native Spanish speakers to produce extra vowels when speaking English, e.g. snob → esnob. We collected novel phonetic evidence in support of the hypothesis that this behavior is driven by intrusion from the syllable structure of Spanish.