Colloquium: Xin Xie
Location: Haines Hall 220
What is “adapted” in adaptive speech perception?
The acoustic-phonetic realization of the same linguistic categories (e.g., phonemes, syllables, or words) can vary considerably both within and across talkers. This variability poses a challenge for listeners, who must perceive speech accurately despite these changes. Empirical data suggest that listeners adapt to cross-talker variability, but exactly how this adaptivity is achieved remains unknown. Three hypotheses have been proposed: (1) low-level, pre-linguistic signal normalization, (2) changes in linguistic representations, or (3) changes in post-perceptual decision-making. However, no study has directly compared these hypotheses.
I will present a computational framework for adaptive speech perception (ASP) that implements all three hypotheses and use it to derive predictions for perception experiments. We find that the signature results of influential experimental paradigms do not distinguish among the three hypotheses, highlighting the need for new research methods. I will then discuss new approaches for investigating how these mechanisms—together and separately—underlie the impressive adaptivity of human speech perception.