Bob Frank (Yale) – computational, syntax
Apr 1, 2022 @ 11:00 am - 1:00 pm
Haines 118
Linguistic Productivity in Neural Networks: Representation and Inductive Bias
A fundamental fact about human language is its productivity: speakers are able to understand and produce forms different from those that they have previously encountered. Linguists typically account for this fact by positing abstract grammars that characterize structural representations for an infinity of possible forms. At the same time, recent neural network models have achieved extraordinary levels of performance on practical NLP tasks without any explicit abstract grammar or structured representations. This remarkable success raises the question of whether these models do in fact exhibit productivity of the sort human speakers are capable of, even in the absence of abstract grammar.

In this talk, I will explore this question from two perspectives. First, I will discuss a line of work that investigates it in the context of the large pre-trained language models that have been at the forefront of contemporary NLP. We ask whether these models show evidence of productive knowledge of selectional restrictions that cut across variation in syntactic context (deriving from argument structure alternations like dative shift or syntactic “transformations” like passivization). For the second part of the talk, I will drill down into the properties of the neural network models themselves and consider what kinds of biases they exhibit when they learn: how do they perform in the face of ambiguous data, and how do their biases differ from human generalization? To conclude, I will consider the viability of different approaches to fostering linguistic abstraction in these models, by modifying the structure of the model itself or the way in which training takes place.
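To make the first line of work concrete, here is a minimal sketch of the kind of probe the abstract describes: scoring minimal pairs that violate or respect a selectional restriction across a syntactic alternation. It assumes a GPT-2-style causal language model accessed through the HuggingFace transformers library; the model choice, sentence pairs, and scoring method are illustrative assumptions, not the speaker's actual materials.

```python
# Minimal sketch of a selectional-restriction probe (illustrative only).
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Sum of token log-probabilities of the sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the mean cross-entropy over
        # the (seq_len - 1) predicted tokens; rescale to a summed log-prob.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# A selectional restriction ("devour" wants an animate eater) tested in two
# syntactic frames (active and passive). Productive knowledge of the
# restriction should prefer the well-formed member of the pair in BOTH frames,
# not just in the frame most frequent in training data.
pairs = [
    ("The dog devoured the steak.", "The idea devoured the steak."),
    ("The steak was devoured by the dog.", "The steak was devoured by the idea."),
]
for good, bad in pairs:
    margin = sentence_log_prob(good) - sentence_log_prob(bad)
    print(f"{margin:+7.2f}   {good!r}  vs.  {bad!r}")
```

A positive margin in both frames is the pattern one would expect if the model's knowledge of the restriction generalizes across the alternation; a positive margin only in the active frame would suggest frame-bound, non-productive knowledge.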