Sharpening the empirical claims of generative syntax

There is often confusion or skepticism about just what empirical claims are being made by research in generative syntax. A given proposal about grammatical derivations and transformations is typically intended as a description of mental structures, but it is not always clear how points of debate in the literature bear on these mental/cognitive research goals. This course aims to show that one way to make this connection more concrete is via more explicitly formalized models of generative syntax.

We will begin by reviewing how grammars in general can be understood as cognitive hypotheses, using simpler and more frequently formalized context-free models as examples. We will then investigate ways in which the Minimalist Grammar formalism can be used to bring the same kind of explicitness, and hence the same kind of cognitive significance, to proposals set in the framework of modern generative grammar.

This course is a (slightly) updated version of my course at NASSLLI 2014.

Lecture slides:

- Part 1: Grammars and Cognitive Hypotheses
- Part 2: Minimalist Grammars (MGs)
- Part 3: MGs and MCFGs
- Part 4: Probabilities on MG Derivations
- Part 5: Learning and wrap-up

Related readings and links:

- On the MG formalism (due to Ed Stabler):
  - Stabler 1997 is the original source.
  - Stabler 2010 surveys a range of variants of the formalism, and shows that most are weakly equivalent.
  - Greg Kobele's 2006 dissertation gives a nice, slow introduction to the formalism in chapter 2, including the use of tuples of strings in place of derived trees, and an explicit semantics.
  - Hunter 2011 discusses the relationship between MGs and IMGs, drawing heavily on Stabler 2006.
  - See also Stabler's publications and a course he taught in 2012.
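To give a flavor of what the MG formalism makes explicit, here is a minimal sketch (my own toy encoding, not code from any of the readings above) of MG lexical items as string/feature-sequence pairs, where a feature `=x` selects something of category `x`. The particular lexical entries are hypothetical illustrations.

```python
# Toy MG lexicon: each item pairs a string with a list of features.
# "=x" means "select a constituent of category x"; a bare "x" is the
# item's own category. (A fuller MG would also have +f/-f movement
# features; they are omitted here for brevity.)
lexicon = [
    ("the",   ["=n", "d"]),   # determiner: selects an n, yields a d
    ("dog",   ["n"]),
    ("barks", ["=d", "v"]),   # verb: selects a d, yields a v
]

def can_merge(selector, selectee):
    """Merge is licensed when the selector's first feature is "=x"
    and the selectee's first feature is the matching category x."""
    f1, f2 = selector[1][0], selectee[1][0]
    return f1.startswith("=") and f1[1:] == f2

the, dog, barks = lexicon
print(can_merge(the, dog))    # True: "the" wants an n, "dog" is an n
print(can_merge(barks, the))  # False: "the" has not yet found its n
```

The point of the formalization is that questions like "can these two items combine?" receive mechanical answers from the feature calculus, rather than being settled informally.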

- On the surprisal/entropy reduction framework (due to John Hale):
  - Hale 2001 introduces the idea of surprisal as a complexity metric, using CFGs.
  - Hale 2003 introduces entropy reduction, still using CFGs.
  - Hale 2006 uses entropy reduction in combination with MGs. Yun et al. 2015 applies the same approach, with explanations (which grew out of this course) that may be clearer.
  - Levy 2008 shows a wide variety of effects predicted by surprisal, in combination with CFGs.
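The two complexity metrics above are easy to state concretely. As a rough sketch (with made-up probabilities, purely for illustration): the surprisal of a word is the negative log of its conditional probability given the preceding words, and entropy reduction is the drop in the comprehender's uncertainty about the whole sentence caused by hearing that word.

```python
import math

def surprisal(p):
    """Surprisal, in bits, of an event with probability p."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution
    given as a dict mapping outcomes to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical distributions over complete analyses of a sentence,
# before and after hearing some word (numbers invented for illustration).
before = {"s1": 0.5, "s2": 0.25, "s3": 0.25}
after  = {"s1": 0.8, "s2": 0.2}

print(surprisal(0.25))                             # 2.0 bits
print(max(0.0, entropy(before) - entropy(after)))  # ≈ 0.78 bits
```

In the framework of these readings, the distributions come from a probabilistic grammar (a CFG or, in Hale 2006, an MG) rather than being stipulated by hand; the grammar is what assigns probabilities to the possible continuations.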

- On the "smart parametrization" of probabilities on an MG, see Hunter and Dyer 2013.
- Various pieces of software for working with MGs.