Archive of past proseminars

Spring 2017 


Ling 252: On the Nature of Islands 
(Schutze)

I propose to begin this proseminar by laying out a buffet of topics that fall under the heading above and inviting the participants to select the ones that look most appetizing for us to pursue. I start with the disclaimer that I am by no means an expert on islands—far from it. I have a couple of little pockets of knowledge, but I hope everyone else will bring data and theoretical expertise to complement my own. Here is a sample of some of what you will find at the buffet table:

• What are the criteria for identifying something as an island, anyway?
Yes, we have definitions along the lines of “a constituent that is hard to extract from,” but in practice these constituents are often rather complex beasts in and of themselves, and we know from a couple of decades of processing literature that extraction (i.e., forming a long-distance dependency) is hard in and of itself. So people like Jon Sprouse have proposed that to warrant the label “island” (and hence the need for an island constraint), a constituent needs to be ‘harder to extract from than you would expect based on the nature of the constituent and the fact of doing extraction’. He operationalizes this definition as a statistical interaction in the acceptability ratings of a 2×2 set of sentence types. By this measure, according to experiments conducted to date, not everything people have claimed is an island comes out as an island, and there are intriguing crosslinguistic differences (e.g., English vs. Italian). What to make of that?
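
To make the factorial logic concrete, here is a minimal sketch of the differences-in-differences (DD) computation behind this 2×2 design, with invented ratings; the condition labels and numbers are illustrative, not Sprouse's materials or results:

```python
# A minimal sketch of the factorial "differences-in-differences" (DD) logic.
# Mean z-scored acceptability per condition, for the 2x2 design:
# Structure (island vs. non-island) x Dependency length (short vs. long).
# All numbers are made up for illustration.
ratings = {
    ("non_island", "short"): 0.9,
    ("non_island", "long"): 0.4,   # cost of the long-distance dependency alone
    ("island", "short"): 0.5,      # cost of the complex structure alone
    ("island", "long"): -0.9,      # both costs, plus (perhaps) an island effect
}

def dd_score(r):
    """How much worse is extraction from the island structure than the two
    independent costs (structure + extraction) would predict?"""
    cost_non_island = r[("non_island", "short")] - r[("non_island", "long")]
    cost_island = r[("island", "short")] - r[("island", "long")]
    return cost_island - cost_non_island

print(f"DD score: {dd_score(ratings):.2f}")  # 0.90 here: a superadditive effect
```

On this logic, a DD score near zero means the unacceptability is fully predicted by the two independent costs, while a reliably positive interaction is what earns the constituent the label “island”.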

• Relatedly, although island constraints are the poster children for UG constraints, there is massive (insufficiently acknowledged) cross-linguistic variation in which islands a given language enforces. (How) can you learn which islands your language tolerates violations of? Is there a notion of “parameter” that fits current theory that could capture such variation? Are ANY islands universally bad? The best candidate seems to be the Coordinate Structure Constraint, though even that has apparent exceptions (see (2) below), and there have been proposals (e.g. by Jacobson) that the CSC should be explained in terms of (roughly) semantic type mismatch rather than anything syntactic.

• To what extent does it make sense in 2017 to keep talking about “islands” as if they are a unified phenomenon? There have been arguments (some recent, some dating back decades) that some “so-called” islands are syntactic, some are semantic/pragmatic, some are prosodic, some are processing-based, etc. To the extent these claims are right, could/should our theoretical architecture hook them in at a single locus? (Kyle Johnson shared some thoughts on this in his prosem this quarter.) If we were to “reduce” some islands to “processing considerations,” what sort of progress would this represent, as opposed to perhaps just kicking the can out of the syntacticians’ yard into the psycholinguists’?

• Does Minimalism have a theory of islands, really? We have phases (weak, strong, vanilla, chocolate, …), though we can’t seem to agree on what categories they are, and we have some notion that there are restrictions on how you can get out of them (via edges, and “edge features”, whose virtual conceptual necessity one might question). To some people this feels like a warmed-over dish formerly known as Barriers served on a fancier platter whose shininess makes it harder to discern the predictions; to others the dish now has a more principled flavor, grounded in much deeper notions tied to the fundamental nature of the architecture, e.g. cyclic spell-out. Can we tell who is right (yet)?

• Language inaugurated a new “Perspectives” section recently with a target article entitled “Child language acquisition: Why universal grammar doesn’t help”, one of whose central arguments was that positing island constraints in UG is unnecessary (and actually counterproductive) because their consequences (to the extent they’re even empirically correct) fall out from discourse constraints that children must independently have/acquire anyway, drawing on work by i.a. Adele Goldberg. A response (2/3 of the authors of which have UCLA connections, including yours truly) argued that they completely failed to demonstrate this. Their reply to our response said we failed to make that case. Did anything useful come out of this exchange? Was it truly a scandal that Language published the target article in the first place (cf. Hornstein’s blog)?

• There is some recent intriguing work that seeks to wrangle and unify many notorious “exceptions” to island constraints by thinking more deeply about the meanings of the relevant sentences. One such example is Robert Truswell’s book/dissertation “Events, Phrases, and Questions,” whose central claim can be caricatured as ‘You can extract out of one event but not two’. (Why, one should of course wonder.) Some examples to give a taste of what he hopes to explain [judgments reflect (at least) one speaker of mongrel English]:

(1) a. Here is the influential professor that John went to college in order to impress.
    b. ??the book that I went to college because I liked

(2) a. Which dress has she gone and ruined now?
    b. *Which dress has she danced and worn?

(3) a. What did John drive Mary crazy whistling?
    b. *What does John work whistling?

(4) a. Who did John go home without talking to?
    b. ?*Who did John get upset despite talking to?

 

Ling 252: Scrambling and Clitics (Anoop Mahajan / Dominique Sportiche)

As early as Mahajan 1991 (Clitic doubling, object agreement and specificity. NELS 21:263-277), the suggestion is made that (some) Hindi Scrambling shares properties with Clitic Left Dislocation in Italian and Romanian. In Sportiche (1992) (Clitic Constructions, ms., UCLA; published as: 1996. Clitic Constructions, in Phrase Structure and the Lexicon, L. Zaring and J. Rooryck (eds.), 213-276, Kluwer Academic Publishers, Dordrecht), the claim is made that cliticization in French (and Romance) is the counterpart of Germanic Scrambling.

The purpose of the seminar is to explore the connections between all of these: Hindi Scramblings, Romance (and Greek) Clitics via Clitic Left and Right Dislocation, and Germanic Scrambling (German and Dutch at least), and possibly Japanese and Korean Scrambling, with the aim of deciding whether a general theory unifying these movement types across languages can be formulated.

Since these suggestions were made, analytical tools used to establish the existence and properties of movement dependencies have been refined. We will describe these tools and apply them systematically to the three (or four) classes of languages: French (and Romance), Hindi, German and Dutch (possibly Japanese and Korean).


Winter 2017


Ling 252
(Koopman)

Within current Minimalist approaches, we are looking for the syntactic framework that is most suitable for the interfaces with the semantics and the phonology, provides a likely path to acquisition, models the data from an individual speaker, extends to capture linguistic variation, and allows the development of SSWL properties to test hypotheses about (im)possible crosslinguistic variation. We will explore the question of whether we can choose between (specific implementations of) different frameworks. (Not surprisingly) antisymmetry and generalized U(niversal) 20 patterns will play an important role throughout the quarter. We will look at left-right asymmetries in scope interactions, (further) test a U20 typology of morpheme orders, and (attempt to) test a theory of the expected typology of second position phenomena, contrasting it with the Distributed Morphology framework. We will start off with a comparative study of the distribution of NegXPs in Germanic languages to determine what part of the syntax is stable and what minimally varies.

Meets Mondays 2–4:50 (week 1: in the syntax lounge).

Ling 254: A Multidominant Theory of Movement (Kyle Johnson)

A common assumption is that there is a single operation, “movement,” that is responsible for certain types of long-distance dependencies. In its classic formulation, movement gives a moved term a new location and puts in the original location a silent variable that is bound by the term in its new position. Attempts have been made in this century to decompose movement into more elementary operations, and this seminar will trace one of those attempts — one that uses phrase markers that tolerate multidominance. The aim will be to explain some of the features that appear to define movement: semantic displacement, terseness and boundedness. The focus will be on Verb Movement, Wh Movement and Quantifier Raising. We will look at linearization schemes designed to flatten multidominant phrase markers into strings and how those schemes interact with the semantics of constituent questions, quantifiers and topicalized verbal projections. Key readings include Elisabet Engdahl’s 1985 UMass dissertation, Jairo Nunes’s 1995 University of Maryland dissertation and Hadas Kotek’s 2014 MIT dissertation.

Ling 252: Indexicality and de se (Yael Sharvit)

Indexicality and ‘de se’-ness, and the relations between them, in English and cross-linguistically (in the person as well as tense domains), have been the topic of much exciting research in recent years. We will explore some of the “old” and current literature on this topic, with the goal of understanding the important questions and some possible (and impossible) answers.


Fall 2016

 

Linguistics 251: The analog-continuous / discrete-categorical divide (Daland)

Phonological categories are theorized as discrete entities; phonetic signals are theorized as analog and continuously varying. How do speaker/hearers mediate between these? The two aims of this seminar are (i) that students will learn about the various approaches to the discrete/analog interface in speech and language, and (ii) that students will emerge from the course with hands-on experience implementing models. Thus, the seminar will consist of theoretical readings interleaved with practica. To the extent that it proves feasible, the practica will consist of implementing and assessing our own versions of models we are reading about. Prior programming experience is desirable, but not strictly necessary. Students taking the class for 4 credits are expected to complete a final project addressing any aspect of the discrete/continuous interface.

In-class time will consist of a mixture of lecture (by the professor), oral presentations (by students), and discussion (by everyone). Lectures will be concentrated at the beginning of the course and at the introduction of technical topics. Conditional on class size, each student will give an oral presentation on one paper that everyone has read, and one paper that they read on their own.

Hands-on and out-of-class activities will focus on implementing the simplest versions of the model classes we study. In HW1 we will assess how well a Gaussian Mixture Model learns the vowel space of a standard 5-vowel system (e.g. Japanese, Spanish). HW2 replicates a classic exemplar simulation of lenition and contrast preservation by Pierrehumbert. Homeworks are intended to provide a natural jumping-off point for projects; for example, one could extend HW1 by incorporating speaker normalization, robustness under noise, iterated learning, playing with the initialization, or comparing with an exemplar model.
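
As a concrete preview of the HW1 idea, here is a minimal sketch using scikit-learn; the formant values are rough illustrative numbers, not measurements, and the setup is my own rather than the official homework:

```python
# Can an unsupervised Gaussian Mixture Model recover the five vowel
# categories of a language like Spanish or Japanese from F1/F2 tokens?
import numpy as np
from sklearn.mixture import GaussianMixture

# Ballpark (F1, F2) means in Hz for /i e a o u/ -- illustrative values only.
vowel_means = [(300, 2300), (450, 2000), (750, 1300), (450, 900), (320, 800)]

rng = np.random.default_rng(0)
# Simulate 200 noisy tokens per vowel category.
tokens = np.vstack([
    rng.normal(loc=mean, scale=(40, 120), size=(200, 2))
    for mean in vowel_means
])

# Fit a five-component mixture with no category labels.
gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
gmm.fit(tokens)
for f1, f2 in sorted(gmm.means_.tolist()):
    print(f"recovered category: F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```

A natural way to assess the fit is to compare the recovered component means against the generating means, or to score held-out tokens; the extensions mentioned above (noise, initialization, speaker normalization) would all be small modifications of this skeleton.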


Spring 2016

 

Linguistics 217: Experimental Phonology (Daland)

The focus of this course will be artificial grammar learning studies. On the first day of the course I will unveil the proposal for an experiment, which we as a class will implement and carry out. That is, I will have the general idea and a suggestion for the kind of stimuli that we will use, but we as a class will need to determine the exact number of stimuli and trial properties/design, create the stimuli, code up the experiment in some (to-be-decided) experiment software, debug the experiment, pilot it on ourselves, and then run it on our friends. The first month we will spend reviewing literature on artificial grammar learning studies, focusing on the following: issues of substantive bias versus formal complexity, learning sound patterns only versus learning morpho-phonological alternations, general design considerations for artificial grammar learning studies. The second month we will spend actually implementing and conducting the experiment. The remainder of the class will be divided up between writing up the experiment, and the final project. The final project is an experiment proposal, something like the Intro + Methods section of an experimental paper (except without results). In other words, we as a class will do one experiment together starting from an experiment proposal; then you as an individual student will design your own experiment for the final.

Linguistics 218: The semantics of degree constructions (Rett)

I use the term ‘degree construction’ to refer to a variety of constructions traditionally associated with gradience: adjectival constructions like comparatives and equatives; intensifiers like very, measure phrases like two feet, and other modifiers; and verb phrases like degree achievements (e.g. the soup cooled). In this course, we’ll explore compositional semantic treatments of these constructions, which are generally thought to require more sophisticated machinery than the basics in GQT/Heim & Kratzer. We will focus mostly on degree semantic treatments (beginning with Cresswell 1976), but we’ll also look at some alternatives to degrees (in particular, Klein’s (1980) comparison classes).

Linguistics 252: Ellipsis: Syntax and Acquisition (Hyams & Stowell)

Ellipsis constructions provide a wealth of insights into syntactic structure and at the same time present a number of interesting puzzles for syntactic theory. We will examine a number of these, including (as time permits):

  • Varieties of ellipsis: TP, VP, and NP ellipsis (perhaps others)
  • Environments licensing ellipsis: coordination, comparatives, questions, adjuncts, relative clauses, dislocation, parentheticals
  • Theories of ellipsis: true ellipsis or null XPs?
  • Ellipses with two or more possible antecedents; ‘backwards’ ellipsis
  • Apparent ‘island repair’ exhibited by certain types of ellipsis
  • Extraction from elided XPs: sluicing and pseudo-gapping
  • Defining the parameters of identity: ‘vehicle change’ phenomena (negative/positive polarity, anaphors/pronouns/r-expressions); tense, mood, aspect, and voice; morpho-syntactic vs. semantic identity
  • Strict vs. sloppy identity, and verbal/predicative analogues
  • Parallelism, scope, and economy
  • Antecedent-contained deletion and infinite regress

We will also look at children’s acquisition of ellipsis constructions. The acquisition of ellipsis is especially interesting because it poses an extreme ‘poverty of the stimulus’ problem. The abstractness problem is compounded by the fact that languages vary with respect to the kinds of ellipsis they allow. Any theory of ellipsis has to reckon with these learnability issues. There is a small experimental literature on children’s developing comprehension of ellipsis (small compared to the syntax/semantics literature) that we can read. In addition, we’d like to consider the experimental (and naturalistic) data in light of recent theoretical approaches, which may suggest new directions to explore in acquisition.


Fall 2015

 

Linguistics 251: Recognition of morphologically complex words (Sundara)

In this proseminar, we will investigate how morphologically complex words are recognized cross-linguistically. We will focus on the architecture of the mental lexicon, the nature of underlying representations and implications for the role of phonology in this process.


Spring 2015


Linguistics 218 
(Keenan)

In Spring quarter 2015 I will be offering Ling 218, “Math Ling 2” (expected time: TT 2:00–4:00). This is a variable content course that grad students can take more than once with permission of the instructor.
The first two weeks of the course focus on learning and practicing proof techniques. Then we read articles in the literature, beginning with ones on linguistic applications of generalized quantifier theory, to replicate or supply their proofs. We *may* later venture into event semantics and adverbial quantification, generating and interpreting discontinuous constituents, or grammatical relation changing operations.
Prerequisites for the course: solid competence in the logical, set theoretical and lattice theoretical work covered in Ling 180/208. Undergrads are eligible to register for the course with permission of the instructor. Undergrads should understand that:
This is a grad course, designed for people who are interested in learning about language structure; it is not just a course where you do exercises, exams and get a grade. You are expected to attend every class, read at least some of the suggested reading, and make several class presentations – usually summaries or completions of proofs from the reading. A short paper may be required.

Linguistics 251: Metrics (Hayes & Schuh)

Metrics is the study of the deployment of phonological material (stress, syllable weight, phrasing) to manifest rhythmic patterns for artistic purposes. It is an old and traditional discipline but recent work has applied more sophisticated tools, offering hope of achieving improved rigor, insight, and analytical accuracy. These tools are often borrowed from current phonological work (notably, theories of stress and weight) but also from formal linguistics in general (constraint-based grammars, Harmonic Grammar, maxent, learning algorithms). This course will be a “how-to” with illustrations.  We will cover empirical material from our respective areas of expertise (Hayes:  English iambic pentameter; Schuh:  quantitative verse of the Chadic languages, especially Hausa).  Contemporary issues to be addressed include the form of metrical grammars, gradient syllable weight (work of Kevin Ryan), and the three-way interaction of phonology, verse form, and sung rhythm in sung/chanted verse. Students will be encouraged to take on their own verse data and use the methods taught to analyze them.

Linguistics 252: Relative Clauses (Sportiche)

I am planning to teach a seminar this Winter (or perhaps Spring) on relative clauses. I would like to survey:
  • the various types (headed, headless, appositive, restrictive, internally headed, externally headed, with or without resumptive pronouns, etc.);
  • the main theories concerning how they should be syntactically analyzed (concentrating first on headed relatives, and, time permitting, looking at the others, and possibly the relation between headless relatives and questions);
  • possibly, data from a variety of languages (hopefully represented in the classroom!) brought to bear on these questions.

Linguistics 252: Negative Polarity (Sharvit & Stowell)

In this course we will discuss the syntactic, semantic and pragmatic aspects of the distribution of negative polarity items (NPIs). NPIs seem to be “comfortable” in the scope of negative licensors; compare *John said that Bill had any friends with John didn’t say that Bill had any friends. A considerable variety of NPIs is attested in the world’s languages.  English has lots of NPIs in addition to any, for example ever,  yet, epistemic uses of can, and numerous ‘minimizing’ NPIs  including some idioms (a single N, lift a finger, spend a red cent), among others. Accounts of NPIs are generally related to more general theories of negation and scope (some based on c-command). Since Ladusaw’s seminal work, many accounts of NPIs have also made reference to the semantics of downward-entailment (monotone-decreasing) environments.  Still, it has proven to be difficult to give a general theory of NPI-licensing.

For one thing, it is not clear whether an NPI is directly licensed by a licensing constituent (such as a negative particle that c-commands the NPI) or whether it is licensed by virtue of occurring in a particular type of syntactic or semantic environment (such as a yes/no question that contains the NPI). Moreover, despite the persistence of the term ‘negative polarity item,’ it is widely assumed that NPIs can be licensed in some cases without the presence of negation per se. In addition to cases of licensors whose semantics arguably justifies positing a structure that contains a covert negative operator (e.g. few, only, deny, and doubt), there are other licensing environments whose semantics is not specifically “negative” in any obvious way, including comparative constructions, conditional clauses, and relative clauses with universally quantified heads. In addition, not all NPIs are created equal; for example, some NPIs require local (e.g., clause-internal) licensing, and/or are more selective about what can license them. NPIs are often compared to Positive Polarity Items (PPIs), which are generally assumed to be permitted only in non-negative environments. We will discuss some classical works (e.g., Ladusaw 1979) as well as more recent work (e.g., Gajewski 2010).
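
For reference, here is the standard formulation of the downward-entailment condition mentioned above (a textbook paraphrase, not the seminar's own statement):

```latex
% A function f is downward-entailing (monotone decreasing) iff it reverses
% entailment: whenever A is the stronger of two meanings, f flips the order.
\[
  f \ \text{is downward-entailing} \iff
  \text{for all } A, B: \ \bigl( A \Rightarrow B \bigr) \ \text{implies} \
  \bigl( f(B) \Rightarrow f(A) \bigr)
\]
% Ladusaw's generalization: an NPI is acceptable only in the scope of a
% downward-entailing operator. Negation is DE: "have friends in Paris"
% entails "have friends", and "Bill doesn't have friends" entails
% "Bill doesn't have friends in Paris" -- so "any" is licensed under "didn't".
```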


Winter 2015

 

Linguistics 254: Evaluating perspective in meaning and discourse (Harris, with Elsi Kaiser, USC)

Much of language appears to be sensitive to the perspective and point of view of the situation in which it is to be evaluated. Point of view may be encoded lexically, as in perspective-dependent adjectives like local, nearby, and recent, and predicates of personal taste, such as tasty or beautiful, or may emerge out of more general properties of the text, as in the case of free indirect discourse, in which a variety of cues, such as tense or choice of anaphor, indicate the relevant perspective.

Such cases have attracted the attention of both theoretical linguists and experimentalists. Our goal in this seminar course is to sample cases of recent and historical interest and to assess whether commonalities across the phenomena of interest permit a unified account. Perspective and point of view arguably pervade many levels of language. Thus, we adopt a strategy of initially constraining our attention to items that encode or otherwise involve the calculation of perspective lexically, dealing first with predicates of personal taste, modals, expressives, and epithets. We then turn our attention to larger stretches of discourse, including relative clause types, common ground calculation, free indirect discourse, and pronominal interpretation. We hope to engage both the theoretical and the experimental literature, with selected readings from both areas throughout the course. The ultimate aim is to develop a working model of perspective and point of view phenomena that is informed by and contributes to the theoretical literature, and that can be tested and refined through experimental methods.

Linguistics 251: Foundations and applications of continuous mathematics for linguistics (Daland)

This course surveys particular aspects of continuous mathematics which are likely to be of special relevance to linguistics graduate students. It opens with a brief, axiomatic treatment of the real numbers, with the dual goals of accustoming students to formal proofs, and acquainting students with the occasionally counterintuitive properties of sequences of symbols. Next, the course turns to linear algebra. The foundations are covered, and then various linguistic applications are considered: the equivalence of Markov chains and probabilistic finite state automata (applications include Hale’s paper “The Information Conveyed by Words in Sentences” and a proof by Daland about convergence in phonotactic learning), the simplex algorithm as used for learning Harmonic Grammars, and linear regression. If there is time remaining, it will be devoted to learning, with a special emphasis on Bayesian modeling (naive Bayesian classifiers, conjugate priors, and maximum entropy models).
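
To illustrate the first of those applications, here is a toy version of the Markov chain / probabilistic finite-state automaton correspondence: one transition table, read either as a Markov chain over syllable types or as a PFSA assigning probabilities to strings. The states and probabilities are invented for illustration:

```python
# P(next state | current state); "END" is the halting event.
transitions = {
    "START": {"CV": 0.7, "V": 0.3},
    "CV":    {"CV": 0.4, "V": 0.1, "END": 0.5},
    "V":     {"CV": 0.6, "END": 0.4},
}

def string_probability(syllables):
    """Probability that the PFSA generates this syllable string and halts.
    This is exactly the Markov chain's probability of the state sequence."""
    prob, state = 1.0, "START"
    for syl in syllables:
        prob *= transitions[state].get(syl, 0.0)
        state = syl
    return prob * transitions[state].get("END", 0.0)

print(string_probability(["CV", "CV"]))  # 0.7 * 0.4 * 0.5 = 0.14
print(string_probability(["V", "CV"]))   # 0.3 * 0.6 * 0.5 = 0.09
```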


Fall 2014


Ling 213C: Linguistic Processing 
(Harris)

Theoretical linguistics is concerned with how language is organized in the abstract, creating models of linguistic competence, typically unencumbered by issues of performance. In contrast, psycholinguistics addresses how language might be realized as a component within the general cognitive system: specifically, how language is comprehended, produced, and represented. It is an interdisciplinary effort, drawing on research and techniques from linguistics, psychology, neuroscience, and computer science, and utilizes a variety of methods to investigate the underlying representations and mechanisms that are involved in linguistic computations.

The core areas of psycholinguistics include language acquisition, language perception, language production, language comprehension, language and the brain, and language disorders and damage. This course emphasizes depth over breadth, and so we will not delve into all of these topics. Instead, we will be focusing on just two areas of research: mental representations and processing of lexical units, and sentence comprehension and production. We start with the basics of lexical access and decision, exploring various models of the processes. We then move to an overview of classic models of sentence processing which vary according to a number of related properties such as the modularity/interactionism of information channels and the serialism/parallelism of processing. Finally, we discuss several topics in current and classical language research, including filler-gap dependencies, semantic processing, and sentence production.

Crucially, psycholinguistics does more than simply describe the facts. It attempts to weave what is known about how humans produce and process language into a coherent cognitive model, with enough structure so that we can study its composition in a rigorous, hypothesis-driven way. An important theme of this course involves elements of model building and assessment, emphasizing explicit and concrete hypotheses that make testable, and linguistically informative, predictions. The aims for this course include:

  • Identifying the major choice points of classic and current psycholinguistic models, as well as the essential arguments for and against them;
  • Creating and assessing explicit and concrete experimental hypotheses that develop aspects of a model;
  • Learning how to generate testable predictions from experimental hypotheses;
  • Acquiring practical experience with experimentation;
  • Presenting results and interpretations in a clear and accessible way.

Spring 2014

 

LING 251A/B: Intonational Bootstrapping in Acquisition (Jun & Sundara)

In this course we are going to investigate how intonational features, especially phrasing and prominence marking, help to bootstrap language acquisition. We are particularly interested in infants' and children's development in comprehension, given various manipulations of intonational features; production studies will also be discussed when they are relevant to the understanding of the results from comprehension studies. We will also look at other languages whose intonational features differ from those of English.


Ling 252: The semantics of speech acts 
(Rett)

The difference between speech acts like John is home now (an assertion) and John is home now? (a question) has traditionally been characterized as pragmatic (Wittgenstein, 1953; Austin, 1962; Searle, 1969). Linguists have, however, observed that these differences in illocutionary force are often explicitly marked across languages: syntactically; intonationally (as in English); or by sentence particles (as in Cheyenne, Murray 2014). This suggests the need for a compositional semantics of speech act markers (i.e. illocutionary mood), and consequently for a formalization of the semantics/pragmatics interface. We begin with a review of the philosophical typology of speech acts and some relevant pragmatic and syntactic considerations. We’ll then examine a variety of theories that have found the need to represent speech acts compositionally: Krifka’s account of quantifiers in questions; Gunlogson’s semantics of intonation; along with analyses of responses in discourse, attitude markers, and illocutionary mood.

Ling 252: Phases, Economy, and Move, Merge, and Agree: the Minimalist Program Reconsidered (Sportiche & Stowell)


The seminar will present and critically discuss several key aspects of the architecture of the current minimalist program of syntactic theory. While we will devote some attention to historical precursors and discuss the considerations that motivated current assumptions, our primary aim will be to identify areas of inquiry that we believe will prove fruitful for future research, i.e. we hope to find topics that may be of interest for graduate students to work on. We will encourage students enrolled in the class to each select one or two empirical phenomena to work on (e.g. comparatives, inversion, reciprocals, etc.) that will enable them to engage at least some of the issues and to make a class presentation. The course will cover material relating to the following topics:

  • Derivation of Phrase Structure and Word order
    • elimination of X-bar projections and labeling
    • cyclic linearization and/or spell-out
    • interface-driven movement
    • Locality Domains
  • Movement and bounding theory (phases, edges)
  • Agreement and feature theory
  • Binding, quantification, anaphora
  • Economy
  • Numeration
  • Closest Attract/Intervention
  • Move over Merge

Ling 254: Acquisition of control (and possibly related things) (Hyams)

Beginning with Carol Chomsky’s (1969) seminal study, and continuing in the late 80s and early 90s, there was a flurry of experimental work looking at children’s acquisition of control (into complements, temporal adjuncts, and purpose clauses), with very interesting results. Children were often not adult-like in their interpretations of control structures, and the results were fairly uniform across studies. While acquisition work in this area largely stopped after that point, there continued to be developments in the theory of control (in adult grammar), including control as movement (Hornstein 1999), control as Agree (Landau 2001), logophoricity in control (Williams 1992), and semantic and pragmatic theories of control (e.g. Chierchia 1989), etc. In this course I’d like to revisit the early acquisition results and reconsider these findings in light of more recent theoretical approaches, which I hope will suggest new directions to explore and inspire experimental work on aspects of control (and related areas) that have not previously been looked at in children, e.g. logophoric contexts.

I don’t expect that we will spend the whole quarter on control. Other (related) topics we might cover include (acquisition of) logophoric constraints on pronouns (Sells 1987; Reinhart & Reuland 1993, etc.). Condition C (of the Binding theory) is also an area that has been largely neglected in the acquisition world (but see e.g. Crain & McKee 1985), as compared to Conditions A and B. So that would be another topic to investigate.
The direction of the class will also depend on the interests of the participants, so I welcome those of you who would like to attend to suggest areas (related in some way) that you’d like to explore. It would be helpful for you to be familiar with basic acquisition issues and results, but I do not want to discourage anyone from attending, esp. if you are knowledgeable about the (adult) syntax/semantics of these structures. So if you’d like to attend but have not taken 213A or equivalent course, please contact me and get hold of a copy of Guasti’s Language Acquisition: the Growth of Grammar. MIT Press.

Minicourse 1: Trivalent Logic and Natural Language Semantics: Presuppositions, Vagueness and Plurals (Benjamin Spector)

This class will be concerned with various linguistic phenomena, such as vagueness and presuppositions, which have motivated the use of trivalent semantics, i.e. semantics in which a sentence’s truth value can be either ‘true’, ‘false’ or ‘undefined’. I will introduce various types of trivalent systems and compare their predictions for presuppositions and vagueness, with a focus on the so-called ‘projection’ problem. I will then move to the semantics/pragmatics of plurals, and show that trivalent logics give us tools that can help us make sense of a number of puzzles in this domain. Specifically, I will argue that plural expressions involve a specific form of vagueness, which can help explain phenomena such as non-maximal readings of plural definites, so-called homogeneity presuppositions, as well as the various readings of reciprocal sentences.

Minicourse 2: (Distributed) Morphology: the syntactic structure of words (Jonathan Bobaljik)

In this mini-course, we will look into the types of evidence that bear on current debates about the internal structure of words, and the relationship of morphology to other components of grammar (especially, but not only, syntax). We examine the central tenets of the framework of Distributed Morphology, namely arguments for hierarchical (syntactic) structure within complex words (syntax-all-the-way-down), and for the claim that this structure is abstract, independent of the phonological pieces that realize it (Late Insertion). A central area of investigation concerns (apparent) mismatches, for example where the syntactic structure and morphological structure appear to differ, or where a form varies by context in ways that are not phonologically predictable (allomorphy). This leads to discussion of how complex the mapping from syntax to morpho(phonology) needs to be, how additional formal devices are to be constrained, and where the trade-offs may be found in enriching one component or the other in favour of a more straightforward mapping.

Evidence will be drawn from cross-linguistic surveys of morphological patterns, especially those that stand as contenders for universal generalizations, including (time permitting) suppletion in adjectival morphology (Bobaljik 2012, Universals in Comparative Morphology); locative morphology (Radkevich 2010); and the expression of person and case morphology (Caha 2009), as well as other features that appear to participate in ‘markedness’ hierarchies.


Winter 2014

 

Ling 218 (Keenan)

  1. The boolean structure of major category types: Predicates and Arguments; Modifiers; Quantifiers; Intensional modifiers (without possible worlds).
  2. Linguistic applications of generalized quantifiers: Conservativity; intersective (existential), co-intersective (universal), and proportionality quantifiers; sortal reducibility; “R-expressions” vs. anaphors (possibly also predicate anaphors).
  3. Abstract Grammar: syntactic universals as automorphism invariance; the semantics of case marking and voice marking languages; free word order languages.
  4. A deeper look at adjectives?

Ling 209A: Computational linguistics (Stabler)

The mechanisms of phonology and syntax are now known to be more similar in fundamental respects than we thought they were. Syntax is regular (i.e. finite state) over trees in the same way that phonology is usually thought to be regular over sequences (of segments), at least to a first, good approximation. (This is discussed at length in Thomas Graf’s thesis.) But of course, phonology needs tree structures too, so we introduce sequences first but then immediately introduce trees. Not only are phonology and syntax both based on regular generators, but in both cases there are strong arguments for factoring the generators into interacting pieces: gen+constraints, derive+spellout. And both use statistics in roughly the same way.

So the class will design and implement ‘state of the art’ parsing models for phonology and syntax, with similar architectures. There is of course some debate about whether the regular generators for phonology and syntax are adequate; some of these issues will be mentioned as topics for further work, after this class provides us with a basic framework. If we are serious about trying to understand language acquisition and the changing nature of a child’s linguistic abilities, what we need is really not just a parser for phonology and a parser for syntax. For a reasonable psychological model, we need what are called ‘universal’ parsers, that is, parsers that immediately accommodate changes in the grammar. These seem to be less common in computational phonology than in syntax, but they are needed in both. (A toy illustration of topic 1 follows the list below.)

  1. The lexicon as a regular transducer
  2. Weakly regular prosody and parsing
  3. Alternations by optimizing: Classical OT
  4. Optimizing over regular trees: Prosody again
  5. Linear harmonic regular tree grammars (just a glimpse)
  6. Phrase structure from regular tree grammars
  7. Movement and constraints over regular derivations
  8. A first minimalist grammar for English
  9. Harmonic minimalist grammar
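
To give a feel for topic 1, here is a deliberately tiny finite-state transducer; the alphabet and the toy rule (post-nasal voicing) are my own illustration, not the class's implementation:

```python
# A two-state transducer: the state records whether the previous segment
# was a nasal; each input segment emits exactly one output segment.
NASALS = set("mn")
VOICE = {"p": "b", "t": "d", "k": "g"}

def transduce(word):
    state, output = "elsewhere", []
    for seg in word:
        if state == "after_nasal" and seg in VOICE:
            output.append(VOICE[seg])   # voice a stop immediately after a nasal
        else:
            output.append(seg)
        state = "after_nasal" if seg in NASALS else "elsewhere"
    return "".join(output)

print(transduce("kampa"))  # -> kamba
print(transduce("kapa"))   # -> kapa (no nasal trigger)
```

Treating each lexical entry as a path through such a machine, and each alternation as a (composed) transducer, is the sense in which the lexicon itself can be a regular transducer.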

Ling 252 (Sharvit)

In this course we will talk about copular constructions, focusing on asymmetries such as the following. Intuitively, (1a) and (1b) seek different kinds of information (Percus 2003); (2a) and (2b) are non-contradictory (Cumming 2008); (3a) does not presuppose the existence of a president, while (3b) does (Halliday 1967).

  (1) a. Who (do you think) _ is Sam?
      b. Who (do you think) Sam is _?
  (2) a. Yael believes that Sam is Jessica.
      b. Yael believes that Jessica is Sam.
  (3) a. John is not the president.
      b. The president is not John.

As we will see, “blaming” these asymmetries on a difference in meaning between the so-called ‘predicational’ be and the so-called ‘non-predicational’ be is neither simple nor obvious. I will do my best to convince you that accounting for these asymmetries requires digging deeply into the semantics, pragmatics and syntax of attitude reports, questions, conditionals, definite descriptions, names, superlatives and more. Especially helpful to us will be work by R. Higgins, I. Heim, B. Partee, P. Jacobson, O. Percus, S. Cumming, M. Romero, I. Yanovich, G. Thomas (and many others whose names are omitted from this list for no good reason). While Semantics I (or equivalent) is required background (in the sense that it will be hard for you to follow the formalisms if you’ve never taken any kind of semantics), and Semantics II is highly recommended background (in the sense that it will be hard for you to follow the formalisms if you’ve never “played with” possible worlds), everyone and anyone interested in copular constructions is welcome to attend.

Minicourse: Methods in Semantic Fieldwork (Sarah Murray)

This mini-course will introduce various methods for collaborating with native speaker consultants to collect semantic and pragmatic data. We will discuss how (not) to use translation, acceptability judgements, use of texts, eliciting stories, the importance of context, and how to establish it. We will also discuss the relation between semantic fieldwork and theory, which is potentially mutually beneficial. Examples of fieldwork on presupposition, not-at-issue/at-issue content, and underspecification vs ambiguity will be discussed. Though emphasis will be put on working with understudied languages, the methods discussed apply to semantic research on any language and semantic considerations important for fieldwork in general.


Spring 2013


Linguistics 251: Variation in Phonology 
(Hayes & Zuraw)

  • Free variation in output forms; research results of sociolinguistics
  • Variation in the lexicon—how it can/should be treated by the same models that treat output variation.
  • Degrees of productivity in phonology and morphology and how variation models can treat them.
  • Formal grammatical models of variation: OT, Harmonic Grammar, maxent grammars, logistic regression (a minimal maxent sketch follows this list).
  • Practical advice in doing variation research: corpora, software, theory
  • The role of priors—language learning as frequency matching, guided by priors (“soft UG”). Work of Ryan, Martin, White, Wilson, McPherson, Hayes/Zuraw
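
Here is the minimal maxent sketch promised above; the constraints, violation counts, and weights are invented for illustration and are not from any of the cited work:

```python
import math

# Candidate probabilities are proportional to exp(-H), where H is the
# weighted sum of constraint violations (a "harmony" score).
weights = {"*VoicedCoda": 2.0, "Ident(voice)": 1.0}
candidates = {                     # violations for a hypothetical /bad/
    "bad": {"*VoicedCoda": 1, "Ident(voice)": 0},
    "bat": {"*VoicedCoda": 0, "Ident(voice)": 1},
}

def maxent_probs(cands, w):
    scores = {c: math.exp(-sum(w[k] * n for k, n in viols.items()))
              for c, viols in cands.items()}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

for cand, p in maxent_probs(candidates, weights).items():
    print(f"{cand}: {p:.2f}")  # bat: 0.73, bad: 0.27 -- variable outputs
```

Because both candidates receive nonzero probability, the grammar itself generates free variation; fitting the weights to corpus frequencies amounts to a logistic regression, which is the connection the list above draws.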

Linguistics 251: Field methods for studying intonation—Tagalog (Jun & Zuraw)

  • Develop an intonational model of Tagalog/Filipino by eliciting data from native speakers.
  • Use corpus materials to test and further develop the model.
  • How well do intonational units predict the domains of various phonological processes?

Linguistics 252:  Comparative syntax: Exploring and Expanding SSWL (Koopman)

This seminar will focus on the interaction of syntactic theory with data and the analysis of variation across languages and within “languages/individual speakers”. The goal is to explore, develop, and expand the SSWL (Syntactic Structures of the World’s Languages) database, and its future host TerraLing, from a theoretical point of view. SSWL is an open-ended, open source, expert crowd-sourced database of syntactic (as well as semantic and morphological) properties of the world’s languages. The database is open-ended in the sense that new properties can be added indefinitely (which will happen gradually over time), and is intended as a tool to support research, eventually run by the community and for the community. TerraLing is the next generation of the database project: it provides a flexible platform for linguists to set up their specific individual projects and use the tools that come with the database.
Based on the theoretical developments of the past 50 to 60 years, and the accumulated body of knowledge about variation, the ultimate goal is to develop new property definitions for a number of domains, building on specific theoretical predictions about what we expect to find, and not find. After a general introduction to the database, and an overview of the current landscape around databases (shortcomings and desires about what a database should be able to do), I will motivate a new hypothesis about the expected space of variation in the domain of word order (cf. a series of recent talks, Koopman 2013). This will be put to the test on data currently in the database(s), and beyond. The general “lessons” of these investigations should help us develop new property definitions efficiently, ideally distinct from, and non-overlapping with, WALS. In addition, I hope to move current property definitions in development closer to their final stages, and to start developing a list of potential properties explicitly proposed in the literature (mostly by Kayne in various papers).

Linguistics 254B: Python for language research (Daland)

It is becoming increasingly important for researchers to be familiar with the rudiments of programming. This course is designed for beginners and near-beginners to learn the rudiments of programming using the Python scripting language. The course will be oriented towards linguistics graduate students, but may be of use to researchers in other fields and at other stages in their careers. Part I of the course will cover the absolute basics: the “print” statement; basic data types such as strings, lists, and dictionaries; flow control tools such as “if” statements and “for” loops. Part II will be geared toward corpus searching, and string processing more generally: regular expressions; how to handle Unicode; parsing XML; how to format output for analysis in a statistical package such as R or SPSS; and how to combine custom Python scripts with command-line tools such as sed for faster, more modular, and more transparent output. If there is time remaining, Part III will consider more advanced uses of Python, such as online data acquisition and/or multi-agent modeling, on the basis of student needs/desires. There will be a (modest) final project.
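
As a taste of the Part II material, here is a small example of the kind of corpus search the description has in mind; the file name and the pattern are invented for illustration:

```python
import re
from collections import Counter

# Read a (hypothetical) UTF-8 corpus file.
text = open("corpus.txt", encoding="utf-8").read().lower()

# Find word-initial clusters of two or more consonant letters.
cluster = re.compile(r"\b([^aeiou\W\d_]{2,})")
counts = Counter(m.group(1) for m in cluster.finditer(text))

# Write tab-separated output suitable for loading into R or SPSS.
with open("clusters.tsv", "w", encoding="utf-8") as out:
    out.write("cluster\tcount\n")
    for form, n in counts.most_common():
        out.write(f"{form}\t{n}\n")
```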


Winter 2013

 

Ling 218: Intensional types for natural language (Keenan)

We introduce boolean type theory to streamline standard extensional type theory for natural language.  Our initial semantic primitives are (1) truth values and (2) properties (common noun denotations), not entities.  Proper noun (and individual constant) denotations, individuals, are defined in terms of properties and truth values. Then we generalize to an intensional type theory using no semantic novelties (possible worlds, structured meanings).  We show that the surgeons and the flautists can be the same individuals even though surgeon and flautist denote different properties.  We represent evaluative adjectives like skillful which are inherently intensional: if the surgeons and the flautists are the same individuals the skillful surgeons and the skillful flautists may be different.

Our basic models ride on complete non-atomic boolean lattices.  The first few weeks of the course review boolean lattices (algebras) and include homework exercises from Introduction to Boolean Algebras by Givant and Halmos, available from the textbook section in Ackerman.
The second half of the course will be a research seminar.  We explore whether the boolean techniques used for nouns and adjectives extend to classical intensional phenomena – modals and sentential adverbs (must, necessarily).  We look at other sorts of intensional models (Moschovakis, Muskens, Lappin & Fox, Capretta) as well as subclasses of restricting adjectives – partial vs total (Rotstein & Winter), antonymous equatives (Rett), threshold constant vs variable (Burnett).

Ling 252: The Syntax (and an overview of the Semantics) of Tense, Aspect and Mood (Stowell)

Topics to be covered (approximately in this order)

  1. Morpho-syntax I: English TMA and the Early Generative Tradition
  2. The Reichenbachian Tradition
  3. Morpho-syntax II: the Later Generative Tradition
  4. Empirical phenomena (Tense and stativity; sequence of tense; mood and definiteness)
  5. GB/P&P/Minimalist Theories of the Syntax of Tense
  6. Analytical Issues of current theoretical interest (tense and intensionality; tenseless languages; distal tense systems)

Fall 2012


Ling 212: Learnability (Stabler)

The past 10 years have brought important advances in the theory of learning. We will begin with basics but then shift rather quickly to one tradition that has recently become prominent. Many things are happening right now.

  1. In the past 10 years or so, a remarkable consensus has emerged about appropriate measures of learning complexity, measures related to how much evidence might be required to distinguish one hypothesis from alternatives. “VC dimension” and a number of other, independently proposed measures turn out to be equivalent, showing that a number of independent traditions have homed in on essentially the same idea about learning. Having a criterion like this is important because it allows us to define the kind of learning process we should be searching for. (A standard definition of VC dimension is sketched after this list.)
  2. With this measure, it is easy to establish the learnability of finitely parameterized grammars, OT grammars and harmonic grammars that assume a given, finite list of constraints. But these results are unsatisfying if you doubt that the number of parameters needed to define human languages (with their lexicons included) is small enough to be treated as a finite, given list.  And in certain cases it can be difficult or impossible to apply these methods directly to unanalyzed data of the sort plausibly available to the human learner.
  3. Recent results establish the learnability of certain aspects of linguistic structure from data more likely to be perceptually available, without assuming a given, finite parameterization. In phonology, it appears that at least some of the patterns may be in “sub-regular” classes that allow efficient learning, and
  4. some of these ideas for learning regular languages have been extended to the more complex patterns in syntax. In particular, we now have learners for significant classes of (non-context-free) languages definable by simple, formal ‘minimalist grammars’.
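
The VC-dimension definition promised in point 1, in its standard textbook form (stated here for reference; the course may present it differently):

```latex
% A hypothesis class H "shatters" a finite set S if every subset of S can be
% picked out by some hypothesis in H:
\[
  H \ \text{shatters} \ S \iff \{\, h \cap S : h \in H \,\} = 2^{S}
\]
% The VC dimension of H is the size of the largest shatterable set:
\[
  \mathrm{VC}(H) = \max \{\, |S| : H \ \text{shatters} \ S \,\}
\]
% Intuitively: the more labelings of data H can fit, the more evidence is
% needed to distinguish one hypothesis in H from its rivals.
```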

Ling 252 (Sharvit)

The main topic will be cross-linguistic manifestations of tense, especially in embedded environments, and with special emphasis on intensional environments (e.g., complement clauses of attitude reports; conditionals). We will consider recent theories of tense embedding, and relate them to general semantic and syntactic theories of sentence-embedding.

Ling 252 (Walkow)

This class will investigate two areas of research on agreement and the role of the operation Agree (Chomsky 2000) in it. The first is person-based restrictions on agreement and cliticization that arise in environments like agreement with nominative objects in Icelandic, combinations of internal-argument clitics in Romance, or agreement with subjects or objects in Basque. Work over the last ten years has tried to reduce these seemingly idiosyncratic restrictions to general facts about the locality of Agree, the case filter, the syntactic representation of person and number, the relation between syntax and morphology, and case. The typology of such restrictions has led to arguments about the location of variation in grammar (functional lexicon vs. grammatical operations). A further aspect of interest will be the alternative mechanisms languages use to avoid person restrictions. These strategies often show morphosyntactic properties not otherwise found. These data have led to new proposals about the nature of Last Resort mechanisms and the role of agreement failure in grammar.

The second is closest conjunct agreement, a phenomenon where agreement with a conjoined argument expresses the person/number/gender features of the conjunct closest to the agreement controller, rather than features representing the combined properties of all conjuncts. Closest conjunct agreement has been at the heart of recent arguments about whether the grammar has access to linear order as a primitive or only by reference to syntactic structure, the interaction of movement and agreement, and how putative non-syntactic effects on agreement are regulated in the grammar.