Current Proseminars

Proseminars are special-topics advanced graduate courses taught in our department, with course numbers in the 250 range. They are typically prepared lecture courses given by faculty, but with strong student participation, and they usually assume that enrollees have already taken the core graduate courses in the relevant area.

This page reports proseminars being taught in the current academic year. We also include other graduate courses when their content has changed for the current offering.

For an archive of old proseminar topics, please visit the archive page. 



Spring 2017 

On the Nature of Islands

I propose to begin this proseminar by laying out a buffet of topics that fall under the heading above and inviting the participants to select the ones that look most appetizing for us to pursue. I start with the disclaimer that I am by no means an expert on islands—far from it. I have a couple of little pockets of knowledge, but I hope everyone else will bring data and theoretical expertise to complement my own. Here is a sample of some of what you will find at the buffet table: 

[See end of message re: schedule] 

• What are the criteria for identifying something as an island, anyway? Yes, we have definitions along the lines of “a constituent that is hard to extract from,” but in practice these constituents are often rather complex beasts in and of themselves, and we know from a couple of decades of processing literature that extraction (i.e., a long-distance dependency) is hard in and of itself. People like Jon Sprouse have therefore proposed that to warrant the label “island” (and hence the need for an island constraint), a constituent must be ‘harder to extract from than you would expect based on the nature of the constituent and the fact of doing extraction’. He operationalizes this definition as a statistical interaction in the acceptability ratings of a 2x2 set of sentence types. By this measure, according to experiments conducted to date, not everything people have claimed is an island comes out as an island, and there are intriguing crosslinguistic differences (e.g. English vs. Italian). What to make of that?

• Relatedly, although island constraints are the poster children for UG constraints, there is massive (insufficiently acknowledged) cross-linguistic variation in which islands which languages enforce. (How) can you learn which islands your language tolerates violations of? Is there a notion of “parameter” that fits current theory that could capture such variation? Are ANY islands universally bad? The best candidate seems to be the Coordinate Structure Constraint, though even that has apparent exceptions (see (2) below), and there have been proposals (e.g. by Jacobson) that the CSC should be explained in terms of (roughly) semantic type mismatch rather than anything syntactic.

• To what extent does it make sense in 2017 to keep talking about “islands” as if they were a unified phenomenon? There have been arguments (some recent, some dating back decades) that some “so-called” islands are syntactic, some are semantic/pragmatic, some are prosodic, some are processing-based, etc. To the extent these claims are right, could/should our theoretical architecture hook them in at a single locus? (Kyle Johnson shared some thoughts on this in his prosem this quarter.) If we were to “reduce” some islands to “processing considerations,” what sort of progress would this represent, as opposed to perhaps just kicking the can out of the syntacticians’ yard into the psycholinguists’?

• Does Minimalism have a theory of islands, really? We have phases (weak, strong, vanilla, chocolate, …), though we can’t seem to agree on what categories they are, and we have some notion that there are restrictions on how you can get out of them (via edges, and “edge features”, whose virtual conceptual necessity one might question). To some people this feels like a warmed-over dish formerly known as Barriers, served on a fancier platter whose shininess makes it harder to discern the predictions; to others, the dish now has a more principled flavor, grounded in much deeper notions tied to the fundamental nature of the architecture, e.g. cyclic spell-out. Can we tell who is right (yet)?

• Language inaugurated a new “Perspectives” section recently with a target article entitled “Child language acquisition: Why universal grammar doesn’t help”, one of whose central arguments was that positing island constraints in UG is unnecessary (and actually counterproductive) because their consequences (to the extent they’re even empirically correct) fall out from discourse constraints that children must independently have/acquire anyway, drawing on work by i.a. Adele Goldberg. A response (2/3 of whose authors have UCLA connections, including yours truly) argued that they completely failed to demonstrate this. Their reply to our response said we failed to make that case. Did anything useful come out of this exchange? Was it truly a scandal that Language published the target article in the first place (cf. Hornstein’s blog)?

• There is some recent intriguing work that seeks to wrangle and unify many notorious “exceptions” to island constraints by thinking more deeply about the meanings of the relevant sentences. One such example is Robert Truswell’s book/dissertation “Events, Phrases, and Questions,” whose central claim can be caricatured as ‘You can extract out of one event but not two’. (Why, one should of course wonder.) Some examples to give a taste of what he hopes to explain [judgments reflect (at least) one speaker of mongrel English]:

(1) a. Here is the influential professor that John went to college in order to impress.
    b. ??the book that I went to college because I liked

(2) a. Which dress has she gone and ruined now?
    b. *Which dress has she danced and worn?

(3) a. What did John drive Mary crazy whistling?
    b. *What does John work whistling?

(4) a. Who did John go home without talking to?
    b. ?*Who did John get upset despite talking to?

 Feel free to email me suggestions for other items to place on the buffet table.
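Sprouse’s factorial operationalization mentioned in the first buffet item can be sketched numerically. The ratings below are hypothetical z-scores, purely for illustration; only the differences-in-differences logic is his:

```python
# Sprouse's factorial definition of an island effect, as a sketch:
# cross (short vs. long dependency) with (non-island vs. island structure)
# and test for a superadditive interaction in acceptability ratings.
# The ratings below are made-up z-scores for illustration only.

ratings = {
    ("short", "non-island"): 1.00,
    ("long",  "non-island"): 0.40,   # cost of the long-distance dependency alone
    ("short", "island"):     0.70,   # cost of the island structure alone
    ("long",  "island"):    -0.60,   # much worse than the two costs combined
}

def dd_score(r):
    """Differences-in-differences: how much worse extraction from the island
    is than the sum of the two independent costs. A positive score is the
    statistical signature of an island effect."""
    dependency_cost = r[("short", "non-island")] - r[("long", "non-island")]
    structure_cost  = r[("short", "non-island")] - r[("short", "island")]
    total_penalty   = r[("short", "non-island")] - r[("long", "island")]
    return total_penalty - (dependency_cost + structure_cost)

print(round(dd_score(ratings), 2))  # 0.7: superadditive, i.e. an "island"
```

With these toy numbers the interaction term is positive, so the constituent would count as an island by this criterion; a score near zero would mean the unacceptability is just the two independent costs adding up.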

Announcement: Mahajan/Sportiche, Ling 252, Spring 2017, Mondays 2pm-5pm.

Anoop and Dominique will co-teach a proseminar on Scrambling and Clitics. If you are interested in attending, please let us (Anoop and Dominique) know.

As early as Mahajan 1991 (Clitic doubling, object agreement and specificity. NELS 21:263-277), the suggestion is made that (some) Hindi scrambling shares properties with Clitic Left Dislocation in Italian and Romanian. In Sportiche 1992 (Clitic Constructions, ms., UCLA; published 1996 as Clitic Constructions, in Phrase Structure and the Lexicon, L. Zaring and J. Rooryck (eds.), 213-276, Kluwer Academic Publishers, Dordrecht), the claim is made that cliticization in French (and Romance) is the counterpart of Germanic scrambling.

The purpose of the seminar is to explore the connections between all these: Hindi Scramblings, Romance (and Greek) Clitics via Clitic Left and Right Dislocation, and Germanic Scrambling (German and Dutch at least) (and possibly Japanese and Korean Scrambling), with the aim of deciding whether a general theory unifying these movement types across languages can be formulated.

Since these suggestions were made, analytical tools used to establish the existence and properties of movement dependencies have been refined. We will describe these tools and apply them systematically to the three (or four) classes of languages: French (and Romance), Hindi, German and Dutch (possibly Japanese and Korean).

Winter 2017 


Linguistics 252, Hilda Koopman

Within current Minimalist approaches, we are looking for the syntactic framework that is most suitable for the interfaces with semantics and phonology, provides a likely path to acquisition, models the data from an individual speaker, extends to capture linguistic variation, and allows the development of SSWL properties to test hypotheses about (im)possible crosslinguistic variation. We will explore the question of whether we can choose between (specific implementations of) different frameworks. (Not surprisingly) antisymmetry and generalized U(niversal) 20 patterns will play an important role throughout the quarter. We will look at left-right asymmetries in scope interactions, (further) test a U20 typology of morpheme orders, and (attempt to) test a theory of the expected typology of second-position phenomena, contrasting it with the Distributed Morphology framework. We will start off with a comparative study of the distribution of NegXPs in Germanic languages, to determine what part of the syntax is stable and what minimally varies.


Meets Mondays 2-4:50 (week 1: in the syntax lounge).




Linguistics 254, Kyle Johnson

Tuesdays, 9:00am-12:00pm

A Multidominant Theory of Movement

A common assumption is that there is a single operation, “movement,” that is responsible for certain types of long-distance dependencies. In its classic formulation, movement gives a moved term a new location and puts in the original location a silent variable that is bound by the term in its new position. Attempts have been made in this century to decompose movement into more elementary operations, and this seminar will trace one of those attempts: one that uses phrase markers that tolerate multidominance. The aim will be to explain some of the features that appear to define movement: semantic displacement, terseness and boundedness. The focus will be on Verb Movement, Wh Movement and Quantifier Raising. We will look at linearization schemes designed to flatten multidominant phrase markers into strings, and at how those schemes interact with the semantics of constituent questions, quantifiers and topicalized verbal projections. Key readings include Elisabet Engdahl's 1985 UMass dissertation, Jairo Nunes's 1995 University of Maryland dissertation and Hadas Kotek's 2014 MIT dissertation.


Linguistics 252, Yael Sharvit

Indexicality and ‘de se’-ness, and the relations between them, in English and cross-linguistically (in the person as well as tense domains), have been the topic of much exciting research in recent years. We will explore some of the “old” and current literature on this topic, with the goal of understanding the important questions and some possible (and impossible) answers.



Fall 2016 



Linguistics 251, Robert Daland

  The analog-continuous / discrete-categorical divide



Phonological categories are theorized as discrete entities. Phonetic signals are theorized as analog and continuously varying. How do speakers/hearers mediate between the two?


The two intentions of this seminar are (i) that students will learn about the various approaches to the discrete/analog interface in speech and language, and (ii) that students will emerge from the course with hands-on experience implementing models. Thus, the seminar will consist of theoretical readings, interleaved with practica. To the extent that it proves feasible, the practica will consist of implementing and assessing our own versions of models we are reading about. Prior programming experience is desirable, but not strictly necessary. Students taking the class for 4 credits are expected to complete a final project addressing any aspect of the discrete/continuous interface.


In-class time will consist of a mixture of lecture (by professor), oral presentations (by students), and discussion (by everyone). Lectures will be concentrated at the beginning and at the introduction of technical topics. Conditional on class size, each student will give an oral presentation on one paper that everyone has read, and one paper that they read on their own. Hands-on and out-of-class activities will be focused on implementing the simplest versions of the model classes we study. In HW1 we will assess how well a Gaussian Mixture Model learns the vowel space of a standard 5-vowel system (e.g. Japanese, Spanish). HW2 replicates a classic exemplar simulation of lenition and contrast preservation by Pierrehumbert. Homeworks are intended to provide a natural jumping-off point for projects. For example, one could extend HW1 by incorporating speaker normalization, robustness under noise, iterated learning, playing with the initialization, or comparing with an exemplar model.
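The HW1 task described above can be sketched with a minimal EM loop over synthetic two-formant data. The formant centroids, noise level, and fixed-variance spherical Gaussians below are illustrative assumptions, not the actual assignment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (F1, F2) centroids in Hz for a 5-vowel system; illustrative only.
true_means = np.array([[300, 2300],   # i
                       [450, 1900],   # e
                       [700, 1300],   # a
                       [450, 900],    # o
                       [300, 700]])   # u
# 100 noisy tokens per vowel category.
X = np.vstack([m + rng.normal(0, 60, size=(100, 2)) for m in true_means])

def em_gmm(X, k=5, iters=50):
    """Minimal EM for a spherical Gaussian mixture with a fixed, shared variance.
    A real implementation would also reestimate (co)variances."""
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)]   # init at random data points
    weights = np.full(k, 1.0 / k)
    var = 100.0 ** 2
    for _ in range(iters):
        # E-step: responsibilities, i.e. P(component | data point).
        sq = ((X[:, None, :] - means[None]) ** 2).sum(-1)
        log_r = np.log(weights) - sq / (2 * var)
        log_r -= log_r.max(1, keepdims=True)     # stabilize before exponentiating
        r = np.exp(log_r)
        r /= r.sum(1, keepdims=True)
        # M-step: reestimate means and mixing weights from the responsibilities.
        nk = r.sum(0)
        means = (r.T @ X) / nk[:, None]
        weights = nk / n
    return means, weights

means, weights = em_gmm(X)
```

Assessing "how well" the model learned the vowel space would then amount to comparing the learned `means` against the generating centroids (EM can misallocate components under unlucky initialization, which is itself a point worth discussing in class).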



Spring 2016 

Linguistics 217: Experimental Phonology

Robert Daland

The focus of this course will be artificial grammar learning studies. On the first day of the course I will unveil the proposal for an experiment, which we as a class will implement and carry out. That is, I will have the general idea and a suggestion for the kind of stimuli that we will use, but we as a class will need to determine the exact number of stimuli and trial properties/design, create the stimuli, code up the experiment in some (to-be-decided) experiment software, debug the experiment, pilot it on ourselves, and then run it on our friends. The first month we will spend reviewing literature on artificial grammar learning studies, focusing on the following: issues of substantive bias versus formal complexity, learning sound patterns only versus learning morpho-phonological alternations, general design considerations for artificial grammar learning studies. The second month we will spend actually implementing and conducting the experiment. The remainder of the class will be divided up between writing up the experiment, and the final project. The final project is an experiment proposal, something like the Intro + Methods section of an experimental paper (except without results). In other words, we as a class will do one experiment together starting from an experiment proposal; then you as an individual student will design your own experiment for the final.

A complete syllabus will be available soon; please email me if you want one.

Note: It is unlikely that this course will be offered again in the next 3 years. If you were debating whether to take it now or in the future, I recommend you take it now -- the future may not be an option.



Fall 2015 

Linguistics 251: Recognition of morphologically complex words

Megha Sundara

Here is a description of the proseminar I will be offering in the fall. The class is scheduled TR 10am-11:50am. Please email me if you would like to take the class but have a conflict. On Thursday (the 24th), we will meet in Public Affairs 2319.

In this proseminar, we will investigate how morphologically complex words are recognized cross-linguistically. We will focus on the architecture of the mental lexicon, the nature of underlying representations and implications for the role of phonology in this process.


Spring 2016

Ellipsis: Syntax and Acquisition

Tim Stowell & Nina Hyams

Ellipsis constructions provide a wealth of insights into syntactic structure and at the same time present a number of interesting puzzles for syntactic theory. We will examine a number of these, including (as time permits):

-Varieties of ellipsis: TP, VP, and NP ellipsis (perhaps others)
-Environments licensing ellipsis: coordination, comparatives, questions, adjuncts, relative clauses, dislocation, parentheticals
-Theories of ellipsis: true ellipsis or null XPs?
-Ellipses with 2 or more possible antecedents; ‘backwards’ ellipsis
-Apparent ‘island repair’ exhibited by certain types of ellipsis
-Extraction from elided XPs: sluicing and pseudo-gapping
-Defining the parameters of identity: ‘vehicle change’ phenomena: negative/positive polarity, anaphors/pronouns/r-expressions; tense, mood, aspect, and voice; morpho-syntactic vs. semantic identity
-Strict vs. sloppy identity, and verbal/predicative analogues
-Parallelism, scope, and economy
-Antecedent-contained deletion and infinite regress

We will also look at children’s acquisition of ellipsis constructions. The acquisition of ellipsis is especially interesting because it poses an extreme ‘poverty of the stimulus’ problem. The abstractness problem is compounded by the fact that languages vary with respect to the kinds of ellipsis they allow. Any theory of ellipsis has to reckon with these learnability issues. There is a small experimental literature on children’s developing comprehension of ellipsis (small compared to the syntax/semantics literature) that we can read. In addition, we’d like to consider the experimental (and naturalistic) data in light of recent theoretical approaches, which may suggest new directions to explore in acquisition.



 Spring 2016

Linguistics 218: The semantics of degree constructions

Jessica Rett

Wednesdays, 2-5pm

I use the term 'degree construction' to refer to a variety of constructions traditionally associated with gradience: adjectival constructions like comparatives and equatives; intensifiers like very, measure phrases like two feet, and other modifiers; and verb phrases like degree achievements (e.g. the soup cooled). In this course, we'll explore compositional semantic treatments of these constructions, which are generally thought to require more sophisticated machinery than the basics in GQT/Heim & Kratzer. We will focus mostly on degree semantic treatments (beginning with Cresswell 1976), but we'll also look at some alternatives to degrees (in particular, Klein's (1980) comparison classes).


Spring 2015

Linguistics 218: Math Ling 2

Ed Keenan

In Spring quarter 2015 I will be offering Ling 218, “Math Ling 2" (expected time: TT 2:00 – 4:00). This is a variable content course that grad students can take more than once with permission of the instructor. 
      The first two weeks of the course focus on learning and practicing proof techniques. Then we read articles in the literature, beginning with ones in linguistic applications of generalized quantifier theory, to replicate or supply their proofs. We *may* later adventure into event semantics and adverbial quantification, generating and interpreting discontinuous constituents, or grammatical relation changing operations.
     Prerequisites for the course: solid competence in the logical, set theoretical and lattice theoretical work covered in Ling 180/208. Undergrads are eligible to register for the course with permission of the instructor. Undergrads should understand that:
     This is a grad course, designed for people who are interested in learning about language structure; it is not just a course where you do exercises, exams and get a grade. You are expected to attend every class, read at least some of the suggested reading, and make several class presentations – usually summaries or completions of proofs from the reading. A short paper may be required.


Linguistics 251:  Metrics


Bruce Hayes and Russell Schuh

Metrics is the study of the deployment of phonological material (stress, syllable weight, phrasing) to manifest rhythmic patterns for artistic purposes. It is an old and traditional discipline but recent work has applied more sophisticated tools, offering hope of achieving improved rigor, insight, and analytical accuracy. These tools are often borrowed from current phonological work (notably, theories of stress and weight) but also from formal linguistics in general (constraint-based grammars, Harmonic Grammar, maxent, learning algorithms).

This course will be a "how-to" with illustrations.  We will cover empirical material from our respective areas of expertise (Hayes:  English iambic pentameter; Schuh:  quantitative verse of the Chadic languages, especially Hausa).  Contemporary issues to be addressed include the form of metrical grammars, gradient syllable weight (work of Kevin Ryan), and the three-way interaction of phonology, verse form, and sung rhythm in sung/chanted verse. Students will be encouraged to take on their own verse data and use the methods taught to analyze them.


Linguistics 252:  Relative Clauses

Dominique Sportiche

I am planning to teach a seminar this Winter (or perhaps Spring) on relative clauses. I would like to survey:
- the various types (headed, headless, appositive, restrictive, internally headed, externally headed, with or without resumptive pronouns, etc.);
- the main theories of how they should be syntactically analyzed (concentrating first on headed relatives and, time permitting, looking at the others, possibly including the relation between headless relatives and questions);
- possibly, data bearing on these questions from a variety of languages (hopefully represented in the classroom!).


Linguistics 252: Negative Polarity

Yael Sharvit and Tim Stowell

In this course we will discuss the syntactic, semantic and pragmatic aspects of the distribution of negative polarity items (NPIs). 

NPIs seem to be “comfortable” in the scope of negative licensors; compare:

*John said that Bill had any friends.

John didn’t say that Bill had any friends.

A considerable variety of NPIs is attested in the world's languages.  English has lots of NPIs in addition to any, for example ever,  yet, epistemic uses of can, and numerous 'minimizing' NPIs  including some idioms (a single N, lift a finger, spend a red cent), among others. 

Accounts of NPIs are generally related to more general theories of negation and scope (some based on c-command). 

Since Ladusaw's seminal work, many accounts of NPIs have also made reference to the semantics of downward-entailment (monotone-decreasing) environments.  Still, it has proven to be difficult to give a general theory of NPI-licensing. 

For one thing, it is not clear whether an NPI is directly licensed by a licensing constituent (such as a negative particle that c-commands the NPI) or whether it is licensed by virtue of occurring in a particular type of syntactic or semantic environment (such as a yes/no question that contains the NPI).

Moreover, despite the persistence of the term 'negative polarity item,' it is widely assumed that NPIs can be licensed in some cases without the presence of negation per se. In addition to cases of licensors whose semantics arguably justifies positing a structure that contains a covert negative operator (e.g.  few, only, deny, and doubt), there are other licensing environments whose semantics is not specifically “negative” in any obvious way, including comparative constructions, conditional clauses, and relative clauses with universally quantified heads.  

In addition, not all NPIs are created equal; for example, some NPIs require local (e.g., clause-internal) licensing, and/or are more selective about what can license them. 

NPIs are often compared to Positive Polarity Items (PPIs), which are generally assumed to be permitted only in non-negative environments. 

We will discuss some classical works (e.g., Ladusaw 1979) as well as more recent work (e.g., Gajewski 2010).


Winter 2015

Linguistics 254: Topics in Linguistics. Evaluating perspective in meaning and discourse.

Jesse Harris in collaboration with Elsi Kaiser (USC)

Class times: W 2:30 – 5:30PM
Location: Campbell 2122A
Office hours: Monday 3-5, Campbell 2224C, inside the Language Processing Lab suite (2226)
and most any other time by appointment.


Much of language appears to be sensitive to perspective and the point of view of the situation in which it is to be evaluated. Point of view may be encoded lexically, as in perspective-dependent adjectives like local, nearby, and recent, and predicates of personal taste, such as tasty or beautiful, or may emerge out of more general properties of the text, as in the case of free indirect discourse, in which a variety of cues, such as tense or choice of anaphor, indicate the relevant perspective.

Such cases have attracted the attention of both theoretical linguists and experimentalists. Our goal in this seminar course is to sample cases of recent and historical interest and to assess whether commonalities across the phenomena of interest permit a unified account. Perspective and point of view arguably pervade many levels of language. Thus, we adopt a strategy of initially constraining our attention to items that encode or otherwise involve the calculation of perspective lexically, dealing first with predicates of personal taste, modals, expressives and epithets. We then turn our attention to larger stretches of discourse, including relative clause types, common ground calculation, free indirect discourse, and pronominal interpretation. We hope to engage both the theoretical and the experimental literature, with selected readings from both areas throughout the course. The ultimate aim is to develop a working model of perspective and point of view phenomena that is informed by and contributes to the theoretical literature, and can be tested and refined through experimental methods.


Linguistics 251:  Foundations and applications of continuous mathematics for linguistics

Robert Daland

This course surveys particular aspects of continuous mathematics which are likely to be of special relevance to linguistics graduate students. It opens with a brief, axiomatic treatment of the real numbers, with the dual goals of accustoming students to formal proofs, and acquainting students with the occasionally counterintuitive properties of sequences of symbols. Next, the course turns to linear algebra. The foundations are covered, and then various linguistic applications are considered: the equivalence of Markov chains and probabilistic finite state automata (applications include Hale's paper "The Information Conveyed by Words in Sentences" and a proof by Daland about convergence in phonotactic learning), the simplex algorithm as used for learning Harmonic Grammars, and linear regression. If there is time remaining, it will be devoted to learning, with a special emphasis on Bayesian modeling (naive Bayesian classifiers, conjugate priors, and maximum entropy models).
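The Markov chain / probabilistic finite-state automaton equivalence mentioned above is, at bottom, a fact about matrix products. The toy automaton below is hypothetical, purely to illustrate that connection:

```python
import numpy as np

# A toy probabilistic finite-state automaton over symbols {a, b}:
# an initial distribution, per-symbol transition matrices, and
# per-state stopping probabilities. All values are illustrative.
init = np.array([1.0, 0.0])            # start in state 0
final = np.array([0.2, 0.5])           # P(stop | state)
T = {
    "a": np.array([[0.4, 0.2],         # T["a"][i, j] = P(emit "a", go i -> j)
                   [0.1, 0.0]]),
    "b": np.array([[0.0, 0.2],
                   [0.3, 0.1]]),
}
# Well-formedness: in each state, stopping mass + outgoing mass sums to 1.
for i in range(2):
    assert abs(final[i] + sum(M[i].sum() for M in T.values()) - 1) < 1e-9

def string_prob(s):
    """P(string) = init . T[s1] . ... . T[sn] . final: just matrix products,
    which is the sense in which a PFSA is a Markov chain with outputs."""
    v = init
    for ch in s:
        v = v @ T[ch]
    return float(v @ final)

print(string_prob("ab"))  # 0.4*0.0*0.2 + 0.4*0.2*0.5 + ... = 0.062
```

Summing `string_prob` over all strings of all lengths would give 1 for a well-formed automaton, which is the kind of convergence fact the course's linear-algebra tools let one prove.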



Fall 2014

Ling 213C: Linguistic Processing

Jesse Harris

Class times: MW 10:00AM – 11:50AM

Location: Campbell 2122A
Office hours: Monday 3-5, Campbell 2224C, and most any other time by appointment.


Theoretical linguistics is concerned with how language is organized in the abstract, creating models of linguistic competence, typically unencumbered by issues of performance. In contrast, psycholinguistics addresses how language might be realized as a component within the general cognitive system: specifically, how language is comprehended, produced, and represented. It is an interdisciplinary effort, drawing on research and techniques from linguistics, psychology, neuroscience, and computer science, and it utilizes a variety of methods to investigate the underlying representations and mechanisms that are involved in linguistic computations.

The core areas of psycholinguistics include language acquisition, language perception, language production, language comprehension, language and the brain, and language disorders and damage. This course emphasizes depth over breadth, and so we will not delve into all of these topics. Instead, we will be focusing on just two areas of research: mental representations and processing of lexical units, and sentence comprehension and production. We start with the basics of lexical access and decision, exploring various models of the processes. We then move to an overview of classic models of sentence processing, which vary according to a number of related properties such as the modularity/interactionism of information channels and the serialism/parallelism of processing. Finally, we discuss several topics in current and classical language research, including filler-gap dependencies, semantic processing, and sentence processing.

Crucially, psycholinguistics does more than simply describe the facts. It attempts to weave what is known about how humans produce and process language into a coherent cognitive model, with enough structure so that we can study its composition in a rigorous, hypothesis-driven way. An important theme of this course involves elements of model building and assessment, emphasizing explicit and concrete hypotheses that make testable, and linguistically informative, predictions.

The aims for this course include:

• Identifying the major choice points of classic and current psycholinguistic models, as well as the essential arguments for and against them;
• Creating and assessing explicit and concrete experimental hypotheses that develop aspects of a model;
• Learning how to generate testable predictions from experimental hypotheses;
• Acquiring practical experience with experimentation;
• Presenting results and interpretations in a clear and accessible way.

The course is likely to change as the quarter progresses. Please check the course website often.


Spring 2014

Ling 252: The semantics of speech acts

Jessica Rett

Tuesdays, 10am -1pm
Rolfe 3120

The difference between speech acts like John is home now (an assertion) and John is home now? (a question) has traditionally been characterized as pragmatic (Wittgenstein, 1953; Austin, 1962; Searle, 1969). Linguists have, however, observed that these differences in illocutionary force are often explicitly marked across languages: syntactically; intonationally (as in English); or by sentence particles (as in Cheyenne, Murray 2014). This suggests the need for a compositional semantics of speech act markers (i.e. illocutionary mood), and consequently for a formalization of the semantics/pragmatics interface.

We begin with a review of the philosophical typology of speech acts and some relevant pragmatic and syntactic considerations. We'll then examine a variety of theories that have found the need to represent speech acts compositionally: Krifka's account of quantifiers in questions; Gunlogson's semantics of intonation; along with analyses of responses in discourse, attitude markers, and illocutionary mood.


wk date               topic                                                      readings (optional readings italicized)
1   October 7        introduction                                         Green 2009
2   October 14      philosophical background               Searle 1969 Chs 2 & 3
3   October 21      linguistic background                       Bierwisch 1980 (Lyons 1977 Ch 16)
4   October 28      speech acts & quantifiers               Krifka 2001 (Cohen and Krifka 2014)
5   November 4    intonation & speech acts 1             Gunlogson 2001 Chs 1-3 (Stalnaker 1978)
6   November 11  intonation & speech acts 2             Gunlogson 2001 Chs 4-5
7   November 18  responses to speech acts              Farkas and Bruce 2010
8   November 25  semantics of sincerity conditions  Rett 2014
9   December 2    semantics of illocutionary mood    Murray 2014
10 December 9    student presentations                     (to be rescheduled)

Students will be responsible for attending class and doing the readings; I will ask each of you to pick a particular day and topic for which you will be more responsible than usual for facilitating class discussion. Students enrolled in the course for credit will be asked to write a term paper for the course; we will hopefully find time near the end of the quarter for these students to briefly present their work.

Austin, J. (1962). How to do things with words. Harvard University Press.
Bierwisch, M. (1980). Semantic structure and illocutionary force. In Searle, J., Kiefer, F., and Bierwisch, M., editors, Speech act theory and pragmatics, pages 1-35. Springer.
Cohen, A. and Krifka, M. (2014). Superlative quantifiers and meta-speech acts. Linguistics and Philosophy, 37:41-90.
Farkas, D. and Bruce, K. (2010). On reacting to assertions and polar questions. Journal of Semantics, 27:81-118.
Green, M. (2009). Speech acts. In Zalta, E. N., editor, The Stanford Encyclopedia of Philosophy. Spring 2009 edition.
Gunlogson, C. (2001). True to Form: Rising and Falling Declaratives as Questions in English. Routledge.
Krifka, M. (2001). Quantifying into question acts. Natural Language Semantics, 9:1-40.
Lyons, J. (1977). Semantics, volume 2. Cambridge University Press.
Murray, S. (2014). Varieties of update. Semantics & Pragmatics, 7:1-53.
Rett, J. (2014). Sincerity conditions in a Stalnakarian update semantics. Ms., UCLA.
Searle, J. (1969). Speech acts. Cambridge University Press.
Stalnaker, R. (1978). Assertion. Syntax and Semantics, 9:315-332.
Wittgenstein, L. (1953). Philosophical investigations. John Wiley.


Spring 2014

Ling 252B Seminar 1: Trivalent Logic and Natural Language Semantics

Presuppositions, Vagueness and Plurals

Benjamin Spector


April 1, 2, 3, 8, 9, 10 (Tues, Wed, Thurs of the first 2 weeks of Spring 2014)


Campbell 2122A

Students: please register!

Course description

This class will be concerned with various linguistic phenomena, such as vagueness and presuppositions, which have motivated the use of trivalent semantics, i.e. semantics in which a sentence's truth value can be 'true', 'false', or 'undefined'. I will introduce various types of trivalent systems and compare their predictions for presuppositions and vagueness, with a focus on the so-called 'projection' problem. I will then move to the semantics/pragmatics of plurals, and show that trivalent logics give us tools that can help us make sense of a number of puzzles in this domain. Specifically, I will argue that plural expressions involve a specific form of vagueness, which can help explain phenomena such as non-maximal readings of plural definites, so-called homogeneity presuppositions, as well as the various readings of reciprocal sentences.


LING 252 Seminar 3: Phases, Economy, and Move, Merge, and Agree: the Minimalist Program Reconsidered

Dominique Sportiche and Tim Stowell


The seminar will present and critically discuss several key aspects of the architecture of the current minimalist program of syntactic theory.

While we will devote some attention to historical precursors and discuss the considerations that motivated current assumptions, our primary aim will be to identify areas of inquiry that we believe will prove fruitful for future research, i.e. we hope to find topics that may be of interest for graduate students to work on.

We will encourage students enrolled in the class to each select one or two empirical phenomena to work on (e.g. comparatives, inversion, reciprocals, etc.) that will enable them to engage at least some of the issues and to make a class presentation.

The course will cover material relating to the following topics:

·   Derivation of Phrase Structure and Word order

--elimination of X-bar projections and labeling
--cyclic linearization and/or spell-out
--interface-driven movement

·   Locality Domains

--Movement and bounding theory (phases, edges)

--Agreement and feature theory

--Binding, quantification, anaphora

· Economy


--Closest Attract/Intervention

--Move over Merge


LING 252 Seminar 2: Mini-Course by Jonathan Bobaljik (UConn)
(Distributed) Morphology: the syntactic structure of words

Carson Schutze

Friday, May 16 2:00-4:00 2122 Campbell Hall
Monday, May 19 4:00-6:00 2122 Campbell
Tuesday, May 20 4:00-6:00 2122 Campbell
Thursday, May 22 4:00-6:00 2122 Campbell
Friday, May 23 2:00-4:00 2122 Campbell

In this mini-course, we will look into the types of evidence that bear on current debates about the internal structure of words, and the relationship of morphology to other components of grammar (especially, but not only, syntax). We examine the central tenets of the framework of Distributed Morphology, namely arguments for hierarchical (syntactic) structure within complex words (syntax-all-the-way-down), and that this structure is abstract, independent of the phonological pieces that realize the structure (Late insertion). A central area of investigation concerns (apparent) mismatches, for example where the syntactic structure and morphological structure appear to differ, or where a form varies for context in ways that are not phonologically predictable (allomorphy). This leads to discussion of how complex the mapping from syntax to morpho(phonology) needs to be, how additional formal devices are to be constrained, and where the trade-offs may be found, enriching one component or the other in favour of a more straightforward mapping.

Evidence will be drawn from cross-linguistic surveys of morphological patterns, especially those that stand as contenders for universal generalizations, including (time permitting) suppletion in adjectival morphology (Bobaljik 2012 Universals in Comparative Morphology); locative morphology (Radkevich 2010); the expression of person and case morphology (Caha 2009); and other features that appear to participate in 'markedness' hierarchies.

LING 251A/B: Intonational Bootstrapping in Acquisition

Sun-Ah Jun & Megha Sundara

Mon/Wed 2-4pm

In this course we are going to investigate how intonational features, especially phrasing and prominence marking, help to bootstrap language acquisition. We are particularly interested in infants' and children's development in comprehension, given various manipulations of intonational features; production studies will also be discussed when they are relevant to understanding the results from comprehension studies. We will also look at other languages whose intonational features differ from those of English.

Hope to see you there,
-Megha & Sun-Ah 


LING 254A/B Seminar: Topics in Linguistics Spring 2014

Nina Hyams

Acquisition of control (and possibly related things)

Beginning with Carol Chomsky’s (1969) seminal study, and again in the late 80s and early 90s, there was a flurry of experimental work looking at children’s acquisition of control (into complements, temporal adjuncts, and purpose clauses) with very interesting results. Children were often not adult-like in their interpretations of control structures, and the results were fairly uniform across studies. While acquisition work in this area largely stopped after that point, there continued to be developments in the theory of control (in adult grammar), including control as movement (Hornstein 1999), control as Agree (Landau 2001), logophoricity in control (Williams 1992), and semantic and pragmatic theories of control (e.g. Chierchia 1989). In this course I’d like to revisit the early acquisition results and reconsider these findings in light of more recent theoretical approaches, which I hope will suggest new directions to explore and inspire experimental work on aspects of control (and related areas) that have not previously been looked at in children, e.g. logophoric contexts.

I don’t expect that we will spend the whole quarter on control. Other (related) topics we might cover include (acquisition of) logophoric constraints on pronouns (Sells 1987; Reinhart & Reuland 1993, etc.). Condition C (of the Binding theory) is also an area that has been largely neglected in the acquisition world (but see e.g. Crain & McKee 1985), as compared to Conditions A and B. So that would be another topic to investigate.
The direction of the class will also depend on the interests of the participants, so I welcome those of you who would like to attend to suggest areas (related in some way) that you’d like to explore. It would be helpful for you to be familiar with basic acquisition issues and results, but I do not want to discourage anyone from attending, esp. if you are knowledgeable about the (adult) syntax/semantics of these structures. So if you’d like to attend but have not taken 213A or equivalent course, please contact me and get hold of a copy of Guasti’s Language Acquisition: the Growth of Grammar. MIT Press.

The class is currently scheduled to meet M,W 11-1. I’m open to changing the time/day depending on the schedules of the participants and I would prefer to meet once a week. We can discuss this at the first meeting. The class can be taken for 2 or 4 credits (254A,B respectively).


Winter 2014

LING 252B Seminar 3: Methods in Semantic Fieldwork

Sarah Murray

Tuesday 2-4 in Campbell 2101K (the phonetics lab)
Thursday 2-4 in Campbell 2122A (the seminar room)
Friday 2-4 in Campbell 2122A (the seminar room)

There is a reading assignment for the first day of class so please come prepared.

Methods in Semantic Fieldwork

This mini-course will introduce various methods for collaborating with native speaker consultants to collect semantic and pragmatic data. We will discuss how (not) to use translation and acceptability judgments, the use of texts, the elicitation of stories, and the importance of context and how to establish it. We will also discuss the relation between semantic fieldwork and theory, which is potentially mutually beneficial. Examples of fieldwork on presupposition, not-at-issue vs. at-issue content, and underspecification vs. ambiguity will be discussed.

Though emphasis will be put on working with understudied languages, the methods discussed apply to semantic research on any language and semantic considerations important for fieldwork in general.

Please read the following paper before the first meeting:


Sarah's website: http://conf.ling.cornell.edu/sem/index.shtml

Sign-up sheet for Sarah's dinners and office hours: http://doodle.com/z946c44a2itgmtis

Sarah's office hours will take place in Campbell 3101M (Dominique's office)


Ling 218: Math Ling 2

Ed Keenan

1. The boolean structure of major category types: Predicates and Arguments; Modifiers; Quantifiers; Intensional modifiers (without possible worlds)
Readings: class handouts + Introduction to Boolean Algebras, S. Givant & P. Halmos, 2009

2. Linguistic applications of generalized quantifiers: Conservativity; intersective, co-intersective (universal), and proportionality quantifiers; sortal reducibility; “R-expressions” vs anaphors (possibly also predicate anaphors)
Readings: Keenan, E.L. Some logical properties of natural language quantifiers, pp. 60–85 in The Philosophy of David Kaplan (distributed electronically); Peters, S. and D. Westerståhl, Quantifiers in Language and Logic, OUP

3.  Abstract Grammar: syntactic universals as automorphism invariance; the semantics of case
marking and voice marking languages; free word order languages
Class handouts
Keenan, E.L. and E.P. Stabler. 2003. Bare Grammar.  CSLI

4?  A deeper look at adjectives?

Topics 1. and 2. are background that a grad student in linguistics (from UCLA) should be conversant with in order to teach courses in formal semantics.  We have several mathematical results and will go over their proofs (in some cases).  The two books listed in Readings are very useful and high quality background.  
Topic 3 includes some elementary material and open ended exploration in formulating the syntax and semantics of case marking languages.  One issue: how to interpret discontinuous constituents whose constituency is signaled by matching case markers.  
Topic 4 is exploratory – we may or may not get to it.  
The first two thirds of the course will involve homework exercises, usually proofs.  Students will be required to do a short paper or a class presentation on a topic that arises in our work.

Ling 185A/209A: Computational linguistics

Ed Stabler

Ling 185a(/209a) Computational linguistics, Winter 2014

Lecture: MW2-4, Discussion W4-5

I have completely redesigned Ling 185a (thanks to a sabbatical break!!). It introduces some basic programming (python) to test our ideas, but I think the class should be manageable for all grad students who have taken Ling 208, some syntax, and some phonology. [The class is also open to advanced undergrads, but it's numbered 185a for the registrar, rather than 209a, for purely administrative reasons; the content is substantial.] This class will get to a contemporary perspective on the field much more quickly than previous versions, building a narrow but sure understanding of basic, recent ideas.

One advantage of the new approach is that I think it should be of interest to a broader range of people. This first class is designed for all linguists who have any interest in how grammar might be used in performance, or even anyone who cares what grammars really are and how they connect to the evidence. I am excited to try this new class out next quarter.

[Previous versions of 185a focused much more on a broad, basic background in parsing, which is now almost completely removed. So I think the class is approx 90% new. With the new approach, the background in alternative parsing strategies can be filled in - for those interested - in 185b and later classes.]

Two ideas, relatively new to the field and new to the class, allow this more direct approach to what's happening:

* The mechanisms of phonology and syntax are now known to be more similar in fundamental respects than we thought they were. Syntax is regular (i.e. finite state) over trees in the same way that phonology is usually thought to be regular over sequences (of segments), at least to a first, good approximation. (This is discussed at length in Thomas Graf's thesis http://thomasgraf.net/papers.html#Graf13Thesis.) But of course, phonology needs tree structures too, so we introduce sequences first but then immediately introduce trees.

Not only are phonology and syntax both based on regular generators, but in both cases there are strong arguments for factoring the generators into interacting pieces: gen+constraints, derive+spellout. And both use statistics in roughly the same way.

So the class will design and implement 'state of the art' parsing models for phonology and syntax, with similar architectures. There is of course some debate about whether the regular generators for phonology and syntax are adequate; some of these issues will be mentioned as topics for further work, after this class provides us with a basic framework.

* If we are serious about trying to understand language acquisition, the changing nature of a child's linguistic abilities, what we need is really not just a parser for phonology and a parser for syntax. For a reasonable psychological model, we need what are called 'universal' parsers, that is, parsers that immediately accommodate changes in the grammar. These seem to be less common in computational phonology than in syntax, but are needed in both.

We will not study the acquisition problem or how to set statistical parameters for these parsers in this class. That will be a topic in Robert Daland's Ling 236 (TR9-11) so these two classes complement each other perfectly. The methods for acquisition are essentially similar in phonology and syntax, so I think anyone interested in language acquisition should take 236 in addition to 185a (we expect to find many synergies!).

The new Ling185a syllabus is organized into 9 sections (approx 1 per week)

I Computing phonology 'online' (for production and comprehension)

1 The lexicon as a regular transducer

2 Weakly regular prosody and parsing

3 Alternations by optimizing: Classical OT

4 Optimizing over regular trees: Prosody again

5 Linear harmonic regular tree grammars (just a glimpse)

II Computing syntax 'online'

6 Phrase structure from regular tree grammars

7 Movement and constraints over regular derivations

8 A first minimalist grammar for English

9 Harmonic minimalist grammar

Basic finite state models are introduced in 1. Context free models for prosody are introduced in 2. Then classical OT 3, with trees 4, and harmonic grammar 5. Turning to syntax in 6 then requires a change in basic elements but no new mechanisms, so we can parse phrase structure immediately, and we can get minimalist grammar 7,8,9 by a composition of regular mechanisms that is very similar to what we see in OT.
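As a taste of what "regular over sequences" means for phonology, here is a minimal sketch of a deterministic finite-state acceptor in Python. The particular constraint (no two adjacent consonants) and the vowel alphabet are invented for illustration; they are not taken from the course materials, which develop transducers and tree automata in much more detail.

```python
# A toy deterministic finite-state acceptor: it checks a phonotactic
# pattern using only finite memory (the single 'state' variable).
# The "no two adjacent consonants" constraint is an illustrative
# assumption, not a claim about any particular language.
VOWELS = set("aeiou")

def accepts(word):
    # States: "V" = last segment was a vowel (or we are at the start),
    #         "C" = last segment was a consonant.
    state = "V"
    for segment in word:
        if segment in VOWELS:
            state = "V"
        elif state == "C":
            return False  # two consonants in a row: reject
        else:
            state = "C"
    return True

print(accepts("banana"))  # True  (CV syllables throughout)
print(accepts("bna"))     # False (initial consonant cluster)
```

The point of the sketch is only that a fixed, finite set of states suffices to enforce the pattern, which is what the course's claim that phonology is "regular over sequences" amounts to at the string level.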


Ling 252: Topics in Semantics

Yael Sharvit

In this course we will talk about copular constructions, focusing on asymmetries such as the following.

(1) a. Who (do you think) _ is Sam?

b. Who (do you think) Sam is _?

(2) a. Yael believes that Sam is Jessica.

b. Yael believes that Jessica is Sam.

(3) a. John is not the president.

b. The president is not John.

Intuitively, (1a) and (1b) seek different kinds of information (Percus 2003); (2a) and (2b) are non-contradictory (Cumming 2008); (3a) does not presuppose the existence of a president, while (3b) does (Halliday 1967).

As we will see, “blaming” these asymmetries on a difference in meaning between the so-called ‘predicational’ be and the so-called ‘non-predicational’ be is neither simple nor obvious. I will do my best to convince you that accounting for these asymmetries requires digging deeply into the semantics, pragmatics and syntax of attitude reports, questions, conditionals, definite descriptions, names, superlatives and more. Especially helpful to us will be work by R. Higgins, I. Heim, B. Partee, P. Jacobson, O. Percus, S. Cumming, M. Romero, I. Yanovich, G. Thomas (and many others whose names are omitted from this list for no good reason). While Semantics I (or equivalent) is required background (in the sense that it will be hard for you to follow the formalisms if you've never taken any kind of semantics), and Semantics II is highly recommended background (in the sense that it will be hard for you to follow the formalisms if you've never "played with" possible worlds), everyone and anyone interested in copular constructions is welcome to attend.


Spring 2013

Linguistics 251: Variation in Phonology

Bruce Hayes and Kie Zuraw

  • Free variation in output forms; research results of sociolinguistics
  • Variation in the lexicon—how it can/should be treated by the same models that treat output variation.
  • Degrees of productivity in phonology and morphology and how variation models can treat them.
  • Formal grammatical models of variation: OT, Harmonic Grammar, maxent grammars, logistic regression.
  • Practical advice in doing variation research: corpora, software, theory
  • The role of priors—language learning as frequency matching, guided by priors (“soft UG”). Work of Ryan, Martin, White, Wilson, McPherson, Hayes/Zuraw


Linguistics 251: Field methods for studying intonation—Tagalog

Sun-Ah Jun and Kie Zuraw

  • Develop an intonational model of Tagalog/Filipino by eliciting data from native speakers.
  • Use corpus materials to test and further develop the model.
  • How well do intonational units predict the domains of various phonological processes?

Linguistics 254B: Python for language research

Robert Daland


Though not necessary for high-quality research, the ability to program is a valuable research skill. Programming is likely to become a more valuable research skill in the future, and it is also a valuable skill in case your career path leads outside of academe. For these reasons, I am offering a Python course in the spring quarter. (For those who took my mini-course last summer, we will cover some of the same material and expand on it.) The most likely time will be during the unofficial "lunch" hour on Tuesday and Thursday, but that is negotiable based on the constraints of registered students (top-ranked) and secondarily of auditors. The course does not have a p-side or s-side bias; it is intended to benefit all graduate students who wish to use Python to study language. Because this is a 250 class, there will be a final project; since the course meets for only 2 hours, it will not be demanding. You are encouraged to do something that supports an existing research project (e.g. some simple analysis of a large corpus). The course description follows:

It is becoming increasingly important for researchers to be familiar with the rudiments of programming. This course is designed for beginners and near-beginners to learn the rudiments of programming using the Python scripting language. The course will be oriented towards linguistics graduate students, but may be of use to researchers in other fields and at other stages in their careers. Part I of the course will cover the absolute basics -- the "print"
statement; atomic data types such as strings, lists, and dictionaries; flow control tools such as "if" statements and "for" loops. Part II will be geared toward corpus searching, and string processing more generally -- regular expressions; how to handle Unicode; parsing XML; how to format output for analysis in a statistical package such as R or SPSS; and how to combine custom Python scripts with command-line tools such as SED for faster, more modular, and more transparent output. If there is time remaining, Part III will consider more advanced uses of Python, such as online data acquisition and/or multi-agent modeling, on the basis of student needs/desires. There will be a (modest) final project.
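To give a concrete sense of the Part I and Part II material (strings, dictionaries, loops, regular expressions, simple corpus searching), here is a minimal sketch of the kind of script such a course builds toward. The function name, the naive regex tokenizer, and the toy corpus are all invented for illustration; they are not the course's prescribed code.

```python
import re
from collections import Counter

def word_counts(text):
    """Count word tokens in a text, lowercased.

    Tokenization here is a naive regex over alphabetic runs --
    an illustrative simplification, not a serious tokenizer.
    """
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens)

# A toy "corpus"; in practice this string would be read from a file.
corpus = "the cat saw the dog and the dog saw the cat"
counts = word_counts(corpus)
print(counts.most_common(3))  # the most frequent word types
```

A script like this combines exactly the pieces the description lists: string methods, a dictionary-like data structure (Counter), a for-loop hidden inside the comprehension, and a regular expression; its output could then be exported for analysis in R or SPSS.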


Linguistics 252:  Comparative syntax: Exploring and Expanding SSWL

Hilda Koopman

This seminar will focus on the interaction of syntactic theory and the data and analysis of variation across languages and within "languages/individual speakers". The goal is to explore, develop and expand the SSWL (Syntactic Structures of the World's Languages) database, and its future host TerraLing, from a theoretical point of view. SSWL is an open-ended, open-source, expert crowd-sourced database of syntactic (as well as semantic and morphological) properties of the world's languages. The database is open-ended in the sense that new properties can be added infinitely (which will happen gradually over time), and is intended as a tool to support research, eventually run by the community and for the community. TerraLing is the next generation of the database project: it provides a flexible platform for linguists to set up their specific individual projects and use the tools that come with the database.

Based on the theoretical developments of the past 50 to 60 years, and the accumulated body of knowledge about variation, the ultimate goal is to develop new property definitions for a number of domains, building on specific theoretical predictions about what we expect to find, and not find. After a general introduction to the database, and an overview of the current landscape around databases (shortcomings and desires about what a database should be able to do), I will motivate a new hypothesis about the expected space of variation in the domain of word order (cf. a series of recent talks, Koopman 13). This will be put to the test on data currently in the database(s), and beyond. The general "lessons" of these investigations should help develop new property definitions efficiently, ideally distinct from, and non-overlapping with, WALS. In addition, I hope to move current property definitions in development closer to their final stages, and to start developing a list of potential properties explicitly proposed in the literature (mostly by Kayne in various papers).

Here is a (non-exhaustive) list of topics that we will touch on and choose from, most of which have an extensive literature:

1. the nominal domain: adjectives (in particular based on Cinque 10), the functional adjective "other", indefinites and definites, ellipsis, demonstratives and determiners, numerals, relatives in all their guises (in particular reduced relatives, (participial) relatives), different types of possessors, pronouns, compounds and "nominal complements", case and
2. the distribution of objects ("compounds"/incorporated, generic, animate/inanimate, indefinites, specific indefinites, definite, pronominal, quantified, focused, silent, ...); the interaction between subject and object marking, and case marking (cf. the "dance of subjects and objects" (Koopman, 12)).

The plan is to balance both theory and analysis, and to meet for 30 hours (3 hours a week), currently distributed over the scheduled time Mo-We 11-1. If need be, we can reschedule for some other appropriate time, depending on student demand and conflicts. Please contact me if you have any questions or would like clarifications.

Can be taken for 2 or 4 units.

Apart from potential published (co-authored) property definitions (which are pieces of research in and by themselves), this class should

Winter 2013


Ling 218 Math Structures in Language 2 (Ed Keenan)

Math Ling 2 (Ling 218)
Winter 2013
Instructor: Ed Keenan            
Intensional Types for Natural Language

We introduce boolean type theory to streamline standard extensional type theory for natural language.  Our initial semantic primitives are
(1) truth values and (2) properties (common noun denotations), not entities.  Proper noun (and individual constant) denotations, individuals, are defined in terms of properties and truth values.
Then we generalize to an intensional type theory using no semantic novelties (possible worlds, structured meanings).  We show that the surgeons and the flautists can be the same individuals even though surgeon and flautist denote different properties.  We represent evaluative adjectives like skillful which are inherently intensional:
if the surgeons and the flautists are the same individuals the skillful surgeons and the skillful flautists may be different.
Our basic models ride on complete non-atomic boolean lattices.  The first few weeks of the course review boolean lattices (algebras) and include homework exercises from Introduction to Boolean Algebras by Givant and Halmos, available from the textbook section in Ackerman.
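As a small concrete companion to the boolean-lattice review, here is a sketch of a finite powerset boolean algebra in Python, with meet as intersection, join as union, and complement relative to the universe, verifying one of De Morgan's laws exhaustively. The three-element universe is an arbitrary illustrative choice; note that the course's models are complete non-atomic lattices, which this finite (atomic) example does not capture.

```python
from itertools import combinations

# A finite powerset boolean algebra over a small, arbitrary universe.
# meet = set intersection (&), join = set union (|),
# complement = set difference from the universe.
U = frozenset({1, 2, 3})

def powerset(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def complement(x):
    return U - x

# Exhaustively verify one De Morgan law on this algebra:
# complement(x meet y) == complement(x) join complement(y)
for x in powerset(U):
    for y in powerset(U):
        assert complement(x & y) == complement(x) | complement(y)
print("De Morgan's law holds on the powerset of", set(U))
```

In a finite powerset algebra every element is a join of atoms (the singletons); the non-atomic lattices the course works with lack such atoms, which is precisely what makes them suitable for property denotations.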
The second half of the course will be a research seminar.  We explore whether the boolean techniques used for nouns and adjectives extend to classical intensional phenomena – modals and sentential adverbs (must, necessarily).  We look at other sorts of intensional models (Moschovakis, Muskens, Lappin & Fox, Capretta) as well as subclasses of restricting adjectives – partial vs total (Rotstein & Winter), antonymous equatives (Rett), threshold constant vs variable (Burnett).
Students are required to give a class presentation on one or another of the outside readings and write a short paper on any topic relevant to the course.

Addendum: To those interested in attending Ling 218: I just found out that Givant and Halmos' Introduction to Boolean Algebras is available through the UCLA Library Catalog. This is a very nice textbook with modestly advanced results clearly presented without presupposing that you are a PhD in mathematics.


Tim Stowell

Ling 252: The Syntax (and an overview of the Semantics) of Tense, Aspect and Mood

Tuesday, Thursday 2-4, Kaufman 136

Topics to be covered (approximately in this order)


1.         Morpho-syntax I: English TMA and the Early Generative Tradition

1a.       Chomsky 1957, 1965:  AUX and Affix Hopping

1b.       Rosenbaum 1967: Nonfinite clauses

1c.       Ross 1969: Auxiliaries as main verbs; Complement Selection

1d.       Chomsky 1970, Hornstein 1975: X-bar theory

1e.       Cross-linguistic variation in the standard theory


2.         The Reichenbachian Tradition

2a.       Reichenbach 1947: S, E, and R; tense constructions and compositionality

2b.       Amendments: relative vs. absolute tense; complex tenses; Comrie 1985

2c.       Hornstein 1977, 1990: S/E/R and Clausal Embedding

2d.       Vendler 1967: aspectual classification, telicity


3.         Morpho-syntax II: the Later Generative Tradition

3a.       Infl and IP; Chomsky 1981, 1986, Stowell 1981

3b.       TP and AgrP, V-movement vs. Affix hopping; Pollock 1989

3c.       AspP, PerfP, ProgP, EverythingelseP; Cinque 1999, Various Authors

3d.       More affix hopping: LF V-movement, feature movement, Attract F

3e.       Pesetsky/Torrego:  Tense and Case


4.         Empirical phenomena (I)

4a.       Tense & Stativity; Progressive, Habitual, Imperfect, Perfect

4b.       Sequence of Tense; Simultaneous, shifted, and double access readings

4c.        Tense and Mood: Subjunctive/Indicative; cross-linguistic variation

4d.       Tense and Modals; Root, Epistemic, Metaphysical modality; Future Tense?

4e.        Tense and Adverbial clauses; agreement and scope

4f.        Tense & Definiteness:  Partee 1973, Present perfect puzzle (Klein)

4g.       Relative vs. Absolute/Indexical Tense; cross-linguistic variation

4h.       Past Tense and Counterfactual Modality: Iatridou 2000, Ippolito 2000

4i.         Infinitival Tense: Stowell 1982, 2007; Abusch 2004, Hackl/Nissenbaum 1998

4j.         Distal Tense systems (degrees of remoteness) and indexicality


5.         GB/P&P/Minimalist Theories of the Syntax of Tense

5a.       Tense and binding theory: Enç 1987, Zagona 1990, Klein 1994

5b.       Tense and argument structure, control, and polarity: Stowell 1995a,b, 2007

5c.       Tense and aspectual categories: Demirdache & Uribe 2004, etc.

5d.       Alternatives: morphological competition, upper limit constraint, scalar implicatures


6.         Analytical Issues of current theoretical interest

6a.       Predicative vs. Referential theories of tense semantics

6b.       Tense and intensionality: complement clauses vs. relative clauses & adjuncts

6c.       Tense and internally headed relative clauses in Japanese/Korean (various)

6d.       Temporal polarity vs. Zero Tense (Kratzer 2000)

6e.       Infinitival tense revisited: Wurmbrand 2007, Stowell 2007, Zwart 2007

6f.        Distal tense systems and intervention effects

6g.       Tenseless languages, Creole languages;         tense interpretation in (derived) nominal

6h        Lifetime effects, anaphoric tenses, scalar implicatures.



Fall 2012

Linguistics 252, Topics in Syntax and Semantics

Yael Sharvit
The main topic will be cross-linguistic manifestations of tense, especially in embedded environments, and with special emphasis on intensional environments (e.g., complement clauses of attitude reports; conditionals). We will consider recent theories of tense embedding, and relate them to general semantic and syntactic theories of sentence-embedding.

Ling 212. Learnability Fall 2012

Ed Stabler

Tentatively: TR2-4, Haines A28

The past 10 years have brought important advances in the theory of learning. We will begin with basics but then shift rather quickly to one tradition that is recently prominent. Many things are happening right now.

(1) In the past 10 years or so, a remarkable consensus emerged about appropriate measures of learning complexity, measures related to how much evidence might be required to distinguish one hypothesis from alternatives. "VC dimension" and a number of other, independently proposed measures turn out to be equivalent, showing that a number of independent traditions have homed in on essentially the same idea about learning.  Having a criterion like this is important because it allows us to define the kind of learning process we should be searching for.  (2) With this measure, it is easy to establish the learnability of finitely parameterized grammars, OT grammars and harmonic grammars that assume a given, finite list of constraints. But these results are unsatisfying if you doubt that the number of parameters needed to define human languages (with their lexicons
included) is small enough to be treated as a finite, given list.  And in certain cases it can be difficult or impossible to apply these methods directly to unanalyzed data of the sort plausibly available to the human learner.  (3) Recent results establish the learnability of certain aspects of linguistic structure from data more likely to be perceptually available, without assuming a given, finite parameterization. In phonology, it appears that at least some of the patterns may be in "sub-regular" classes that allow efficient learning, and (4) some of these ideas for learning regular languages have been extended to the more complex patterns in syntax. In particular, we now have learners for significant classes of
(non-context-free) languages definable by simple, formal 'minimalist grammars'.

PREREQUISITES: This class will begin from scratch, but the material is rather abstract (a learner needs to have something like a grammar of all grammars!), so you should have at least Ling 208 or equivalent.

READINGS will include some of the following -- to be adjusted depending on the backgrounds and interests of the class.  (Readings marked with * were not covered in previous versions of this class, but I think we can get to all or most of them by focusing appropriately. They are more closely related to each other than you might think.)

1. Basics

Jain&al'99 text, "Systems that learn"
Kearns&Vazirani'94 text, "An Introduction to Computational Learning Theory"
Angluin'80 Inductive inference of formal languages from positive data

2. Finite parameter setting and historical perspectives

Chomsky'81 Lectures on Government and Binding (few pages in chap 1)
Pinker'82 A theory of the acquisition of lexical interpretive grammars
Frank&Satta'98 OT and the generative complexity of constraint violability
* Heinz,Kobele,Riggle'09 Evaluating the complexity of OT
* Bane,Riggle,Sonderegger'10 VC dimension of constraint based grammars
* Heinz&Riggle'12 Learnability

3. Regular languages and sub-regular patterns

Angluin'82 Inference of reversible languages
Angluin'88 Identifying languages from stochastic examples
* Clark&Thollard'04 PAC-learnability of probabilistic deterministic finite state automata
* Castro&Gavaldà'08 Towards feasible PAC-learning of probabilistic finite automata
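The Angluin-style inference results in this section can be made concrete with a small sketch. What follows is a minimal, simplified rendering of the idea behind learning 0-reversible languages from positive data (the function names and the merging loop are my own, not Angluin's presentation): build a prefix-tree acceptor from the sample, merge all final states, and keep merging states until the automaton is deterministic both forwards and backwards.

```python
def learn_zero_reversible(samples):
    """Merge states of a prefix-tree acceptor until it is 0-reversible."""
    # 1. Prefix-tree acceptor: each state is a prefix of some sample.
    states, trans, finals = {()}, {}, set()
    for w in samples:
        node = ()
        for sym in w:
            nxt = node + (sym,)
            states.add(nxt)
            trans[(node, sym)] = nxt
            node = nxt
        finals.add(node)

    # 2. Union-find over states.
    parent = {s: s for s in states}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Merge all final states into one class.
    fs = list(finals)
    for f in fs[1:]:
        union(fs[0], f)

    # 3. Repeat until stable: no state may have two outgoing transitions
    # on the same symbol (forward determinism), and no state may have two
    # incoming transitions on the same symbol (backward determinism).
    changed = True
    while changed:
        changed = False
        out, inc = {}, {}
        for (p, a), q in trans.items():
            p, q = find(p), find(q)
            if (p, a) in out:
                if find(out[(p, a)]) != q:
                    union(out[(p, a)], q); changed = True
            else:
                out[(p, a)] = q
            if (a, q) in inc:
                if find(inc[(a, q)]) != p:
                    union(inc[(a, q)], p); changed = True
            else:
                inc[(a, q)] = p

    # 4. Read off the merged DFA.
    dfa = {(find(p), a): find(q) for (p, a), q in trans.items()}
    return find(()), dfa, {find(f) for f in finals}

def accepts(start, dfa, finals, w):
    s = start
    for a in w:
        if (s, a) not in dfa:
            return False
        s = dfa[(s, a)]
    return s in finals
```

For example, from the positive sample ["", "ab", "abab"] the merging collapses the prefix tree to a two-state machine accepting (ab)*, generalizing beyond the data it saw, which is the point: for this restricted class, positive examples alone suffice for identification in the limit.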

4. Context free and MG languages

* Clark&Eyraud'07 Polynomial identification in the limit of substitutable context-free languages
* Yoshinaka&Clark'10 Polynomial time learning of some multiple context-free languages with a minimally adequate teacher

Topics in Syntax and Semantics (Ling 252A/B)

Martin Walkow

This class will investigate two areas of research on agreement and the role of the operation Agree (Chomsky 2000) in it.
The first is person-based restrictions on agreement and cliticization, which arise in environments like agreement with nominative objects in Icelandic, combinations of internal-argument clitics in Romance, or agreement with subjects and objects in Basque.
Work over the last ten years has tried to reduce these seemingly idiosyncratic restrictions to general facts about the locality of Agree, the case filter, the syntactic representation of person and number, the relation between syntax and morphology, and case. The typology of such restrictions has led to arguments about where variation is located in the grammar (the functional lexicon vs. grammatical operations). A further point of interest will be the alternative mechanisms languages use to avoid person restrictions. These strategies often show morphosyntactic properties not found elsewhere.
These data have led to new proposals about the nature of Last Resort mechanisms and the role of agreement failure in grammar.
The second is closest conjunct agreement, a phenomenon in which agreement with a conjoined argument expresses the person/number/gender features of the conjunct closest to the agreement controller, rather than features representing the combined properties of all conjuncts. Closest conjunct agreement has been at the heart of recent arguments about whether the grammar has access to linear order as a primitive or only by reference to syntactic structure, about the interaction of movement and agreement, and about how putative non-syntactic effects on agreement are regulated in the grammar.

The class meets Tuesdays and Thursdays 4-5:50 at HUMANTS A32.


Pro-seminar Fall 2012

“The Oxford Handbook of Laboratory Phonology”

Pat Keating, MW 11-1

Just published, The Oxford Handbook of Laboratory Phonology was edited by Abby Cohn, Cécile Fougeron, and Marie Huffman.  With 53 chapters by dozens of authors, it surveys many topics, both content areas and methods, from 25 years of work in laboratory phonology to give a "state-of-the-art assessment".  In this pro-seminar we will select a subset of these chapters, read them, and discuss them in class.