Why Ranking Software?
I've been programming and using constraint ranking software for Optimality Theory (Prince and Smolensky 1993) since the mid-1990s. From time to time, I've thought about the role it should have in phonological analysis, and have discussed this problem with other participants in OT. In brief, I see both pluses and minuses to the use of ranking software, but weighing them, I think the plus side is stronger.
The software discussed below is "OTSoft" (Hayes, Tesar, and Zuraw 2003), a Windows program that, when given inputs, candidates, constraints, and violations, will rank the constraints such that all and only the winners are generated.1 OTSoft ranks constraints using the Constraint Demotion algorithm of Tesar and Smolensky (1993, 2000), as well as a number of other algorithms. It also computes factorial typologies, constructs ranking arguments, and creates tableaux and Hasse diagrams.
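The core of Constraint Demotion can be sketched in a few lines. The following is a minimal illustration of the recursive version of the algorithm, not OTSoft's actual code; the data format (violation dictionaries) and the constraint names used below are my own assumptions for the sake of the example.

```python
# A minimal sketch of Recursive Constraint Demotion (Tesar & Smolensky).
# Input: a list of constraint names and a list of (winner, loser) pairs,
# where winner and loser are dicts mapping constraint name -> violations.

def rcd(constraints, pairs):
    """Rank constraints into strata (highest first) so that every
    winner beats its rival loser; raise ValueError on inconsistency."""
    # Convert each pair to its W/L signature: which constraints prefer
    # the winner (fewer violations) and which prefer the loser.
    ercs = []
    for w, l in pairs:
        W = {c for c in constraints if w.get(c, 0) < l.get(c, 0)}
        L = {c for c in constraints if w.get(c, 0) > l.get(c, 0)}
        ercs.append((W, L))

    remaining = set(constraints)
    strata = []
    while remaining:
        # A constraint may be ranked now if it prefers no loser
        # in any still-unexplained comparison.
        stratum = {c for c in remaining
                   if not any(c in L for _, L in ercs)}
        if not stratum:
            raise ValueError("no consistent ranking exists")
        strata.append(sorted(stratum))
        remaining -= stratum
        # Discard comparisons already explained by the new stratum.
        ercs = [(W, L) for W, L in ercs if not (W & stratum)]
    return strata
```

For instance, two winner/loser pairs showing that A must dominate B and B must dominate C yield the strata [["A"], ["B"], ["C"]].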
1. Advantages of using ranking software
1.1 Avoiding Error
Even the best analysts can make mistakes when they construct an OT analysis. These mistakes often arise at the level of overall integration, when all the input forms must be considered together and the result must pass the test of consistency.
When I serve as a journal referee, I sometimes check the analyses of articles that use OT by entering the constraints, inputs, and candidates into OTSoft. As often as not, OTSoft detects errors, some of them nontrivial. I also apply this procedure to articles that have already been published, and occasionally find significant errors in these as well.
My purpose in saying this is not to threaten people with exposure or anything like that! Rather, I think my experience shows that even highly able analysts cannot always avoid mistakes when they attempt to do OT analysis entirely by hand. Surely, all linguists want their analyses to work, and ranking software helps to guarantee this.
1.2 Tasks not doable by hand
There are three tasks in OT that can get too large to do by hand: ranking the constraints, computing the factorial typology of a constraint set, and constructing the full set of ranking arguments. These tasks can be done reliably by hand for very small constraint sets, but with more than a few constraints they consume a huge amount of time and paper, and the result probably cannot be trusted.
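The scale problem is easiest to see for factorial typologies: n constraints yield n! rankings to check (10 constraints already give 3,628,800). The toy sketch below, with hypothetical constraints and candidates of my own devising, enumerates every ranking and collects the distinct predicted languages, using the standard OT evaluation of comparing violation vectors in ranking order:

```python
# Toy factorial-typology computation: with n constraints there are
# n! rankings, which is why this task cannot be done by hand.

from itertools import permutations

def winner(ranking, candidates):
    """Pick the optimal candidate: violation vectors, ordered by the
    given ranking, are compared lexicographically; lowest wins."""
    return min(candidates,
               key=lambda cand: [candidates[cand].get(c, 0) for c in ranking])

def factorial_typology(constraints, tableaux):
    """Map each distinct predicted language (tuple of winners, one per
    input tableau) to one ranking that generates it."""
    languages = {}
    for ranking in permutations(constraints):
        lang = tuple(winner(ranking, cands) for cands in tableaux)
        languages.setdefault(lang, ranking)
    return languages
```

With a single tableau where "pat" violates *Coda and "pa" violates Max, the two rankings of {Max, *Coda} generate two languages, one faithful and one coda-less.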
1.3 The model of more established sciences
The use of computers to improve the reliability and scope of modeling is an important trend in contemporary science. Computational simulations increase the predictive power of the underlying theoretical principles by making it possible to deploy those principles in large simulations and compare the results to data.2 I doubt that participants in well-established sciences feel any sense of shame in using computers to assist them in their work. The job of the theorist is to understand the underlying principles and how they interact; using people for extended low-level computation would be unreliable and a waste of their time.
1.4 Where linguistics is going?
When I first wrote OTSoft in the mid-1990s, my main goal was to make OT homeworks easier to grade. Then I started using it a lot in my own research. Later it emerged that the question of ranking constraints is part of a more general research program, in which the goals of generative grammar regarding learnability are pursued through simulation, attempting to replicate the acquisition process itself. This is now an active research field with a fair number of participants. Software like OTSoft can thus be viewed as a way of conveniently interfacing an aspect of linguistic theory (learnability) with language data.
2. Disadvantages of using ranking software
2.1 Avoiding crutches
It is incumbent on users of ranking software to understand what they are doing; one wants to avoid the trap of considering the computer program as some kind of magic oracle.3 Often, puzzling patterns in OTSoft's output can be understood only in light of the ranking algorithms it uses (for example, Tesar and Smolensky's Constraint Demotion always ranks a constraint in the highest stratum it can occupy, sometimes producing valid but counterintuitive results). Thus, the user of OTSoft ought to spend a certain amount of time understanding the algorithms, even if his or her original intent was just to do some analysis.
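The highest-stratum behavior just mentioned is easy to demonstrate concretely. The self-contained sketch below (constraint names are hypothetical, and the function is my own compact rendering of Constraint Demotion, not OTSoft's code) shows that a constraint for which the data provide no evidence at all ends up in the top stratum:

```python
# Demo of the behavior described above: Constraint Demotion places each
# constraint in the highest stratum it can occupy, so "Dep", about which
# the data say nothing, surfaces at the top alongside "Max".

def strata(constraints, ercs):
    """ercs: (winner-preferring set, loser-preferring set) pairs."""
    out, left = [], set(constraints)
    while left:
        # Rank every constraint that prefers no loser in remaining ercs.
        top = {c for c in left if all(c not in L for _, L in ercs)}
        if not top:
            raise ValueError("inconsistent data")
        out.append(sorted(top))
        left -= top
        # Drop comparisons now explained by the new stratum.
        ercs = [(W, L) for W, L in ercs if not (W & top)]
    return out

# One piece of evidence: "Max" must dominate "*Coda".
result = strata(["Max", "Dep", "*Coda"], [({"Max"}, {"*Coda"})])
# → [['Dep', 'Max'], ['*Coda']]: "Dep", though unmentioned, lands on top.
```

The result is valid (no data are contradicted) but can look counterintuitive to a user expecting unranked constraints to appear at the bottom.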
2.2 Software misery
OTSoft is not bug-free (though I try to make it so),4 and it's certainly possible that it will force the user to spend time on computer hassles rather than thinking about linguistics. Some users seem to have more trouble than others.
2.3 Analyses that are too big to understand
With ranking software, it's often possible to throw together a large number of inputs, candidates, and constraints and obtain an analysis that works, in ways the analyst may not understand. This seems highly undesirable. However, one can to some extent avoid this problem by using OTSoft more thoughtfully: (a) start with small analyses and work up; (b) use the ranking argument and Hasse diagram facilities of OTSoft to help understand what is happening at each stage.
My own judgment is that the pluses just given considerably outweigh the minuses, but that legitimate use of constraint ranking software should always exercise the precautions outlined in sections 2.1 and 2.3 above.
1. OTSoft is downloadable from http://www.linguistics.ucla.edu/people/hayes/otsoft/. Nowadays, it competes with formidable alternatives, set up by Jason Riggle, Paul Boersma, Bruce Tesar/Alan Prince, and others.
2. A fine explanation of computer modeling in physics, evidently by Prof. Julius Kuti of UC San Diego, can be found at http://www-physics.ucsd.edu/students/courses/winter2009/physics141/Lectures/Lecture1/Lecture1.html. Prof. Kuti's description is strikingly applicable to linguistics.
3. Prof. Julius Kuti, cited above: "In computer modeling you are first and foremost a theorist. It is necessary to master the underlying physics of the problem we attack on the computer and to understand analytically as much as possible. We only want to ask the computer to do the things which cannot be done otherwise."
4. Please send bug reports to my email address.
Boersma, Paul. 1997. How we learn variation, optionality, and probability. Proceedings of the Institute of Phonetic Sciences of the University of Amsterdam 21: 43-58. http://www.fon.hum.uva.nl/paul/papers/learningVariation.pdf
Boersma, Paul and Bruce Hayes. 2001. Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry 32: 45-86. http://www.linguistics.ucla.edu/people/hayes/GLA/
Hayes, Bruce, Bruce Tesar, and Kie Zuraw. 2003. OTSoft 2.1, software package. http://www.linguistics.ucla.edu/people/hayes/otsoft/.
Hayes, Bruce. 1997. Four rules of inference for ranking argumentation. Ms., Department of Linguistics, UCLA. http://www.linguistics.ucla.edu/people/hayes/otsoft/
Prince, Alan and Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Rutgers University Center for Cognitive Science Technical Report 2. http://roa.rutgers.edu/view.php3?roa=537
Tesar, Bruce, and Paul Smolensky. 2000. Learnability in Optimality Theory. Cambridge, Mass.: MIT Press.
Last updated 1/10/2013