Spelling Reform
During the Middle Ages Latin had dominated the linguistic scene, but gradually the European vernaculars began to be thought worthy of attention. Dante, in his short work De vulgari eloquentia ('On the eloquence of the vernaculars'), gave some impetus to this in Italy as early as the fourteenth century. The sounds of the vernaculars were inadequately conveyed by the Latin alphabet. Nebrija in Spain (1492), Trissino in Italy (1524), and Meigret (see Meigret, Louis (?1500–1558)) in France (1542) all suggested ways of improving the spelling systems of their languages. Most early grammarians took the written language, not the spoken, as a basis for their description of the sounds.
Some of the earliest phonetic observations on English are to be found in Sir Thomas Smith's De recta et emendata linguae Anglicanae scriptione dialogus ('Dialogue on the true and corrected writing of the English language', 1568). He tried to introduce more rationality into English spelling by providing some new symbols to make up for the deficiency of the Latin alphabet. These are dealt with elsewhere (see Phonetic Transcription: History). Smith was one of the first to comment on the vowel-like nature (i.e., syllabicity) of the ⟨l⟩ in able, stable and the final ⟨n⟩ in ridden, London.
John Hart (d. 1574) (see Hart, John (?1501–1574)), in his works on the orthography of English (1551, 1569, 1570), aimed to find an improved method of spelling which would convey the pronunciation while retaining the values of the Latin letters. Five vowels are identified, distinguished by three decreasing degrees of mouth aperture (⟨a, e, i⟩) and two degrees of lip rounding (⟨o, u⟩). He believed that these five simple sounds, in long and short varieties, were 'as many as ever any man could sound, of what tongue or nation soever he were.' His analysis of the consonants groups 14 of them in pairs, each pair shaped in the mouth 'in one selfe manner and fashion,' but differing in that the first has an 'inward sound' which the second lacks; elsewhere he describes them as 'softer' and 'harder,' but there is no understanding yet of the nature of the voicing mechanism. Hart's observations of features of connected speech are particularly noteworthy: for example, a weak pronunciation of the pronouns me, he, she, we with a shorter vowel, and the regressive devoicing of word-final voiced consonants when followed by initial voiceless consonants.
Amman was a Swiss-born doctor working in Amsterdam (see Amman, Johann Conrad (1669–1730)).
Like Holder and Wallis he had practiced teaching the deaf, and had attained some fame through his earlier work Surdus Loquens ('The Dumb Man Speaks', 1692), setting out his method. In the preface to his Dissertatio de Loquela ('Treatise on Speech', 1700) he includes a letter of appreciation directed to Wallis, who had written to Amman with comments on the earlier book. Amman was apparently not acquainted with Wallis's Tractatus or with any other work on the subject when he wrote his own treatise, but comments in his letter on what he (correctly) saw as deficiencies in Wallis's account. These included the restriction of the vowels to nine by his oversymmetrical 3 × 3 classification, and his failure to recognize the nature of what Amman calls 'mixed vowels,' namely those represented in German by ⟨ü⟩, ⟨ö⟩, and ⟨ä⟩, which Amman interpreted as mixtures of ⟨e⟩ with ⟨u⟩, ⟨o⟩, and ⟨a⟩ respectively. He makes the perceptive comment that each vowel has a certain variability (Latin latitudo); it may be more open or more closed, but this does not change its nature.
Wallis had separated [j] and [w] (as consonants) from [i] and [u], because in words such as ye and woo the initial segments had a closer articulation than the vowels following them. Amman, however, includes this variation within the vowel's latitudo, which perhaps hints at an early notion of the phoneme. He makes sensible criticisms of three aspects of Wallis's treatment of consonants: (a) the failure to mention the explosion phase of stops; (b) his
Although there had been reasonably accurate descriptions of the anatomy of the larynx, notably by Casserius (1552–1616), no one had so far understood the mechanism of phonation. Some, like Holder, had come close to it, but it was not until the early 18th century that experiments were made with excised larynxes. Denys Dodart (1634–1707) and Antoine Ferrein (1693–1769) were both medical doctors. Dodart published a series of articles in the Mémoires de l'Académie de Paris between 1700 and 1707,
[Figure: Hellwag's vowel 'ladder', with ⟨a⟩ at the base, ⟨u⟩ and ⟨i⟩ at the upper extremities, and ⟨o⟩, ⟨e⟩, and the mixed vowels spaced between.]
the ladder, while ⟨u⟩ and ⟨i⟩ form the upper extremities, and the other vowels are equally spaced between these three. The transition from ⟨u⟩ to ⟨i⟩ goes through ⟨ü⟩, and that from ⟨o⟩ to ⟨e⟩ through ⟨ö⟩, just as there is a theoretical point between ⟨å⟩ and ⟨ä⟩. Between these designated points an infinite number of others can be interpolated. Hellwag asks rhetorically: 'Will it not be possible to specify all the vowels and diphthongs ever uttered by the human tongue by reference to their position [on the ladder], as it were mathematically?' He says he has determined their position not only auditorily, but also by observation of their articulatory formation, and gives a very detailed and mostly accurate description of tongue and lip positions. The way coarticulation occurs between vowel and adjacent consonant is noticed and exemplified by the variation between [ç] and [x], according to whether the vowel is front or back. The relative positions of the perceived pitches of whispered vowels (i.e., the formant resonances) are specified as (from lowest to highest) ⟨u, o, å, a, ä, e, i⟩, and Hellwag remarks on the change induced in these resonances by nasality. Diphthongs can be indicated on his ladder by lines joining the starting and ending points, with the implication that a series of intermediate points must be traversed.
The description of consonants is accurate but
less notable than that of the vowels. It includes
descriptions of labial, alveolar, and uvular trills.
In 1779, the Imperial Academy of St Petersburg specified the following questions as topics for its annual prize: (a) What is the nature and character of the sounds of the vowels A, E, I, O, U? (b) Could one construct a device of a kind like the organ stop called vox humana which would produce exactly the sound of these vowels? The prize was won by Christian Gottlieb Kratzenstein (1723–95), Professor of Experimental Physics and of Medicine at the University of Copenhagen, a distinguished scholar, whose father was German and mother Danish (see Kratzenstein, Christian Gottlieb (1723–1795)).
Kratzenstein's tubes were much less versatile, however, than the machine constructed by Wolfgang von Kempelen (1734–1804), which he describes in his book Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine ('Mechanism of Human Speech together with the Description of his Speaking Machine', 1791). Von Kempelen was a man of many parts (lawyer, physicist, engineer, and phonetician) and held a high position in the government of the Habsburg monarchy (see Kempelen, Wolfgang von (1734–1804)). He first started on his project of constructing a speaking machine in 1769, so it took him over 20 years to bring it to completion. One of the things which led him to
attempt it was his interest in teaching the deaf. His
book is in five parts, starting with more general questions about speech and its origins, and going on to
describe the speech mechanism in detail, the sounds
of European alphabets, and finally his machine. The
account of the speech mechanism is excellent, with
numerous diagrams, and amusing interludes, such as
his description of three kinds of kiss. The vowels are
characterized by reference to two factors: (a) the lip
aperture; (b) the aperture of the tongue-channel; he
specifically rules out any participation in vowel production by the nose (i.e., nasal vowels) or the teeth.
Each of the two apertures is divided into five degrees;
going from narrowest to widest, the lip aperture
gives U, O, I, E, A, and the tongue-channel aperture
I, E, A, O, U. These relationships were later seen to
correspond with variations of the first and second
formants respectively. It is clear from what von
Kempelen says that he was able to perceive pitch
differences (i.e., formants) in different vowels spoken
on the same note.
The machine which he eventually devised and built
was the culmination of several earlier unsuccessful
attempts. He describes it in great detail, with accompanying diagrams (see Experimental and Instrumental Phonetics: History) so precisely that attempts
were made later, more or less successfully, to build it
again on the basis of these directions.
The sounds produced (which included 19 consonants) undoubtedly left something to be desired in
their intelligibility; von Kempelen himself says that
if the hearer knows what word to expect it is more
likely to seem correct; otherwise one might treat it
like a young child who makes mistakes from time to
time but is intelligible. Nevertheless, his book represents a major contribution to phonetic description
and experimental research. The theory of speech
acoustics was not yet understood, but the attempt to
produce a satisfactory auditory output from a machine based on an articulatory model made several
steps in the right direction.
The Development of an Acoustic Theory of
Speech Production
Sweet was perhaps the greatest 19th-century phonetician, and had an equal, if not greater, reputation as an Anglicist (see Sweet, Henry (1845–1912)). He was one of Bell's pupils, and made himself fully familiar with the works of other British and foreign writers on phonetics such as Ellis, Merkel, Brücke, Czermak, Sievers, and Johan Storm. He was always guided by his belief that it was vital to have a firsthand knowledge of the sounds of languages, and this is reflected in his detailed studies of the pronunciation of Danish, Russian, Portuguese, North Welsh, and Swedish in the Transactions of the Philological Society from 1873 onward, and of German, French, Dutch, Icelandic, and English in his Handbook of Phonetics, published in 1877. This was his first major work on
general phonetics. He emphasizes in the preface that phonetics provides 'an indispensable foundation of all study of language,' and deplores the current poor state of language teaching and the lack of practical study of English pronunciation in schools. Phonetics, he says, can no more be acquired by mere reading than music can. It must be learnt from a competent teacher, and must start with the study of the student's own speech. Bell, he believed, 'has done more for phonetics than all his predecessors put together,' and as a result of his work and Ellis's, England 'may now boast a flourishing phonetic school of its own.' He criticizes the German approach to vowel description for focusing entirely on the sounds, without regard to their formation, and for the resulting triangular arrangement of the vowels, which 'has done so much to perpetuate error and prevent progress.' Several examples of the
triangular diagram being used to indicate the relationships between vowels have been given, beginning
with Hellwag. Du Bois-Reymond departed from this,
and Bell's system led to the vowel quadrilateral as an alternative. However, as Viëtor points out, Bell uses a vowel triangle elsewhere, so his choice of the quadrilateral in Visible Speech relates more to the character of his notation than to a basic difference in interpretation of the vowel area. For a discussion of various 19th-century vowel schemes see Viëtor (1898: 41–65) and Ungeheuer (1962: 14–30).
The Handbook was a response to a request from Johan Storm, the outstanding Norwegian phonetician (see Storm, Johan (1836–1920)), for an exposition of the main results of Bell's investigations, with such additions and alterations as would be required to bring the book up to the present state of knowledge. Sweet's Primer of Phonetics (first published in 1890) was intended in part as a new edition of the Handbook and in part as a concise introduction to phonetics, 'rigorously excluding all details that are not directly useful to the beginner.' The changes from the Handbook are relatively few, but he made some further changes in later editions of the Primer. The following brief account refers to his final system.
Sweet has a short section on the organs of speech,
and then goes on to what he calls Analysis. This
starts with the throat sounds, and the categories 'narrow' and 'wide' (modified from Bell's 'primary' and 'wide'). Sweet attributes this distinction not to
differences in the pharynx, but to the shape of the
tongue, and, unlike Bell, he applies it to consonants as
well as vowels. For narrow sounds he maintains that
the tongue is comparatively tensed and bunched,
giving it a more convex shape than for the wide
sounds. He adopts Bells classification for the vowels,
but introduces a further distinction, between shifted
and normal vowels, giving 72 categories in all. The
shifted vowels are said to involve a movement of
the tongue in or out, away from its normal position (front, mixed, or back), while retaining the
basic slope associated with this position, but few,
if any, phoneticians have found this a satisfactory
category.
The consonant classification is also based on Bell,
but with some modifications. Lip, point, front,
and back are retained, but a further category blade
is added and also fan (spread), to characterize the
Arabic emphatic consonants. Sweet preferred to use
the active articulator to define the place, rather than
the passive, as in modern terminology. Bell had somewhat curiously classed [f] [v] as 'lip-divided' and [θ] [ð] as 'point-divided' (i.e., lateral) consonants; for Sweet the first pair are 'lip-teeth,' and the second pair 'point-teeth.' He also added an extra form category, namely trills. Each place of articulation could
be further modified forward (outer) or backward
(inner), and there are two special tongue positions:
inversion (in later terminology called retroflexion),
and protrusion (tongue tip extended to the lips).
Viëtor was less of a theorist than a practical phonetician (see Viëtor, Wilhelm (1850–1918)). In this capacity he made a most important contribution to the teaching of phonetics in Germany. His Elemente der Phonetik ('Elements of Phonetics', 1st edn. 1884) was highly successful and was frequently republished (the 7th edn. in 1923, after his death). He was a pioneer in
Modern 3). Laryngoscopy, kymography, palatography, and sound recording were all being employed, notably by Abbé Rousselot (see Rousselot, Pierre Jean, Abbé (1846–1924)) in Paris (Principes de phonétique expérimentale, 1901–08), but many phoneticians, among them Sievers, Sweet, and Jespersen, were critical of the attention being given to these techniques. They were not willing to accept results produced by machines as convincing evidence in themselves. This is understandable, in view of the comparatively crude nature of some of the devices used. Sweet's comment in his article for the Encyclopædia Britannica (11th edn. 1911) sums it up:
It cannot be too often repeated that instrumental phonetics is, strictly speaking, not phonetics at all. It is only
a help: it supplies materials which are useless till they
have been tested and accepted from the linguistic phonetician's point of view. The final arbiter in all phonetic
questions is the trained ear of a practical phonetician.
See also: Amman, Johann Conrad (1669–1730); Bell, Alexander Melville (1819–1905); Bhartrhari; Brücke, Ernst (1819–1891); Burnett, James, Monboddo, Lord (1714–1799); Dalgarno, George (ca. 1619–1687); Dionysius Thrax and Hellenistic Language Scholarship; Ellis, Alexander John (né Sharpe) (1814–1890); Experimental and Instrumental Phonetics: History; First Grammatical Treatise; Grimm, Jacob Ludwig Carl (1785–1863); Hart, John (?1501–1574); Holder, William (1616–1698); Jespersen, Otto (1860–1943); Kempelen, Wolfgang von (1734–1804); Kratzenstein, Christian Gottlieb (1723–1795); Lepsius, Carl Richard (1810–1884); Madsen Aarhus,
Bibliography
Abercrombie D (1965). Studies in phonetics and linguistics.
Oxford: Oxford University Press.
Allen W S (1953). Phonetics in ancient India. Oxford:
Oxford University Press.
Asher R E & Henderson E J A (eds.) (1981). Towards a
history of phonetics. Edinburgh: Edinburgh University
Press.
Austerlitz R (1975). Historiography of phonetics:
A bibliography. In Sebeok T A (ed.) Current trends in
linguistics, vol. 13. The Hague: Mouton.
Firth J R (1946). The English school of phonetics. TPhS, 92–132.
Fischer-Jørgensen E (1979). A sketch of the history of phonetics in Denmark until the beginning of the 20th century. ARIPUC 13, 135–169.
Halle M (1959). A critical survey of the
acoustical investigation of speech sounds. In The sound
pattern of Russian. The Hague: Mouton.
Jespersen O (1897, 1905–06). Zur Geschichte der älteren Phonetik. In Jespersen O (ed.) (1933) Linguistica: Selected papers in English, French, and German. Copenhagen: Levin & Munksgaard.
Kemp J A (1972). John Wallis. Grammar of the English
language with an introductory grammatico-physical treatise on Speech. London: Longman.
Kohler K (1981). Three trends in phonetics: The development of phonetics as a discipline in Germany since the
nineteenth century. In Asher R E & Henderson E J A
(eds.) Towards a history of phonetics. Edinburgh: Edinburgh University Press.
Laver J (1978). The concept of articulatory settings: a historical survey. HL 5, 1–14.
Malmberg B (1971). Réflexions sur l'histoire de la phonétique. In Les domaines de la phonétique. Paris: Presses Universitaires de France.
Møller C (1931). Jacob Madsen als Phonetiker. In Møller C & Skautrup P (eds.) Jacobi M. Arhusiensis de literis libri duo. Aarhus: Stiftsbogtrykkeriet.
Robins R H (1979). A short history of linguistics. London: Longman.
Sebeok T A (ed.) (1966). Portraits of linguists (2 vols.). Bloomington, IN: Indiana University Press.
Table 1 Syllable and rhyme awareness (% correct). Syllable counting: Greek 98 (Porpodas, 1999); Turkish 94 (Durgunoglu and Oney, 1999); Norwegian 83 (Høien et al., 1995); German 81 (Wimmer et al., 1991); French 73 (Demont and Gombert, 1996); English 90 (Liberman et al., 1974); Chinese 90 (Ho and Bryant, 1997). Rhyme scores: 91 (rhyme matching task), 73 (Wimmer et al., 1994), 71 (Bradley and Bryant, 1983), 68.
Table 2 Percentage of phonemes counted correctly: Greek 98; Turkish 94; Italian 97; Norwegian 83; German 81; French 73; English 70, 71, 65 (three studies).
Table 3 Reading accuracy (% correct) across languages; the second figure in each pair is nonword reading: Greek 98, 97; Finnish 98, 98; German 98, 98; Austrian German 97, 97; Italian 95, 92; Spanish 95, 93; Swedish 95, 91; Dutch 95, 90; Icelandic 94, 91; Norwegian 92, 93; French 79, 88; Portuguese 73, 76; Danish 71, 63; Scottish English 34, 41.
awareness across all languages so far studied. Although apparently universal, specific characteristics
of this developmental relationship appear to vary
with language. Of course, languages vary in the nature of their phonological structure, and also in the
consistency with which phonology is represented by
orthography. This variation means that there are predictable developmental differences in the ease with which phonological awareness at different grain sizes emerges across orthographies (and also in the grain size of lexical representations and developmental reading strategies across orthographies; see Ziegler and Goswami, 2005).
According to psycholinguistic grain size theory,
beginning readers are faced with three problems:
availability, consistency, and granularity of symbol-to-sound mappings. The availability problem reflects
the fact that not all phonological units are accessible
prior to reading. Most of the research discussed in
this article has been related to the availability problem. Prior to reading, the main phonological units of
which children are aware are large units: syllables,
onsets, and rimes. In alphabetic orthographies, the
units of print available are single letters, which correspond to phonemes, units not yet represented in an
accessible way by the child. Thus, connecting orthographic units to phonological units that are not
yet readily available requires further cognitive development. The rapidity with which phonemic awareness is acquired seems to vary systematically with
orthographic consistency (see Table 2).
The consistency problem reflects the fact that some
orthographic units have multiple pronunciations and
that some phonological units have multiple spellings
(as discussed above, both feedforward and feedback
consistency are important; see Ziegler et al., 1997).
Psycholinguistic grain size theory assumes that both
types of inconsistency slow reading development.
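The feedforward/feedback asymmetry can be made concrete with a toy calculation over spelling–sound rime pairs. This is a sketch using an invented mini-lexicon, not the actual metric of Ziegler et al. (1997), though it is in the same spirit: each score is the proportion of words whose rime mapping agrees with the majority mapping for that spelling (feedforward) or that sound (feedback).

```python
from collections import defaultdict

def consistency(pairs):
    """For (spelling_rime, sound_rime) pairs, return (feedforward, feedback)
    consistency: the proportion of words whose rime spelling (or rime sound)
    carries the majority mapping for that unit."""
    ff = defaultdict(lambda: defaultdict(int))  # spelling -> sound -> count
    fb = defaultdict(lambda: defaultdict(int))  # sound -> spelling -> count
    for sp, snd in pairs:
        ff[sp][snd] += 1
        fb[snd][sp] += 1
    n = len(pairs)
    ff_score = sum(max(m.values()) for m in ff.values()) / n
    fb_score = sum(max(m.values()) for m in fb.values()) / n
    return ff_score, fb_score

# Toy English rimes: '-int' is feedforward-inconsistent (mint vs. pint),
# while /i:p/ is feedback-inconsistent (spelled 'eap' in heap, 'eep' in deep).
lexicon = [("int", "ɪnt"), ("int", "ɪnt"), ("int", "aɪnt"),  # mint, hint, pint
           ("eap", "iːp"), ("eep", "iːp")]                   # heap, deep
ff, fb = consistency(lexicon)
print(ff, fb)  # 0.8 0.8
```

On this toy lexicon both directions happen to score 0.8; in a real lexicon the two scores typically diverge, which is what allows English to be mainly feedforward-inconsistent while French is mainly feedback-inconsistent.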
Importantly, the degree of inconsistency varies both
between languages and for different types of orthographic units. For example, English has an unusually
high degree of feedforward inconsistency at the rime
level (from spelling to sound), whereas French has a
high degree of feedback inconsistency (from sound to
spelling; most languages have some degree of feedback inconsistency). This cross-language variation
makes it likely a priori that there will be differences
in reading development across languages, and indeed
there are (see Table 3, and Ziegler and Goswami,
2005, for a more comprehensive discussion). Finally,
the granularity problem reflects the fact that there are
many more orthographic units to learn when access
to the phonological system is based on bigger grain
sizes as opposed to smaller grain sizes. That is, there
are more words than there are syllables, more
Conclusion
There is now overwhelming evidence for a causal link between children's phonological awareness skills and their progress in reading and spelling across languages. Indeed, the demonstration of the importance
of phonological awareness for literacy has been
hailed as a success story of developmental psychology
(see Adams, 1990; Lundberg, 1991; Stanovich,
1992). Nevertheless, perhaps surprisingly, there are
still those who dispute that the link exists. For example, Castles and Coltheart (2004) recently argued that 'no single study has provided unequivocal evidence that there is a causal link from competence in phonological awareness to success in reading and spelling acquisition' (Castles and Coltheart, 2004: 77). In a
critical review, they considered and dismissed studies regarded by developmental psychologists as
very influential (for example, the large-scale studies
by Bradley and Bryant, 1983; Bryant et al., 1990;
Lundberg et al., 1988; and Schneider et al., 1997,
2000 described in this article). This is surprising,
because these studies used strong research designs
whereby (a) they were longitudinal in nature; (b) they
began studying the participants when they were
prereaders; and (c) they tested the longitudinal correlations found between phonological awareness and
literacy via intervention and training, thereby demonstrating a specific link that did not extend (for
example) to mathematics.
However, the apparent conundrum is easily solved.
Castles and Coltheart based their critique on two
a priori assumptions concerning phonological development and reading acquisition that are misguided. One was that 'the most basic speech units of a language are phonemes' (Castles and Coltheart, 2004: 78), and the second was that it is impossible to derive a pure measure of phonological awareness if a child knows any alphabetic letters (Castles and Coltheart, 2004: 84). In fact, many psycholinguists
Bibliography
Adams M J (1990). Beginning to read: Thinking and
learning about print. Cambridge, MA: MIT Press.
Anthony J L & Lonigan C J (2004). The nature of phonological awareness: converging evidence from four studies of preschool and early grade school children. Journal of Educational Psychology 96, 43–55.
Anthony J L, Lonigan C J, Burgess S R, Driscoll K, Phillips
B M & Cantor B G (2002). Structure of preschool phonological sensitivity: overlapping sensitivity to rhyme,
words, syllables, and phonemes. Journal of Experimental Child Psychology 82, 65–92.
Anthony J L, Lonigan C J, Driscoll K, Phillips B M &
Burgess S R (2003). Phonological sensitivity: a quasi-
As has normally been the case for all major phonological frameworks, the relationship between Optimality Theory (OT) and historical phonology works
both ways: OT provides new angles on long-standing
diachronic questions, whereas historical data and
models of change bear directly on the assessment of
OT. For our purposes, it is convenient to classify
phonological changes under two headings, roughly
corresponding to the neogrammarian categories of
sound change and analogy:
(1a) VOICEDOBSTRUENTPROHIBITION
Assign one violation mark for every segment
bearing the features [-sonorant, voice].
(1b) CODACOND-[voice]
Assign one violation mark for every token of the
feature [voice] that is exhaustively
dominated by rhymal segments.
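The violation-mark conventions in (1a) and (1b) can be illustrated with a toy computation. This is a sketch under simplifying assumptions: segments carry explicit feature flags, syllable structure is reduced to a coda flag, and multiple linking (spreading) of [voice] is not modeled.

```python
# Toy segments: (symbol, is_obstruent, is_voiced, is_coda).
# /tag/ with a voiced obstruent in the coda.
word = [("t", True, False, False),
        ("a", False, True, False),   # vowel: [+sonorant], so irrelevant to (1a)
        ("g", True, True, True)]

def voiced_obstruent_prohibition(segs):
    """(1a): one mark per [-sonorant, voice] segment, in any position."""
    return sum(1 for _, obst, voiced, _ in segs if obst and voiced)

def codacond_voice(segs):
    """(1b), simplified: one mark per [voice] token linked only to a rhymal
    (here: coda) segment; feature sharing across segments is ignored."""
    return sum(1 for _, obst, voiced, coda in segs if obst and voiced and coda)

print(voiced_obstruent_prohibition(word))  # 1 (the coda /g/)
print(codacond_voice(word))                # 1
```

On this representation a devoiced candidate [tak] incurs no marks on either constraint, while a word with an intervocalic (onset) voiced obstruent violates only (1a); the two constraints differ exactly in what they say about onset position.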
faithfulness constraints predicts a wide range of repair strategies for NC clusters, of which some are attested (e.g., postnasal voicing, nasal deletion, and denasalization) and some are not (e.g., metathesis and vowel epenthesis). The gaps, it is argued, cannot be eliminated by revising the theory of phonological representations or the composition of CON; rather, the unattested repairs are held to be impossible because they cannot arise from a phonetically driven change or series of changes (Myers, 2002).
Factorial typology is also charged with excessive
restrictiveness because it fails to allow for so-called
crazy rules (Bach and Harms, 1972), which are
claimed to violate markedness laws. The following
examples have figured prominently in the debate:
1. In some dialects of English, an intrusive [r] or [l] is
inserted in certain hiatus contexts. This is alleged
to refute OT's prediction that epenthetic segments
should be unmarked (Blevins, 1997; Hale and
Reiss, 2000; McMahon, 2000).
2. Lezgi exhibits monosyllabic nouns in which a long
voiced plosive in the coda alternates with a short
plain voiceless plosive in the onset (3). This alternation is claimed to violate the markedness law in
(2a) (Yu, 2004).
(3)         PL/OBL
    pabː    papa     'wife'
    gadː    gatu     'summer'
    megː    meker    'hair'
Figure 1 Linking r is replaced by intrusive r through rule inversion (based on Vennemann, 1972: §2). Phonological environments are stated segmentally in order to avoid controversial analytical commitments on syllabic affiliation. The distributional statements are merely descriptive. In stratal versions of OT, intrusive r can be analyzed as a two-step process, with FINALC driving insertion of [r] in the environment Vi__]ω at the (stem and) word level, followed by deletion of nonprevocalic [r] at the phrase level.
OT supporters challenge many of the basic assumptions of diachronic reductionism. Hayes and Steriade (2004: 26–27), for example, rejected Ohala's model of phonologization by misparsing, asserting instead that phonological innovations typically originate in child errors retained into adulthood and propagated to the speech community. Child errors normally reflect endogenous strategies for adapting the adult phonological repertory to the child's restricted production capabilities. It is assumed that these strategies capitalize on knowledge acquired through the child's experience of phonetic difficulty and represented by means of markedness constraints (Bermúdez-Otero and Börjars, 2006; Boersma, 1998; Hayes, 1999). Compared with Ohala's, this approach to phonologization assigns a very different, although equally crucial, role to factors related to perception: highly salient errors are less likely to be retained into adulthood or to be adopted by other speakers. In this light, consider Amahl's famous puzzle–puddle shift (Smith, 1973):
(4)           Adult target    Amahl's adaptation
    puzzle    [pʌzəl]         [pʌdəl]
    puddle    [pʌdəl]         [pʌgəl]
The knowledge that (5) is incorrect and, in fact, universally impossible is clearly beyond the reach of
inductive cognitive mechanisms.
In OT, in contrast, the emergence of (5) is blocked
because (5) is not a member of CON, nor is there a
Table 1 A diachronic scenario potentially conflicting with markedness law (2a)

    1. Initial state    2. Lenition    3. Apocope
    a.ˈta.ta            a.ˈta.da       a.ˈtad
    a.ˈta.da            a.ˈta.da       a.ˈtad
    a.ˈda.ta            a.ˈda.da       a.ˈdad
    a.ˈda.da            a.ˈda.da       a.ˈdad
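The two ordered changes in Table 1 can be mimicked as string rewrites. This is a toy sketch: syllable dots are omitted, the stress mark ˈ is kept in the strings, and the lenition rule voices intervocalic t except immediately after the stress mark (i.e., foot-initially), as the table requires.

```python
import re

# Intervocalic t -> d; the stress mark ˈ in the lookbehind position
# blocks lenition of the onset of the stressed syllable (toy rule).
LENITE = re.compile(r"(?<=[aeiou])t(?=[aeiou])")

def lenition(form):
    return LENITE.sub("d", form)

def apocope(form):
    # Loss of the word-final vowel.
    return re.sub(r"[aeiou]$", "", form)

for form in ["aˈtata", "aˈtada", "aˈdata", "aˈdada"]:
    lenited = lenition(form)
    print(form, ">", lenited, ">", apocope(lenited))
```

After apocope, the voiced obstruent created word-finally by lenition survives in the coda, which is exactly the configuration that potentially conflicts with markedness law (2a).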
[Figure 2 examples: [drɔːlɪŋ] drawling; [kruːwəl ækt] cruel act; after /ɔː/: the law[l] is . . . ; after /ɑː/: the bra[] is . . . ; after /ə/: the idea[] is . . .]
Figure 2 English /l/ is a complex segment with coronal C-place and dorsal V-place. In most American English dialects with l-intrusion, the insertion of [l] as a hiatus breaker is licensed by V-place sharing with /ɔː/ but blocked by V-place disagreement with /ə/. The feature geometry assumed here, with the C-place node dominating the V-place node, is standard but is not crucial to the argument.
[Paradigm table residue: Middle High German tac, tages, tage, tagen 'day'; also gift, geben, tag.]
therein). In Middle High German, word-final obstruents were subject to devoicing; in Yiddish, however,
the alternations created by devoicing were leveled,
with underlying voiced obstruents freely surfacing in
word-final position.
(8)          Old High German    Middle High German    Yiddish
    'day'    tag                tac                   tog
    'days'   tag-a              tag-e                 teg
    SING    /tag/      tak    [tak]
    PL      /tag-e/    tag    [tag]
    Raːt    'wheel'    Raːd-es    'wheel-GEN.SG'
(10b) IDENT-[voice]
If a is a segment in the output and b is a correspondent of a in the input, then assign one violation mark if a and b do not have the same value for the feature [voice].
    [lɒŋ]            normal application
    [lɒŋgɪtjuːd]     normal nonapplication
    [lɒŋɪʃ]          overapplication
    [lɒŋ ɪfɛkt]      overapplication
ky, Noam (b. 1928); Constructivism; E-Language versus I-Language; Elphinston, James (1721–1809); Experimental Phonology; Feature Organization; Formal Models and Language Acquisition; Formalism/Formalist Linguistics; Generative Phonology; Infancy: Phonological Development; Inference: Abduction, Induction, Deduction; Jakobson, Roman (1896–1982); Lexicalization; Linguistic Universals, Chomskyan; Linguistic Universals, Greenbergian; Markedness; Morphological Change, Paradigm Leveling and Analogy; Morphologization; Morphophonemics; Neogrammarians; Phonological
Figure 4 The life cycle of phonological patterns in Stratal OT. In the history of English, successive rounds of input restructuring at progressively higher levels in the grammar cause the domain of homorganic cluster simplification to shrink. Stage I corresponds to the formal speech of the orthoepist James Elphinston (mid-18th century), stage II corresponds to Elphinston's colloquial speech, and stage III corresponds to contemporary Received Pronunciation.
Bibliography
Bach E & Harms R T (1972). How do languages get crazy rules? In Stockwell R P & Macaulay R K S (eds.) Linguistic change and generative theory. Bloomington: Indiana University Press. 1–21.
Bermúdez-Otero R (2003). The acquisition of phonological opacity. In Spenader J, Eriksson A & Dahl Ö (eds.) Variation within Optimality Theory: Proceedings of the Stockholm Workshop on Variation within Optimality Theory. Stockholm: Stockholm University, Department of Linguistics. 25–36. (ROA 593, Rutgers University Optimality Archive, http://roa.rutgers.edu.)
Bermúdez-Otero R (2006). Stratal Optimality Theory. Oxford: Oxford University Press.
Bermúdez-Otero R & Börjars K (2006). Markedness in phonology and in syntax: the problem of grounding. In Honeybone P & Bermúdez-Otero R (eds.) Linguistic knowledge: perspectives from phonology and from syntax. Lingua 116(2) (special issue).
Bermu dez-Otero R & Hogg R M (2003). The actuation
problem in Optimality Theory: phonologization, rule
inversion, and rule loss. In Holt (ed.). 91119.
Blevins J (1997). Rules of Optimality Theory: two case
studies. In Roca I (ed.) Derivations and constraints in
phonology. Oxford: Clarendon Press. 227260.
Blevins J (2004). Evolutionary phonology: the emergence of
sound patterns. Cambridge, UK: Cambridge University
Press.
Boersma P (1998). Functional phonology: formalizing
the interaction between articulatory and perceptual
drives. The Hague, The Netherlands: Holland Academic
Graphics.
Bybee J (2001). Phonology and language use. Cambridge,
UK: Cambridge University Press.
Chomsky N (1986). Knowledge of language: its nature,
origin, and use. Westport, CT: Praeger.
Clayton M L (1976). The redundance of underlying morpheme-structure conditions. Language 52, 295313.
Gick B (1999). A gesture-based account of intrusive
consonants in English. Phonology 16, 2954.
Hale M (2000). Marshallese phonology, the phonetics
phonology interface and historical linguistics. The Linguistic Review 17, 241257.
Hale M & Reiss C (2000). Phonology as cognition. In
Burton-Roberts N, Carr P & Docherty G (eds.) Phonological knowledge: conceptual and empirical issues.
Oxford: Oxford University Press. 161184.
Hayes B (1999). Phonetically-driven phonology: the role of
Optimality Theory and inductive grounding. In Darrell
M, Moravcsik E, Newmeyer F J et al. (eds.) Functional-
Lexical Phonology
Although sound change was the main focus of phonological investigation in the 19th and early 20th
centuries, generative phonological theory initially
concentrated primarily on synchronic description
and explanation. Where sound changes were dealt
with at all, they were typically recast as rules in a
synchronic grammar. The development of lexical
phonology in the 1980s represented a partial refocusing on change, since the architecture of the model
lends itself particularly well to analyzing the lifecycle of phonological processes as they enter and
work through the grammar historically. Some of
these organizational insights are maintained in the
developing model of stratal optimality theory.
Although standard generative phonology has nothing to say about these different types of change, they
can be modeled in lexical phonology. The phonological processes mentioned above all operated in the
lexicon; but lexical phonology also recognizes a postlexical component. Postlexical processes interact not
with the morphology, but with the syntax, and can
apply across word boundaries, as does Tyneside
weakening (Carr, 1991), which changes /t/ to [r] in
word-final intervocalic positions, as in not a chance,
put it down, beat it. Postlexical rules are characteristically gradient, apply across the board without
lexical exceptions, are phonetically conditioned, and
can often be suspended stylistically. Lexical processes, by contrast, tend to be categorical, may have lexical
exceptions, are morphologically conditioned, and
persist regardless of sociolinguistic factors such as
formality. Kiparsky (1988) equates Neogrammarian
sound changes with postlexical rules, and lexically
diffusing sound changes with lexical rules. More
accurately, we are dealing here with rule applications,
since palatalization in English, for instance, applies
both lexically and postlexically, though the different
applications have different properties. Thus, while
palatalization is obligatory word-internally in racial,
it is optional postlexically in I'll race you, where
we find variably either [sj] or [S].
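Tyneside weakening as described above (word-final /t/ becoming [r] between vowels, applying across word boundaries) can be sketched as a string rewrite. This is my illustrative sketch, not Carr's formalization, and using orthography in place of phonemic transcription is a simplification:

```python
import re

# Sketch of a postlexical rule applying across word boundaries:
# word-final t, preceded by a vowel and followed by a
# vowel-initial word, becomes r (Tyneside weakening).
# Orthography stands in for phonemic transcription here.

VOWEL = "aeiou"

def tyneside_weaken(phrase: str) -> str:
    return re.sub(rf"(?<=[{VOWEL}])t (?=[{VOWEL}])", "r ", phrase)

print(tyneside_weaken("not a chance"))  # 'nor a chance'
print(tyneside_weaken("put it down"))   # 'pur it down'
print(tyneside_weaken("beat them"))     # unchanged: no vowel follows
```

Note how the rule needs access to the following word, which is exactly why it must be postlexical: no purely word-internal (lexical) rule could see the conditioning environment.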
Future Work
Lexical phonology provides an appealing way of
modeling the difference between Neogrammarian
and diffusing sound changes, and of capturing the
insight that a synchronic process developing from
gradient, phonetic roots can later become a lexical
process. It will then shift upward through the lexical levels, gaining exceptions and becoming increasingly bound up with the morphology and less
finely phonetically conditioned, until in the end the
process ceases to apply productively, leaving only an
indication of its existence in underlying contrasts
or redistributions. This reintegrates synchronic and diachronic phonology, showing both that phonological theory can increase our understanding of change and of how changes become phonological processes, and that historical processes can provide useful tests of theories.
However, there remain some questions for the
future. First, lexical phonology is primarily a
model of phonological organization. Consequently,
Bibliography
Aitken A J (1981). The Scottish vowel length rule. In Benskin M & Samuels M L (eds.) So meny people, longages and tonges. Edinburgh: Middle English Dialect Project. 131–157.
Bermúdez-Otero R (1999). Constraint interaction in language change. Ph.D. diss., University of Manchester.
Bermúdez-Otero R (in press). The life-cycle of constraint rankings: studies in early English morphophonology. Oxford: Oxford University Press.
Browman C P & Goldstein L (1986). Towards an articulatory phonology. Phonology Yearbook 3, 219–252.
Browman C P & Goldstein L (1991). Gestural structures: distinctiveness, phonological processes, and historical change. In Mattingly I G & Studdert-Kennedy M (eds.) Modularity and the motor theory of speech perception. Hillsdale, NJ: Lawrence Erlbaum. 313–338.
Carr P (1991). Lexical properties of postlexical rules: postlexical derived environment and the Elsewhere Condition. Lingua 85, 41–54.
Chen M & Wang W S-Y (1975). Sound change: actuation and implementation. Language 51, 255–281.
Chomsky N & Halle M (1968). The sound pattern of English. New York: Harper & Row.
Giegerich H J (1999). Lexical strata in English: morphological causes, phonological effects. Cambridge: Cambridge University Press.
Halle M & Mohanan K P (1985). Segmental phonology of modern English. Linguistic Inquiry 16, 57–116.
Harris J (1989). Towards a lexical analysis of sound change in progress. Journal of Linguistics 25, 35–56.
Holt D E (ed.) (2003). Optimality Theory and language change. Dordrecht: Kluwer.
Kager R (1999). Optimality Theory. Cambridge: Cambridge University Press.
Kiparsky P (1982). Lexical phonology and morphology. In Yang I-S (ed.) Linguistics in the morning calm. Seoul: Hanshin. 3–91.
Introduction
In the present contribution to this volume, we will
briefly discuss some recent work in neurolinguistic
modeling that once again considers the human language cerebral system as a functional mosaic, more
diffuse in its overall functional operation and slightly
other than imaging, he suggests two types of physiological interconnectionist routes: in short, the insula or the arcuate fasciculus, which, as we have known for over 100 years, courses through the opercular regions of the temporal, parietal, and frontal lobes. This, in turn, allows the Poeppel/Hickok model to again reach into preimaging models and to suggest that there are two kinds of conduction aphasia, one perhaps involving lesions in and around the supramarginal gyrus (SMG) in such a way that the lesion would extend to the operculum, thereby interfering in one way or another with smooth acoustic-motor cooperation between Wernicke's area and Broca's area. An old notion, to be sure. Lesions to the left posterior supratemporal plane (the location of sound-based representations), including Heschl's gyrus (BA 41) and the planum temporale (PT), slightly inferior to the SMG, may produce a more anterior type of conduction aphasia. In any event, Hickok and Poeppel do point out that activation levels for the auditory signal, although bilateral, tend to be somewhat stronger on the left.
Other recent significant studies of the perception
of phonemic elements, called tonemes, have been carried out by Jack Gandour and his colleagues. In one
(Gandour et al., 2000), the investigators looked at the
perception of tone in speakers of three languages:
Thai, Chinese, and English.
Tonemes are very short stretches of rapidly fluctuating fundamental frequency over the duration of a vowel. As a simple example, the segmental stretch of a CV syllable such as /ma/ may carry a number of differing tone patterns over the /a/, such that a high-falling fundamental frequency (F0) pattern will give you one word, while the /a/ with a low-rising pattern will give you quite another. These represent minimal phonemic pair distinctions and, like all minimal pair distinctions, call for close, fine-grained analytical processing of the acoustic spectrum. In these kinds of studies, success on the part of the subjects depends upon the ability to link those analyses of the perceptual system with words in their language. Each tone language has its own set of parameters that fit to the lexicon. Linguists frequently define pitch as F0, so what is involved physically is a set of fast-changing pitches. The term 'pitch,' however, fails to convey that something so communicative can serve as the sole element distinguishing one word from another in some languages. The term 'toneme' is therefore applied to a rapid pitch change that calls up different words in the mind of the listener. Thai and Chinese are tone languages; English is not.
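The abstract /ma/ example can be made concrete with the well-known Mandarin set, used here purely for illustration (it is a different language from the Thai materials of the studies discussed): the same segments with four different F0 contours retrieve four different words. The contour labels and the toy lexicon below are my own illustrative assumptions:

```python
# A toneme is phonemic: the same segmental string maps to
# different words depending on the F0 (pitch) contour over the
# vowel. Toy lexicon based on the standard Mandarin /ma/ set;
# the contour labels are an illustrative simplification.

LEXICON = {
    ("ma", "high-level"):   "mother",
    ("ma", "rising"):       "hemp",
    ("ma", "dipping"):      "horse",
    ("ma", "high-falling"): "to scold",
}

def look_up(segments: str, tone_contour: str) -> str:
    """Changing only the contour retrieves a different word."""
    return LEXICON.get((segments, tone_contour), "<no word>")

print(look_up("ma", "high-falling"))  # 'to scold'
print(look_up("ma", "rising"))        # 'hemp'
```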
The method of study was PET, and there were three to four subjects for each language. The subjects were asked simply to listen to the prosodic, fast fluctuations of F0 and to press a 'same' or 'different' button according to whether they thought they heard the same pitch pattern or not. For tonemes, only Thai patterns were chosen for this study.
Several earlier imaging studies have shown activation in the left opercular regions of the frontal lobe, very near Broca's area, during phonemic perception. Recall that most consider both areas 44 and 45 in toto to be Broca's area (see Foundas et al., 1998, for an extremely close and detailed neuroanatomical study of pars triangularis [45] and pars opercularis [44], using volumetric MRI). The Gandour et al. (2000) study now shows this for the perception of the toneme, if you will, a phoneme of prosody. All subjects showed similar metabolic mosaics for the perception of rapidly changing pitch patterns that were nonlinguistic. However, only the Thai speakers revealed a significantly added component to the mosaic of metabolism when the pitch changes matched the tonemes of Thai. They were not only perceiving the rapid pitch changes, and therefore able to press 'same' or 'different' buttons; they were hearing words. That is to say, the perception was linguistic. And, crucially, that added metabolic zone was in the left frontal operculum. Both the Chinese speakers and the English speakers revealed similar patterns of metabolism, but no added left Broca's-area metabolism.
Another equally sophisticated and significant study along somewhat similar lines is found in Hsieh et al. (2001). Here, however, there were 10 Chinese speakers and 10 English speakers, and all were analyzed as
pitches (nonspeech but physically similar to tones)
and tones. The general metabolic mosaic patterns
were different with each group of speakers, thus
providing evidence that the cerebral metabolic patterns were largely reflective of the fact that Chinese
and English involve different linguistic experiences.
Subjects either listened passively or were instructed to do same/different responding by clicking left or right (same vs. different). Subjects still had to click in the passive condition, but they simply had to alternate from one side to the other for each presentation, a mindless task involving similar digit movements.
The findings here show a task-dependent mosaic
of metabolic functioning that reflects how acoustic,
segmental, and suprasegmental signals may or may
not directly tap into linguistic significance, with nondominant hemisphere mechanisms activated for cues
that eventually work themselves into dominant hemisphere activation. Brocas area on the left was activated for the Chinese-speaking group for consonants,
vowels, tones, and pitches, while the right Brocas
area was activated for English speakers on the pitch
The picture is one of empty slots within these templatic frames and the access of the contents to fill those
slots. Finally, each element, structure or unit may be
dissociated from any other. The syllable itself is typically considered as an encasement with slots labeled
as onset, rime, nucleus, and coda and groups of segments placed there as segments in production. It is
stressed that syllables themselves are not subject to
productive sequencing, but rather their contents are.
Levelt et al. (1999) construct both phonological/syllabic frames and metrical frames. There has been a
long-standing article of faith held by many psycholinguists, which claims that when phonemes move in
linear ordering errors they move from and to the
same syllabic constituent slots: onsets move to onset
positions, nuclei move to nuclei positions and codas
move to coda positions. This has been variously
called the syllable constraint condition. For Levelt,
this constraint is overly lopsided in that, according to his numbers, 80% of English-language slips of the tongue involve syllable onsets, but these are crucially word onsets as well. According to Levelt's numbers, slips not involving word onsets '. . . are too rare to be analyzed for adherence to a positional constraint' (Levelt, 1999: 21). He rules out on other principles
the nucleus-to-nucleus observation, claiming that phonotactic constraints are operating here, since a vowel moved into a consonant position will not likely result in a pronounceable string. In addition, Levelt feels that vowels and coda consonants operate under more tightly controlled conditions whereby '. . . segments interact with similar segments.' Phonologists have
observed that there are often fewer coda consonants
than onset consonants, which is especially true for a
language like Spanish. The number of vowels in a
language is always smaller than the total number of
consonants. In any event, there is ample evidence
(also see Shattuck-Hufnagel, 1987) that onsets are
much less tied to the syllable than are codas, which, being sister nodes of the nucleus, both dominated by the rime, are much more tightly glued to the vowel of the
syllable. In terms of a qualitatively different status
for word onsets, there is evidence that the word
onset position is significantly more involved in the
phonemic carryover errors in recurrent perseveration
(e.g., Buckingham and Christman, 2004).
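The syllable-position constraint discussed above can be given a toy implementation: represent each syllable as an (onset, nucleus, coda) triple and check whether two exchanged segments occupy matching slots. The data and representation here are my own illustrative assumptions, not drawn from the studies cited:

```python
# Sketch of the syllable-position ("syllabic constituent")
# constraint on exchange errors: a moved segment is predicted
# to land in the slot it came from (onset -> onset,
# nucleus -> nucleus, coda -> coda). Toy data.

def slot_of(syllable, segment):
    """Return the constituent slot a segment occupies."""
    for name, slot in zip(("onset", "nucleus", "coda"), syllable):
        if segment in slot:
            return name
    return None

def obeys_position_constraint(syll_a, seg_a, syll_b, seg_b):
    """True if two exchanged segments occupy matching slots."""
    return slot_of(syll_a, seg_a) == slot_of(syll_b, seg_b)

# 'left head' -> 'heft led': onsets /l/ and /h/ exchange
print(obeys_position_constraint(("l", "e", "ft"), "l",
                                ("h", "e", "d"), "h"))   # True
# a hypothetical coda-to-onset movement would violate it
print(obeys_position_constraint(("l", "e", "ft"), "t",
                                ("h", "e", "d"), "h"))   # False
```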
Two new notions of the nature of segmental targets
have been introduced recently. One is seen in Square
(1997) with her movement formulae. These are
stored in posterior left temporoparietal cortex and
seem very much like the centuries-old memories of
articulatory procedures of Jean-Baptiste Bouillaud
(1825). Targets are now understood by some as idealized gestures for sound production, such that voluntary speech would involve the access of these stored
gestural engrams for articulation. Both of these conjure up theories of embodiment and both are forms
of representative memory (see Glenberg, 1997 for a
cogent treatment of memory and embodiment).
Goldmann et al. (2001) have analyzed the effect of the phonetic surround in the production of phonemic paraphasias in the spontaneous speech of a Wernicke's aphasic. Through the use of a powerful
statistic, the authors were able to control for chance
occurrence of a copy of the error phoneme either
in the past context or in the future context. The idea
was that there could be a context effect that caused
the phonemic paraphasia to occur. Chance baselines
were established, and it was found that relative to this
baseline, the error-source distances were shorter
than expected for anticipatory transpositions but
not for perseverative transpositions. That could be
taken to mean that anticipatory errors are more indicative of an aphasia than perseveratory errors in
that this patient seemed more unable to inhibit a
copy of an element in line to be produced a few
milliseconds ahead. Thus, this could be taken to support the claim that anticipatory bias in phonemic
paraphasia correlates with severity. The authors also
observed that many but not all anticipatory errors
involved word onsets, mentioned earlier in this
review as vulnerable to movement or substitution.
The much larger distances between error and source
for perseverations could have been due to slower
decay rates of activated units whereby they return to
their resting states. The patient's anticipation/perseveration ratios measured intermediate between those of a nonaphasic error corpus and those of a more severe
aphasic speaker. One troubling aspect of this study
was that a source was counted in the context whether
or not it shared the same word or syllable position as
the error. This may represent a slight stumbling block
in interpreting the findings, since, Levelt notwithstanding, it would imply that syllable position had
no necessary effect. The authors, however, presented
their findings with caution, especially so because some
recent work (Martin and Dell, 2004) has demonstrated a strong correlation between anticipatory
errors and normality vs. perseveratory errors and
abnormality. It is a long-noted fact that slips of the tongue in normal subjects are more anticipatory than
perseverative. Perseveration is furthermore felt by
many to be indicative of brain damage. Martin and
Dell (2004) set up an 'anticipation ratio,' which is predictably higher in the slips of normal speakers. They
also find that more severe aphasics produce more
perseverative paraphasias than anticipations, but
that the ratio increases throughout recovery such
that in the later stages of recovery patients produce fewer and fewer perseverations as opposed to
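The anticipation ratio invoked in this passage can be sketched in its commonly used form, the proportion of anticipations among all movement errors; whether this matches Martin and Dell's exact formulation is an assumption on my part:

```python
# Anticipation ratio in the commonly used form
# anticipations / (anticipations + perseverations).
# Higher values pattern with normal slips of the tongue and
# with later stages of recovery; whether this is Martin and
# Dell's exact formulation is an assumption.

def anticipation_ratio(anticipations: int,
                       perseverations: int) -> float:
    total = anticipations + perseverations
    if total == 0:
        raise ValueError("no movement errors to classify")
    return anticipations / total

print(anticipation_ratio(30, 10))  # 0.75  (anticipatory bias)
print(anticipation_ratio(10, 30))  # 0.25  (perseveratory bias)
```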
neology the anomia theory, and suggested the metaphor of a 'random generator' as a principal device that could produce a phonetic form in light of retrieval failure.
Butterworth had studied with Frieda Goldman-Eisler (1968), who had analyzed large stretches of
spontaneous speech and had looked closely at the
on-going lexical selection processes online. She had
noted time delays before the production of nouns of
high information (i.e., low redundancy) in the speech
of normal subjects. Time delays for her indicated the
action of word search, and that search would obviously be a bit more automatic and fast, to the extent
that the word sought was highly redundant, therefore
carrying less information. Butterworth very cleverly
extended his mentors methods to the analysis of
neologistic jargon stretches of spontaneous speech
in Wernickes aphasia. What he found was extremely
interesting. Before neologisms that were totally opaque as to any possible target (subsequently termed
abstruse neologisms by A. R. Lecours (1982)), Butterworth noted clear delays of up to 400 ms before
their production. Crucially, he did not notice this
delay before phonemic paraphasias, where targets
could nevertheless be clearly discerned, nor before
semantic substitutions, related to the target. This
indicated failed retrieval for Butterworth and he
went on to suggest that perseverative processes and
nonce word production capabilities could play a role
in this random generator. It was random for Butterworth, since his analysis of the actual phonemic
makeup of neologisms did not follow the typical patterns of phoneme frequency in English. He never
implied that 'random' meant helter-skelter; he knew
enough about phonotactic constraints in aphasic
speech. Neither did he imply that the patient actually,
with premeditated intentionality, produced the surrogate. The whole issue was subsequently treated in
Buckingham (1981, 1985, 1990).
Each of these accounts of neology makes different
predictions concerning recovery. The conduction theory predicts that as the patient recovers, target words
will slowly but surely begin to reappear. Paraphasic
infiltration will lessen throughout the months and
ultimately the word forms will be less and less
opaque. In the end stage of recovery, there should be
no anomia. The theory also predicts that the error
distributions in the acute stage will produce some
errors with mild phonemic transformation, others
with more, others with a bit more, etc., up to the
completely opaque ones, i.e., a nonbimodal distribution. The anomia theory, on the other hand, predicts a
bimodal distribution with neologisms on one end and
more or less simple phonemic paraphasia on the
other, and few in the middle that were more severe
Conclusions
We have considered a vast array of findings on sublexical linguistic elements, their brain locations, and their tight sensory-motor links. We have claimed that many new imaging studies have conjured up and vindicated several earlier theories laid down long before the advent of the modern technology of the neurosciences. We have seen the motor-sensory interface in the functional linguistic descriptions of phoneme-level production and perception, and we have even seen that
much new work with modern technology has served
to sharpen our understanding at somewhat closer
levels, but nonetheless has vindicated much previous
thinking back to the Haskins Labs and even further
back to the classic aphasia models of the late 1800s.
The focus upon the perisylvian region in the left
hemisphere has not changed, and in fact there is
even more growing interest in charting the anatomy
and function of the arcuate fasciculus, the opercular
regions, and the insula. At a slight remove from the
physical system, we have discussed and compared
various new findings and principles in the phonetics
and phonology of segments, syllables, and meter and
how they impact on sublexical processing in aphasia.
Finally, we considered the enigma of the neologism
and provided evidence that there are at least two quite
reasonable accounts of how they may be produced
and under what conditions they may appear. This
leads to a consideration of how recovery from neology may take different paths as the patient improves
language control. Many questions remain.
Bibliography
Alajouanine T (1956). Verbal realization in aphasia. Brain 79, 1–28.
Bouillaud J B (1825). Recherches cliniques propres à démontrer que la perte de la parole correspond à la lésion des lobes antérieurs du cerveau, et à confirmer l'opinion de M. Gall, sur le siège de l'organe du langage articulé. Archives Générales de Médecine 8, 25–45.
Buckingham H W (1977). The conduction theory and neologistic jargon. Language and Speech 20, 174–184.
Buckingham H W (1981). Where do neologisms come from? In Brown J W (ed.) Jargonaphasia. New York: Academic Press. 39–62.
Buckingham H W (1985). Perseveration in aphasia. In Newman S & Epstein R (eds.) Current perspectives in dysphasia. Edinburgh: Churchill Livingstone. 113–154.
Buckingham H W (1989). Phonological paraphasia. In Code C (ed.) The characteristics of aphasia. London: Taylor & Francis. 89–110.
Buckingham H W (1990). Abstruse neologisms, retrieval deficits and the random generator. Journal of Neurolinguistics 5, 215–235.
Buckingham H W & Kertesz A (1976). Neologistic jargonaphasia. Amsterdam: Swets & Zeitlinger.
Buckingham H W & Yule G (1987). Phonemic false evaluation: theoretical and clinical aspects. Clinical Linguistics and Phonetics 1, 113–125.
Buckingham H W & Christman S S (2004). Phonemic carryover perseveration: word blends. Seminars in Speech and Language 25, 363–373.
Butterworth B (1979). Hesitation and the production of verbal paraphasias and neologisms in jargon aphasia. Brain and Language 8, 133–161.
Christman S S (1992). Abstruse neologism formation: parallel processing revisited. Clinical Linguistics and Phonetics 6, 65–76.
Christman S S (1994). Target-related neologism formation in jargon aphasia. Brain and Language 46, 109–128.
Foundas A L, Eure K F, Luevano L F & Weinberger D R (1998). MRI asymmetries of Broca's area: the pars triangularis and pars opercularis. Brain and Language 64, 282–296.
Gagnon D & Schwartz M F (1996). The origins of neologisms in picture naming by fluent aphasics. Brain and Cognition 32, 118–120.
Gandour J, Wong D, Hsieh L, Weinzapfel B, Van Lancker D & Hutchins G D (2000). A cross-linguistic PET study of tone perception. Journal of Cognitive Neuroscience 12, 207–222.
Glenberg A M (1997). What memory is for. Behavioral and Brain Sciences 20, 1–55.
Goldmann R E, Schwartz M F & Wilshire C E (2001). The influence of phonological context in the sound errors of a
Phonological Phrase
M Nespor, University of Ferrara, Ferrara, Italy
© 2006 Elsevier Ltd. All rights reserved.
Introduction
Connected speech is not just a sequence of segments
occurring one after the other. It also reflects an analysis into hierarchically organized constituents ranging
from the syllable and its constituent parts to the utterance (Selkirk, 1984; Nespor and Vogel, 1986).
Two elements that form a constituent of a certain
level have more cohesion between them than two
elements that straddle two constituents of the same
level. For example, in the Italian example in (1), the
syllables ra and men are closer together in (1a) than
in (1b), and in both cases they are closer than the
same syllables in (1c).
(1a) veramente
truly
(1b) era mente sublime
(it) was (an) exceptional mind
(1c) Vera mente sempre
Vera lies all the time
(3a) ev 'house', eve 'to the house', evde 'in the house', evden 'from the house', evler 'houses', evlerden 'from the houses'
(3b) ada 'island', adaya 'to the island', adada 'on the island', adadan 'from the island', adalar 'islands', adalardan 'from the islands'
The phonological phrase is one of the phrasal constituents of the phonological, or prosodic, hierarchy,
and we will see that it is crucial in the signaling of
syntactic constituency and, additionally, in providing
cues to the value of a basic syntactic parameter. Like
the other constituents of the prosodic hierarchy, it is
signaled both by phenomena bound to it and by edge
phenomena. These cues are used both in the processing of speech by adults, for example to disambiguate
certain sentences with an identical sequence of words
but different syntactic structures, and in language
acquisition by infants.
Besides the nonisomorphism argument, there is another reason to justify the prosodic hierarchy, and
thus the phonological phrase, as a separate level of
representation: the influence of syntax on phonology
is restricted to just those constituents. No other
domains for postlexical phonology are predicted to
exist.
We can thus draw the conclusion that the phonological phrase must be a constituent of grammar because it defines a domain that is phonologically
marked in many ways (of which more below), and it
is not necessarily isomorphic to any syntactic constituent. It is identifiable because it is the domain of
application of several phonological phenomena in
different languages and, in certain cases, it disambiguates two sentences with the same sequence of words,
such as those in (13).
(13a) Ho comprato delle fotografie di città [v]ecchie
(I) bought old pictures of cities
(13b) Ho comprato delle fotografie di città [v:]ecchie
(I) bought pictures of old cities
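Example (13) can be given a toy implementation: raddoppiamento sintattico modeled as a rule bounded by the phonological phrase, lengthening a word-initial consonant after a final-stressed word. The phrase bracketing, the one-word oxytone list, and the ':' length notation are my own illustrative simplifications:

```python
# Sketch of raddoppiamento sintattico as a rule bounded by the
# phonological phrase, as in example (13): a word-initial
# consonant lengthens after a final-stressed word, but only
# inside the same phrase. Toy stress marking and ':' notation.

FINAL_STRESSED = {"città"}  # toy set of oxytone trigger words

def apply_raddoppiamento(phrases):
    """phrases: list of phonological phrases, each a word list."""
    out = []
    for phrase in phrases:
        words = list(phrase)
        for i in range(1, len(words)):
            if words[i - 1] in FINAL_STRESSED:
                w = words[i]
                words[i] = w[0] + ":" + w[1:]  # lengthen C1
        out.append(" ".join(words))
    return " | ".join(out)

# (13b): 'città vecchie' in one phrase -> gemination applies
print(apply_raddoppiamento([["fotografie", "di", "città", "vecchie"]]))
# (13a): phrase boundary between 'città' and 'vecchie' -> blocked
print(apply_raddoppiamento([["fotografie", "di", "città"], ["vecchie"]]))
```

The point of the sketch is that the rule's conditioning environment is prosodic, not syntactic: the same word string geminates or not depending solely on the phrase bracketing.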
Bibliography
Beckman M E & Edwards J (1990). Lengthenings and shortenings and the nature of prosodic constituency. In Kingston J & Beckman M E (eds.) Papers in laboratory phonology I: between the grammar and the physics of speech. Cambridge: Cambridge University Press. 152–178.
Cambier-Langeveld T (2000). Temporal marking of accents and boundaries. HIL Dissertations.
Christophe A, Guasti M T, Nespor M, Dupoux E & van Ooyen B (1997). Reflections on phonological bootstrapping: its role for lexical and syntactic acquisition. Language and Cognitive Processes 12, 585–612.
Christophe A, Nespor M, Dupoux E, Guasti M-T & van Ooyen B (2003). Reflections on prosodic bootstrapping: its role for lexical and syntactic acquisition. Developmental Science 6(2), 213–222.
Christophe A, Peperkamp S, Pallier C, Block E & Mehler J (2004). Phonological phrase boundaries constrain lexical access: I. Adult data. Journal of Memory and Language 51, 523–547.
Gout A, Christophe A & Morgan J L (2004). Phonological phrase boundaries constrain lexical access: II. Infant data. Journal of Memory and Language 51, 548–567.
Hayes B (1989). The prosodic hierarchy in meter. In Kiparsky P & Youmans G (eds.) Phonetics and phonology: rhythm and meter. New York: Academic Press. 201–260.
Hayes B & Lahiri A (1991). Bengali intonational phonology. Natural Language and Linguistic Theory 9, 47–96.
Liberman M & Prince A (1977). On stress and linguistic rhythm. Linguistic Inquiry 8(2), 249–336.
Nespor M, Guasti M T & Christophe A (1996). Selecting word order: the Rhythmic Activation Principle. In Kleinhenz U (ed.) Interfaces in phonology. Berlin: Akademie Verlag. 1–26.
Nespor M & Vogel I (1986). Prosodic phonology. Dordrecht: Foris.
Selkirk E O (1984). Phonology and syntax: the relation between sound and structure. Cambridge: MIT Press.
Truckenbrodt H (1999). On the relation between syntactic phrases and phonological phrases. Linguistic Inquiry 30, 219–255.
Wightman C W, Shattuck-Hufnagel S, Ostendorf M & Price P J (1992). Segmental duration in the vicinity of prosodic phrase boundaries. Journal of the Acoustical Society of America 91, 1707–1717.
Phonological Typology
M Hammond
© 2006 Elsevier Ltd. All rights reserved.
This article is reproduced from the previous edition, volume 6,
pp. 3124–3126, 1994, Elsevier Ltd.
Surface Typology
Surface typologies are built on surface properties of
phonological systems. For example, click languages
are those languages that have sounds produced with a
velaric ingressive airstream. Stress languages have variations in prominence marked on different syllables.
It is in the domain of phonological inventories
that some of the most intensive work has been done
on surface typology. In the tradition inaugurated by
Joseph Greenberg (and continued by his students),
work on phonological typology has gone hand in
hand with the development of phonological universals (see Phonological Universals). Investigating the
Figure 2
Figure 3
Figure 4
Underlying Typology
Typologies can be built on underlying properties of
phonological systems as well. For example, three
basic kinds of suprasegmental systems can be distinguished underlyingly: accent, tone, and stress. Accent
languages, like Japanese, mark surface pitch contrasts
underlyingly with a diacritic mark associated with
particular syllables (perhaps even a linked tone as in
a tone language, or metrical structure as in a stress
language). The surface patterns are arrived at by
associating a specific melody with the diacritically
marked word. Archetypical tone languages are also
typified by surface pitch contrasts, but have a different underlying structure. In a tone language, the underlying representation of a word is associated with a
tonal melody. Stress languages, like English, are typified by metrical structure assignment at some point in
the derivation (see Metrical Phonology). However, at the surface, stresses may be realized by pitch contrasts, as in Greek, resulting in a superficial conflation of the three types.
Parametric Typology
One of the most exciting developments in phonological typology has been parametric typology.
A parametric typology is built on parameters, which
correspond to the points where languages can exhibit
variation. In this respect, parametric typologies are
like other typologies. However, parameters are also
supposed to correspond to the particular choices that
a child must make in learning his or her language.
One parameter that has been proposed is directionality of syllabification. This parameter has several consequences with respect to the size of onset clusters and
the placement of epenthetic vowels. Languages exhibiting left-to-right syllabification can be distinguished
from those exhibiting right-to-left syllabification. The
diagram in Figure 4 shows how different syllabifications are achieved when CCVC syllables are aligned
from left to right and from right to left.
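The effect of the directionality parameter can be sketched in a few lines of code. The maximal (C)(C)V(C) syllable template and the greedy scan below are our own illustrative assumptions, not a claim about any particular language:

```python
import re

# Toy directional syllabifier over a C/V skeleton.
SYL_LR = re.compile(r"C{0,2}VC?")   # template matched left-to-right
SYL_RL = re.compile(r"C?VC{0,2}")   # mirror image, used on the reversed string

def syllabify(skel, direction="LR"):
    if direction == "LR":
        return SYL_LR.findall(skel)
    # scan from the right: reverse, match the mirrored template, un-reverse
    return [s[::-1] for s in SYL_RL.findall(skel[::-1])][::-1]

print(syllabify("CVCCV", "LR"))  # ['CVC', 'CV']
print(syllabify("CVCCV", "RL"))  # ['CV', 'CCV']
```

The same /CVCCV/ string parses as CVC.CV left-to-right but as CV.CCV right-to-left: the right-to-left parse reaches the medial cluster last and builds a complex onset, which is exactly the kind of consequence for onset-cluster size that the parameter is meant to capture.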
Another example of a parameter is headedness in
metrical constituents. In most versions of the metrical
theory of stress, the head of a constituent occurs on
either the left or the right edge of a constituent, as in
Figure 5. The x's at the lower level of the hierarchy mark syllables; the x's at the higher level mark headship (see Metrical Phonology). Headedness has consequences with respect to the initial assignment of
stress and subsequent stress shifts that arise when a
vowel becomes unstressable for some reason. As with
the directionality of syllabification, headedness corresponds to one of the choices that language learners
supposedly make.
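Headedness can be given a similarly minimal sketch. Binary feet built left-to-right over a bare syllable count is our own simplification of metrical parsing:

```python
def stress_grid(n_syllables, headed="left"):
    """Parse n syllables into binary feet (left to right) and mark the
    head, i.e., the stressed syllable, at the chosen edge of each foot."""
    grid = []
    for i in range(0, n_syllables, 2):
        foot_size = min(2, n_syllables - i)
        marks = [0] * foot_size
        marks[0 if headed == "left" else -1] = 1  # head at left or right edge
        grid.extend(marks)
    return grid

print(stress_grid(4, "left"))   # [1, 0, 1, 0]
print(stress_grid(4, "right"))  # [0, 1, 0, 1]
```

Flipping the single headedness setting moves every stress in the word, which is the sense in which one parameter choice has many surface consequences for the learner.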
Typologies of Elements
Typologies can also be applied to the elements of
phonological systems. The simplest case of this is distinctive feature theory. A feature like [high] imposes a binary classification on segments: every vowel is either [+high] or [−high], and classes of segments are picked out by shared feature values.
Evaluating Typologies
A typology can be evaluated by the extent to which it aids in the construction of a theory or follows from a theory. A good typology is one where
the proposed distinction between phonological systems or elements correlates with other phonological
properties. Specifically, a typology is highly valued to
the extent that it provides a classificatory scheme
based on a multiplicity of factors as opposed to just
a few.
See also: Distinctive Features; Lexical Phonology and
Morphology; Metrical Phonology; Phonological Universals; Rule Ordering and Derivation in Phonology.
Bibliography
Maddieson I (1984). Patterns of sounds. Cambridge:
Cambridge University Press.
Phonological Universals
M Hammond, University of Arizona, Tucson, AZ, USA
Introduction
To understand what a phonological universal is, we must first agree on what phonology is. In point of fact, much of the divergence between different views of phonological universals stems from this question. A very simple characterization of phonology would be the study of the sound systems of language. This definition leaves a number of issues open and is therefore a reasonable albeit ambiguous starting point. Given this definition, a phonological universal is a statement about how sound systems may and may not vary across the languages of the world.
There have been a number of approaches to phonological universals over the history of phonology and this article surveys some of the more influential ones of the last 50 years.
Greenbergian Universals
(A numbered list of eight greenbergian universals is not reproduced here; those discussed below include 'all languages have vowels' and 'most languages have /i, a, u/'.)
There are a number of properties that are indicative of greenbergian universals. First, their logical
structure is transparent. For example, the first universal given above can be interpreted directly as a
logical restriction with universal scope, e.g., for all x, if x is a language, then x has vowels: ∀x (L(x) → V(x)).
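Because greenbergian universals have this transparent logical structure, testing one against a sample is mechanical. The mini-database below is invented for illustration; a real test would draw on a segment-inventory survey:

```python
# Hypothetical toy inventories (not from any real survey).
inventories = {
    "LangA": {"p", "t", "k", "i", "a", "u"},
    "LangB": {"m", "n", "s", "e", "o"},
}
VOWELS = set("ieaou")

def V(inventory):
    """The predicate V(x): language x has at least one vowel."""
    return bool(inventory & VOWELS)

# The universal for all x (L(x) -> V(x)) holds of the sample iff every
# language in it satisfies V.
print(all(V(inv) for inv in inventories.values()))  # True
```

A single language with an all-consonant inventory would make the quantified statement false, which is the falsifiability property discussed below.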
A second general property of greenbergian universals is that they usually apply to relatively superficial elements of a phonological system. For
example, greenbergian universals typically though
not exclusively refer to elements of the phonetic, not
phonological, inventory. Likewise, greenbergian universals are typically not formulated over relatively
abstract phonological elements like features, rules,
constraints, or prosodic elements, but are typically
formulated over segments or classes of segments. (The
examples above that refer to syllables or phonemic
inventory are thus counterexamples.)
Finally, greenbergian universals are typically arrived at on the basis of empirical typological work,
rather than by deduction from some theory; they are thus more empiricist than rationalist on that philosophical continuum.
All of these features are relatively uncontroversial
and can be used to argue the virtues or shortcomings
of this approach to phonological universals. For example, the fact that these types of universals are
typically formulated to refer to superficial entities
can be taken as a virtue, since the properties of relatively superficial entities are more amenable to direct
examination. Thus, universals over such entities are
more falsifiable. On the other hand, one might argue
that universals over superficial entities are less plausible, on the assumption that surely the cognitive
elements that underlie those entities and phonology
generally are of a significantly different character.
Notice that the list above also includes statistical generalizations, e.g., generalizations that include qualifiers like 'nearly' or 'usually'. These obviously cannot
be interpreted in the same way as absolute restrictions
and are not as readily falsified. For example, a universal
that says that all languages have vowels can be falsified
by finding a language that has no vowels. On the
other hand, finding a language that does not have
/i, a, u/ does not falsify a universal that says that most
languages have those sounds. Rather, to falsify a statistical universal, one must present an appropriate
sampling of languages that violate the universal.
Since the statistical universal was presumably already
determined by such a sample, it follows that to be
falsified, a sample of similar size or greater must be
made. (Though see Bybee, 1985 for another view.)
Statistical greenbergian universals are therefore troubling because they are harder to falsify.
Rule-based Universals
The advent of generative phonology in the late 1960s
came with a different approach to phonological universals (Chomsky and Halle, 1968: henceforth SPE).
This approach to phonological universals can
be characterized as rule-based or formal(ist).
The theory is formalized as a notation for writing
language-specific phonological generalizations, or
rules. The basic idea behind the approach to universality in this theory is that the restrictions of the notation
are intended to mirror the innate capabilities of the
language learner. Thus, if a rule can be written with
the notation, then the theory is claiming that that is a
possible rule. If a rule cannot be written with the
notation, then it is an impossible rule.
In addition, this theory posited an evaluation metric whereby the set of possible rules could be compared with each other. It was thought at the time that
rules/analyses that were simpler in terms of the notation were more natural. They would show up more
frequently in languages, and they were preferred by
the language learner if more than one formalization
could describe the available data.
Feature theory provides a simple example of how
this theory made claims about universality. Vowel
height is treated in terms of two binary-valued features: [high] and [low]. The following chart shows
how these features can be used to group vowel heights
in different ways in a hypothetical five-vowel system.
(1) class        vowels        features
    high         i, u          [+high]
    mid          e, o          [−high, −low]
    low          a             [+low]
    high & mid   i, u, e, o    [−low]
    mid & low    e, o, a       [−high]
    high & low   i, u, a       ?
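The groupings in the chart can be computed directly from feature specifications. The value assignments below follow the standard analysis of a five-vowel system; the helper function is our own illustration:

```python
# [high]/[low] specifications for a five-vowel system.
features = {
    "i": {"high": True,  "low": False},
    "u": {"high": True,  "low": False},
    "e": {"high": False, "low": False},
    "o": {"high": False, "low": False},
    "a": {"high": False, "low": True},
}

def natural_class(**spec):
    """Vowels matching every feature value in `spec`."""
    return {v for v, f in features.items()
            if all(f[k] == val for k, val in spec.items())}

print(sorted(natural_class(high=True)))   # ['i', 'u']        high vowels
print(sorted(natural_class(low=False)))   # ['e', 'i', 'o', 'u']  high & mid
print(sorted(natural_class(high=False)))  # ['a', 'e', 'o']   mid & low
# No specification over [high] and [low] returns {i, u, a}: high & low
# is not a natural class in this feature system.
```

The theory thus makes a universal claim: processes may target any class a feature specification can name, but never the unnamable class of high and low vowels to the exclusion of mid vowels.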
(4) [vocalic, high, low] → [nasal] / __ [nasal, coronal]
(5) [vocalic, high, low] → [nasal] / __ [nasal]
(6) [vocalic] → [nasal] / __ [nasal]
Nonlinear Phonology
In the early 1970s, phonological theory underwent
a shift, from a focus on the linear rules of SPE to
a focus on richer nonlinear representations for suprasegmental phenomena, i.e., stress, tone, syllable
structure, and prosodic morphology. The richer
representations required for these phenomena were,
in fact, later extended to account for traditionally
segmental phenomena as well.
This change in the focus of phonology entailed a
concomitant shift in the form of phonological universals. While previously the focus had been on the form
of rules, now attention shifted to the form of representations. Here are a few examples of universals that
stem from this era.
1. No-Crossing Constraint: autosegmental association lines cannot cross.
2. Sonority Hierarchy: syllables must form a sonority
peak.
3. Foot Theory: stress is assigned by building from a
restricted set of metrical feet.
The No-Crossing Constraint (Goldsmith, 1979)
holds that tonal autosegments, elements initially
representing pitches on a separate tier of representation, are linked to segmental elements via association
lines. That linking partially respects linear order in
that the lines connecting these two strings of elements
may not cross each other. Thus, the first representation below for the hypothetical word bádàgà is a legal one, but the second is not. (Acute accent represents a
high tone and grave accent a low tone; H represents
a high autosegment and L a low one.)
(7)
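The constraint itself is easy to state once association lines are indexed. The encoding below, pairs of (tone index, syllable index), is our own formalization, not the article's notation:

```python
# Sketch of the No-Crossing Constraint over association lines, each given
# as a (tone_index, syllable_index) pair.
def no_crossing(links):
    """True iff no two association lines cross: lines (t1, s1) and
    (t2, s2) cross exactly when t1 < t2 but s1 > s2."""
    links = sorted(links)
    return all(s1 <= s2 for (_, s1), (_, s2) in zip(links, links[1:]))

# H and L linked to the three syllables of a hypothetical HLL word:
legal   = [(0, 0), (1, 1), (1, 2)]   # H->syl1, L->syl2, L->syl3
illegal = [(0, 1), (1, 0)]           # H->syl2, L->syl1: lines cross
print(no_crossing(legal))    # True
print(no_crossing(illegal))  # False
```

The check mirrors the prose statement that linking "partially respects linear order": sorting by tone index must leave the syllable indices nondecreasing.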
Optimality-Theoretic Universals
Phonological universals have a rather different character in Optimality Theory (OT) (Prince and Smolensky,
1993; McCarthy and Prince, 1993).
Let's exemplify this approach with basic syllable
structure. The ONSET constraint requires that syllables have onsets. The NOCODA constraint requires
that syllables have no codas. These two constraints
militate for unmarked syllables and interact with
other constraints that militate for lexical distinctness:
PARSE and FILL. (It is convenient to describe the theory
before the development of Correspondence Theory
[McCarthy and Prince, 1995].)
In a language like English, PARSE and FILL outrank
ONSET and NOCODA, allowing onsetless syllables and
codas to arise when present lexically. In a language
like Hawaiian, where onsets are required and codas
ruled out entirely, the ranking is the other way around.
The following constraint tableau shows how
this hierarchy works for English with a word like [æks].
(10)
Since PARSE and FILL are ranked highest, any candidate that violates one or both of them is ruled
out. ONSET and NOCODA, being ranked lower, can be violated. The upshot is
that a form like [æks] surfaces as is, with a coda and
without an onset.
It works the other way around in Hawaiian.
(11)
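The ranking logic amounts to lexicographic comparison of violation vectors. The sketch below uses an ad hoc candidate encoding of our own (epenthetic positions in lowercase, unparsed segments listed separately, in the spirit of containment-era OT), with /VC/ standing in for a vowel-initial, consonant-final input:

```python
def violations(cand, constraint):
    sylls, unparsed = cand
    if constraint == "PARSE":   # unparsed underlying segments
        return len(unparsed)
    if constraint == "FILL":    # epenthetic (lowercase) positions
        return sum(ch.islower() for s in sylls for ch in s)
    if constraint == "ONSET":   # syllables lacking an initial C
        return sum(not s.upper().startswith("C") for s in sylls)
    if constraint == "NOCODA":  # syllables ending in a C
        return sum(s.upper().endswith("C") for s in sylls)

def optimal(cands, ranking):
    # Strict domination: compare violation vectors lexicographically.
    return min(cands, key=lambda c: [violations(c, k) for k in ranking])

# Candidates for an input /VC/:
cands = [
    (["VC"], ""),        # faithful parse: onsetless syllable with a coda
    (["V"], "C"),        # leave the final C unparsed
    (["cV", "Cv"], ""),  # epenthesize an onset C and a final V
]
english  = ["PARSE", "FILL", "ONSET", "NOCODA"]  # faithfulness on top
hawaiian = ["ONSET", "NOCODA", "PARSE", "FILL"]  # markedness on top

print(optimal(cands, english))   # (['VC'], '')
print(optimal(cands, hawaiian))  # (['cV', 'Cv'], '')
```

With the English-type ranking the faithful candidate wins despite its marked syllable; reranking the same four constraints selects the epenthesized, perfectly CV-syllabified candidate, the Hawaiian-type outcome.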
Evaluating Universals
The approaches to phonological universals reviewed
above are quite varied, but we can distinguish them
along the following parameters.
First, these approaches differ to the degree that they
are formalized or notational. Generative approaches
have typically been of this sort, though they have
varied in whether universals govern rules, representations, or constraints. Greenbergian universals have
typically not been formal in this sense.
Second, these approaches have differed in the extent to which universals have been based on explicit
typological sampling. Early greenbergian universals
were based on such samples, but early generative universals were not. Over the evolution of generative
phonology, universals have acquired an increasingly typological basis, though work like Hayes (1981)
or Hayes (1995) still stands as an exception, rather
than the rule. (It is interesting to compare the work of
Hayes to an early paper by Hyman, 1977. The latter is
a preliminary typological investigation of stress from
an essentially greenbergian perspective.)
Third, approaches to universals have differed in
terms of abstractness. Greenbergian universals have
typically been concrete, applying to elements that are
more or less observable directly. Early generative
universals were applicable to more abstract entities,
and thus less easily falsified on direct examination
of language phenomena.
Bibliography
Anderson S R (1985). Phonology in the twentieth century:
theories of rules and theories of representations. Chicago:
University of Chicago Press.
Archangeli D & Pulleyblank D (1994). Grounded phonology. Cambridge: MIT Press.
Bagemihl B (1989). The crossing constraint and backwards languages. Natural Language and Linguistic Theory 7, 481–549.
Boersma P (1998). Functional phonology. Ph.D. diss., University of Amsterdam.
Bybee J L (1985). Morphology. Philadelphia: John Benjamins.
Chomsky N & Halle M (1968). The sound pattern of English. New York: Harper & Row.
Dell F & Elmedlaoui M (1985). Syllabic consonants and syllabification in Imdlawn Tashlhiyt Berber. Journal of African Languages and Linguistics 7, 105–130.
Goldsmith J (1979). Autosegmental phonology. New York: Garland.
Phonological Words
I Vogel, University of Delaware, Newark, DE, USA
Introduction
The Phonological Word (or Prosodic Word) is located
within the phonological hierarchy between the constituents defined in purely phonological terms (i.e.,
mora, syllable, foot) and those that involve a mapping
from syntactic structure (i.e., clitic group, phonological phrase, intonational phrase, utterance). The phonological word (PW) itself involves a mapping from
morphological structure onto phonological structure.
(See, among others, Selkirk, 1980, 1984, 1986, 1996;
Nespor and Vogel, 1982, 1986.)
The PW is motivated on the grounds that it (a)
correctly delimits the domain of phonological phenomena (i.e., phonological rules, stress assignment, phonotactic constraints), (b) comprises a domain distinct
from those defined in morphosyntactic terms, and (c)
has consistent properties across languages. The Morphosyntactic Word (MW), by contrast, fails to meet
these criteria, typically providing strings that are
either too small (e.g., the, a) or too large (e.g.,
re-randomizations, dishwasher repairman). (See Hall
and Kleinhenz, 1999 for papers addressing these
issues in a variety of languages.)
Recursivity
We can now provide a simple mapping of morphological structure onto the PW:
It has been suggested that the PW is universally subject to a Minimality Condition: it must consist of at
least one foot, where a foot must consist of two moras
(e.g., McCarthy and Prince, 1986, 1990), although
languages such as French and Japanese also permit
monomoraic words (cf. Ito, 1990; Ito and Mester,
1992).
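A toy mora count makes the Minimality Condition concrete. The weight assumptions below (one mora per vowel, one per coda consonant, onsets weightless) are a common but not universal choice, and the skeleton encoding is our own:

```python
def moras(skel):
    """Count moras in a C/V skeleton: each V contributes one mora, and
    each coda C (a C not followed by a V) contributes one. Onset Cs,
    including a word-initial C, are weightless."""
    total = 0
    for i, ch in enumerate(skel):
        if ch == "V":
            total += 1
        elif i > 0 and not (i + 1 < len(skel) and skel[i + 1] == "V"):
            total += 1  # coda consonant
    return total

def minimal_pw(skel):
    """Minimality Condition: a PW must contain a (bimoraic) foot."""
    return moras(skel) >= 2

print(minimal_pw("CV"))   # False: monomoraic, as permitted in French/Japanese
print(minimal_pw("CVV"))  # True: long vowel
print(minimal_pw("CVC"))  # True: short vowel plus coda
```

On this count, a CV word fails minimality while CVV and CVC words satisfy it, matching the bimoraic-foot requirement of McCarthy and Prince.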
A central principle of early models of the phonological (prosodic) hierarchy, the Strict Layer Hypothesis (SLH), led to systematic violations of the
Minimality Condition. According to the SLH, a phonological constituent may only dominate constituents
of the immediately lower level. Thus, the constituent
C that dominates the PW may only dominate PWs.
This resulted in structures such as the Italian example
below (cf. Nespor and Vogel, 1986). The label C is
used to avoid the controversy about the constituent
that dominates the PW.
(2) [[ve]PW [le]PW [ri]PW [lavo]PW]C
     CL      CL      Pre     Stem
    'I rewash them for you' (lit. 'for you them re- (I) wash')
Kabak and Vogel (2001) identify the elements excluded from the stem's PW as Phonological Word Adjoiners (PWAs). All PWAs share a common property: they introduce a PW boundary where they are in contact with the stem PW (i.e., …stem]PW XPWA… or …XPWA [PW stem…). PWAs combine with the
PW within the higher constituent C (i.e., Clitic Group
or Phonological Phrase) and crucially predict that
structures with PWAs and PWs themselves may exhibit different properties. Once a PWA is excluded
from a PW, subsequent items are also excluded, as
illustrated in Turkish:
(6) [giydir]PW  -mePWA  -di    -niz
     wear-CAUS   NEG     PAST   2PL
    'You didn't let (it) be worn.'
    (Kabak and Vogel, 2001: 343)
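This parse can be mimicked with a small function. The morpheme segmentation and PWA flags follow the Turkish example; the function itself is only our sketch of the Kabak and Vogel-style procedure:

```python
def parse_pw(stem, suffixes):
    """Split a morphologically complex word into PW-internal material and
    material excluded from the PW. `suffixes` is a list of (form, is_pwa)
    pairs; the first PWA and everything after it stay outside the PW."""
    inside, outside = [stem], []
    hit_pwa = False
    for form, is_pwa in suffixes:
        hit_pwa = hit_pwa or is_pwa
        (outside if hit_pwa else inside).append(form)
    return "".join(inside), outside

# Turkish giydir-me-di-niz: negative -me is a PWA, so -di and -niz are
# excluded from the PW along with it.
pw, excluded = parse_pw("giydir", [("me", True), ("di", False), ("niz", False)])
print(pw, excluded)  # giydir ['me', 'di', 'niz']
```

The once-excluded-always-excluded behavior falls out of the single flag: after the first PWA is seen, every later suffix is routed outside the PW.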
Conclusions
It is clear that the Phonological Word, not the morphosyntactic word, is the appropriate domain for
numerous phonological phenomena across languages, and within different theoretical frameworks.
The examples presented here could also be analyzed
in OT in terms of alignment constraints, which yield
the crucial mismatch between the PW and the MW,
and different constraint rankings, which permit different roles for minimality. Furthermore, the minimal
(bimoraic) PW has been reported as the basis of
children's early words in a variety of languages,
although in languages such as French that do not
require minimality, children's early words are not
necessarily bimoraic (cf. Demuth and Johnson, 2004).
The cross-linguistic validity of the PW is thus
clear, though questions remain regarding details of
the PW's content and structure, including the roles
of minimality, clitics, and recursivity.
Bibliography
Booij G (1999). The prosodic word in phonotactic generalizations. In Hall T A & Kleinhenz U (eds.) Studies on
the phonological word. Philadelphia: John Benjamins.
47–72.
Demuth K & Johnson M (2004). Truncation to subminimal
words in early French. Ms. Brown University.
Hall T A (1999). German function words. In Hall T A &
Kleinhenz U (eds.) Studies on the phonological word.
Philadelphia: John Benjamins. 91–131.
Hall T A & Kleinhenz U (eds.) (1999). Studies on the
phonological word. Philadelphia: John Benjamins.
Inkelas S (1989). Prosodic constituency in the lexicon.
Ph.D. diss. Stanford University.
Ito J (1990). Prosodic minimality in Japanese. In Deaton
K, Noske M & Ziolkowski M (eds.) Papers from the
parasession on the syllable in phonetics and phonology.
CLS 26-II. 213–239.
Ito J & Mester A (1992). Weak layering and word binarity.
Santa Cruz: Linguistics Research Center.
Kabak B & Vogel I (2001). Stress in Turkish. Phonology
18, 315–360.
McCarthy J & Prince A (1986). Prosodic morphology.
Ms., University of Massachusetts, Amherst, and Brandeis University.
McCarthy J & Prince A (1990). Foot and word in prosodic
morphology: the Arabic broken plural. Natural Language and Linguistic Theory 8, 209–283.
Nespor M & Vogel I (1982). Prosodic domains of external
sandhi rules. In Hulst H V D & Smith N (eds.) The
structure of phonological representations. I. Dordrecht:
Foris. 225–255.
Nespor M & Vogel I (1986). Prosodic phonology.
Dordrecht: Foris.
Peperkamp S (1997). Prosodic words. The Hague: Holland
Academic Graphics.
Selkirk E (1980). Prosodic domains in phonology: Sanskrit
revisited. In Aronoff M & Kean M L (eds.) Juncture.
Saratoga, CA: Anma Libri. 107–129.
Selkirk E (1982). The syntax of words. Cambridge: MIT
Press.
Selkirk E (1984). Phonology and syntax: the relation
between sound and structure. Cambridge: MIT Press.
Selkirk E (1986). On derived domains in sentence phonology. Phonology 3, 371–405.
Selkirk E (1996). The prosodic structure of function words.
In Morgan J & Demuth K (eds.) Signal to syntax: bootstrapping from speech to grammar in early acquisition.
Mahwah, NJ: Lawrence Erlbaum Associates. 187–213.
Vogel I (1994). Phonological interfaces in Italian. In
Mazzola M (ed.) Issues and theory in Romance linguistics: selected papers from the Linguistics Symposium on
Romance Languages XXIII. 109–125.
Vogel I (1999). Subminimal constituents in prosodic
phonology. In Hannahs S J & Davenport M (eds.)
Phonological structure. Philadelphia: John Benjamins.
249–267.
Autism
Autism is a neurodevelopmental disorder characterized by impairments in language, communication,
imagination, and social relations (American Psychiatric Association, 1994). Estimates of occurrence in the
general population range from approximately 1 in
200 to 1 in 1000 (Fombonne, 1999). Although nearly
25% of children with autism have essentially normal
vocabulary and grammatical abilities (Kjelgaard and
Tager-Flusberg, 2001), another 25% may remain
mute for their entire lives (Lord and Paul, 1997).
Many underlying language problems found in autistic
children are believed to be linked to social and emotional deficits.
Neurologically, many of the social aspects of language acquisition (e.g., social orienting, joint attention and responding to emotional states of others) are
tied to differences in the medial temporal lobe (amygdala and hippocampus), which is larger in autistic
children than in age-matched controls (Brambilla
et al., 2003; Sparks et al., 2002). This brain region
is thought to be related to performance on deferred
imitation tasks, a skill that may be important in
language acquisition (Dawson et al., 1998). Further,
the increase in amygdala size may have consequences
for important skills such as discriminating facial
expressions (Adolphs et al., 1995; Whalen et al.,
1998) and joint attention (Sparks et al., 2002). Both
skills appear to be important for language acquisition
and are often impaired or absent in autism.
Phonology and Autism
Dyslexia
Developmental dyslexia refers to the abnormal acquisition of reading skills during the normal course of
development despite adequate learning and instructional opportunities and normal intelligence. Estimates are that 510% of school-age children fail to
learn to read normally (Habib, 2000). Dyslexia can
exist in isolation, but more commonly it occurs with
other disabilities, such as dyscalculia (mathematical
skill impairment) and attention deficit disorder, both
with and without hyperactivity. Studies of dyslexia
usually indicate the involvement of left-hemisphere
perisylvian areas during the reading process. The specific areas identified vary somewhat depending on the
component of reading being engaged in but overall the
extrastriate visual cortex, inferior parietal regions,
superior temporal gyrus, and inferior frontal cortex
appear to be activated.
When one examines specific skills, visual word
form processing is associated with occipital and
occipitotemporal sites, whereas reading-relevant phonological processing has been associated with superior temporal, occipitotemporal, and inferior frontal regions.
Relatively few studies have investigated brain activation when the child is reading continuous text (Backes
et al., 2002). One exception is a report by Johnstone
et al. (1984), who monitored silent and oral reading,
noting that reading difficulty affected the central and
parietal ERPs of dyslexics but not the controls. In
addition, different patterns of asymmetry were found
for the two groups in silent compared to oral reading
at midtemporal placements.
Down Syndrome
Down syndrome (DS) is characterized by a number of
physical characteristics and learning impairments, as
well as IQ scores that may range from 50 to 60.
Individuals with DS typically are microcephalic and
have cognitive and speech impairments, as well as
neuromotor dysfunction. In addition, problems generally occur in language, short-term memory, and
task shifting. Typical language problems involve
delays in articulation, phonology, vocal imitation,
mean length of utterance (MLU), verbal comprehension, and expressive syntax. Spontaneous language
is often telegraphic, with a drastic reduction in the
use of function words: articles, prepositions, and pronouns (Chapman et al., 2002). Language deficits may
arise from abnormalities noted within the temporal
lobe (Welsh, 2003). Individuals afflicted with DS
commonly suffer from a mild to moderate hearing
loss (78% of DS children have a hearing loss; Stoel-Gammon, 1997), which may partially account for the
delay in phonological processing and poor articulation.
DS occurs in approximately 1 in 800–1,000 live births. Between 90% and 95% of cases are caused by a full
trisomy of chromosome 21, and 5% result from
translocation or mosaicism. Considerable individual
variability exists in cognitive development among
those afflicted, with the greatest deficits in development observed with full trisomy 21, where specific
genes have been associated with brain development,
specifically cerebellar development, and produce
Alzheimer-type neuropathology, neuronal cell loss,
accelerated aging, and so on (Capone, 2001). Individuals with DS commonly exhibit neuropathology resembling that seen in Alzheimer disease, with some
patients showing symptoms beginning as early as age
35 years.
General Brain Imaging Results for DS
Brains of DS individuals appear to have a characteristic morphologic appearance that includes decreased
size and weight, a foreshortening of the anterior-posterior diameter, reduced frontal lobe volume, and
flattening of the occiput. The primary cortical gyri
may appear wide, whereas secondary gyri are often
poorly developed or absent, with shallow sulci and
reduced cerebellar and brain stem size (Capone,
2004). MRI studies indicate a volume reduction for
the whole brain, with the cerebral cortex, white
matter, and cerebellum totaling 18% (Pinter et al.,
2001a). Hippocampal dysfunction occurs in DS
(Pennington et al., 2003), perhaps because of the
reduced size of the hippocampus, as determined by
MRI (Pinter et al., 2001b).
Dichotic listening tasks involving DS children generally result in a left-ear advantage, indicating that these
individuals use their right hemisphere to process
speech (Welsh, 2002). On the basis of such findings,
Capone (2004) argued that difficulties in semantic
processing in DS arise from a reduction in cerebral
and cerebellar volume. In addition, the corpus callosum is thinner in the DS brain in the rostral fifth, the
area associated with semantic communication. Welsh
(2002) speculated that the thinner corpus callosum
isolates the two hemispheres from each other, making
it more difficult to integrate verbal information.
Vocabulary growth in DS children is delayed increasingly with age (Chapman, 2002). Studies using
dichotic listening tasks report a left-ear advantage for
DS, indicating that lexical operations are carried out
primarily in the right hemisphere, a finding opposite to the pattern seen in typically developing children.
Children with DS exhibit a delay in syntax production that generally becomes evident with the emergence of two-word utterances, and syntax is often
more severely impaired than lexical development
(Chapman, 1997). Verbal short-term memory may
be affected, limiting the ability to understand syntactic relations. Research on short-term memory points
to hippocampal dysfunction in DS children (Pinter
et al., 2001a). MRI studies of adults with DS highlight the possibility that reductions in volume size
observed in DS may contribute to the development
of language and memory deficits. It has been hypothesized that the language deficits observed in
children with DS are primarily related to memory and
learning and are most associated with deficits observed in the hippocampal region (Nadel, 2003).
Specific Language Impairment
Semantic abilities are problematic in specific language impairment (SLI). Investigations into the neural substrate of these issues have
made some headway in recent years. In particular,
the N400 (Kutas and Hillyard, 1980), a large negative
component of the ERP that correlates with semantic
ability and occurs approximately 400 milliseconds
after a stimulus begins, is altered in populations of
SLI children, as well as in their parents. This brain
component is enhanced in fathers of SLI children
compared to controls in response to the unexpected
ending of a sentence (Ors et al., 2001). For example,
the N400 response is normally larger in response to
the last word in the sentence 'The train runs on the banana' than if the final word is 'track.' Atypical
N400 amplitudes also are found in children with
other language deficits (Neville et al., 1993). MEG
studies have pinpointed the lateral temporal region
as the origin of the N400 response (Simos et al.,
1997). Intracortical depth recordings in response to
written words point to the medial temporal structures
near the hippocampus and amygdala (Smith et al.,
1986).
Williams Syndrome
Williams-Beuren syndrome (WS) results from a rare
genetic deficit (about 1 in 20,000 births) caused by a
microdeletion on chromosome 7 (Levitin et al., 2003).
This genetic etiology present in WS allows researchers
to identify developmental abnormalities associated
with WS from birth. Characteristics of WS include
dysmorphic facial features, mental retardation, and a
unique behavioral phenotype (Bellugi et al., 1999,
2000; Levitin et al., 2003). Recently, Mervis and colleagues (Mervis, in press; Mervis et al., 2003) have
formulated a cognitive profile for WS by analyzing the
relative weaknesses and strengths often associated
with the genetic syndrome. Markers for this profile
include a very low IQ and weakness in visuospatial
construction, as well as strengths in recognition of
faces, verbal memory, and language abilities. These
findings have been replicated by other researchers
(Galaburda et al., 2003).
Anatomical Aspects of WS
Studies indicate that WS children are capable of semantic organization, although the onset is often
delayed (Mervis and Bertrand, 1997; Mervis, in
press). WS children tend to list low-frequency words
when asked to complete the task (Mervis, in press). In
studies of lexical and semantic processing, unique
ERP patterns are recorded from WS children to auditory stimuli during a sentence completion task that
includes anomalous words at the end of the sentence (Bellugi et al., 2000). In general, the expected N400 component associated with anomalous words in WS
was more evenly distributed across the scalp, with no
hemispheric interaction (Bellugi et al., 2000). This
finding is unusual, given the left-hemisphere activation common in typically developing children (Bellugi
from the dysfunction in a single, discrete brain structure. Rather, language disorders are multidimensional
and involve neural processes that arise out of complex
interactions between multiple cortical brain regions,
and neural pathways, as well as from genetic factors
whose phenotypic expression is mediated through dynamic environmental factors. There are, no doubt,
other as-yet-unknown factors. Clearly, we are still in
the earliest stages of our quest to understand the complex relationships that exist between developmental
language disabilities and the brain. Lest we get discouraged, it is important to keep in mind that we at least
have begun that quest.
Acknowledgments
This work was supported in part by grants to
D. L. M. (NIH/NHLB HL01006, NIH/NIDCD
DC005994, NIH/NIDA DA017863) and V. J. M.
(DOE R215R000023, HHS 90XA0011).
Bibliography
Adolphs R, Tranel D, Damasio H & Damasio A R (1995). Fear and the human amygdala. Journal of Neuroscience 15, 5879–5891.
Akshoomoff N, Pierce K & Courchesne E (2002). The neurobiological basis of autism from a developmental perspective. Development and Psychopathology 14, 613–634.
Alt M, Plante E & Creusere M (2004). Semantic features in
fast mapping: performance of preschoolers with specific
language impairment versus preschoolers with normal
language. Journal of Speech, Language and Hearing
Research 47, 407–420.
American Psychiatric Association (1994). Diagnostic and
statistical manual of mental disorders (4th edn.).
Washington, DC: American Psychiatric Association.
Backes W, Vuurman E, Wennekes R, Spronk P, Wuisman M,
van Engelshoven J & Jolles J (2002). Atypical brain activation of reading processes in children with developmental
dyslexia. Journal of Child Neurology 17, 867–871.
Bellugi U, Lichtenberger L, Mills D, Galaburda A &
Korenberg J R (1999). Bridging cognition, the brain and
molecular genetics: evidence from Williams syndrome.
Trends in Neuroscience 22(5), 197–207.
Bellugi U, Lichtenberger L, Jones W & Lai Z (2000). I. The
neurocognitive profile of Williams syndrome: a complex
pattern of strengths and weaknesses. Journal of Cognitive
Neuroscience 12(1), 7–29.
Billingsley R L, Jackson E F, Slopis J M, Swank P R,
Mahankali S & Moore B D 3rd (2003). Functional magnetic resonance imaging of phonologic processing in neurofibromatosis 1. Journal of Child Neurology 18, 731–740.
Bishop D V M & McArthur G M (2004). Immature cortical responses to auditory stimuli in specific language
impairment: evidence from ERPs to rapid tone
sequences. Developmental Science 7, 11–18.
(2002). Less developed corpus callosum in dyslexic subjects: a structural MRI study. Neuropsychologia 40, 1035–1044.
Welsh T N, Digby E & Simon D (2002). Cerebral specialization and verbal-motor integration in adults with
and without Down syndrome. Brain and Language 84,
153–169.
Whalen P J, Rauch S L, Etcoff N L, McInerney S C, Lee M B
& Jenike M A (1998). Masked presentation of emotional face expression modulates amygdala activity without explicit knowledge. Journal of Neuroscience 18,
411–418.
Figure 3 The semantic interference effect in speech production. Naming latencies are slower when the distracter word is
semantically related to the picture than when it is unrelated. The
results are taken from a study by Schriefers et al. (1990).
not only receive activation from the auditory presentation of the distracter word, but also, via the conceptual network (garden utensils), from the picture of the hark ('rake'), due to the fact that there are connections between conceptually similar entries.
Therefore, gieter ('watering can') is a stronger lexical competitor than the unrelated distracter bel ('bell'), which does not receive activation from the picture of the rake (see also Levelt et al., 1999: 10–11). The
phonological facilitation is accounted for by assuming that the phonological distracter 'harp' preactivates
segments (phonemes) in the production network.
The segments that are shared between distracter and
target (/h/, /a/, /r/) can be selected faster when the
target picture name 'hark' is phonologically encoded.
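On this account, naming latency reflects competition in lexical selection. The toy computation below is my own illustration, not the model of Levelt et al.: all activation values and the latency formula are invented for exposition. It shows how a semantically related distracter, boosted by both its auditory presentation and spreading activation from the picture, slows selection more than an unrelated distracter does:

```python
# Toy sketch of picture-word interference (illustrative only, not an actual
# production model). A semantically related distracter boosts a lexical
# competitor's activation, slowing selection of the target picture name.

def selection_time(target_act, competitor_acts):
    """Luce-style selection: more competitor activation -> slower selection."""
    total = target_act + sum(competitor_acts)
    return total / target_act  # arbitrary units; higher = slower

# Target picture: hark ('rake'). All activation values below are assumptions.
baseline = selection_time(1.0, [0.2])              # no distracter

# Semantic condition: gieter ('watering can') gets activation both from its
# auditory presentation (0.6) and from the picture via the conceptual network (0.3).
semantic = selection_time(1.0, [0.2 + 0.6 + 0.3])

# Unrelated condition: bel ('bell') gets activation only from its auditory
# presentation, not from the picture.
unrelated = selection_time(1.0, [0.2 + 0.6])

assert semantic > unrelated > baseline             # semantic interference
```

The ordering of the three latencies, not their absolute values, is the point: interference falls out of any selection rule in which competitor activation delays target selection.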
One can infer from this pattern that semantic-categorically related distracters have an influence on
Figure 7 The phonological facilitation effect in speech production. Naming latencies are faster when the distracter word is
phonologically related to the picture than when it is unrelated.
The results are taken from a study by Schriefers et al. (1990).
Segmental Encoding
Speech error research has been an important source
of information for the understanding of segmental
encoding (Shattuck-Hufnagel, 1979 for an overview).
The vast majority of sound errors are single-segment
errors, but sometimes also consonant clusters get substituted, deleted, shifted, or exchanged. Most often,
the word onset is involved in a sound error, although
sometimes also nuclei ('beef needle' instead of 'beef noodle') or codas ('god to seen' instead of 'gone to seed') form part of an error. This points to the
general importance of the word onset in phonological
encoding (see Schiller, 2004 for more details). Some errors suggest the involvement of phonological features in planning phonological words, e.g., 'glear plue sky' for 'clear blue sky' (Fromkin, 1971). In this latter example, the voicing feature appears to have been exchanged between the initial consonants of 'clear' and 'blue'.
Metrical Encoding
Roelofs and Meyer (1998) investigated how much
information about the metrical structure of words is
stored in memory. Possible candidates are lexical
stress, number of syllables, and syllable structure. In
one experiment, for instance, they compared the production latencies for sets of homogeneous disyllabic
words such as ma.NIER ('manner'; capital letters indicate stressed syllables), ma.TRAS ('mattress'), and ma.KREEL ('mackerel') with sets including words with a variable number of syllables such as ma.JOOR ('major'), ma.TE.rie ('matter'), and ma.LA.ri.a ('malaria'). Lexical stress was kept constant (always on the second syllable). Relative to a
heterogeneous control condition, there was strong
and reliable facilitation for the disyllabic sets but
not for the sets with variable numbers of syllables.
This showed that the number of syllables of a word
must be known to the phonological encoding system.
Hence, this information must be part of the metrical
representation of words.
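The conclusion can be restated concretely: the stored metrical frame of a word contains at least its syllable count and stress position, and preparation (facilitation) is possible only when all words in a set share one frame. A minimal sketch, using the Dutch items from the text and a `MetricalFrame` representation invented here for illustration:

```python
# Minimal sketch of a stored metrical frame as suggested by the experiments:
# number of syllables and stress position are stored (CV structure is not).
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricalFrame:
    n_syllables: int
    stress_position: int  # 1-based index of the stressed syllable

LEXICON = {
    "manier":  MetricalFrame(2, 2),  # ma.NIER 'manner'
    "matras":  MetricalFrame(2, 2),  # ma.TRAS 'mattress'
    "majoor":  MetricalFrame(2, 2),  # ma.JOOR 'major'
    "malaria": MetricalFrame(4, 2),  # ma.LA.ri.a 'malaria'
}

def homogeneous(words):
    """A set can be prepared (facilitation) only if all frames match."""
    frames = {LEXICON[w] for w in words}
    return len(frames) == 1

assert homogeneous(["manier", "matras"])       # constant set: facilitation
assert not homogeneous(["majoor", "malaria"])  # variable syllable count: none
```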
Similarly, the production of sets of homogeneous
trisyllabic words with constant stress (e.g., ma.RI.ne 'navy', ma.TE.rie 'matter', ma.LAI.se 'depression', ma.DON.na 'madonna') and variable stress (e.g., ma.RI.ne 'navy', ma.nus.CRIPT 'manuscript', ma.TE.rie 'matter', ma.de.LIEF 'daisy') was
measured and compared to the corresponding heterogeneous sets. Again, facilitation was obtained for the
constant sets but not for the variable ones. Therefore,
one can conclude that the availability of stress information is indispensable for planning of polysyllabic
words at least when stress is in nondefault position.
However, CV structure did not yield an effect. When
the production latencies for words with a constant
Figure 13 Summary of the stress-monitoring latencies. Depicted are the reaction times of three experiments, two with
disyllabic words (example words LEpel, liBEL and TOren, toMAAT) and one with trisyllabic words (example words asPERge, artiSJOK) as a function of the position of the lexical stress in the
picture name. The results are taken from a study by Schiller et al.
(2005).
Syllables
The contribution of the syllable in the speech production process is quite controversial. Studies by Ferrand
et al. (1996) reported a syllable-priming effect in
French speech production. The visually masked
prime 'ca' primed the naming of ca.rotte better than the naming of car.table. Similarly, the prime 'car' primed the naming of car.table better than the naming of ca.rotte (see Figure 14). This effect is a production
equivalent of the syllabic effect reported by Mehler
et al. (1981). Ferrand et al. (1996) concluded that
the output phonology must be syllabically structured
since the effect disappears in a task that does not
make a phonological representation necessary, such
as a lexical decision task. Furthermore, Ferrand et al.
(1996) argued that their data are compatible with
Levelt's idea of a mental syllabary, i.e., a library of
syllable-sized motor programs. Interestingly, Ferrand
et al. (1997) also report a syllable-priming effect for
English. This is surprising considering the fact that
Cutler et al. (1983) could not get a syllabic effect for
English speech perception.
However, when Schiller (1998) tried to replicate
the syllabic effects in Dutch speech production, he
failed to find a syllabic effect. Instead, what he
obtained was a clear segmental overlap effect, i.e.,
the more overlap between prime and target picture
name, the faster the naming latencies. That is, the
prime 'kan' yielded not only faster responses than 'ka' for the picture of a pulpit (kan.sel) but also for the picture of a canoe (ka.no) (see Figure 15).
Similar results were obtained in the auditory modality, i.e., presenting either /ro/ or /rok/ when Dutch
participants were requested to produce either ro.ken ('to smoke') or rook.te ('smoked'). In fact, in the
auditory modality also a segmental overlap effect
was obtained, i.e., /rok/ was a better prime than /ro/
independent of the target. The failure to find a syllable-priming effect in Dutch is in agreement with the
statement that syllables are never retrieved during
phonological encoding (Levelt et al., 1999). The
syllable-priming effect found by Ferrand et al.
Bibliography
Berent I & Perfetti C A (1995). A rose is a REEZ: the two-cycles model of phonology assembly in reading English. Psychological Review 102, 146–184.
Cholin J, Levelt W J M & Schiller N O (in press). Effects of
syllable frequency in speech production. Cognition.
Cholin J, Schiller N O & Levelt W J M (2004). The
preparation of syllables in speech production. Journal
of Memory and Language 50, 47–61.
Crompton A (1981). Syllables and segments in speech
production. Linguistics 19, 663–716.
Cutler A, Mehler J, Norris D & Segui J (1983). A language-specific comprehension strategy. Nature 304, 159–160.
Dell G S (1988). The retrieval of phonological forms
in production: tests of predictions from a connectionist model. Journal of Memory and Language 27,
124–142.
Ferrand L, Segui J & Grainger J (1996). Masked priming of
word and picture naming: the role of syllabic units.
Journal of Memory and Language 35, 708–723.
Ferrand L, Segui J & Humphreys G W (1997). The syllable's role in word naming. Memory and Cognition 35, 458–470.
Introduction
Optimality theory, introduced in the early 1990s
(Prince and Smolensky, 1993; McCarthy and Prince,
1993a,b), offers an extremely simple formal model of
language, with far-reaching implications for how language works. The formal component of each grammar consists of a ranked ordering of a universal set
of constraints; this ordering is used to identify the
best pairing between a given input and all potential
that input /a/ maps to [d], input /b/ to [a], and input /c/
to [b] (these relations are shown graphically in Figure
2A). We could also imagine that the input /a,b/ maps
directly to candidate [a,b], but that input /c/ has no
counterpart in the candidate, and candidate [d] has
no counterpart in the input (Figure 2B).
EVAL is responsible for determining the best pairing
between the input and candidate set in each language.
For EVAL to function in a language-particular fashion,
it takes into account the input, the candidate set (i.e.,
the results of GEN), and the constraint hierarchy as
defined for the language in question. In classic optimality theory, EVAL compares each candidate to the
requirements of the highest ranking constraint (Ci)
and eliminates all but those candidates with the fewest violations of Ci. EVAL then examines the remaining viable candidates against the second-ranked constraint (Cj) and again eliminates all but those with the fewest violations
for Cj. EVAL continues through the constraint hierarchy in this fashion, eliminating candidates until only
one remains, which is the optimal candidate, given
the input and the constraint hierarchy. (Alternative
versions of GEN and EVAL have been proposed in order
to make the model finite and so to introduce the
potential of psychological reality; the versions presented here are the classic functions, as introduced
in the original works.)
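The candidate-filtering procedure of classic EVAL is easy to state as an algorithm. The sketch below is a toy illustration: the `evaluate` function follows the elimination procedure just described, while the two constraints (a NOCODA-style Markedness constraint and a MAX-style Faithfulness constraint, with drastically simplified string-based definitions of my own) show how the two possible rankings yield two different 'languages' for the same input:

```python
# Sketch of classic EVAL: filter the candidate set constraint by constraint,
# keeping at each step only the candidates with the fewest violations.
# A constraint is modeled as a function (input, candidate) -> violation count.

def evaluate(inp, candidates, hierarchy):
    """Return the optimal candidate(s) for a ranked list of constraints."""
    survivors = list(candidates)
    for constraint in hierarchy:
        best = min(constraint(inp, c) for c in survivors)
        survivors = [c for c in survivors if constraint(inp, c) == best]
        if len(survivors) == 1:
            break                      # a unique optimal candidate remains
    return survivors

# Toy constraint definitions (illustrative, not from the article):
def NOCODA(inp, cand):
    """Markedness: penalize a word-final consonant."""
    return 0 if cand.endswith(("a", "e", "i", "o", "u")) else 1

def MAX(inp, cand):
    """Faithfulness: penalize deleted input segments."""
    return len(inp) - len(cand)

# The two rankings give two 'languages' for input /pat/:
assert evaluate("pat", ["pat", "pa"], [MAX, NOCODA]) == ["pat"]  # MAX >> NOCODA
assert evaluate("pat", ["pat", "pa"], [NOCODA, MAX]) == ["pa"]   # NOCODA >> MAX
```

This is the classic, non-finite conception made concrete for a two-candidate set; the ranking alone, not any change to GEN or to the constraints, determines which candidate wins.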
Tableaux
Practitioners of optimality theory use two conventions for visually representing the function of EVAL
for a given input. First, the candidate set is limited
to those most likely to be contenders for the best
match. Second, the evaluation of those candidates is
depicted in a chart, called a tableau. The tableau
helps to understand even relatively simple cases.
To understand how a tableau works, it is useful to
start with a concrete, but limited, example: a hypothetical CON with only two conflicting constraints in
it, Ci and Cj. This allows for two languages. In one
language, the constraint hierarchy ranks Ci above Cj, denoted Ci >> Cj, whereas in the other language, the opposite ranking holds, Cj >> Ci. Assume for the
moment that for a given input, GEN produces only
two candidates, CAND1 and CAND2, such that CAND1
Summary
The remaining component to examine is the constraint set. There are two basic types of constraints,
Faithfulness constraints and Markedness constraints.
Figure 3 Tableaux for two simple languages, varying by the ranking of the two constraints: in each case, the optimal candidate is
identified with a pointing finger symbol (☞).
constraints, comprising the language's constraint hierarchy. The constraint hierarchy for a given language
provides the ranking of Markedness and Faithfulness
constraints for that language. This ranking results in
interaction among the different types of constraints,
which determines the shape of output forms, and so
defines possible words for the language. GEN creates
input/candidate pairings and EVAL uses the constraint
hierarchy to evaluate those pairings to find the best
match for a given input.
Consequences
Testing optimality theory against a variety of different linguistic effects shows that numerous effects
follow directly from the architecture of the model.
There is no need to add particular constraints, types
of representations, or additional operations to the
straightforward model sketched in the preceding
section. This section presents a number of phonological phenomena and shows how optimality theory explains these effects through the interaction of
Markedness and Faithfulness constraints (for a lucid
and thorough discussion of these and other results of
optimality theory, see McCarthy (2002)).
Inventories
However, as McCarthy (2002: 78) pointed out, lexicon optimization exists mainly to provide reassurance
that the familiar underlying forms are still identifiable
in the rich base. It is irrelevant as a principle of
grammar.
Distributional Restrictions
Figure 5 Tableaux showing that top-ranked Faithfulness obviates lower ranked Markedness constraints; the optimal candidate
is identified with a pointing finger symbol (☞).
Ci >> Cj >> Ck
Ci >> Ck >> Cj
Cj >> Ci >> Ck
Cj >> Ck >> Ci
Ck >> Ci >> Cj
Ck >> Cj >> Ci
This observation is known as the factorial typology:
for n constraints, freely ranked, there are n! possible
languages. The research challenge is to check each of
those possible languages to ascertain whether they
are attested or, if not attested, at least appear to be
feasible as human languages.
There is one caveat about the free ranking of constraints: certain phenomena, such as the sonority hierarchy, suggest that there might be some sets of
constraints with fixed rankings with respect to other
members of the set. Accepting fixed rankings within
CON restricts the number of possible grammars somewhat but does not change the essence of the factorial
typology.
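The n! count follows directly from enumerating the free rankings, which can be checked mechanically:

```python
# The factorial typology: n freely ranked constraints predict n! grammars.
from itertools import permutations
from math import factorial

constraints = ["Ci", "Cj", "Ck"]
rankings = list(permutations(constraints))

# Three constraints yield 3! = 6 possible languages, as listed above.
assert len(rankings) == factorial(len(constraints)) == 6

for r in rankings:
    print(" >> ".join(r))   # e.g. Ci >> Cj >> Ck
```

A fixed ranking within a subset of CON simply removes from this enumeration every permutation that violates the fixed order, which is why it shrinks, but does not qualitatively change, the typology.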
Summary
Extensions
Optimality theory was introduced as a model of synchronic adult phonological grammar. Since the inception of the model, research has explored extending the
model to better understand domains as diverse as learnability, syntax, semantics, language change, and language variation. The following sections introduce the
optimality theory perspective on a few of these topics.
Learnability
Opacity is the greatest empirical challenge to optimality theory. It simply does not follow from the
basic architecture of the model and, to date, there
is no broadly accepted solution to the problem.
Opacity, a term from derivational phonology, refers
to cases where (a) it appears that a rule has applied,
but should not have, or (b) it appears that a rule has
not applied but should have. Examples of both
types are found in the Yokuts dialects. The vowel
/i/ harmonizes to [u] following [u], but not following
[o]:
. Type (a): Surface strings of [...o...u...] occur where [...o...i...] would be expected (the [o] corresponds to an input long /u/, which lowers to a mid vowel). The constraints inducing harmony in Yokuts refer to vowel height, so that in other contexts the [o...i] sequence can occur. The harmony constraints outrank IDENT(round) so that harmony can take place. HARMONY, a Markedness constraint, is evaluated with respect to the surface form, and [o...u] does not conform to the height restrictions and so should be eliminated, as it is in other contexts.
. Type (b): In one dialect, [...u...i...] strings also surface where [...u...u...] is expected (the [u] corresponds to an input short /o/, which raises before [i]). The challenge to optimality theory is again that the HARMONY constraint is a Markedness constraint and so is evaluated on the output alone, and so is unable to distinguish the [u...i] sequence that derives from an input /o/ and therefore should be exempt from HARMONY.
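In derivational terms, a pattern like type (a) arises when harmony is ordered before long-vowel lowering: the harmony rule applies to a context that lowering then destroys. The toy derivation below uses invented segment strings and drastically simplified rules, not an actual Yokuts analysis, but it reproduces the opaque surface pattern:

```python
# Derivational sketch of type (a) opacity (toy forms, assumed rule ordering):
# harmony applies before long-vowel lowering, so the surface [o...u] looks as
# if harmony applied where the surface context no longer licenses it.

def harmony(vowels):
    """/i/ rounds to [u] after a high round vowel (toy trigger set)."""
    out = list(vowels)
    for k in range(1, len(out)):
        if out[k] == "i" and out[k - 1] in ("u", "u:"):
            out[k] = "u"
    return out

def lowering(vowels):
    """Long high /u:/ lowers to a mid vowel [o] (length ignored here)."""
    return ["o" if v == "u:" else v for v in vowels]

# Input with a long /u:/ followed by /i/:
surface = lowering(harmony(["u:", "i"]))
assert surface == ["o", "u"]     # opaque: [u] harmonized with a vowel now [o]

# The opposite order would be transparent: no harmony after lowering.
assert harmony(lowering(["u:", "i"])) == ["o", "i"]
```

A purely surface-evaluated HARMONY constraint cannot state this generalization, which is exactly the difficulty the text describes.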
Efforts to resolve the opacity challenge all share the
property of introducing an additional level of
representation. One type of response is to identify a
Bibliography
Archangeli D & Langendoen D T (eds.) (1997). Optimality
theory: an overview. Oxford: Blackwell Publishers.
Archangeli D & Pulleyblank D (1994). Grounded phonology. Cambridge, MA: MIT Press.
Archangeli D & Pulleyblank D (2002). Kinande vowel
harmony: domains, grounded conditions, and one-sided
alignment. Phonology 19(2), 139–188.
Bermúdez-Otero R (2005). Oxford studies in theoretical
linguistics: stratal optimality theory: synchronic and diachronic applications. Oxford: Oxford University Press.
(In press.)
Chomsky N & Halle M (1968). The sound pattern of
English. New York: Harper and Row.
Goldsmith J (1993). The last phonological rule: reflections
on constraints and derivations. Chicago: University of
Chicago Press.
Kager R (1999). Optimality theory. Cambridge: Cambridge
University Press.
Phonology: Overview
R Wiese, Philipps University, Marburg, Germany
© 2006 Elsevier Ltd. All rights reserved.
items can be described as the result of a set of phonological processes, in this case the so-called Great
Vowel Shift.
Generative phonology in the version proposed by
Chomsky and Halle (1968) and in other works was
strongly process oriented in giving rules a central
place in the model. Phonological rules are based on
features in the sense introduced above and apply
to underlying forms (abstract phonemes) to derive
surface forms. More recent theories, in particular
declarative phonology and optimality theory, diverge
from this view. Processes are now modeled not by
means of feature-changing rules, but as the surface
result of static constraints which put different
demands on sounds under different conditions.
Prosody and Its Categories
Phonology in many recent conceptions also is a theory of phonological grouping, that is, of units of a size
larger than the phoneme. The size of these groupings
ranges from the small subphonemic unit (such as
the atomic features) to the most comprehensive one,
often called the utterance. These units are usually
assumed to be hierarchically related to each other.
The most widely used of the prosodic units is the
syllable. It usually consists of a vowel and some flanking consonants, which may or may not be present.
The principles of syllable structure largely determine
what sequences of sounds are permissible in a language. Furthermore, the syllable is an important
domain for phonological processes. Syllables often
come in two types, stressed and unstressed, as in the
English word Piccadilly [ˌpɪkəˈdɪlɪ], which consists of
two stressed and two unstressed syllables, resulting in
an alternating sequence of stressed and unstressed
syllables. Groupings consisting of a stressed syllable
plus any unstressed syllables are identified as
members of a further prosodic category, called the
foot. (Other languages display other types of feet;
see Hayes, 1995 for a typology of feet.)
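The grouping just described can be sketched as a simple left-to-right parse: each stressed syllable opens a foot, and following unstressed syllables attach to it. The syllable division and stress marking of Piccadilly below are an informal toy notation of my own:

```python
# Sketch of grouping syllables into (trochaic) feet: a stressed syllable opens
# a new foot and following unstressed syllables join it, as described for
# English 'Piccadilly'. Syllabification here is orthographic and illustrative.

def parse_feet(syllables):
    """syllables: list of (text, stressed) pairs -> list of feet."""
    feet = []
    for syl, stressed in syllables:
        if stressed or not feet:
            feet.append([syl])       # a stressed syllable opens a new foot
        else:
            feet[-1].append(syl)     # unstressed syllables join the current foot
    return feet

piccadilly = [("pi", True), ("ca", False), ("di", True), ("lly", False)]
assert parse_feet(piccadilly) == [["pi", "ca"], ["di", "lly"]]
```

The alternating stressed-unstressed sequence thus parses exhaustively into two disyllabic feet; other foot typologies (Hayes, 1995) would call for a different grouping rule.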
The example just given also illustrates that stress
(also called accent) is a phonological phenomenon. It
can even be phonemic in the sense of being contrastive, as in English ˈimport (n.) with initial stress versus imˈport (v.) with final stress. Such phenomena of stress
can be observed in units of several sizes. The domain
of word stress is the unit called the prosodic word
or phonological word, and the domain of stress in
larger units is the phonological or prosodic phrase.
Thus, prosodic words and prosodic phrases are
other units of phonology. This hierarchy extends upwards to the intonational phrase, the domain of intonational patterns. These units also serve as the
domains of phonological restrictions or processes.
The phonological word, for example, is the
prosodic units are formed, syntax indirectly influences the placement of phrasal stress and intonation
contours. As for semantics, the main connection to
phonology is again through intonation. Intonation is
constrained by phonology (through the principles
governing the assignment of tonal features within
intonational phrases), syntax, and semantics. The
result of such constraints deriving from syntax, semantics, and phonology is often called information
structure, a particular packaging and ordering of
information within a sentence.
See also: Autosegmental Phonology; Chinese (Mandarin):
Bibliography
Anderson S R (1985). Phonology in the twentieth century:
theories of rules and theories of representations. Chicago/
London: University of Chicago Press.
Baudouin de Courtenay J ([1895] 1972). An attempt at a
theory of phonetic alternations. In Stankiewicz E (ed.)
Selected writings of Baudouin de Courtenay. Bloomington,
Indiana: Indiana University Press. 144–212.
Bloomfield L (1933). Language. New York: H. Holt.
Boas F (1889). On alternating sounds. American Anthropologist 2, 47–53.
Chomsky N A & Halle M (1968). The sound pattern of
English. New York: Harper and Row.
Hayes B P (1995). Metrical stress theory: principles and
case studies. Chicago/London: University of Chicago
Press.
Jakobson R (1939). Observations sur le classement phonologique des consonnes. In Proceedings of the 3rd
International Congress of Phonetic Sciences. 34–41.
Jakobson R, Fant G & Halle M (1952). Preliminaries to
speech analysis. Cambridge, MA: MIT Press.
Maddieson I (1984). Patterns of sound. Cambridge:
Cambridge University Press.
Sapir E (1921). Language. New York: Harcourt, Brace &
World.
Sapir E (1933). La réalité psychologique du phonème. Journal de Psychologie Normale et Pathologique 30, 247–265.
Saussure F de (1916). Cours de linguistique générale. Paris:
Payot.
Trubetzkoy N S ([1939] 1967). Grundzüge der Phonologie. Göttingen: Vandenhoeck & Ruprecht.
Phonology-Phonetics Interface
Sound Inventories