
Human Brain Mapping 34:3182–3192 (2013)

Right and Left Perisylvian Cortex and Left Inferior Frontal Cortex Mediate Sentence-Level Rhyme Detection in Spoken Language as Revealed by Sparse fMRI

Martina A. Hurschler,1,2* Franziskus Liem,1,2 Lutz Jäncke,1,2,3 and Martin Meyer2,3,4

1 Division of Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
2 Institute of Psychology, Neuroplasticity and Learning in the Healthy Aging Brain (HAB LAB), University of Zurich, Zurich, Switzerland
3 International Normal Aging and Plasticity Imaging Center, University of Zurich, Zurich, Switzerland
4 Center for Integrative Human Physiology, University of Zurich, Zurich, Switzerland


Abstract: In this study, we used functional magnetic resonance imaging to investigate the neural basis of auditory rhyme processing at the sentence level in healthy adults. In an explicit rhyme detection task, participants were required to decide whether the ending syllable of a metrically spoken pseudosentence rhymed or not. Participants performing this task revealed bilateral activation in posterior–superior temporal gyri, with a much more extended cluster of activation in the right hemisphere. These findings suggest that the right hemisphere primarily supports suprasegmental tasks, such as the segmentation of speech into syllables; thus, our findings are in line with the "asymmetric sampling in time" model suggested by Poeppel ([2003]: Speech Commun 41:245–255). The direct contrast between rhymed and nonrhymed trials revealed a stronger BOLD response for rhymed trials in the frontal operculum and the anterior insula of the left hemisphere. Our results suggest an involvement of these frontal regions not only in articulatory rehearsal processes, but especially in the detection of a matching syllable, as well as in the execution of rhyme judgment. Hum Brain Mapp 34:3182–3192, 2013. © 2012 Wiley Periodicals, Inc.

Key words: rhyme detection; functional lateralization; auditory fMRI; frontal operculum; anterior insula; perisylvian cortex; asymmetric sampling in time; phonological judgment


Contract grant sponsor: Swiss National Science Foundation; Contract grant number: 320030-120661; Contract grant sponsor: Fonds zur Förderung des akademischen Nachwuchses (FAN); Contract grant sponsor: Zürcher Universitätsverein (ZUNIV).

*Correspondence to: Martina A. Hurschler, Institute of Psychology, Neuroplasticity and Learning in the Healthy Aging Brain (HAB LAB), University of Zurich, Sumatrastrasse 30, CH-8006 Zurich, Switzerland. E-mail: m.hurschler@psychologie.uzh.ch

Received for publication 20 August 2011; Revised 27 April 2012; Accepted 1 May 2012

DOI: 10.1002/hbm.22134

Published online 19 June 2012 in Wiley Online Library (wileyonlinelibrary.com).

© 2012 Wiley Periodicals, Inc.

INTRODUCTION

The ability to detect rhyme is considered to be one of the earliest developing and most simple phonological awareness skills [Coch et al., 2011]. The sensitivity to spoken rhyme has previously been linked to the development of different language functions, such as reading and spelling. Nevertheless, barely any neuroimaging studies about the neural correlates of auditory rhyme processing exist today.

Young children appear to appreciate rhyme [Bryant et al., 1989], and there is evidence that they are able to fulfill rhyme detection tasks as early as 3 years old [Stanovich et al., 1984]. Hence, children seem to ascertain rhyme in spoken language before they have reached the ability to detect phonetic segments. This observation is consistent with the linguistic status hypothesis, which maintains that syllables have an advantage over intrasyllabic units and that intrasyllabic units, in turn, have an advantage over individual phonemes [Treiman, 1985].

Numerous behavioral longitudinal and crosscultural studies have been able to show that preschool experiences with auditory rhyme detection have a significant effect on later success in learning to read and write [Bryant et al., 1989]. Both sensitivity to spoken rhyme and measures for memory span are related to vocabulary development in preschoolers [Avons et al., 1998].

With respect to the neural correlates of auditory rhyme processing, evidence is currently sparse. Speech perception relies on mechanisms of time-resolution at a time scale of milliseconds. The predominance of the left perisylvian region for most domains within speech processing is well established in neuroscientific research [e.g., Friederici, 2011; Narain et al., 2003; Price, 2000; Vigneau et al., 2006]. Following the traditional model of language, the majority of colleagues who do research in aphasia emphasize the superior and cardinal role of the left hemisphere. Clinical literature has often reported sensory aphasic problems resulting from left temporal lobe lesions [e.g., Kuest and Karbe, 2002; Stefanatos, 2008; Turner et al., 1996]. This left perisylvian region is the site for both elemental functions, such as phonetic processing, and higher purposes, namely syntactic and semantic detection. However, gradually mounting evidence obtained from neuroimaging studies in non-brain-damaged individuals proposes that the contribution of the right hemisphere to the processing of speech perception must not be underestimated [Jung-Beeman, 2005; Meyer, 2008; Poeppel and Hickok, 2004; Shalom and Poeppel, 2008; Stowe et al., 2005; Vigneau et al., 2011].

In the current study we investigate the neural signatures of auditory rhyme processing at the sentence level, because we believe that learning more about this issue will contribute to the topic of functional lateralization in speech processing. This assumption is based on the very nature of the different processes that are involved in the performance of an auditory rhyme detection task, such as the automatic registration of phonological input, the processing of phonemic segmentation, the retention of information in the articulatory loop, the comparison of critical word-ending sounds, and both decision making and response provision [Baddeley et al., 1984]. As regards the suprasegmental processes, which form the basis of rhyme detection, one might predict a right-lateralized activation in the posterior–superior temporal gyrus (pSTG), as suggested by the "asymmetric sampling in time" (AST) hypothesis proposed by Poeppel [2003]. According to this framework, auditory information is preferentially integrated in differential temporal windows by the nonprimary auditory fields residing in the two hemispheres. While the left hemisphere is suggested to be specialized for the perception of rapidly changing acoustic cues (~40 Hz), this model predicts a better adaptation of the right auditory cortex for slowly changing acoustic modulations (~4 Hz).

In support of the AST hypothesis, different studies were able to demonstrate that the right supratemporal plane is especially amenable to slow acoustic modulations in speech [e.g., Hesling et al., 2005; Ischebeck et al., 2008; Plante et al., 2002; Zhang et al., 2010]. In particular, activation in the posterior supratemporal region of the right hemisphere was associated with speech melody processing [Gandour et al., 2004; Meyer et al., 2002, 2004] and explicit processing of speech rhythm [Geiser et al., 2008].

According to Poeppel [2003], the AST model permits different predictions regarding the lateralization of different speech perception tasks. One such prediction states that "phonetic phenomena occurring at the level of syllables should be more strongly driven by right hemisphere mechanisms" [Poeppel, 2003, p 251]. The problem with investigating this assumption is that syllables always contain their phonemic constituents [Poeppel, 2003]. Therefore, an insightful experiment should disentangle selective processing of syllables from the more general processing of their constituent phonemes. This reasoning has found some support in a dichotic listening study that showed increased rightward lateralization when the focus of the task emphasized syllabicity instead of the phonemic structure of the stimuli [Meinschaefer et al., 1999].

We believe that, akin to speech meter, rhymes serve as structural devices. Geiser et al. [2008] have previously investigated the neural correlates of explicit rhythm processing in spoken sentences by using German pseudosentences spoken in either an isochronous or a conversational rhythm. In the explicit task, subjects had to judge whether the heard pseudosentence was "isochronous" or "nonisochronous" (rhythm task), that is, whether the sentence had a metrical structure or not. In the implicit condition, unattended rhythm processing was measured, while participants had to decide whether the sentence they heard was a question or a statement (prosody task). One particular result that they provided is increased rightward lateralization in temporal and frontal regions associated with explicit processing of speech rhythm. Interestingly, they did not find this right lateralized temporal activation in


the implicit stimulus-driven processing condition. The observed difference in activation between implicit and explicit condition is in line with previous auditory functional imaging studies that were able to demonstrate task-dependent modulation of auditory cortical areas involved in speech processing [Noesselt et al., 2003; Poeppel et al., 1996; Scheich et al., 2007; Tervaniemi and Hugdahl, 2003]. The task used in our study resembles the explicit task used in the study by Geiser et al. [2008] insofar as the focus of subjects' attention is explicitly set to suprasegmental analysis. Based on the aforementioned findings, we hypothesize that an explicit rhyme detection task at the sentence level should be associated with increased involvement of the right perisylvian cortex.

With respect to the direct comparison between rhymed and nonrhymed stimuli we have to consider the cognitive demands that may be involved. To accurately perform a rhyme detection task, the phonetic information should not only be segmented into syllables; indeed, it should also be memorized until the critical phoneme is encountered. The distance between the two relevant phonemes involves working memory (WM), as one item must be kept active until it can be compared with a second phonetic element. According to Baddeley's influential model, verbal memory is thought to be divided into a subvocal rehearsal system and a phonological store. While the phonological store is suggested to hold auditory/verbal information for a very short period of time, articulatory rehearsal is a more active process that retains the information in the phonological store [Baldo and Dronkers, 2006]. It has been previously argued that rhyme judgments engage both of these processes [Baddeley et al., 1984]. Several PET and functional magnetic resonance imaging (fMRI) studies that used 2-back or 3-back tasks to investigate WM found activation in the left IFG [mostly in the opercular part, corresponding to the FOP; see Rogalsky and Hickok, 2011; Tzourio-Mazoyer et al., 2002], which was related to articulatory rehearsal. In addition, it has been proposed that the left IPL subserves the phonological store [e.g., Paulesu et al., 1993].

Contrary to most of the previous studies about rhyme processing, we used pseudosentences instead of real word stimuli. Therefore, we are able to rule out possible confounds brought about by obvious semantic processing. To control for WM load, the pseudosentences were spoken metrically. This enables the span between the end rhymes to remain constant. To direct the participants' attention to the phonology of the stimuli's last syllable, all of the pseudosentences were spoken in the same isochronous rhythm.

As previously mentioned, explicit rhyme detection at the sentence level has not yet been investigated with fMRI methodology. Based on the predictions of the AST hypothesis, as well as findings from the aforementioned studies pertaining to prosody and speech meter, we predict that the rhyme detection task per se should be related to enhanced supratemporal recruitment of the right auditory-related cortex. Because of the cognitive demands of the task used, we also expect the recruitment of areas related to the phonological loop of the WM, such as the left inferior parietal lobe and the (left) frontal operculum.

Since our approach investigates hemispheric lateralization in processing acoustic suprasyllabic spoken language, we further explore the division of labor between the right and the left auditory-related cortex. The goal of this study is to investigate neural signatures of auditory rhyme detection at the sentence level. This should not only enhance the understanding of the neural processes underlying the detection of rhyme in rhymed (metrical) sentences, but also the relationship between slowly changing acoustic modulations and right auditory-related cortex functions in general.

METHODS

Subjects

A total of 22 healthy subjects (11 females) aged 19–31 years (mean = 23.5, SD = 3.6) participated in this study. According to the Annett-Handedness-Questionnaire (AHQ) [Annett, 1970], all subjects were consistently right-handed. They were native speakers of (Swiss) German with no history of neurological, major medical, psychiatric, or hearing disorders. All subjects gave written consent in accordance with procedures approved by the local Ethics Committee. Subjects were paid for their participation.

Stimuli

Stimulus material comprised a total of 72 pseudosentences containing phonotactically legal pseudowords. Our stimuli resemble so-called "jabberwocky" sentences used in prior studies [e.g., Friederici et al., 2000; Hahne and Jescheniak, 2001], in that they contain some real German function words. In contrast with typical jabberwocky sentences, they display a regular meter and do not contain systematic morphological markers, to minimize semantic and syntactic associations. Rhymed and nonrhymed sentences were matched based on the number of function words they contained.

The last syllable of the stimuli either rhymed (R) or did not rhyme (NR) with the last syllable of the first part of the sentence (see Fig. 1). The pseudosentences were metrically spoken by a trained female speaker and consisted of a verse form, which means that sentences followed a regular meter (eight iambs per sentence). As a result, each pseudosentence contained 16 syllables, and the sentences consisted of a mean of 10.4 pseudowords (SD = 1.4).

All stimulus items were normalized in amplitude to 70% of the loudest signal in a stimulus item. All pseudosentences were analyzed by means of the PRAAT speech editor [Boersma, 2001]. Stimuli were balanced with respect to mean intensity, and the length of all stimuli was set to exactly 6 s.
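The amplitude and duration normalization described above can be sketched in a few lines. This is a minimal illustration on a synthetic signal, not the authors' actual PRAAT workflow; only the 70% peak target and the 6 s duration come from the text, and the sampling rate is an assumption:

```python
import numpy as np

SR = 44_100          # sampling rate (Hz) -- assumed, not stated in the paper
TARGET_PEAK = 0.7    # amplitude normalized to 70% of the loudest signal
DURATION_S = 6.0     # length of all stimuli was set to exactly 6 s

def normalize_stimulus(samples: np.ndarray) -> np.ndarray:
    """Scale peak amplitude to 70% of full scale and fix length to 6 s."""
    peak = np.max(np.abs(samples))
    if peak > 0:
        samples = samples * (TARGET_PEAK / peak)
    n_target = int(SR * DURATION_S)
    if len(samples) >= n_target:
        return samples[:n_target]          # trim overlong recordings
    return np.pad(samples, (0, n_target - len(samples)))  # zero-pad short ones

# synthetic 5.5 s "recording" standing in for a spoken pseudosentence
sig = 0.9 * np.sin(2 * np.pi * 220 * np.arange(int(SR * 5.5)) / SR)
out = normalize_stimulus(sig)
print(len(out) / SR, round(float(np.max(np.abs(out))), 3))   # 6.0 0.7
```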


Figure 1.
Examples of pseudosentences. The pseudowords that had to be compared are underlined.
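The comparison the task requires — do the two critical pseudowords end in the same rime? — can be caricatured with a toy orthographic check. This is purely illustrative (real rhyme judgments are phonological, not orthographic), and the example pseudoword pairs are the nonword stimuli cited later from Coch et al. [2005], not items from Figure 1:

```python
VOWELS = set("aeiouäöü")

def final_rime(word: str) -> str:
    """Orthographic stand-in for the rime of the last syllable:
    everything from the final vowel group onward."""
    w = word.lower()
    i = len(w)
    for k in range(len(w) - 1, -1, -1):
        if w[k] in VOWELS:
            i = k            # extend the final vowel group leftward
        elif i != len(w):
            break            # consonant before the vowel group: stop
    return w[i:]

def rhymes(a: str, b: str) -> bool:
    return final_rime(a) == final_rime(b)

print(rhymes("nin", "rin"))   # True  -- both end in "-in"
print(rhymes("ked", "voo"))   # False -- "-ed" vs. "-oo"
```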

Task/Procedure

Each participant read instructions to the experiment, gave their written consent, and completed the Annett-Handedness-Questionnaire. During scanning, the room lights were dimmed and a fixation cross was projected, via a forward projection system, onto a translucent screen placed at the supine position at the end of the magnet's gurney. Subjects viewed the screen through a mirror attached to the head coil. Stimuli were presented using Presentation® software (Version 0.70, www.neurobs.com). The stimulus presentation was synchronized with the data acquisition by employing a 5 V TTL trigger pulse. We used an MR-compatible piezoelectric auditory stimulation system that is incorporated into standard Philips headphones for binaural stimulus delivery.

Subjects were instructed to decide as quickly and as accurately as possible whether the pseudosentences that they were presented with rhymed or not. They indicated their response by pressing a button on the response box with either their right index finger or their right middle finger. Additionally, a total of 10 null events were created as a baseline condition and were randomly included in the time course of the experiment. During these empty trials, subjects were instructed to press a random button. In one run, a total of 82 trials (36 rhymed pseudosentences, 36 nonrhymed pseudosentences, and 10 empty trials) were presented. A fixation cross was presented for 500 ms prior to each stimulus presentation. The task in the scanner lasted 20 min 30 s.

Data Acquisition

The functional imaging study was performed on a Philips 3T Achieva whole-body MR unit (Philips Medical System, Best, The Netherlands) equipped with an eight-channel Philips SENSE head coil. To acquire data, a clustered sparse temporal acquisition technique was used. This scheme combines the principles of a sparse temporal acquisition with a clustered acquisition [Liem et al., 2012; Schmidt et al., 2008; Zaehle et al., 2007]. That way, the stimuli were binaurally presented in an interval devoid of auditory scanner noise. Three consecutive volumes were collected to cover the peak of the event-related hemodynamic signal (see Fig. 2).

Functional time series were collected from 16 transverse slices covering the entire perisylvian cortex with a spatial resolution of 2.7 × 2.7 × 4 mm³ by using a Sensitivity Encoded (SENSE) [Pruessmann et al., 1999], single-shot, gradient-echo planar sequence (acquisition matrix 80 × 80 voxels, SENSE acceleration factor R = 2, FOV = 220 mm, TE = 35 ms). The volumes were acquired with an acquisition time of 1,000 ms each and a flip angle of 68°, and a 12 s intercluster interval was employed; as a result, one trial lasted 15 s. Furthermore, a standard 3D T1-weighted volume for anatomical reference was collected with a gradient echo sequence with a 0.94 × 0.94 × 1 mm³ spatial resolution (160 axial slices, acquisition matrix 256 × 256 voxels, FOV = 240 × 240 mm, repetition time [TR] = 8.17 ms, flip angle = 8°).

Data Analysis

Behavioral data analysis and ROI statistics were performed using SPSS Statistics 19.0 (SPSS Inc.).

Behavioral data

During the experiment in the scanner, behavioral performance data on the rhyme detection task were collected. Data (reaction time and accuracy) were corrected for outliers (>2 SD above or below the mean value). A repeated-measures t-test was performed to identify significant differences between the conditions.

fMRI analysis

Artifact elimination and image analysis were performed using MATLAB 7.4 (Mathworks, Natick, MA) and the

Figure 2.
Acquisition scheme. Depicted are the three time points of acquisition and the stimulus presentation in one trial.
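The timing parameters reported in this section are internally consistent, which a few lines of arithmetic confirm (all numbers are taken from the text):

```python
# Clustered sparse acquisition timing, as described in Data Acquisition
n_volumes = 3          # consecutive volumes collected per trial
ta_s = 1.0             # acquisition time per volume (1,000 ms)
intercluster_s = 12.0  # silent interval for stimulus delivery

trial_s = n_volumes * ta_s + intercluster_s
print(trial_s)                 # 15.0 -> "one trial lasted 15 s"

n_trials = 36 + 36 + 10        # rhymed + nonrhymed + null events
total_s = n_trials * trial_s
print(divmod(total_s, 60))     # (20.0, 30.0) -> "20 min 30 s"
```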


TABLE I. Brain areas showing significant increases for rhymed and nonrhymed condition relative to baseline

Rhyme > rest
  Superior temporal gyrus, right:  T = 13.34, 322 voxels, x y z = 44, -14, 2
  Superior temporal gyrus, left:   T = 11.94, 102 voxels, x y z = -50, -22, 12
                                   T = 8.29,   47 voxels, x y z = -48, 2, -6
  Total amount of voxels: left 149, right 322

Nonrhyme > rest
  Superior temporal gyrus, right:  T = 12.82, 349 voxels, x y z = 62, -16, 2
  Superior temporal gyrus, left:   T = 12.08, 104 voxels, x y z = -50, -22, 12
                                   T = 8.52,   22 voxels, x y z = -48, 2, -6
                                   T = 7.89,   40 voxels, x y z = -52, -12, 4
  Total amount of voxels: left 166, right 349

Note: x, y, z = MNI coordinates of local maxima. Voxels = number of voxels at P < 0.05 after family-wise correction for multiple comparisons across the whole brain.

SPM5 software package (Institute of Neurology, London, UK; http://www.fil.ion.ucl.ac.uk). To account for movement artifacts, all volumes were realigned to the first volume, normalized into standard stereotactic space (voxel size 2 × 2 × 2 mm³, template provided by the Montreal Neurological Institute), and smoothed using a Gaussian kernel with a 6-mm full-width-at-half-maximum, which increased the signal-to-noise ratio of the images. Due to the low number of sampling points, a boxcar function (first order, window length = 3 s) was modeled for each trial. In addition, two regressors of no interest were included to account for the T1-decay along the three volumes [Liem et al., 2012; Zaehle et al., 2007]. The resulting contrast images from each of the first-level fixed-effects analyses were entered into one-sample t-tests (df = 21), thereby permitting inferences about condition effects across subjects [Friston et al., 1999]. Unless otherwise indicated, regions reported showed significant effects at P < 0.05 and were FWE corrected.

Post-hoc region of interest analyses

To statistically test for asymmetry in cluster size of temporal activation, cluster sizes in the right and the left STG at the single-subject level (P < 0.001, unc.) were extracted via an in-house tool and subjected to a 2 × 2 repeated-measures ANOVA with the factors condition and hemisphere, followed by paired t-tests with the cluster extent in the right and the left STG for both conditions.

RESULTS

Behavioral Data

Individual mean reaction times (RT), as well as accuracy scores, were distributed normally in both the R and the NR conditions (Kolmogorov-Smirnov one-sample test: d = 0.153, P > 0.20, and d = 0.162, P > 0.20) and were compared using a parametric two-sample t-test. Concerning RT, no significant difference between R and NR conditions was revealed (mean ± SD = 635.1 ± 190.66 and 598.9 ± 167.015, respectively, t = 1.214, df = 21). On the contrary, accuracy was significantly lower in the R condition, as compared with the NR condition (92.4 ± 2.6% and 97.8 ± 1.25%, respectively; t = 5.232, P < 0.001, df = 21).

Imaging Data

Whole-head analysis

Rhyme detection task. In a first step of the analysis, main effects for the rhyme detection task were investigated. Therefore, rhymed (R) and nonrhymed (NR) conditions were separately contrasted to the baseline (fixation cross and random button press). Table I and Figure 3 present regions that reveal significant supra-threshold BOLD activation for each of the two experimental conditions, as compared with the empty trials. In both conditions a bilateral superior temporal fMRI pattern could be observed and exhibited a more expanded cluster of significant activation (P < 0.05, FWE corrected) in the right, as compared with the left hemisphere. Notably, the peak activation in the right auditory-related cortex of the posterior temporal lobe was situated more anteriorly and medially in the R (44, -14, 12) than in the NR condition (62, -16, 2).

To statistically test for this rightward temporal lateralization in cluster size for both contrasts (R > rest, NR > rest), for each subject's statistic map (first-level contrast), left and right cluster sizes within the superior temporal gyrus were extracted and subjected to a paired sample t-test. As depicted in Figure 4, temporal cluster size was significantly larger in the right than the left hemisphere in the R condition (t = 6.513, P < 0.001, df = 21). This was also the case for the NR condition (t = 5.029, P < 0.001, df = 21).
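The shape of this ROI analysis — a paired t-test on per-subject left vs. right STG cluster extents (n = 22, df = 21) — can be sketched with SciPy. The cluster-size values below are random placeholders simulating the reported direction of the effect, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder per-subject STG cluster extents (n = 22), NOT the real data:
# right-hemisphere clusters are simulated larger, mimicking the reported effect.
left_stg = rng.normal(loc=150, scale=40, size=22)
right_stg = rng.normal(loc=320, scale=60, size=22)

# Paired (repeated-measures) t-test across the 22 subjects, df = 21
t, p = stats.ttest_rel(right_stg, left_stg)
print(f"t({len(left_stg) - 1}) = {t:.3f}, p = {p:.2g}")
```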


Figure 3.
Brain areas showing significantly greater activation during the processing of (A) rhymed and (B) nonrhymed condition compared with rest. Each cluster is thresholded at P < 0.05, FWE corrected, with a spatial extent minimum of 20 contiguous voxels per cluster. The corresponding cortical regions, cluster sizes, peak T-values, and MNI coordinates can be found in Table I.

Figure 4.
Size of activated clusters in bilateral superior temporal gyrus (STG). Mean value of each subject's (n = 22) cluster extent in R > rest and NR > rest contrasts (***P < 0.001).

Rhymed vs. nonrhymed pseudosentences. The direct contrast between both conditions (Table II, Fig. 5) revealed increased BOLD responses in the anterior insula and the deep opercular portion of the inferior frontal gyrus of the left hemisphere for rhymed, as compared with the nonrhymed, pseudosentences (P < 0.05, FWE corrected at cluster level, k > 25). Since the expected effects in the direct contrasts are smaller than in the contrasts versus rest, we adopted the more liberal approach of clusterwise FWE correction so as not to miss effects. The reverse contrast at the same threshold did not reveal any significantly different activation patterns between the NR and the R condition.

DISCUSSION

In the current study, we investigated the neural basis of rhyme detection in healthy adults with a particular focus on lateralized processing. At the behavioral level, we did not find a significant difference in reaction times between rhymed and nonrhymed conditions. This finding is consistent with studies using visually presented rhyming words [Khateb et al., 2000, 2007; Rayman and Zaidel, 1991; Rugg and Barrett, 1987]. The significantly increased error rate for rhymed as compared with nonrhymed sentences was also evident in previous studies [Rayman and Zaidel, 1991; Rugg, 1984; Rugg and Barrett, 1987]. We assume that subjects showed a bias towards negative responses when they were not completely sure of the answer. This may be due to the speed demands placed upon them (caused by the instruction to "respond as quickly and accurately as possible") [Khateb et al., 2007].

The assumption that cortical fields in the right temporal lobe along the superior temporal gyrus and sulcus play an essential role in the analysis of the speech signal continues to receive ever-increasing support [Boemio et al., 2005; Hickok, 2001; Lattner et al., 2005; Meyer et al., 2002, 2004; Vigneau et al., 2011]. The right lateralized activation was observed while subjects were performing a rhyme detection task at the sentence level. This result buttresses the results of previous studies, which have investigated the auditory processing of slowly changing cues, namely prosody and speech meter [Geiser et al., 2008; Meyer et al., 2002; Zhang et al., 2010]. According to the AST hypothesis, the auditory-related cortex of the right hemisphere is more inclined to process slowly changing acoustic cues [Meyer, 2008; Poeppel, 2003; Zatorre and Gandour, 2008]. We posit that the right lateralized

TABLE II. Brain areas showing significant increases for rhymed compared with nonrhymed trials

Rhyme > nonrhyme
  Inferior frontal gyrus, opercular part (L): T = 7.49, 40 voxels, x y z = -52, 14, 0
  Anterior insula (L):                        T = 6.05, 40 voxels, x y z = -28, 24, 6

Note: x, y, z = MNI coordinates of local maxima. H = hemisphere, L = left, voxels = number of voxels. T scores and cluster sizes are reported if they are significant at P < 0.05 after family-wise correction for multiple comparisons at cluster level (k > 25).


Figure 5.
Brain areas showing significantly greater activation during the processing of rhymed compared with nonrhymed pseudosentences. Each cluster is thresholded at P < 0.05, FWE-corrected at cluster level (k > 25). The corresponding cortical regions, cluster sizes, peak T-values, and MNI coordinates can be found in Table II. Figures are displayed in neurological convention.

activation elicited during the explicit rhyme detection task complies with the predictions of this AST framework. Akin to prosody, and especially speech meter, rhymes serve as structural devices. Indeed, the segmentation of spoken sentences into single syllables is a suprasegmental computation, which relies on analysis within larger time windows (~250 ms). The fact that we found this lateralized activation in cluster size irrespective of the condition and task performance provides support to the hypothesis of a task-dependent, top-down modulation of lateralization effects in parts of the auditory-related cortex that may be preferentially sensitive to suprasegmental acoustic aspects of speech and music [Brechmann and Scheich, 2005; Tervaniemi and Hugdahl, 2003]. Geiser et al. [2008] found a similar right lateralization for speech rhythm perception only in an explicit, task-driven processing condition, which implies that areas of the right (and left) STG are partly modulated by task demand [Poeppel et al., 1996].

The direct contrast between rhymed and nonrhymed trials demonstrated increased BOLD response in the left hemisphere for rhymed pseudosentences in the opercular part of the IFG and the anterior insula. The finding of increased rhyme-related fronto-opercular activation is of specific interest, since rhyming targets should have been phonologically primed and would therefore require less processing than nonrhyming targets [Coch et al., 2008]. However, a closer look at the literature pertaining to priming in the auditory modality reveals a wide diversity of results. The best candidates for comparison to this study are experiments that used sequentially presented primes and targets in the auditory modality. The most consistent findings in such studies are reduced activation for related targets in the bilateral IFG, as well as in the bilateral superior temporal gyrus [Orfanidou et al., 2006; Vaden et al., 2010]. Notably, studies that did report priming effects in the IFG [Bergerbest et al., 2004; Orfanidou et al., 2006; Thiel et al., 2005] did not require explicit judgments between the prime and target word, as was the case in this study.

To our knowledge, this is the first fMRI study that directly compares rhymed to nonrhymed pseudosentences. A small number of fMRI studies implementing an explicit rhyme detection task compared the BOLD response associated with a rhyme detection task to other tasks. But the stimuli employed in these studies were visually presented (therefore involving grapho-phonemic conversion) and included words and/or pseudowords [e.g., Cousin et al., 2007], or single syllables [Sweet et al., 2008]; thus, they obviously did not include direct contrasts between rhymed and nonrhymed sentences. Therefore, we cannot rely upon these studies when attempting to elucidate the differences involved in auditory processing of rhymed versus nonrhymed items at the sentence level.

Incidentally, various EEG investigations of the auditory modality have produced an electrophysiological rhyming effect for spoken word pairs. This effect is usually observed when a pair of words is presented and subjects are requested to make a phonemically based judgment, and it is typically expressed by a more negative bilateral posterior response for nonrhyming than for rhyming targets [Rugg, 1984]. Elsewhere, various researchers have demonstrated a reversal of this effect at lateral sites, that is, rhyming targets produced more negative responses than nonrhyming targets [Coch et al., 2005; Khateb et al., 2007]. In one such ERP study that included a rhyme-detection task with words, Coch et al. [2005] found a rhyming effect with a frontal leftward asymmetry in children and adults. They used a simple prime-target auditory rhyming paradigm with nonword stimuli (e.g., nin-rin and ked-voo). Interestingly, they found a more negative response

to nonrhyming targets over posterior sites and an increased negativity to rhyming targets at lateral anterior sites. Subsequently, a visual rhyme-detection study conducted by Khateb et al. [2007] reported a specific left-lateralized negativity for rhymed versus nonrhymed targets. Their estimated source localization indicated the major difference between rhyming and nonrhyming words as being positioned in predominantly left frontal and temporal areas. The fact that the rhyming effect can also be found when target words are spoken in a different voice than primes suggests that this effect is an index of phonological processing rather than of a physical-acoustic mismatch [Praamstra and Stegeman, 1993]. However, due to the inverse problem and the limited spatial resolution of the EEG technique, the informative value of EEG studies for the present work is quite limited and comparisons must be interpreted with caution.

In our study, we found a significant signal increase in the left frontal operculum and the left anterior insula during the rhymed trials as compared with the nonrhymed trials; this finding was absent during the reverse contrast (NR > R). The left inferior frontal gyrus (LIFG) has been shown to be related to a myriad of functions in speech processing [e.g., Davis et al., 2008; Lindenberg et al., 2007; Meyer and Jancke, 2006]. Activation in the LIFG has been previously associated with segmentation processes or sublexical distinctions in different speech perception tasks [see Poeppel and Hickok, 2004] and a variety of syntactic and semantic operations [Hagoort, 2005; Shalom and Poeppel, 2008]. Nevertheless, there is currently no consensus with regard to the contribution that the LIFG makes to language processing [Friederici, 2011; Hickok, 2009]. Besides unspecific, modality-independent involvement in different language tasks, this region has been suggested to reflect aspects of articulatory rehearsal [Meyer et al., 2004], discrimination of subtle temporal acoustic cues during speech and nonspeech [Zaehle et al., 2008], as well as auditory search [Giraud et al., 2004].

Previous studies were able to show that subvocal rehearsal processes are essentially mediated by parts of the LIFG [Paulesu et al., 1993]. The posterior–dorsal aspect of the LIFG (corresponding to the opercular part) might be preferentially engaged in phonology-related, sublexical processes [Burton et al., 2000; Zurowski et al., 2002]. This region is commonly suggested to be one part of the phonological loop in the Baddeley model [Paulesu et al., 1993; Smith and Jonides, 1999], and there is evidence that it mediates phonological rehearsal. Hemodynamic changes in the opercular inferior frontal region have been previously associated with making phonological judgments [Demonet et al., 1992; Poldrack et al., 1999; Zatorre et al., 1992].

Since this study used pseudosentences, subjects could not build up expectations about the following words. Instead, they were required to maintain the critical segment from the first part of the sentence in their mind for 3 s until they heard the second critical segment, after which they made their decision by pressing a button. Thus, it is clear that phonological rehearsal is needed to detect rhyme; therefore, the involvement of inferior frontal regions is not surprising. The subjects in this study did not know whether the sentence that they were listening to rhymed or not until they heard the last syllable. Therefore, this result cannot be explained by WM load per se; instead, it is linked to the different outcomes resulting from the comparison between the syllables.

As suggested by Rogalsky and Hickok [2011], parts of the frontal operculum corresponding to regions in which we noted differences are essential for the integration of information that is maintained via articulatory rehearsal processes or decision-level processes, or both. The fact that we found activation in this region in the direct comparison between the rhymed and the nonrhymed sentences bolsters the notion that the opercular portion of the LIFG plays a role in various decision processes involved in a task that relies on phonological WM. This interpretation also fits with results of previous studies, which found that the LIFG is involved in adverse listening conditions with enhanced demands on response selection [Binder et al., 2004; Giraud et al., 2004; Vaden et al., 2010; Zekveld et al., 2006].

The direct comparison of rhymed with nonrhymed trials also revealed an increased BOLD response in the left anterior insula. This region has previously been associated with diverse functions [Mutschler et al., 2009]. Sharing extensive connections with different structures in temporal, frontal, and parietal cortices, the insula is well situated for the task of integrating different sensory modalities. Previous research has identified the anterior insula as a key player in general processes of cognitive control [Cole and Schneider, 2007; Dosenbach et al., 2007]. The anterior insula also seems to play a role in perception in each of the sensory modalities [Sterzer and Kleinschmidt, 2010]. Besides its involvement in subvocal rehearsal processes during WM activation, the left insula supports coordination processes in the complex articulatory programs that are needed during pseudoword processing [Ackermann and Riecker, 2004; Dronkers et al., 2004]. Dyslexic children show less activation than typically developing children in bilateral insulae during an auditory rhyme-detection task with words and pseudowords [Steinbrink et al., 2009]. Furthermore, there is evidence that the left anterior insula is also involved in the phonological recognition of words [Bamiou et al., 2003]. Thus, our findings provide further evidence that the insula is involved in the auditory-motor network [Mutschler et al., 2009]. However, our experimental design does not permit further discussion pertaining to the left anterior insula activation that we found.

The finding of significant differences in left frontal brain regions, which are associated with rhyme perception, coincides with results from the EEG studies discussed above. To reiterate, the aforementioned EEG studies produced significant differences for the direct contrasts between rhymed and nonrhymed stimuli. Due to the limited

temporal resolution of the fMRI technique, it is not possible to clearly link activation to a particular step of processing during the rhyme judgments. The stimuli used in both conditions did not contain syntactic or semantic information, and they did not differ in terms of intelligibility. Therefore, our finding that the reported left frontal brain activations were significant for the direct contrast level of analysis between rhymed and nonrhymed pseudosentences implies that these regions may not only be involved in articulatory rehearsal processes, but are also enmeshed in the last step of the analysis, namely, the detection of phonological matching.

Even though WM load was theoretically identical in both conditions, we must nevertheless consider that task difficulty may have contributed to the difference in brain activation between the conditions. It has previously been shown that activation of the LIFG can be modulated by task difficulty [Zekveld et al., 2006]. Since this is the first fMRI study to investigate auditory rhyme detection in an explicit paradigm at the sentence level, follow-up studies with more conditions that pose different cognitive demands should be introduced. Future research of this sort will prove helpful in disentangling brain responses that are associated with specific processes involved in auditory rhyme recognition.

CONCLUSION

We composed a rhyme detection task with pseudosentences to investigate the neural correlates of rhyme perception in healthy adults. Subjects in this study were requested to decide whether the last syllable of the pseudosentences rhymed or not. We found a task-related right-lateralized pattern of activation in the superior temporal lobe. This result implies that explicit rhyme processing at the sentence level—like prosody or meter in speech [Geiser et al., 2008; Meyer et al., 2002]—essentially relies on processing in longer time windows, for which the right temporal cortex has been proposed to be specialized [Poeppel, 2003]. Direct comparisons between rhymed and nonrhymed pseudosentences showed increased activation for the correctly recognized rhymed trials in left fronto-opercular areas (deep frontal operculum and adjoining anterior insula). These regions have been previously linked to processes of phonological WM and articulatory rehearsal.

ACKNOWLEDGMENTS

We express our gratitude to Sarah McCourt Meyer and two anonymous reviewers for helpful comments on an earlier draft.

REFERENCES

Ackermann H, Riecker A (2004): The contribution of the insula to motor aspects of speech production: A review and a hypothesis. Brain Lang 89:320–328.
Annett M (1970): A classification of hand preference by association analysis. Br J Psychol 61:303–321.
Avons S, Wragg C, Cupples L, Lovegrove W (1998): Measures of phonological short-term memory and their relationship to vocabulary development. Appl Psycholing 19:583–601.
Baddeley A, Lewis V, Vallar G (1984): Exploring the articulatory loop. Q J Exp Psychol 36A:467–478.
Baldo J, Dronkers N (2006): The role of inferior parietal and inferior frontal cortex in working memory. Neuropsychology 20:529–538.
Bamiou D, Musiek F, Luxon L (2003): The insula (Island of Reil) and its role in auditory processing: Literature review. Brain Res Rev 42:143–154.
Bergerbest D, Ghahremani D, Gabrieli J (2004): Neural correlates of auditory repetition priming: Reduced fMRI activation in the auditory cortex. J Cogn Neurosci 16:966–977.
Binder J, Liebenthal E, Possing E, Medler D, Ward B (2004): Neural correlates of sensory and decision processes in auditory object identification. Nat Neurosci 7:295–301.
Boemio A, Fromm S, Braun A, Poeppel D (2005): Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nat Neurosci 8:389–395.
Boersma P (2001): Praat, a system for doing phonetics by computer. Glot Int 5:341–345.
Brechmann A, Scheich H (2005): Hemispheric shifts of sound representation in auditory cortex with conceptual listening. Cereb Cortex 15:578–587.
Brett M, Anton J, Valabregue R, Poline J (2002): Region of interest analysis using an SPM toolbox. NeuroImage 16:1140–1141.
Bryant P, Bradley L, Maclean M, Crossland J (1989): Nursery rhymes, phonological skills and reading. J Child Lang 16:407–428.
Burton M, Small S, Blumstein S (2000): The role of segmentation in phonological processing: An fMRI investigation. J Cogn Neurosci 12:679–690.
Coch D, Grossi G, Skendzel W, Neville H (2005): ERP nonword rhyming effects in children and adults. J Cogn Neurosci 17:168–182.
Coch D, Hart T, Mitra P (2008): Three kinds of rhymes: An ERP study. Brain Lang 104:230–243.
Coch D, Mitra P, George E, Berger N (2011): Letters rhyme: Electrophysiological evidence from children and adults. Dev Neuropsychol 36:302–318.
Cole M, Schneider W (2007): The cognitive control network: Integrated cortical regions with dissociable functions. NeuroImage 37:343–360.
Cousin E, Peyrin C, Pichat C, Lamalle L, Le Bas JF, Baciu M (2007): Functional MRI approach for assessing hemispheric predominance of regions activated by a phonological and a semantic task. Eur J Radiol 63:274–285.
Davis C, Kleinman J, Newhart M, Gingis L, Pawlak M, Hillis A (2008): Speech and language functions that require a functioning Broca's area. Brain Lang 105:50–58.
Demonet J, Chollet F, Ramsay S, Cardebat D, Nespoulous J, Wise R, Rascol A, Frackowiak R (1992): The anatomy of phonological and semantic processing in normal subjects. Brain 115:1753–1768.
Dosenbach N, Fair D, Miezin F, Cohen A, Wenger K, Dosenbach R, Fox MD, Snyder AZ, Vincent JL, Raichle ME, Schlaggar BL, Petersen SE (2007): Distinct brain networks for adaptive and stable task control in humans. Proc Natl Acad Sci USA 104:11073–11078.
Dronkers N, Ogar J, Willock S, Wilkins D (2004): Confirming the role of the insula in coordinating complex but not simple articulatory movements. Brain Lang 91:23–24.

Friederici A (2011): The brain basis of language processing: From structure to function. Physiol Rev 91:1357–1392.
Friederici A, Meyer M, von Cramon DY (2000): Auditory language comprehension: An event-related fMRI study on the processing of syntactic and lexical information. Brain Lang 75:289–300.
Friston K, Zarahn E, Josephs O, Henson R, Dale A (1999): Stochastic designs in event-related fMRI. NeuroImage 10:607–619.
Gandour J, Tong Y, Wong D, Talavage T, Dzemidzic M, Xu YS, Li XJ, Lowe M (2004): Hemispheric roles in the perception of speech prosody. NeuroImage 23:344–357.
Geiser E, Zaehle T, Jancke L, Meyer M (2008): The neural correlate of speech rhythm as evidenced by metrical speech processing. J Cogn Neurosci 20:541–552.
Giraud A, Kell C, Thierfelder C, Sterzer P, Russ M, Preibisch C, Kleinschmidt A (2004): Contributions of sensory input, auditory search and verbal comprehension to cortical activity during speech processing. Cereb Cortex 14:247–255.
Hagoort P (2005): On Broca, brain, and binding: A new framework. Trends Cogn Sci 9:416–423.
Hahne A, Jescheniak J (2001): What's left if the jabberwock gets the semantics? An ERP investigation into semantic and syntactic processes during auditory sentence comprehension. Brain Res Cogn Brain Res 11:199–212.
Hesling I, Clement S, Bordessoules M, Allard M (2005): Cerebral mechanisms of prosodic integration: Evidence from connected speech. NeuroImage 24:937–947.
Hickok G (2001): Functional anatomy of speech perception and speech production: Psycholinguistic implications. J Psycholing Res 30:225–235.
Hickok G (2009): The functional neuroanatomy of language. Phys Life Rev 6:121–143.
Ischebeck A, Friederici A, Alter K (2008): Processing prosodic boundaries in natural and hummed speech: An fMRI study. Cereb Cortex 18:541–552.
Jung-Beeman M (2005): Bilateral brain processes for comprehending natural language. Trends Cogn Sci 9:512–518.
Khateb A, Pegna AJ, Landis T, Michel CM, Brunet D, Seghier ML, Annoni JM (2007): Rhyme processing in the brain: An ERP mapping study. Int J Psychophysiol 63:240–250.
Khateb A, Pegna A, Michel C, Custodi M, Landis T, Annoni J (2000): Semantic category and rhyming processing in the left and right cerebral hemisphere. Laterality 5:35–53.
Kuest J, Karbe H (2002): Cortical activation studies in aphasia. Curr Neurol Neurosci Rep 2:511–515.
Lattner S, Meyer M, Friederici A (2005): Voice perception: Sex, pitch, and the right hemisphere. Hum Brain Mapp 24:11–20.
Liem F, Lutz K, Luechinger R, Jancke L, Meyer M (2012): Reducing the interval between volume acquisitions improves ‘‘sparse’’ scanning protocols in event-related auditory fMRI. Brain Topogr 25:182–193.
Lindenberg R, Fangerau H, Seitz R (2007): ‘‘Broca’s area’’ as a collective term? Brain Lang 102:22–29.
Meinschaefer J, Hausmann M, Gunturkun O (1999): Laterality effects in the processing of syllable structure. Brain Lang 70:287–293.
Meyer M (2008): Functions of the left and right posterior temporal lobes during segmental and suprasegmental speech perception. Z Neuropsychol 19:101–115.
Meyer M, Alter K, Friederici A, Lohmann G, von Cramon DY (2002): FMRI reveals brain regions mediating slow prosodic modulations in spoken sentences. Hum Brain Mapp 17:73–88.
Meyer M, Jancke L (2006): Involvement of the left and right frontal operculum in speech and nonspeech perception and production. In Grodzinsky Y, Amunts K, editors. Broca’s Region. New York: Oxford University Press. pp 218–241.
Meyer M, Steinhauer K, Alter K, Friederici A, von Cramon DY (2004): Brain activity varies with modulation of dynamic pitch variance in sentence melody. Brain Lang 89:277–289.
Mutschler I, Wieckhorst B, Kowalevski S, Derix J, Wentlandt J, Schulze-Bonhage A, Ball T (2009): Functional organization of the human anterior insular cortex. Neurosci Lett 457:66–70.
Narain C, Scott SK, Wise RJ, Rosen S, Leff A, Iversen SD, Matthews PM (2003): Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb Cortex 13:1362–1368.
Noesselt T, Shah N, Jancke L (2003): Top-down and bottom-up modulation of language related areas—An fMRI study. BMC Neurosci 4:13.
Orfanidou E, Marslen-Wilson WD, Davis M (2006): Neural response suppression predicts repetition priming of spoken words and pseudowords. J Cogn Neurosci 18:1237–1252.
Paulesu E, Frith C, Frackowiak R (1993): The neural correlates of the verbal component of working memory. Nature 362:342–345.
Plante E, Creusere M, Sabin C (2002): Dissociating sentential prosody from sentence processing: Activation interacts with task demands. NeuroImage 17:401–410.
Poeppel D (2003): The analysis of speech in different temporal integration windows: Cerebral lateralization as ‘asymmetric sampling in time’. Speech Commun 41:245–255.
Poeppel D, Hickok G (2004): Towards a new functional anatomy of language. Cognition 92:1–12.
Poeppel D, Yellin E, Phillips C, Roberts TP, Rowley HA, Wexler K, Marantz A (1996): Task-induced asymmetry of the auditory evoked M100 neuromagnetic field elicited by speech sounds. Cogn Brain Res 4:231–242.
Poldrack R, Wagner A, Prull M, Desmond J, Glover G, Gabrieli J (1999): Functional specialization for semantic and phonological processing in the left inferior prefrontal cortex. NeuroImage 10:15–35.
Praamstra P, Stegeman D (1993): Phonological effects on the auditory N400 event-related brain potential. Brain Res Cogn Brain Res 1:73–86.
Price CJ (2000): The anatomy of language: Contributions from functional neuroimaging. J Anat 197:335–359.
Pruessmann K, Weiger M, Scheidegger M, Boesiger P (1999): SENSE: Sensitivity encoding for fast MRI. Magn Reson Med 42:952–962.
Rayman J, Zaidel E (1991): Rhyming and the right hemisphere. Brain Lang 40:89–105.
Rogalsky C, Hickok G (2011): The role of Broca’s area in sentence comprehension. J Cogn Neurosci 23:1664–1680.
Rugg MD (1984): Event-related potentials in phonological matching tasks. Brain Lang 23:225–240.
Rugg M, Barrett S (1987): Event-related potentials and the interaction between orthographic and phonological information in a rhyme-judgment task. Brain Lang 32:336–361.
Scheich H, Brechmann A, Brosch M, Budinger E, Ohl F (2007): The cognitive auditory cortex: Task-specificity of stimulus representations. Hearing Res 229:213–224.
Schmidt C, Zaehle T, Meyer M, Geiser E, Boesiger P, Jancke L (2008): Silent and continuous fMRI scanning differentially modulate activation in an auditory language comprehension task. Hum Brain Mapp 29:46–56.
Shalom DB, Poeppel D (2008): Functional anatomic models of language: Assembling the pieces. Neuroscientist 14:119–127.

Smith EE, Jonides J (1999): Storage and executive processes in the frontal lobes. Science 283:1657–1661.
Stanovich K, Cunningham A, Cramer B (1984): Assessing phonological awareness in kindergarten children—Issues of task comparability. J Exp Child Psychol 38:175–190.
Stefanatos G (2008): Speech perceived through a damaged temporal window: Lessons from word deafness and aphasia. Semin Speech Lang 29:239–252.
Steinbrink C, Ackermann H, Lachmann T, Riecker A (2009): Contribution of the anterior insula to temporal auditory processing deficits in developmental dyslexia. Hum Brain Mapp 30:2401–2411.
Sterzer P, Kleinschmidt A (2010): Anterior insula activations in perceptual paradigms: Often observed but barely understood. Brain Struct Funct 214:611–622.
Stowe LA, Haverkort M, Zwarts F (2005): Rethinking the neurological basis of language. Lingua 115:997–1042.
Sweet LH, Paskavitz J, Haley A, Gunstad J, Mulligan R, Nyalakanti P, Cohen RA (2008): Imaging phonological similarity effects on verbal working memory. Neuropsychologia 46:1114–1123.
Tervaniemi M, Hugdahl K (2003): Lateralization of auditory-cortex functions. Brain Res Brain Res Rev 43:231–246.
Thiel A, Haupt WF, Habedank B, Winhuisen L, Herholz K, Kessler J, Markowitsch HJ, Heiss WD (2005): Neuroimaging-guided rTMS of the left inferior frontal gyrus interferes with repetition priming. NeuroImage 25:815–823.
Treiman R (1985): Onsets and rimes as units of spoken syllables: Evidence from children. J Exp Child Psychol 39:161–181.
Turner R, Kenyon L, Trojanowski J, Gonatas N, Grossman M (1996): Clinical, neuroimaging, and pathologic features of progressive nonfluent aphasia. Ann Neurol 39:166–173.
Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M (2002): Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage 15:273–289.
Vaden K, Muftuler L, Hickok G (2010): Phonological repetition-suppression in bilateral superior temporal sulci. NeuroImage 49:1018–1023.
Vigneau M, Beaucousin V, Herve P, Duffau H, Crivello F, Houde O, Mazoyer B, Tzourio-Mazoyer N (2006): Meta-analyzing left hemisphere language areas: Phonology, semantics, and sentence processing. NeuroImage 30:1414–1432.
Vigneau M, Beaucousin V, Herve P, Jobard G, Petit L, Crivello F, Mellet E, Zago L, Mazoyer B, Tzourio-Mazoyer N (2011): What is the right-hemisphere contribution to phonological, lexico-semantic, and sentence processing? Insights from a meta-analysis. NeuroImage 54:577–593.
Zaehle T, Geiser E, Alter K, Jancke L, Meyer M (2008): Segmental processing in the human auditory dorsal stream. Brain Res 1220:179–190.
Zaehle T, Schmidt C, Meyer M, Baumann S, Baltes C, Boesiger P, Jancke L (2007): Comparison of ‘‘silent’’ clustered and sparse temporal fMRI acquisitions in tonal and speech perception tasks. NeuroImage 37:1195–1204.
Zatorre R, Evans A, Meyer E, Gjedde A (1992): Lateralization of phonetic and pitch discrimination in speech processing. Science 256:846–849.
Zatorre R, Gandour J (2008): Neural specializations for speech and pitch: Moving beyond the dichotomies. Philos Trans R Soc Lond B Biol Sci 363:1087–1104.
Zekveld A, Heslenfeld D, Festen J, Schoonhoven R (2006): Top-down and bottom-up processes in speech comprehension. NeuroImage 32:1826–1836.
Zhang L, Shu H, Zhou F, Wang X, Li P (2010): Common and distinct neural substrates for the perception of speech rhythm and intonation. Hum Brain Mapp 31:1106–1116.
Zurowski B, Gostomzyk J, Gron G, Weller R, Schirrmeister H, Neumeier B, Spitzer M, Reske SN, Walter H (2002): Dissociating a common working memory network from different neural substrates of phonological and spatial stimulus processing. NeuroImage 15:45–57.
