The Form Within
Karl H. Pribram
For information about permission to reproduce selections from this book, write
to:
PROSPECTA PRESS
P.O. Box 3131
Westport, CT 06880
www.prospectapress.com
The first thing we have to say respecting what are called new views . . .
is that they are not new, but the very oldest of thoughts cast into the
mold of these new times.
—Ralph Waldo Emerson, “The Transcendentalist,” 1842
A major thrust in our current changing view of humanity has been our
growing understanding of what our brain does and how it does it. As always, in
times of change, controversies have arisen. Not surprisingly, some of the most
exciting findings, and the theories that are derived from them, are being swept
under the rug. In The Form Within I have enjoyed placing these controversies
within the larger context of complexity theory, framed by two ways of doing
science from the time of classical Greece to the present. Within such a frame I
bring to light findings that deserve our attention but are being ignored, and I
clarify the reasons why.
Critical to any communication of our brain’s complex inner activity and its
interaction with the world we navigate is the concept of “form”: The Form
Within. We are in the midst of an in-form-ation revolution, begun in the latter
half of the 20th century, that is superseding the Industrial Revolution of the 19th
and early 20th centuries. The Industrial Revolution had created a vast upheaval
through the impact of changes that occurred in our material world. The
information revolution that is now under way is changing the very form, the
patterns of how we navigate that world.
This revolution has many of the earmarks of the Copernican revolution of
earlier times. That revolution inaugurated a series of scientific breakthroughs
such as those by Galileo, Newton, Darwin and Freud. Though often unintended,
all of these and related contributions shifted humans from center stage to ever
more peripheral players in the scientific scenario.
The current information revolution redresses and reverses this shift: Once
again science is showing that we humans must be perceived as the agents of our
perceptions, as the caretakers of the garden that is our earth, and as actively
responsible for our fellow humans. As during the Copernican revolution,
the articulation of new insights is halting, often labored, and subject to revision
and clarification. But the revolution is under way, and the brain and
psychological sciences are playing a central role in its formulation. This
orientation toward the future is what has made The Form Within so interesting
to write.
This frame provides the form in which we communicate scientific research
and theory. Form is the central theme of this book. Form is defined in Webster’s
dictionary as “the essential nature of a thing as distinguished from its matter.”
What has been lacking in the brain sciences is a science-based alternative to
matter-ialism. Form provides such an alternative.
The dictionary continues its definition. “Form” forms two currents: form as
shape and form as pattern. Twentieth-century brain scientists felt comfortable in
their explanations using descriptions of shape but less so with descriptions of
pattern. Matter has shape. Most brain scientists are materialists. By contrast,
communication by way of the transmission of information is constituted of
pattern. Despite paying lip service to describing brain function in terms of
information processing, few classrooms and textbooks make attempts to define
brain processing in the strict sense of measures of information as they are used in
the communication and computational sciences. A shift from the materialism
(explanations in terms of matter) of the Industrial Revolution of the 19th and
early 20th centuries, to this “form-al” understanding of in-form-ation as pattern
is heralding the Communications Revolution today.
But measures of information per se are not enough to convey the
impressions and expressions by which we think and communicate. When the
man in the street speaks of information, he is concerned with the meaning of the
information. Meaning is formed by the context, the social, historical and material
context within which we process the information. Thus, in The Form Within I
address meaning as well as information.
The distinction between form as shape and form as pattern has many
ramifications that make up the substance of the arguments and explanations I
will explore throughout this book. For instance, as Descartes was declaring that
there was a difference between “thinking” (as pattern) and “matter” (as shape),
he was also providing us the way to trans-form shape and pattern by bringing
them together within co-ordinates—what we know today as “Cartesian
coordinates.”
The resolution of our question “what does the brain do and how does it do
it” rests on our ability to discern, in specific instances, the transformations, the
changes in coordinates, that relate our brain to its sensory and motor systems and
the ways in which different systems within the brain relate to one another. The
Form Within gives an account of many of these transformations, especially those
that help give meaning to the way we navigate our world.
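To make the idea of a transformation concrete, here is a minimal sketch (in Python, with values chosen only for illustration): the same circular shape, described point by point in Cartesian coordinates, becomes a simple pattern—a constant radius—once the description is transformed to polar coordinates.

```python
import numpy as np

# A circle of radius 2, sampled as (x, y) points in Cartesian coordinates.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
x, y = 2 * np.cos(theta), 2 * np.sin(theta)

# Transforming to polar coordinates (radius, angle): the "shape" becomes
# a "pattern" -- a constant radius swept across all angles.
radius = np.hypot(x, y)      # every entry equals 2
angle = np.arctan2(y, x)     # evenly spaced angles

print(np.allclose(radius, 2.0))  # True: in the new coordinates, the circle
                                 # is simply the pattern radius = 2
```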
What also impressed me was the location of the paintings: Today the floor
where we walked is hollowed out, but when the paintings were made, the ceiling
was so low that no one could stand erect. During prehistoric times, the painters
must have lain on their backs in this restricting passage. Anyone who has tried to
cover a ceiling using a paintbrush has experienced the aches that go along with
doing something from an unnatural position. And by what light were the
paintings executed? There is no evidence in the caves of smoke from torches.
How, in that ancient time, did the artists illuminate those caves? Just how and to
what purpose did they re-form the world of their caves?
Early human beings refined and diversified—formed—their actions to re-
form their world with painting and song. Storytellers began to give diverse
meanings to these diverse refinements by formulating legends and myths. In
turn, their stories became accepted as constraints on behavior such as laws
(modes of conduct) and religious injunctions (from Latin, religare, “to bind
together, to tie up”). Today we subsume these forms of cultural and social
knowing and acting under the heading “the humanities.”
Along with stories, early human beings began to formulate records of
diverse observations of recurring events that shaped their lives: the daily cycle of
light and darkness as related to the appearance and disappearance of the sun; the
seasonal changes in weather associated with flooding and depletion of rivers; the
movements of the stars across the heavens and the more rapid movements
conjoining some of them (the planets) against the background of others; the tides
and their relationship to the phases of the moon—and these to the patterns of
menstrual cycles. Such records made it possible to anticipate these events, and
subsequently to check these anticipations during the next cycle of those
recurring events. Formulating anticipation of their occurrence gave meaning to
our observations; that is, they formed predictable patterns. Today we subsume
these patterns, these forms of knowing about and acting on the world we
navigate, under the heading “the sciences.”
There has been considerable dissatisfaction among humanists with the
materialist approach taken by scientists to understanding our human condition.
What has been lacking is a science-based alternative to materialism. Form as
“the essential nature” of our concerns provides us such an alternative.
The Story
Most of The Form Within is organized according to topics and issues of
general interest as viewed from the vantage of those of us working within the
sciences of brain, behavior and mind. However, in the first three chapters, I
address issues such as how to proceed in mapping correlations, a topic totally
ignored in the current surge in using fMRI to localize the relations between brain
and behavior; how I came to use metaphors and models, which is just coming
into focus in doing experiments in psychology; and how I came to describe self-
organizing processes in the fine-fiber webs and the circuitry of the brain.
The main body of the text begins with the topic of perception. Of special
interest is the fact that, contrary to accepted views, the brain does not process
two-dimensional visual perceptions; rather, the brain handles many more
dimensions of perceptual input. This conclusion is reached because I make a
distinction between the functions of the optic array and those of retinal
processing. Furthermore, evidence shows that there is as much significant input
from the brain cortex to the receptors as there is input from the environment.
In a subsequent chapter, I present evidence to show that the brain’s control
of action is by way of “images of achievement,” not just motor programs—and
the resultant support of the currently dismissed “motor theory” of language
acquisition.
And there is more: The neural processing of pleasure, which accounts for
the intertwining of pain- and temperature-transmitting fibers in the spinal cord, is
shown to be straightforward; so is the evidence for how brain processes are
involved in setting values and making choices, as well as how brain processes
compose, through interleaving cognitive with emotional/motivational processes,
the formation of attitudes.
Finally, a series of chapters is devoted to new views on encompassing
issues such as evolution, memory, consciousness, mind/brain transactions, and
the relation between information processing and meaning. These chapters
challenge, by presenting alternatives, the prevailing views presented by
philosophers of science, especially those concerning the relation between brain
and mind. Much of what composes these and other chapters of The Form Within
cannot be readily found in other sources.
The first three chapters that follow this preface are organized, somewhat in
the order that I utilized them, according to three different types of procedure that
I used to study what the brain does and how it does it: 1) Correlations, 2)
Metaphors and Models, and 3) The Biological Imperative.
By correlations I mean discoveries of relationships within the same context,
the same frame of reference (the same coordinate systems). We humans, and our
brains, are superb at discovering correlations, sometimes even when they are
questionable. Such discoveries have abounded during the past two decades, by
way of PET and fMRI imaging, techniques that have produced a
tremendous surge in our ability to make correlations between our
brain, behavioral and mental processes. But correlations do not explain the
processes, the causes and effects that result in the correlations.
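As a minimal illustration of this point (a hypothetical sketch with made-up numbers, not data from the book), a correlation coefficient can always be computed between two series measured within the same frame of reference, yet the number itself is silent about cause, effect, or mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a behavioral score and a brain-activity measure
# recorded over the same 50 trials (the same frame of reference).
behavior = rng.normal(size=50)
brain = 0.8 * behavior + rng.normal(scale=0.5, size=50)  # linkage built in

# The correlation is strong, but it does not explain the process
# -- "the particular go" -- that produced it.
r = np.corrcoef(behavior, brain)[0, 1]
print(f"r = {r:.2f}")
```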
Metaphors and models, by contrast, are powerful tools that we can use to
guide our explorations of process. We can model a brain process that we are
exploring by doing research on a device, or with a formulation that serves as a
metaphor, and we can then test in actual brain research whether the process that
has been modeled actually works. The Form Within uses telephone
communications, computer programs, and optically engineered holographic
processes wherever relevant. For the most part our metaphors in the
brain/behavior/mind sciences come from engineering and mathematics.
Brain and behavioral research (such as that obtained through correlation
and the use of metaphors) often develops questions that lead directly to brain
physiological research. My research led to understanding a biological
imperative: the self-organizing property of complex processing.
In subsequent chapters of The Form Within—the topic-oriented chapters—I
have noted the results of the application of the insights obtained as they become
relevant.
In Summary
1. The Prologue and Preface of The Form Within have stressed that we are
experiencing a period of revolution in science akin to that which shaped our
views during the Copernican revolution. We humans are reasserting the
centrality of our viewpoints and our responsibility toward the environment
within which we are housed.
2. This reassessment of our centrality can be formulated in terms of the neglect
of a science based on pattern, a neglect that has resulted in a science based
on matter, an overarching materialism.
3. Form deals with the essence of what is being observed, not only its matter.
Form comes in two flavors: form as shape and form as pattern.
4. Current science, especially the brain and behavioral sciences, has been
comfortable with explanations in terms of form as the shapes of matter. But
contemporary science is only rarely concerned with form as patterns.
5. The distinction between form as shape and form as pattern is already
apparent in Greek philosophy. Euclid’s geometry, as taught in our high
schools, uses descriptions of form as shapes. Pythagoreans have dealt with
patterns of numbers that describe wave-forms, angles and circles, as in
tonal scales and in trigonometry.
6. In The Form Within, when appropriate, I trace these two different
formulations of research and theory development in the
brain/behavior/experience sciences.
7. Within this context, I develop the theme that the brain/behavior/mind
relationship receives a major new explanatory thrust based on the observed
biological imperative toward self-organization.
Formulations
Chapter 1
Correlations
Wherein I explore the shapes and patterns that inform the brain’s cortex.
There is good evidence for the age-old belief that the brain has
something to do with . . . . mind. Or to use less dualistic terms, when
behavioral phenomena are carved at their joints, there will be some
sense in which the analysis will correspond to the way the brain is put
together . . . . The procedure of looking back and forth between the two
fields [psychology and neurophysiology] is not only ancient and
honorable—it is always fun and occasionally useful.
Local networks in the posterior visual cortex were the only ones activated
by a visual percept. Simple visual stimulation engages the extrinsic occipital
region; greater involvement, as in viewing movies, engages the intrinsic visual
systems of the temporal lobe. (Malach, in his presentation, used the terms
“extrinsic” and “intrinsic,” which I had used decades earlier.) A fronto-parietal
spread of activation occurs with self-related processing such as motor planning,
attentional modulation and introspection.
[Figure 3: Mapping regions of the brain according to the effects of experimental surgical removals on visual choices]
[Figure 4: Map of the functions of the cortex derived from clinical studies]
Beyond Correlation
Correlating an individual’s experience or his test behavior with a restricted
brain location is a first step, but is not enough to provide an understanding of the
brain-behavior-experience relationship. Currently, I am finding it necessary to
stress this caveat once again with the advent of the new non-invasive brain-
imaging techniques that are mostly being used to map correlations between a
particular human experience or a particular test behavior and particular
locations in the brain.
Unfortunately, many of the mistakes made during the 19th century are
being made again in the 21st. On the brain side, finding “mirror neurons”
matters much to materialists who feel that matter has now been shown to mirror
our feelings and perceptions. But, as David Marr (now deceased) of the
Massachusetts Institute of Technology pointed out, all we do with such
microelectrode findings is remove the problem of understanding a process—in
this case of imitation and empathy—from ourselves to the brain. We still are no
further in understanding the “how” of these processes.
At the behavioral level of inquiry there is a similar lack of understanding.
There is almost no effort being made to classify the behaviors that are being
correlated with a particular locus in the brain. What do the behaviors have in
common that are being correlated with activity in the cingulate gyrus (or in the
anterior insular cortex)? And how are these behaviors grounded in what we
know about the physiology of these parts of the brain?
When commonalities between behavior and parts of the brain
are noted, they are often mistaken. “Everyone knows the limbic systems of the
brain deal with our emotions.” This would be a nice generalization—except for
the observation that when damage to the human limbic system occurs, a certain
kind of memory, not human emotion, is impaired.
And further—much has been made of the fact that the amygdala deal with
“fear.” Fear is based on pain. The threshold for feeling pain remains intact after
the removal of the amygdala. Fear is also based on memory of pain. Is memory
the contribution of the amygdala to fear? If so, what other memory processes do
the amygdala serve? Chapters 13 and 14 explore these issues.
Among other issues, the chapters of The Form Within recount the research
accomplished during the early part of the second half of the 20th century that
provides answers to these issues—answers that were forgotten or swept under the
rug by subsequent generations of investigators.
Mapping correlations, even when done properly, is only a first step in
reaching an understanding of the processes involved in relating brain to behavior
and to mind— correlations do not tell us “the particular go” of a relationship.
As David Hume pointed out in the 18th century, the
correlation between a cock crowing and the sun rising needs a Copernican
context within which it can be understood. Here in The Form Within, I shall
provide the context within which the correlations revealed by the technique of
brain imaging need to be viewed in order to grasp the complex relationships
between our experience, our behavior and our brain. To do this, I have had to
provide a richer understanding of the relationship between our actions and the
brain than simply making maps, which is the epitome of exploring form as
shape. Sensitized by my discussions with Lashley, without overtly articulating
what I was doing in my research, I thus undertook to explore form as pattern.
Chapter 2
Metaphors and Models
More Experiments
Somewhat later (1960), another of the students in my laboratory (Robert
Gumnit, an undergraduate at Harvard) performed a similar experiment to the one
that Köhler and I had done on monkeys. In this new experiment, we used a series
of clicking sounds and again demonstrated a DC shift, recording directly from
the surface of the auditory cortex of cats as shown in the accompanying figure.
Thus we successfully verified that there is a DC shift in the primary receiving
areas of the brain when sensory receptors are stimulated. Furthermore, we
demonstrated that the DC shift is accompanied by a shift from a slow to a fast
rhythm of the EEG, the recording of the oscillatory electrical brain activity.
Previously, human EEG recordings had shown that such a shift in the frequency
of the EEG occurs when a person becomes attentive (for instance, by opening
the eyes to view a scene, after having been in a relaxed state with eyes closed).
A Neurophysiological Test
In order to explore directly Köhler’s hypothesis that DC brain activity is
involved in perception, I created electrical disturbances by implanting an irritant
in the visual cortex of monkeys. To create the irritation, I filled small silver discs
with “Amphojel,” aluminum hydroxide cream, a popular medication used for
peptic ulcers at the time. I then placed a dozen of the discs onto the visual cortex
of each of the monkeys’ two hemispheres. Amphojel (or alternatively,
penicillin), when applied to the brain cortex, produces electrical disturbances
similar to those that initiate epileptic seizures—large slow waves and “spikes”—
a total disruption of the normal EEG patterns. I tested the monkeys’ ability to
distinguish very fine horizontal lines from fine vertical lines. I expected that,
once the electrical recordings from the monkeys’ visual cortex indicated that
electrical “seizures” had commenced, the monkeys’ ability to distinguish the
lines would be impaired or even totally lost.
Contrary to my expectation, the monkeys were able to distinguish the lines
and performed the task without any deficiency. They were able to “perceive”—
that is, to distinguish whether the lines were vertical or horizontal—even
though the normal pattern of electrical waves recorded from their brains had
been disrupted. Therefore, “geometric isomorphism”—the matching of the
internal shape of electrical brain activity to an external form—did not explain
what was happening in the brain, at least with respect to visual perception.
I told Köhler of my results. “Now that you have disproved not only my
theory of cortical function in perception, but everyone else’s as well, what are
you going to do?” he asked. I answered, “I’ll keep my mouth shut.”
And indeed, since I had no answer to explain my results, when, two years
later, I transferred from Yale to Stanford University, I declined to teach a course
on brain mechanisms in sensation and perception. I had to wait a few years for a
more constructive answer to the problem.
[Figure 6: Direct current field responses from the auditory area in response to auditory stimulation. (A) Shift in response to white noise. (B) Shift in response to a tone of 4000 Hertz. (C) Shift in response to white noise, returning to baseline before the end of auditory stimulation. (D) Response to 50 clicks/sec, returning to baseline before the end of stimulation. (From Gumnit, 1960.)]
Perception Revisited
It was not until the early 1960s, a few years into my tenure at Stanford, that
the issue of brain processes in perception would confront me once again. Ernest
(Jack) Hilgard, my colleague in psychology at Stanford, was preparing an update
of his introductory psychology text and asked me to contribute something about
the status of our current knowledge regarding the role that brain physiology
played in perception. I answered that I was very dissatisfied with what we knew.
I described the experiments, conducted both by myself and others, that had
disproved Köhler’s belief that perception could be ascribed to direct current
brain electrical fields shaped like (isomorphic with) envisioned patterns.
Further, I noted that David Hubel and Torsten Wiesel had recently shown
that elongated stimuli, such as lines and edges, were the most successful shapes
to stimulate neurons in the primary visual receiving cortex. They had suggested
that perception might result from something like the construction of stick figures
from these elementary sensitivities. However, I pointed out, this proposal
wouldn’t explain the fact that much of our perception involves patterns produced
by shadings and textures—not just lines or edges. For me, the stick-figure
concept failed to provide a satisfactory explanation of how our brain organizes
our perceptual process. Hilgard asked me to give this important issue some
further thought. We met again about a week later, but I still had nothing to offer.
Hilgard, ordinarily a very kind and patient person, seemed just a bit peeved, and
declared that he did not have the luxury of procrastination. He had to have
something to say in his book, which was due for publication. He asked me to
come up with a viable alternative to the ones I had already disproved or so
summarily dismissed.
Engaging Lashley
I could see only one alternative, one that provided a middle way between
the direct currents of Köhler and the lines and stick figures of Hubel and Wiesel.
Years earlier, during the 1940s, when I had been involved in research at the
Yerkes Laboratory, its director Karl Lashley had suggested that interference
patterns among wave fronts in brain electrical activity could organize not only
our perceptions but also our memory. Lashley, a zoologist, had taken as his
model for patterning perception those developmental patterns that form a fetus
from the blastula stage to the fully developed embryo. Already, at the turn of the
20th century, the American zoologist Jacques Loeb had suggested that “force
lines” guide such development. At about the same period, A. Goldscheider, a
German neurologist, had wondered if such force lines might also account for
how the brain forms our patterns of perceptions.
Lashley’s theory of interference patterns among wave fronts of electrical
activity in the brain had seemed to suit my earlier intuitions regarding patterned
(as opposed to DC) brain electrical activity. However, Lashley and I had
discussed interference patterns repeatedly without coming up with any idea of
what micro-wave fronts might look like in the brain. Nor could we figure out
how, if they were there, they might account for anything at either the behavioral
or the perceptual level.
Our discussions became somewhat uncomfortable with regard to a book
that Lashley’s postdoctoral student, Donald Hebb, was writing at the time.
Lashley declined to hold a colloquium to discuss Hebb’s book but told me
privately that he didn’t agree with Hebb’s formulation. Hebb was proposing that
our perception is organized by the formation of “cell assemblies” in the brain.
Lashley could not articulate exactly what his objections were: “Hebb is correct
in all his details, but more generally, he’s just oh so wrong.” Hebb was deeply
hurt when Lashley refused to write a preface to his book—which nonetheless
came to be a most influential contribution to the brain/behavioral sciences.
In hindsight, it is clear that Hebb was addressing brain circuitry, while
Lashley, even at such an early date, was groping for an understanding of the
brain’s deeper connectivity and what that might mean in the larger scheme of
psychological processes.
With respect to my colleague Jack Hilgard’s problem of needing an answer
about brain processes in perception in time for his publication date, I took his
query to my lab group, outlining both my objections to Köhler’s DC fields and to
Hubel and Wiesel’s stick figures. I suggested that since there were no other
alternatives available right now, perhaps Lashley’s proposal of interference
patterns might be worth pursuing. Not totally tongue-in-cheek, I noted that
Lashley’s proposal had the advantage that we could not object to something
when we had no clue of what was involved.
Microprocessing at Last
Within a few days of my encounters with Jack Hilgard, a postdoctoral
fellow in my laboratory, Nico Spinelli, brought us an article by the eminent
neurophysiologist John Eccles that had appeared in 1958 in Scientific American.
In this article, Eccles pointed out that, although we could only examine
synapses—junctions between brain cells—one by one, the branching of large fiber axons
into fine fibers just as they approach a synapse makes it necessary for us
to regard the electrical activity occurring in these fine-fibered branches as
forming a wave front. This was one of those “aha” moments scientists like me
are always hoping for. I immediately recognized that such wave fronts, when
reaching synapses from different directions, would provide exactly those long-
sought-after interference patterns Lashley had proposed.
Despite my excitement at this instant realization, I also felt like an utter
fool. The answer to my years of discussion with Lashley about how such
interference patterns among micro-waves could be generated in the brain had all
the time been staring us both in the face—and neither of us had had the wit to
see it.
Wherein I describe the experimental results that provided substance to the self-
organizing processes in the brain.
‘I love you.’ It was spring in Paris and the words held the
delightful flavor of a Scandinavian accent. The occasion was a
UNESCO meeting on the problems of research on Brain and Human
Behavior. The fateful words were not spoken by a curvaceous blonde
beauty, however, but generated by a small shiny metal device in the
hands of a Swedish psycholinguist.
The device impressed all of us with the simplicity of its design.
The loudspeaker was controlled by only two knobs. One knob altered
the state of an electronic circuit that represented the tensions of the
vocal apparatus; the other regulated the pulses generated by a circuit
that simulated the plosions of air puffs striking the pharynx.
Could this simple device be relevant to man’s study of himself?
Might not all behavior be generated and controlled by a neural
mechanism equally simple? Is the nervous system a “two knob” dual
process mechanism in which one process is expressed in terms of
neuroelectric states and the other in terms of distinct pulsatile
operators on those states? That the nervous system does, in fact,
operate by impulses has been well documented. The existence of
neuroelectric states in the brain has also been established, but this
evidence and its significance to the study of psychology has been slow
to gain acceptance even in neuro-physiology. We therefore need to
examine the evidence that makes a two-process model of brain
function possible.
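The “two-knob” idea can be sketched as a source-filter model (a hedged illustration of the general principle, not a reconstruction of the Swedish device; all parameters here are invented): one knob sets a resonant state standing in for the tensions of the vocal apparatus, the other sets the rate of the pulses.

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000  # samples per second

def two_knob_voice(formant_hz, pulse_hz, dur=0.5):
    n = int(fs * dur)
    # Knob 2: a train of pulses -- the "air puffs striking the pharynx."
    source = np.zeros(n)
    source[::fs // pulse_hz] = 1.0
    # Knob 1: a damped resonator -- the sustained "state" of the apparatus.
    r = 0.98                                # sharpness of the resonance
    w = 2 * np.pi * formant_hz / fs
    a = [1.0, -2 * r * np.cos(w), r * r]    # resonator coefficients
    return lfilter([1.0], a, source)        # the state shapes the pulses

sound = two_knob_voice(formant_hz=700, pulse_hz=120)  # a vowel-like buzz
```

Turning either knob alone—the state or the pulse rate—changes the output, which is the dual-process point of the passage.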
[Figure: Scanning electron micrograph showing the arrangement of nerve fibers in the retina of Necturus. Fibers (dendrites) arise in the inner segment and course over the outer segment of a cone. Note that points of contact do not necessarily take place at nerve endings. (From Lewis, 1970.)]
The situation that Hodgkin found in the retina is identical to what occurs in
the web of dendrites in the brain cortex: processing occurs in the fine fibers, and
these processes compose and form the patterns of signals relayed by axons, the
large nerve trunks.
[Figure 13: The cortical chem-web demonstrated by staining acetylcholinesterase receptors]
[Figure 14: The neuro-nodal web]
Wherein I juxtapose form as shape and form as pattern as they are recorded from
the brain cortex and describe the usefulness and drawbacks of each.
An Early Intuition
My initial thought processes and the research that germinated into this
section of The Form Within reinforced my deeply felt intuition that the “feature
detection” approach to understanding the role of brain processes in organizing
our perceptions was of limited explanatory value.
Michael Polanyi, the Oxford physical chemist and philosopher of science, defined
an intuition as a hunch one is willing to explore and test. (If this is not done, it’s
just a guess.) Polanyi also noted that an intuition is based on tacit knowledge;
that is, something one knows but cannot consciously articulate. Matte Blanco, a
Chilean psychoanalyst, enhanced this definition by claiming that “the
unconscious” is defined by infinite sets: consciousness consists of being able to
distinguish one set of experiences from another.
The “feature detection” story begins during the early 1960s, when David
Hubel and Torsten Wiesel were working in Stephen Kuffler’s laboratory at the
Johns Hopkins University. Hubel and Wiesel had difficulty in obtaining
responses from the cortical cells of cats when they used the spots of light that
Kuffler had found so effective in mapping the responses in the optic nerve. As so
often happens in science, their frustration was resolved accidentally, when the
lens of their stimulating light source went out of focus to produce an elongation
of the spot of light. The elongated “line” stimulus brought immediate results: the
electrical activity of the cat’s brain cells increased dramatically in response to the
stimulus and, importantly, different cells responded selectively to different
orientations of the elongated stimulus.
During the late 1960s and early 1970s, excitement was in the air: like Hubel
and Wiesel, many laboratories, including mine, had been using microelectrodes
to record from single nerve cells in the brain. Visual scientists were able to map
the brain’s responses to the stimuli they presented to the experimental subject.
The choice of stimulus most often determined the response they would obtain.
Hubel and Wiesel settled on a Euclidian geometry format for choosing their
visual stimulus: the bull’s eye “point” receptive fields that characterized the
retina and geniculate (thalamic) receptive fields could be shown to form lines
and planes when cortical maps were made. In Chapter 3, I referred to these
results as making up “stick figures” and noted that more sophisticated attempts
at giving them three dimensions had essentially failed.
Another story, more in the Pythagorean format, and more compatible with
my intuitions, surfaced about a decade later. In 1970, just as my book Languages
of the Brain was going to press, I received a short reprint from Cambridge
University visual science professor Fergus Campbell. In it he described his
pioneering experiments that demonstrated that the form of visual space could
profitably be described not in terms of shape, but in terms of patterns of their
spatial frequency. The frequency was that of patterns of alternating dark and
white stripes whose fineness could be varied. This mode of description was
critical in helping to explain how the extremely fine resolution of a scene is
possible in our visual perception. Attached to this reprint was a note from
Fergus: “Karl—is this what you mean?” It certainly was!!
After this exciting breakthrough, I went to Cambridge several times to
present my most recent findings and to get caught up with those of Campbell and
of visual scientist Horace Barlow, who had recently moved back to Cambridge
from the University of California at Berkeley, where we had previously
interacted: he presenting his data on cardinal cells (multiple detectors of lines)
and I presenting mine on the processing of multiple lines, to each other’s lab
groups.
Campbell was showing that not only human subjects but also brain cells
respond to specific frequencies of alternating stripes and spaces. These
experiments demonstrated that the process that accounts for the extremely fine
resolution of human vision is dependent on the function of the brain cortex.
When the Cambridge group was given its own sub-department, I sent them a
large “cookie-monster” with bulging eyes, to serve as a mascot.
Once, while I was visiting Fergus’s laboratory, Horace Barlow came running
down the hall exclaiming, “Karl, you must see this, it’s right up your alley.”
Horace showed me changes produced on a cortical cell’s receptive field map by
a stimulus outside of that brain cell’s receptive field. I immediately exclaimed,
“A cortical McIlwain effect!” I had remembered (not a bad feat for someone
who has difficulty remembering names) that an effect called by this name had
previously been shown to occur at the retina. Demonstrating such global
influences on receptive fields indicated that the fields were part of a larger brain
network or web. It took another three decades for this finding to be repeatedly
confirmed and its importance more generally appreciated by the neuroscience
community.
These changes in the receptive fields are brought about by top-down
influences from higher-order (cognitive) systems. The changes determine
whether information or image processing is sought. Electrical stimulation of the
inferior temporal cortex changes the form of the receptive fields to enhance
information processing—as it occurs in communication systems. By
contrast, electrical stimulation of the prefrontal cortex changes the form of the
receptive fields to enhance image processing—as it is used in imaging techniques
such as PET scans and fMRI, and in making correlations such as those computed with the FFT. The
changes are brought about by altering the width of the inhibitory surround of the
receptive fields. Thus both information and image processes are achieved,
depending on whether communication and computation or imaging and
correlations are being addressed.
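The mechanism described here can be sketched with a difference-of-Gaussians receptive field whose inhibitory surround is widened or narrowed (my own toy model of the idea, with invented parameters, not the laboratory's actual analysis):

```python
import numpy as np

def receptive_field(x, surround_width):
    """Excitatory center minus an inhibitory surround of adjustable width."""
    center_width = 0.5
    center = np.exp(-x**2 / (2 * center_width**2))
    # Scale the surround so excitation and inhibition balance overall.
    surround = (center_width / surround_width) * np.exp(
        -x**2 / (2 * surround_width**2))
    return center - surround

x = np.linspace(-8, 8, 1601)

# Narrowing or widening the surround retunes the field to different
# spatial frequencies -- two processing modes from one piece of anatomy.
for name, width in [("narrow surround", 1.0), ("wide surround", 2.5)]:
    rf = receptive_field(x, width)
    spectrum = np.abs(np.fft.rfft(rf))
    peak = np.fft.rfftfreq(x.size, d=x[1] - x[0])[spectrum.argmax()]
    print(f"{name}: peak sensitivity near {peak:.2f} cycles/unit")
```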
[Figure 15: Cross sections of thalamic visual receptive fields (n) showing the influence of electrical stimulation of the inferior temporal cortex (i) and the frontal cortex (f)]
Encounter at MIT
In the early 1970s, I was invited to present the results of these experiments
at the Massachusetts Institute of Technology. The ensuing discussion centered
not on the results of my experiments but—ironically, it being MIT— on my use
of computers. David Hubel stated that the use of computers would ruin visual
science by taking the research too far from a hands-on, direct experience
necessary to understanding the results of observation. I retorted that in every
experiment, in the mapping of every cell, I had to use the direct hands-on
mapping before I could engage the computer to quantify my observations. I
stated that in order to obtain the results that I had shown, I needed stable
baselines—baselines that could only be statistically verified by the use of a
computer. Only by using these baselines, could I show that the changes produced
by the electrical stimulation were reliable.
Then my host, Hans-Lukas Teuber, professor and chair of the Department
of Psychology at MIT, rose to support Hubel by saying that the use of computers
would also ruin behavioral psychology. He was an old friend of mine (our
families celebrated each Thanksgiving together), so I reminded him that my
laboratory had been completely computerized since 1959, when I’d first moved
to Stanford. That alone had made it possible to test 100 monkeys a day on a
variety of cognitive (rather than conditioning) tasks. I pointed out that he should
be pleased, since he was a severe critic of operant conditioning as a model for
understanding psychological processing.
I came away from the MIT lecture disappointed by the indifference to the
remarkable and important experimental results obtained both at Cambridge and
in my laboratory, a disappointment that persisted as the feature approach to
visual processing became overwhelmingly dominant during the 1990s.
For example:
Contrary to what was becoming “established,” we showed in my laboratory
that the supposed “Geiger counter” cells demonstrated sensitivities to many
stimuli far beyond their sensitivity just to the presentation of lines. Some of these
cells also responded to select bandwidths of auditory stimulation. All of them
responded to changes in luminance (brightness), while other cells responded also to
color. As described in Chapter 2, the Berkeley brain scientists Russell and Karen
DeValois, on the basis of their decades-long experimental results, have
constructed a model showing that the same network of cells can provide for our
experience both of color and of form: When connected in one fashion,
processing leads to the experience of form; when connected in another fashion,
our experience is of color. The fact that a single network can accomplish both is
important because we rarely experience color apart from form.
Perhaps more surprising was our finding that some of these cells in the
primary visual sensory receiving cortex, when recorded in awake problem-
solving monkeys, respond to whether the monkey’s response to a visual stimulus
had been rewarded or not—and if so, whether the monkey had pressed one panel
or the other in order to receive that reward.
A Show of Hands
To illustrate how such a network might function, I ask the students in my
class to do the following: “All those who are wearing blue jeans, please raise
your hand. Look at the distribution of hands. Now lower them. Next, all those
wearing glasses, please raise your hands. Look at the distribution of hands and
note how different it is from the distribution produced by those wearing blue
jeans.” Again, “All girls, raise your hands; then all boys.” We quickly observe
that each distribution, each pattern, is different. The features—blue jeans,
glasses, gender—were coded by the various patterns of raised hands, a pattern
within a web, not by the characteristic of any single individual person.
Brain cells are like people when you get to know them. Each is unique. All
features are distributed among the group of individuals, but not all the
individuals share all the features. Also, as in the saying “birds of a feather flock
together,” brain cells sharing the same grouping of features tend to form clusters.
Thus, one cluster will have more cells that are sensitive to color; another cluster
will have more cells that are sensitive to movement. Still, there is a great deal of
overlap of features within any cluster. People are the same in many respects. For
instance, some of us have more estrogen circulating in our bodies while others
have more testosterone. But all of us have some of each.
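The show-of-hands demonstration translates directly into a small simulation (illustrative only; the cells and features are invented): each “cell” carries several features at once, and each question elicits a different distributed pattern over the same population.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cells = 30
features = ["blue_jeans", "glasses", "color", "movement"]

# Each cell, like a person, has several attributes; no feature belongs
# to any one cell alone.
has_feature = {f: rng.random(n_cells) < 0.4 for f in features}

for f in features:
    hands = has_feature[f].astype(int)   # who "raises a hand" for this query
    print(f"{f:>10}:", hands, "->", hands.sum(), "raised hands")

# Different questions produce different patterns across the same web of
# cells: the pattern, not any single cell, codes the feature.
```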
With this arrangement of “features as patterns,” the question immediately
arises as to who asks the question as I did with my students: Who is wearing
blue jeans? This question is known as an example of a top-down approach to
processing, whereas feature detection is a bottom-up approach. In my laboratory,
my colleagues and I spent the better part of three decades establishing the
anatomy and physiology of the routes by which top-down processing occurs in
the brain. Together with the results of other laboratories, we demonstrated that
electrical stimulation of the sensory-specific “association” regions of the brain
cortex changed the patterns of response of sensory receptors to a sensory input.
Influence of such stimulation was also shown to affect all the “way stations” that
the sensory input traversed en route to the sensory receiving cells in the cortex.
Even more surprising was the fact that the effects of an auditory or touch
stimulus would reach the retina in about the same amount of time that it takes for
a visual stimulus to be processed. Tongue in cheek, I’ve noted these results in the
phrase: “We live in our sensory systems.” The experimental findings open the
important question as to where all our processing of sensory input begins, and
what is the sequence of processing. That question will be addressed shortly and
again in depth when we analyze frames and contexts.
1. that David was a modest person, who would claim that he hadn’t
understood the mathematical aspects of the DeValois presentation—but I
would not let him get away with this because he had taught high school
math before going to medical school;
2. that, in my experience, each cortical neuron encodes several features of our
visual environment, that each neuron is like a person with many attributes,
and thus our ability to recognize features had to depend on patterns
displayed by groups of neurons for further processing;
3. that, as just presented, Karen and Russ DeValois had done critical
experiments showing that the visual cortical cells were tuned to spatial
frequency rather than to the shapes of “lines” or “edges.”
Seeing Colors
James and Eleanor Gibson of Cornell University, using purely behavioral
techniques, had also reached the conclusion that higher-order distinctions are
learned. The Gibsons reported their results in a seminal publication in which
they showed that perceptual learning proceeds by progressive differentiation of
stimulus attributes, not by the forming of associations among them.
Taking Matte Blanco’s definition that our conscious experience is based on
making distinctions, as described in the opening paragraphs of this chapter, this
line of research indicates how our brain processes, through learning, make
possible progressively greater reaches of conscious experience.
The processes by which such distinctions are achieved were addressed at
another meeting of the Society for Neuroscience—this one in Boston in the
1980s. Once again, David Hubel gave a splendid talk on how brain processes
can entail progressively greater distinctions among colors. He noted that, as we
go from receptor to visual cortex and beyond, a change occurs in the coordinates
within which the colors are processed: that processing changes from three colors
at the receptor to three opponent pairs of colors at the thalamus and then to six
double-opponent pairs at the primary sensory cortex. Still higher levels of
processing “look at the coordinates from an inside vantage” to make possible the
experiencing of myriads of colors. A change in coordinates is a transformation, a
change in the code by which the form is processed.
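A change of color coordinates of this general kind can be written as a small matrix transformation (a generic textbook-style sketch of opponent coding, not Hubel's specific figures; the weights are illustrative):

```python
import numpy as np

# Cone responses (L, M, S) re-expressed in opponent coordinates.
opponent_transform = np.array([
    [ 1.0,  1.0,  0.0],   # a luminance-like channel: L + M
    [ 1.0, -1.0,  0.0],   # a red-green opponent channel: L - M
    [-0.5, -0.5,  1.0],   # a blue-yellow opponent channel: S - (L + M)/2
])

cone_response = np.array([0.6, 0.4, 0.2])   # hypothetical L, M, S activity
opponent_code = opponent_transform @ cone_response
print(opponent_code)  # the same stimulus, recoded in new coordinates
```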
Transformations are the coin of making ever-greater distinctions,
refinements in our conscious experience. After his presentation in Boston, I
spoke with David Hubel and said that what he had just done for color was
exactly what I was interested in doing for form. David said he didn’t think it
would work when it came to form. But of course it did. The prodigious
accomplishments of the 1970s and 80s form the legacy of how visual angles at
different orientations were developed into a Pythagorean approach to
visual pattern perception. (Hubel and I have never had the opportunity to discuss the
issue further.)
Form as Pattern
A different path of experimentation led me to explore the view of form as
pattern. In my earlier research, while examining the effects of cortical removals
on the behavior of monkeys, I had to extensively examine the range of responses
to stimuli that were affected by removing a particular area of the brain cortex.
Thus, during the 1960s, when I began to use micro-electrodes, I used not only a
single line but two lines in my receptive field experiments. The relationship of
the lines to one another could thus be explored. Meanwhile, other laboratories
began to use multiple lines and I soon followed suit.
Multiple lines of different width and spacing (that is, stripes) produce a
“flicker” when moving across the visual field. We experience such a flicker
when driving along a tree-lined road: the rapid alternation of such light and
shadow stripes can actually incite a seizure in seizure-prone people. The rapidity
of the flicker, the speed of the alternation between light and dark can be
described mathematically in terms of its frequency.
When we moved such stripes before the eye of a subject in our experiments,
we could change the response of the subject’s cortical receptive field by
changing the frequency of the stimulation in two ways: by altering the speed at
which we moved the stimulus, or by changing the width of the stripes and the
spaces between them.
Mathematically, this relationship among the angles on the retina that result
from stimulation by stripes is expressed as degrees of arc of the portion of a
circle. A circle, as noted earlier, represents a cycle. Degrees of arc are measures
of the density, the rapidity of oscillations between light and dark in the cycle.
Thus, the oscillations are the spectral transform of the spatial widths of the
alternating light and dark stripes. It follows that changes in the width of the light
and dark stripes will change the frequency spectrum of the brain response.
Those of us who used multiple lines in our experiments have described our
results in terms of “spatial frequency.” Frequency over space as distinct from
frequency over time can be demonstrated, for example, if we visually scan the
multiple panels that form a picket fence or similar grating. When we visually
scan the fence from left to right, each panel and each gap in the fence projects
an alternating pattern of bright and shadow regions onto the curve of our
retina. Therefore, each bright region and each shadow covers an angle of the
curve of our retina. The “density” of angles is defined by how close the
stimulating bright and dark regions (the width of the fence panels and their
spacing) are to each other.
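A short worked example (with invented dimensions) shows how stripe width and viewing distance yield degrees of arc and spatial frequency, and how movement converts that spatial pattern into temporal flicker:

```python
import numpy as np

# Fence panels 0.10 m wide with 0.10 m gaps, viewed from 10 m away.
panel, gap, distance = 0.10, 0.10, 10.0             # meters

cycle = panel + gap                                 # one light-dark cycle
cycle_deg = np.degrees(2 * np.arctan(cycle / (2 * distance)))  # degrees of arc

spatial_freq = 1.0 / cycle_deg                      # cycles per degree
print(f"{spatial_freq:.1f} cycles/degree")          # ~0.9 cycles/degree

# Sweeping the gaze (or moving past the fence) at 5 degrees per second
# turns the spatial frequency into a temporal flicker rate:
temporal_freq = spatial_freq * 5.0
print(f"{temporal_freq:.1f} flashes/second")        # ~4.4 Hz
```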
Waves: A Caveat
In the early 1960s, when the holographic formulation of brain processes
was first proposed, one of the main issues that had to be resolved was to identify
the waves in the brain that constituted a hologram. It was not until I was writing
this book that the issue became resolved for me. The short answer to the
question as to what brainwaves are involved is that there are none.
For many years David Bohm chastised me for confounding waves
with spectra, the results of interferences among waves. The transformation is
between space-time and spectra. Waves occur in space-time as we have all
experienced. Thus the work of Karen and Russell DeValois, Fergus Campbell
and the rest of us did not deal with the frequency of waves but with the
frequency of oscillations—the frequency of oscillations expressed as wavelets,
as we shall see, between hyperpolarizations and depolarizations in the fine-fiber
neuro-nodal web.
Waves are generated when the energy produced by oscillations is
constrained as in a string—or at a beachfront or by the interface between wind
and water. But the unconstrained oscillations do not produce waves. Tsunamis
(more generally solitons) can be initiated on the east coast of Asia. Their energy
is not constrained in space or time and so is spread over the entire Pacific Ocean;
their effect is felt on the beaches of Hawaii. But there are no perceptible waves
over the expanse of the Pacific Ocean in between. The oscillations that “carry”
the energy can be experienced visually or kinesthetically, as at a beach beyond
the breakers where the water makes the raft bob up and down in what is felt as a
circular motion. In the brain, it is the constraints produced by sensory inputs or
neuronal clocks that result in various frequencies of oscillations that we record in
our ERPs and EEGs.
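The distinction can be made concrete with a wavelet: an oscillation constrained by a window, which has a definite frequency content without any wave propagating anywhere (a sketch with illustrative parameters, in the spirit of the Gabor elementary function):

```python
import numpy as np

t = np.linspace(-1, 1, 2001)
frequency = 8.0                                  # oscillations per unit time
envelope = np.exp(-t**2 / (2 * 0.15**2))         # a Gaussian constraint
wavelet = envelope * np.cos(2 * np.pi * frequency * t)

# The spectrum is confined to a band around 8: frequency without travel.
spectrum = np.abs(np.fft.rfft(wavelet))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(f"spectral peak near {freqs[spectrum.argmax()]:.1f} per unit time")
```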
Critical Tests
Fergus Campbell provided an interesting sidelight on harmonics, the
relationships among frequencies. His test proved critical to viewing the perception
of visual form as patterns of oscillation. When we hear a passage of music, we
hear a range of tones and their harmonics. A fundamental frequency of a tone is
produced when a length of a string is plucked. When fractions of the whole
length of the string also vibrate, harmonics of the fundamental frequency are
produced. When we hear a passage of music, we hear the fundamental tones
with all of their harmonics. Surprisingly, we also hear the fundamental tones
when only their harmonics are present. This is called experiencing the “missing
fundamental.”
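The missing fundamental is easy to construct (a sketch with arbitrary values): a 200 Hz tone is implied by its harmonics at 400, 600 and 800 Hz even though no energy is present at 200 Hz itself; the waveform still repeats every 1/200th of a second, and that periodicity is the pitch we hear.

```python
import numpy as np

fs = 48000                                   # samples per second
t = np.arange(0, 0.05, 1 / fs)

# Harmonics of 200 Hz only -- the 200 Hz fundamental itself is absent.
tone = sum(np.sin(2 * np.pi * h * t) for h in (400, 600, 800))

period = fs // 200                           # samples in one 200 Hz cycle
print(np.allclose(tone[:period], tone[period:2 * period]))  # True: the
# waveform repeats at the missing fundamental's period
```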
I experienced another exciting moment in the 1970s when, during a visit to
Cambridge, Fergus Campbell demonstrated that in vision, as in audition,
we also experience the “missing fundamental.” Yes, the eye is like the ear.
The results of two other sets of experiments are critical to demonstrating the
power of the frequency approach to analyzing visual patterns. Russell and Karen
DeValois and their students performed these experiments at the University of
California at Berkeley. In one set, they mapped an elongated receptive field of a
cortical neuron by moving a single line in a particular orientation across the
visual field of a monkey’s eye. (This form of mapping had been the basis of the
earlier interpretation that cells showing such receptive fields were “line
detectors.”) The DeValoises went on to widen the line they presented to the
monkey until it was no longer a line but was now a rectangle: The amount of the
cell’s response did not change! The cell that had been described as a line detector
could not discriminate—could not resolve the difference—between a line and a
rectangle.
In this same study, the experimenters next moved gratings with the same
orientation as that of the single line across the visual field of the monkey. These
gratings included a variety of bar widths and spacing (various spatial
frequencies). Using these stimuli, they found that the receptive field of the cell
was tuned to a limited bandwidth of spatial frequencies. Repeating the
experiment, mapping different cortical cells, they found that different cells
responded not only to different orientations of a stimulus, as had been found
previously, but that they were tuned to different spatial frequencies: the
ensemble of cortical cells had failed to discriminate between a line and a
rectangle but had excellent resolving power in distinguishing between band
widths of frequencies.
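A simplified model reproduces the gist of this result (my own construction, not the DeValoises' analysis; all numbers are illustrative): a receptive field shaped like a windowed oscillation responds almost equally to bars of quite different widths, yet is sharply tuned to grating frequency.

```python
import numpy as np

x = np.linspace(-4, 4, 4001)
dx = x[1] - x[0]
rf = np.exp(-x**2 / 2) * np.cos(2 * np.pi * x)   # tuned near 1 cycle/unit

def response(stimulus):
    return abs(np.sum(rf * stimulus) * dx)       # summed drive to the cell

bar_widths = [0.2, 0.4, 0.6, 0.8]
bars = [response((np.abs(x) < w / 2).astype(float)) for w in bar_widths]

grating_freqs = [0.6, 0.8, 1.0, 1.2, 1.4]
gratings = [response(np.cos(2 * np.pi * f * x)) for f in grating_freqs]

print("bars:    ", np.round(bars, 2))      # broad, shallow tuning to width
print("gratings:", np.round(gratings, 2))  # sharp peak at 1 cycle/unit
```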
[Figure 19: The missing fundamental. Note that the patterns do not change; the bottom pattern, as you view it, is formed with the fundamental missing.]
For the second set of experiments the DeValoises elicited a receptive field
by moving a set of parallel lines (a grating) across the monkey’s receptive field.
As noted earlier, one of the characteristics of such fields is their specificity to the
orientation of the stimulus: a change in the orientation of the lines will engage
other cells, but the original cell becomes unresponsive when the orientation of
the grating is changed.
Following their identification of the orientation selectivity of the cell, the
DeValoises changed their original stimulus to a plaid. The pattern of the original
parallel lines was now crossed by perpendicular lines. According to the line
detector view, the cell should still be “detecting” the original lines, since they
were still present in their original orientation. But the cell no longer responded as
it had done. To activate the cell, the plaid had to be rotated.
The experimenters predicted the amount of rotation necessary to activate
the cell by scanning each plaid pattern, then performing a frequency transform
on the scan using a computer program. Each cell was responding to the
frequency transform of the plaid that activated it, not detecting the individual
lines making up the plaid. Forty-four experiments were performed, and in 42 of
them the predicted amount of rotation came within two minutes of arc; in the
other two experiments the results came within one degree of predicting the
amount of rotation necessary to make the cell respond. Russell DeValois, a rather
cautious and reserved person, stated in presenting these data at the meeting of
the American Psychological Association that he had never before had such a
clear experimental result. I was delighted.
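The prediction procedure can be imitated in a toy form (my construction; the checkerboard case echoes the figure captions below): scan a pattern, take its two-dimensional frequency transform, and read off the orientation of the dominant, fundamental component. For a checkerboard the edges run at 0 and 90 degrees, yet the Fourier fundamental lies on the diagonal.

```python
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]

def fundamental_orientation(image):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    spectrum[n // 2, n // 2] = 0.0               # discard the DC term
    ky, kx = np.unravel_index(spectrum.argmax(), spectrum.shape)
    return np.degrees(np.arctan2(ky - n // 2, kx - n // 2)) % 180

grating = np.cos(2 * np.pi * 8 * x)              # vertical stripes
checker = (np.sign(np.cos(2 * np.pi * 8 * x)) *
           np.sign(np.cos(2 * np.pi * 8 * y)))   # crossed square waves

print(fundamental_orientation(grating))  # ~0: fundamental matches the edges
print(fundamental_orientation(checker))  # ~45: fundamental on the diagonal,
                                         # though every edge is at 0 or 90
```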
[Figure 20: Tuning functions for two representative striate cortex cells. Plotted are the sensitivity functions for bars of various widths (squares) and sinusoidal gratings of various spatial frequencies (circles). Note that both cells are much more narrowly tuned for grating frequency than for bar width. (From Albrecht et al., 1980, copyright 1980, AAAS. Reprinted by permission.)]
[Figure 21: Orientation tuning of a cat simple cell to a grating (squares and solid lines) and to checkerboards of various length/width ratios. In A the responses are plotted as a function of the edge orientation; in B, as a function of the orientation of the Fourier fundamental of each pattern. It can be seen that the Fourier fundamental, but not the edge orientation, specifies the cell’s orientation tuning. (From K. K. DeValois et al., 1979. Reprinted with permission.)]
[Figure 24: The receptive fields of seven ganglion cell types, arranged serially]
The use of both the harmonic and the vector descriptions of our data reflects
the difference between visualizing the deep and surface structures of brain
processing. Harmonic descriptions richly demonstrate the holoscape of receptive
field properties, the patches—the nodes—that make up the neuro-nodal web of
small-fibered processing. The vector descriptions of our data describe the
sampling by large-fibered axons of the deep process.
In Summary
The feature detection approach to studying how our brain physiology
enables our perceptions outlines a Euclidian domain to be explored. This
approach, though initially exciting to the neuroscience community, has been
found wanting: research in my laboratory and that of others showed that the
receptive fields of cells in the brain’s visual cortex are not limited to detecting
one or another sensory stimulus.
Instead, research in my laboratory, and that of others, showed that a feature,
relevant to a particular situation, is identified by being extracted from a set of
available features.
The frequency approach, based on recording the spectral dimensions of arcs
of excitation subtended over the curvature of the retina, is essentially
Pythagorean. Deeper insights into complex processing are obtained, insights that
can provide the basis for identifying features.
Neither approach discerns the objects that populate the world we navigate,
the subject of Chapter 6.
Navigating Our World
Chapter 5
From Receptor to Cortex and Back Again
If you throw two small stones at the same time on a sheet of motionless
water at some distance from each other, you will observe that around
the two percussions numerous separate circles are formed; these will
meet as they increase in size and then penetrate and intersect each
other, all the while retaining as their respective centers the spots struck
by the stones. And the reason for this is that the water, though
apparently moving, does not leave its original position. . . . [This] can
be described as a tremor rather than a movement. In order to
understand better what I mean, watch the blades of straw that because
of their lightness float on the water, and observe how they do not
depart from their original positions in spite of the waves underneath
them.
Just as the stone thrown in the water becomes the center and causes
various circles, sound spreads in circles in the air. Thus every body
placed in the luminous air spreads out in circles and fills the
surrounding space with infinite likeness of itself and appears all in all
and all in every part.
—Leonardo da Vinci, Optics
Enough has now been said to prove the general law of perception,
which is this, that whilst part of what we perceive comes through our
senses from the objects before us another part (and it may be the
larger part) always comes out of our own head.
—William James, Principles of Psychology, 1890
Lenses: How Do We Come to Know What Is Out
There?
In the early 1980s the quantum physicist David Bohm pointed out that if
humankind did not have telescopes with lenses, the universe would appear to us
as a holographic blur, an emptiness of all forms such as objects. Only recently
have cosmologists begun to arrive at a similar view. String theoreticians, the
mathematician Roger Penrose and Stephen Hawking (e.g., in The Universe in a
Nutshell) have, for their own reasons, begun to suspect that Bohm was correct, at
least for exploring and explaining what we can about the microstructure of the
universe at its origin and horizon. But the horizon itself is still a spatial concept,
whereas Bohm had disposed of space entirely as a concept in his lens-less order.
With respect to the brain, I have extended Bohm’s insight to point out that,
were it not for the lenses of our eyes and similar lens-like properties of our other
senses, the world within which we navigate would appear to us as if we were in
a dense fog—a holographic blur, an emptiness within which all form seems to
have disappeared. As mentioned in Chapter 2, we have a great deal of trouble
envisioning such a situation, which was discovered mathematically in 1948 by
Dennis Gabor in his effort to enhance the resolution of electron microscopy.
Today, thanks to the pioneering work of Emmet Leith in the early 1960s, we now
have a palpable realization of the holographic process using laser optics.
Holography
Leith’s procedure was to shine a beam of coherent (laser) light through a
half-silvered angled mirror that allowed some of the beam to pass straight
through the mirror, and some of it to be reflected from its surface. The reflected
portion of the laser light was beamed at right angles to become patterned by an
object. The reflected and transmitted beams were then collected on a
photographic plate, forming a spectrum made by the intersections of the two
beams. Wherever these intersections occur, their amplitudes (heights) are either
reinforced or diminished depending on the patterns “encoded” in the beams. A
simple model of the intersections of the beams can be visualized as being very
much like the intersections of two sets of circular waves made by dropping two
pebbles in a pond, as described by Leonardo da Vinci.
When captured on film, the holographic process has several important
characteristics that help us understand sensory processing and the reconstruction
of our experience from memory. First, when we shine one of the two beams onto
the holographic film, the patterns “encoded” in the other beam can be
reconstructed. Thus, when one of the beams has been reflected from an object,
that object can again be viewed. If both beams are reflected from objects, the
image of either object can be retrieved when we illuminate the other.
An additional feature of holography is that one can retrievably store huge
amounts of data on film simply by repeating the process with slight changes in
the angle of the beams or the frequency of their waves (much as when one
changes channels on a television set). The entire contents of the Library of
Congress can be stored in 1 cubic centimeter in this manner.
Shortly after my introduction to Emmet Leith’s demonstration of
holography, I was invited to a physics meeting held in a Buddhist enclave in the
San Francisco Bay area. I had noted that the only processing feature that all
perceptual scientists found to be non-controversial was that the biconvex lens of
the eye performed a Fourier transform on the radiant energy being processed.
Geoffrey Chew, director of the Department of Physics at the University of
California at Berkeley, began the conference with a tutorial on the Fourier
relationship and its importance in quantum physics. He stated that any space-
time pattern that we observe can be transformed into spectra that are composed
of the intersections among waveforms differing in frequency, amplitude, and
their relation to one another, much as Leonardo da Vinci had observed.
My appetite for understanding the Fourier transformation was whetted, and it
was amply rewarded.
Fourier’s Discovery
Jean Baptiste Joseph Fourier, a mathematical physicist working at the
beginning of the 19th century, did an extensive study of heat, that is, infrared
radiation just below the red end of the visible spectrum. The subject had military
significance (and thus support) because it was known that various patterns of
heat could differently affect the bore of a cannon. Shortly thereafter, the field of
inquiry called “thermo-dynamics” became established to measure the dissipation
of heat from steam engines: the amount of work, and the efficiency (the amount
of work minus the dissipation in heat) with which an engine did the work would
soon be major interests for engineers and physicists. The invention of various
scales that we now know as “thermo-meters”—Fahrenheit, Celsius and Kelvin—
resulted, but what interested Fourier was a way to measure not only the amount
of heat but also its spectrum. The technique he came up
with has been discovered repeatedly, only to have slipped away to be
rediscovered again in a different context. We met this technique when we
discussed the Pythagoreans, but both Fourier and I had to rediscover it for
ourselves. The technique is simple, but if Fourier and other French
mathematicians had difficulty with it, it may take a bit of patience on your part
to understand it.
What is involved is looking at spectra in two very different ways: as the
intersection among wave fronts and as mapped into a recurring circle.
To analyze the spectrum that makes up white light, we use a prism. What
Fourier came up with to analyze the spectrum of heat can be thought of as
analyzing spectra with a “clock.” The oscillations of molecules that form heat
can be displayed in two ways: 1) an up-and-down motion (a wave) extended
over space and time, or 2) a motion that goes round and round in a circle as on a
clockface that keeps track of the daily cycle of light and darkness.
Fourier, as a mathematician, realized that he could specify any point on a
circle trigonometrically, that is by triangulating it. Thus he could specify any
point on the circumference of the circle, that is, any point on an oscillation by its
sine and cosine components. The internal angle of the triangle that specifies the
point is designated by the cosine; the external angle, by the sine.
When he plotted these sine and cosine components on the continuous up-
and-down motion extended over space and time, the sine wave and the cosine
wave were displaced from one another by a quarter of a cycle. Where the two waveforms
intersect, those points of intersection are points of interference (reinforcement or
diminution of the height of oscillations) just as at the intersection of Leonardo’s
waves created by pebbles dropped into a pool. The heights of the points of
intersection are numbers that can be used to calculate relations between the
oscillations making up the spectra of heat. In this manner Fourier digitized an
analogue signal.
Thinking back and forth between the two ways by which waveforms can be
plotted is not easy. It took Fourier, a brilliant mathematician, several decades to
work out the problem of how to represent the measure of heat in these two ways.
One of my “happier times” was reading, in the original French and in English
translations, Fourier’s stepwise groping toward this solution. My reason for
being so happy was that I had tried for myself to understand his solution for over
a decade and found not only that my solution was correct, but that Fourier had
gone through the steps toward the solution much as I had. Further, I have just
learned that quantum physicists are rediscovering this analytic “trick”: they
speak of “modular operators” which divide the processes under investigation
into clock-like repetitious segments. I asked my colleagues, “Like Fourier?” and
received a nod and smile: I had understood.
29. Representing Oscillations as Waveforms and as Circular Forms
a. Radiant energy (such as light and heat) becomes scattered, spectral and
holographic, as it is reflected from and refracted by objects in the universe.
b. The input to our eyes is (in)formed by this spectral, holographic radiant
energy.
c. The optics of our eyes (pupil and lens) then perform a Fourier
transformation to produce a space-time flow, a moving optical image much
as we experience it.
d. At the retina, the beginning of another transformation occurs: the space-
time optical image, having been somewhat diffracted by the gelatinous
composition of the lens, becomes re-transformed into a quantum-like
multidimensional process that is transmitted to the brain.
Gabor’s Quantum of Information
The processes of the retinal transformation, and those that follow, are not
accurately described by the Fourier equation. I discussed this issue with Dennis
Gabor over a lovely dinner in Paris one evening, and he was skeptical of
thinking of the brain process solely in Fourier terms: “It is something like the
Fourier but not quite.” At the time, Gabor was excited about showing how the
Fourier transform could be approached in stages. (I would use his insights later
in describing the stages of visual processing from retina to cortex.) But Gabor
could not say precisely why the brain process was “not quite” a Fourier
transform.
Gabor actually did have the answer—but he must have forgotten it or felt
that it did not apply to brain processing. I found, after several years’ search, that
the answer “to not quite a Fourier” had been given by Gabor himself when I
discovered that J. C. R. Licklider, then at the Massachusetts Institute of
Technology, had referred in the 1951 Stevens’ Handbook of Experimental
Psychology to a paper by Gabor regarding sensory processing.
Missing from our Paris dinner conversation was Gabor’s development of
his “elementary function,” which dated to an earlier interest. He had published
his paper in the journal of the IEE, a British institution. I searched for the paper in vain for several
years in the American IEEE until someone alerted me to the existence of the
British journal. The paper was a concise few pages which took me several
months to digest. But the effort was most worthwhile:
Before he had invented the Fourier holographic procedure, Gabor had
worked on the problem of telephone communication across the transatlantic
cable. He wanted to establish how far a message could be
compressed without losing its intelligibility. Whereas telegraphy depends on a
simple on-off signal such as the Morse code, in a telephone message that uses
language, intelligibility of the signal depends on transmission of the spectra that
compose speech. The communicated signals are initiated by vibrations produced
by speaking into the small microphones in the phone.
Gabor’s mathematical formulation consisted of what we now call a
“windowed” Fourier transform, that is, a “snapshot” (mathematically defined as
a Hilbert space) of the spectrum created by the Fourier transform which
otherwise would extend the spectral analysis to infinity. Neither the transatlantic
cable nor the brain’s communication pattern extends to infinity. The Gabor
function provides the means to place these communication patterns within
coordinates where the spectrum is represented on one axis and space-time on the
other. The Fourier transform lets us look at either space-time or spectrum; the
Gabor function provides us with the ability to analyze a communication within a
context of both space-time and spectrum.
Holographic Brainwaves?
At a recent conference I was accosted in a hallway by an attendee who
stated that he had great trouble figuring out what the brain waves were that
constituted the dendritic processing web. I was taken aback. I had not thought
about the problem for decades. I replied without hesitation, “Brain waves! There
aren’t any. All we have is oscillations between excitatory and inhibitory post-
(and pre-) synaptic potential changes.” I went on to explain about Gabor
“wavelets” as descriptive of dendritic fields and so forth.
It’s just like water. When the energy of a volley of nerve impulses reaches
the “shore” of the retina, the pre-synaptic activation forms a “wave-front.” But
when activity at the brain cortex is mapped we obtain wavelets, Gabor’s “quanta
of information.” The relationship between the interference patterns of spectra
and the frequencies of patterns of waves is straightforward, although I have had
problems in explaining the relationship to students in the computational
sciences. Waves occur in space-time. Spectra do not. Spectra result from the
interference among waves—and, by Fourier’s insightful technique, the locus of
interference can be specified as a number that indicates the “height” of the wave
at that point. The number is represented by a Gabor function (or other wavelet).
Frequency in space-time has been converted to spectral density in the transform
domain.
Wavelets are not instantaneous numbers. As their name implies, wavelets
have an onset slope and an offset slope. Think of a musical tone: you know a
little something about where it comes from and a little as to where it is leading.
Note that we are dealing with dendritic patches limited to one or a small
congregation of receptive fields—like the bobbing of a raft in the water beyond
the shore. There was a considerable effort made to show that there is no overall
Fourier transform that describes the brain’s relationship to a percept—but those
of us who initially proposed the holographic brain theory always described the
transformation as limited to a single or a small patch of receptive fields. By
definition, therefore, the Fourier transformation is constrained, windowed,
making the transformed brain event a wavelet.
We now need to apply these formulations to specific topics that have
plagued the study of perception—needlessly, I believe. The concept that the
optics of the eye create a two-dimensional “retinal image” is such a topic.
Gabor or Fourier?
The form of receptive fields is not static. As described in the next chapter,
receptive field properties are subject to top-down (cognitive) influences from
higher-order brain systems. Electrical stimulation of the inferior temporal cortex
changes the form of the receptive fields to enhance Gabor information
processing—as it occurs in communication processing systems. By contrast,
electrical stimulation of the prefrontal cortex changes the form of the receptive
fields to enhance Fourier image processing—as it is used in image processing
such as PET scans and fMRI, and in making correlations, such as in using the FFT.
The changes are brought about by altering the width of the inhibitory surround
of the receptive fields. Thus both Gabor and Fourier processes are achieved,
depending on whether communication and computation or imaging and
correlations are being addressed.
In Summary
In this chapter I summarize the hard evidence for why we should pay
attention to the transformations that our receptors and sensory channels make—
rather than thinking of these channels as simple throughputs. This evidence
provides a story far richer than the view of the formation of perception held by
so many scientists and philosophers.
Furthermore, when we separate the transformations produced by our lenses
and lens-like processes from those produced by our retinal processing, we can
demonstrate that the brain processing of visual perception is actually
multidimensional. For other senses I presented the evidence that “sensory
inhibition” serves a similar function. Thus there is no need for us to deal with the
commonly held view that brain processes need to supplement a two-dimensional
sensory input.
Finally, I place an emphasis on the data that show that perception has the
attribute of “projection” away from the immediate locus of the stimulation. We
often perceive objects and figures as “out there,” not at the surface of the
receptor stimulation.
A corollary of projection is introjection. I thus now venture to suggest that
introjection of perceptions makes up a good deal of what we call our “conscious
experience.”
Chapter 6
Of Objects and Images
Wherein I distinguish between objects and images and the brain systems that
process the distinction.
An Encounter
Some years ago, during an extended period of discussion and lecturing at
the University of Alberta, I was rather suddenly catapulted into a debate with the
eminent Oxford philosopher Gilbert Ryle. I had just returned to Alberta from a
week in my laboratory at Stanford to be greeted by my Canadian hosts with:
“You are just in time; Gilbert Ryle is scheduled to give a talk tomorrow
afternoon. Would you join him in discussing ‘The Prospects for Psychology as
an Experimental Science’?” I was terrified by this “honor,” but I couldn’t pass
up the opportunity. I had read Ryle’s publications and knew his memorable
phrase describing “mind” as “the ghost in the machine.” I thought, therefore, that
during our talk we would be safely enveloped in a behaviorist cocoon within
which Ryle and I could discuss how to address issues regarding verbal reports of
introspection. But I was in for a surprise.
Ryle claimed that there could not be any such thing as an experimental
science of psychology—that, in psychology, we had to restrict ourselves to
narrative descriptions of our observations and feelings. He used the example of
crossing a busy thoroughfare filled with automobiles, while holding a bicycle.
He detailed every step: how stepping off a curb had felt in his legs, how moving
the bicycle across cobblestones had felt in his hands, how awareness of on-
coming traffic had constrained his course, and so forth. His point was that a
naturalist approach, using introspection, was essential if we were to learn about
our psychological processes. I agreed, but added that we needed to supplement
the naturalist procedure with experimental analyses in order to understand “how”
our introspections were formed. Though I countered with solid results of many
experiments I had done on visual perception, I was no match for an Oxford don.
Oxonians are famed for their skill in debating. Ryle felt that my experimental
results were too far removed from experience. I had to agree that in
experimentation we lose some of the richness of tales of immediate events.
About ten minutes before our time was up, in desperation, I decided to do
just what Ryle was doing: I described my experimental procedure rather than its
results. I compared the laboratory and its apparatus to the roadway’s curbs and
pavements; I enumerated the experimental hazards that we had to overcome and
compared them with the cars on the road; and I compared the electrodes we were
using to Ryle’s bicycle. Ryle graciously agreed: he stated that he had never
thought of experiments in this fashion. I noted that neither had I, and thanked
him for having illuminated the issue, adding that I had learned a great deal about
the antecedents—the context within which experiments were performed. The
audience rose and cheered.
This encounter with Gilbert Ryle would prove to be an important step in my
becoming aware of the phenomenological roots of scientific observation and
experiment. Ryle’s “ghost” took on a new meaning for me; it is not to be
excised, as in the extremes of the behaviorist tradition; it is to be dealt with. I
would henceforth claim that: We go forth to encounter the ghost in the machine
and find that “the ghost is us.”
Object Constancy
As noted at the beginning of this chapter, immediately adjacent to—or in
some systems even overlapping—the primary sensory receiving areas of the
brain, are brain areas that produce movement of the sensing elements when
electrically excited. Adjacent to and to some extent overlapping this movement-
producing cortex are areas that are involved in making possible consistent
perceptions despite movement. My colleagues and I, as well as other
investigators, have shown these areas to be involved in perceiving the size and
color as well as the form of objects as consistent (called “size, color and object
constancy” in experimental psychology).
In our experiments we trained monkeys to respond to the smaller of two
objects irrespective of how far they were placed along an alley. The sides of the
alley were painted with horizontal stripes to accentuate its perceived distance.
The monkeys had to pull in the smaller of the objects in order to be able to open
a box containing a peanut. After the monkeys achieved better than 90% correct, I removed
cortex surrounding the primary visual receiving area (which included the
visuomotor area). The performance of the monkeys who had the cortical
removals fell to chance and never improved over the next 1000 trials. They
always “saw” the distant object as smaller. They could not correct for size
constancy on the basis of the distance cues present in the alley.
The Form(ing) of Objects
Our large-scale movements allow us to experience objects. Groundbreaking
research has been done by Gunnar Johanssen in Uppsala, Sweden, and James
Gibson at Cornell University in Ithaca, New York. Their research has shown
how objects can be formed by moving points, such as the pixels that we
earlier noted were produced by nystagmoid oscillations of the eyes. When such
points move randomly with respect to each other, we experience just that—
points moving at random. However, if we now group some of those points and
consistently move the group of points as a unit within the background of moving
points— Voilà! We experience an object. A film made by Johanssen and his
students in the 1970s demonstrates this beautifully. The resulting experience is
multidimensional and sufficiently discernible that, with a group of only a dozen
points describing each, the viewer can tell boy from girl in a complex folk dance.
Though some of the film is made from photographs of real objects,
computer programs generated most of it. I asked Johanssen whether he used the
Fourier transform to create these pictures. He did not know but said we could
ask his programmer. As we wandered down the hall, he introduced me to one
Jansen and two other Johanssens before we found the Johanssen who had created
the films. I asked my question. The programmer was pleased: “Of course, that’s
how we created the film!”
But the Johanssen film shows something else. When a group of points, as
on the rim of a wheel, is shown to move together around a point, the axle, our
perception of the moving points on the rim changes from a wave form to an
object: the “wheel.” In my laboratory we found, rather surprisingly, that we
could replace the “axle” by a “frame” to achieve the same effect: a “wheel.” In
experimental psychology this frame effect has been studied extensively and
found to be reciprocally induced by the pattern that is being framed.
This experimental result shows that movement per se does not give form to
objects. Rather, the movements must cohere, must become related to one another
in a very specific manner. Rodolfo R. Llinás provided me with the mathematical
expressions of this coherence in personal discussions and in his 2001 book, I of
the Vortex: From Neurons to Self. My application of his expressions to our
findings is that the Gabor functions that describe sensory input cohere by way of
covariation; that the initial coherence produced by movements among groups of
points is produced by way of contravariation (an elliptical form of covariation);
and that the final step in producing object constancy is the “symmetry group”
invariance produced by a merging of contravariation with covariation.
Symmetry Groups
Groups can be described mathematically in terms of symmetry. A
Norwegian mathematician, Sophus Lie, invented the group theory that pertains
to perception. The story of how this invention fared is another instance of slow
and halting progress in gaining the acceptance of scientific insights. As
described in the quotation at the beginning of this chapter, early in the 1880s
Hermann von Helmholtz, famous for his scientific work in the physiology of
vision and hearing, wrote to Henri Poincaré, one of the outstanding
mathematicians of the day. The question Helmholtz posed was “How can I
represent objects mathematically?” Poincaré replied, “Objects are relations, and
these relations can be mapped as groups.” Helmholtz followed Poincaré’s advice
and published a paper in which he used group theory to describe his
understanding of object perception. Shortly after the publication of this paper,
Lie wrote a letter to Poincaré stating that the kind of discrete group that
Helmholtz had used would not work in describing the perception of objects—
that he, Lie, had invented continuous group theory for just this purpose. Since
then, Lie groups have become a staple in the fields of engineering, physics and
chemistry and are used in everything from describing energy levels to explaining
spectra. But only recently have a few of us returned to applying Lie’s group
theory to the purpose for which he created it: to study perception.
My grasp of the power of Lie’s mathematics for object perception was
initiated at a UNESCO conference in Paris during the early 1970s. William
Hoffman delivered a talk on Lie groups, and I gave another on microelectrode
recordings in the visual system. We attended each other’s sessions, and we found
considerable overlap in our two presentations. Hoffman and I got together for
lunch and began a long and rewarding interaction.
Symmetry groups have important properties that are shown by objects.
These groups are patterns that remain constant over two axes of transformation.
The first axis assures that the group, the object, does not change as it moves
across space-time. The second axis centers on a fixed point or stable frame, and
assures that changes in size—radial expansion and contraction—do not distort
the pattern.
When it was time for the little overloaded boat to start back to the
mainland, a wind came up and nudged it onto a large rock that was
just below the surface of the water. We began, quite gently, to sink. . . .
All together we went slowly deeper into the water. The boatman
managed to put us ashore on a huge pile of rocks. . . .
The next day when I passed the island on a bus to Diyarbakir there,
tied to the rock as we had left it, was a very small, very waterlogged
toy boat in the distance, all alone.
—Mary Lee Settle, Turkish Reflections: A Biography of a Place, 1991
Navigating the Spaces We Perceive
Have you considered just how you navigate the spaces of your world?
Fortunately, in our everyday living, we don’t have to; our brain is fine-tuned to
do the job for us. Only when we encounter something exceptional do we need to
pay attention. When we are climbing stairs, we see the stairs with our eyes. The
space seen is called “oculocentric” because it centers on our eyeballs, but it is
our body that is actually climbing the stairs. This body-centered space is called
“egocentric.” The stairs inhabit their own space: some steps may be higher than
others; some are wider, some may be curved. Stairs are objects, and each has its
own “object-centered” space. Space, as we navigate it, is perceived as three-
dimensional. Since we are moving within it, we must include time as a fourth
dimension. We run up the stairs, navigating these oculocentric, egocentric, and
object-centered spaces without hesitation. The brain must be simultaneously
computing within 12 separate coordinates: four dimensions for each of the three spaces.
Cosmologists are puzzling over the multidimensionality of “string” and “brane”
theory: why aren’t scientists heeding the 12 navigational coordinates of brain
theory?
Now imagine that the stairs are moving. You are in a hurry, so you climb
these escalator stairs. You turn the corner and start climbing the next flight of
stairs. Something is strange. The stairs seem to be moving, but they are not; they
are stationary. You stumble, stop, grab the railing, and then proceed cautiously.
Your oculocentric (visual) cues still indicate that you are moving, but your
egocentric (body) cues definitely shout “no.” The brain’s computations have
been disturbed: oculocentric and egocentric cues are no longer meshed into a
smoothly operating navigational space. It is this kind of separation—technically
called “dissociation”—that alerts us to the possibility that the three types of
space may be constructed by different systems of the brain.
Mirrors
How we perceive becomes especially interesting, almost weird, when we
observe ourselves and the world we navigate in mirrors. Like author and
mathematician Lewis Carroll in the experiments he often demonstrated to
children, I have all my life been puzzled by mirrors. How is it that within the
world of mirrors right and left are reversed for us but up and down are not? How is
it that when I observe myself in a mirror, my mirror image is “in the mirror,” but
the scene behind me is mirrored “behind the mirror?”
Johannes Kepler proffered an answer to these questions in 1604. Kepler
noted that a point on the surface of a scene reflects light that becomes spread
over the surface of the mirror in such a way that when viewed by the eye, it
forms a cone behind the mirror symmetrical to the point. His diagram illustrates
this early insight.
I realized the power of the virtual image while reading about Kepler’s
contribution, lying on a bed in a hotel room. The wall at the side of the bed was
covered by a mirror—and “behind” it was another whole bedroom! An even
more potent and totally confusing experience occurred while I was visiting a
famous “hall of mirrors” in Lucerne, Switzerland. Speak of virtual horizons! I
saw my colleagues and myself repeated so many times at so many different
distances and in so many different directions that I would certainly have become
lost in the labyrinth of images had I not held onto an arm of one of my
companions.
On this occasion, I was given a reasonable “explanation” of the how of
mirror images. I was told that there had been an explanation presented in
Scientific American some time ago which was summarized thus: ordinarily we
have learned to look forward from back to front—so when we are looking in a
mirror we see what is in front of us, behind the mirror, as if we are coming at the
virtual image from behind it.
37. Kepler’s diagram of the reflection of a point (S) on a plane mirror (M—M’) as seen (I) as a virtual
image by a person’s eyes
Support for this explanation comes from an experience many of us have had
when we have tried to teach someone else how to tie a necktie (especially a bow
tie). If you face the person and try to tie the tie, the task is awkward if not
impossible. But if you stand behind the person, look forward into a mirror, and
tie his tie, the job is easy.
The above observations were made on a plane mirror. A concave mirror
presents another wonderful phenomenon. I have a 7X concave mirror in my
bathroom. Looking at my face from a close distance, I see its reflection “in the
mirror.” But one day I noted that the bathroom’s ceiling light was floating in
midair, suspended over the bathroom sink. Investigating the source of the image
showed that when I covered the mirror with my hand the image disappeared.
The examples presented to us by mirrors show that what we see depends on
how we see. The next sections describe some of what we now know about the
“how” of seeing.
38. Image (I) produced from scene (S) reflected by a concave mirror
We then removed a part of each monkey’s cortex just in front of the primary
visual areas that receive the retinal input and those that regulate the oscillatory
movements of the eyes. The monkeys behaved normally on all visual tests
except one: now they would pull in the smaller box when it was close while the
larger box was far down the alley. At that distance, the larger box made a smaller
optical image on the monkeys’ retina. The intact monkeys were able to
compensate for distance; the monkeys that had been operated upon could not.
When I tell my students about this experiment, I point out that their heads
look the same size to me whether they sit in the front or the back row of the
room. The students in front did not seem to suffer from hydrocephalus (water-
on-the-brain); the ones in back most likely were not microcephalic idiots.
Constancy in our perception holds within a certain range, a range that varies
as a function of the distance being observed and the experience of the observer. The
range at which a particular constancy ceases forms a horizon, a background
against which the perception becomes recalibrated. A classroom provides such a
“horizon.” But out on a highway, automobiles all look pretty much the same size
over a longer range in terms of the distance in our field of vision. What
determines the horizon in this case is the layout of the road and its surroundings.
On a straight road in rolling hills, where distances are truncated by the lay of the
land, cars may look small compared to what they might look like at the same
distance on a U curve, where the entire road is visible. When there is an
unfamiliar gap in visibility, multiple horizons are introduced. The placement of
horizons where constancy breaks down is a function of the context within which
an observation is occurring.
Professor Patrick Heelan, who helps teach my Georgetown class at times,
has pointed out that when painters try to depict three dimensions onto a two-
dimensional surface, size constancy is limited to a certain range. Beyond that
range, other laws of perspective seem to take hold. Heelan is a theoretical
physicist and philosopher of science whose specialty is identifying how painters
construct perspectives that allow us to “see” three dimensions when a scene is
displayed on a two-dimensional surface.
On the basis of his 30-year study of paintings and of how we perceive the
cosmos, Heelan has concluded that overall visual space appears to us as
hyperbolic. For instance, as I mentioned earlier, when we observe objects by
looking into a concave mirror—the kind that has a magnification of 7X—objects
close to the mirror are magnified in it, but things far away seem to be projected
in front of the mirror between ourselves and it—and those objects appear to us
smaller than they are when we look at them directly.
Leonardo da Vinci observed that the optics of the eye and the surface of the
retina do not compose a flat plane but a hemisphere; thus the physiological
process that forms our perceptions cannot be a two-dimensional Euclidian one.
Rather, this physiological process forms the concavity of a perceived
configuration of hyperbolic space. The amount of concavity changes depending
upon the distance that is being observed: the closer we focus, the more the lens
of the eye bulges, and thus the greater the concavity. As our focus becomes
more distant, the lens flattens somewhat, which results in less concavity. These
changes are like changing the magnification of the concave mirror from 7X to
5X and 3X.
Heelan has proposed that a description of cosmic space as hyperbolic
accounts for a good part of the “moon illusion,” which occurs also when we look
at the sun. When the moon, or sun, is at the earth’s horizon, it appears
enormous. Atmospheric haze can add a bit more to this illusion. But as the moon
rises, away from our experienced earth-space defined by its horizon, it appears to
be much smaller. This change in perception is identical to what we experience
when we look at the 7X concave mirror.
Furthermore, when my collaborator Eloise Carlton modeled the brain
process involved in viewing the rotation of figures developed by Roger Shepard,
my colleague at Stanford, the process consisted of mapping a geodesic path—
the shortest route around a sphere—as in crossing the Atlantic Ocean from New
York to London via Iceland. Mathematically, this is represented by a Hilbert
space of geodesics—a Poincaré group—which is similar to the mathematics
used by Heisenberg to describe quantum structures in physics and by Gabor in
describing the minimum uncertainty in telephone communications. It is the
description of processing by receptive fields that we have become familiar with
throughout these chapters.
39. Diagram of a horizontal plane at eye level with two orthogonals AB and CD, to the frontal plane AOC
(O is the observer): A’B’ and C’D’ are the visual transforms of AB and CD in a finite hyperbolic model of
visual space. M’N’ marks the transition between the near and the distant zones. Contours are modeled on
computed values. The mapping is not (and cannot be made) isomorphic with the hyperbolic shapes, but
some size and distance relationships are preserved.
This and similar observations raise the issue of how much of what we
perceive is due to our perceptual apparatus and how much is “really out there.”
Ernst Mach, the influential early 20th-century Viennese physicist, mathematician
and philosopher, whose father was an astronomer, worried all his life about the
problem of how much of the apparent size of the planets, stars, and other visual
phenomena was attributable to the workings of our perceptual apparatus and
how much was “real.” Mach’s legacy has informed the visual arts and
philosophers dealing with perception to this day.
40. Frontal planes: composite diagram in four sectors: 1. physical frontal plane (FP); 2. visual transform of
FP at distance d (near zone: convex); and 4. visual transform of FP at distance 10d (distant zone: concave).
d is the distance to the true point for the hyperbolic model. Contours are modeled on computed values for a
finite hyperbolic space.
41. A. Diameter of the pattern subtends an angle of 90 degrees with the viewer. B. The diagram is not a set of
images to be read visually, but a set of mappings to be interpreted with the aid of the transformation
equations: the mappings are not (and cannot be made) isomorphic with the hyperbolic shapes, but some
size relationships are preserved in frontal planes (or what physically are frontal planes).
In keeping with the outline presented in the Preface, the perceptions we
hold are a consequence of the territory we explore. Within our yards, and within
the context of portraying our living spaces onto a two-dimensional surface,
Euclidian propositions suffice. When we come to perceive figures that rotate in
space and time, higher-order descriptions become necessary. It is these higher-
order descriptions that hold when we look at what is going on in the brain. The
retina of our eye is curved, and this curvature is projected onto the cortical
surface. Thus, any process in the sensory and brain systems that allows us to
perceive an object’s movement must be a geodesic: a path around a globe.
Wade Marshall came to the laboratory to see for himself what we were
observing. He felt that the “artifact” might be due to a spread of current within
the cortex, starting from the sensory cortex to include the motor cortex.
Together, we used a technique called “spreading cortical depression” that is
induced by gently stroking the cortex with a hemostat. Using this procedure,
within a few seconds any neighboring cortically originated activity ceases. The
electrical stimulation of the sciatic nerve continued to produce a local brain
response, though the response was now of considerably lower amplitude than it
had been before stroking the cortex. The remaining response was subcortically
initiated, carried by nerve fibers reaching the cortex; before our stroking, the
cortex had been amplifying the response.
So, I asked: If the input from the sciatic nerve is real, what might be the
pathways by which its signals reach the cortex? Larry Kruger decided that this
would provide him an exciting thesis topic. So, I surgically removed a monkey’s
post-central cortex—the part of the brain known to receive the sensory input
from the sciatic nerve—and therefore a possible way station to the adjacent pre-
central motor cortex. The “artifact” was still there.
Next I removed a hemisphere of the monkey’s cerebellum, a portion of the
primate brain that was known to receive a sciatic nerve input and to provide a
pathway to the pre-central motor cortex. Again, there was no change in the
“artifact”—the spikes were still there. Finally, I surgically removed both the
post-central cortex and the cerebellar hemisphere. Still no change in the response
in the motor cortex to sciatic stimulation!
During these experiments, we discovered another puzzling fact. It seemed
plausible that some input to the motor cortex might originate in the muscles. But
we found, in addition, that stimulation of the sciatic nerve fibers originating in
the skin also produced a response in the motor cortex.
Larry Kruger settled the entire issue during his thesis research by tracing
the sciatic input directly to the motor cortex via the same paths in the brain stem
that carried the signals to the post-central sensory cortex.
The research that I’ve summarized in these few paragraphs took more than
five years to complete. So we had plenty of time to think about our findings.
What Must Be
What we call the motor cortex is really a sensory cortex for movement. This
conclusion fits another anatomical fact that has been ignored by most brain and
behavioral scientists. Notable exceptions were neurosurgeon Wilder Penfield and
neuroscientist Warren McCulloch, whose maps of the sensory cortex did include
the pre-central region. Their reason was that the anatomical input to the motor
cortex arrives there via the dorsal thalamus (dorsal is Latin for “back”; thalamus
comes from the Greek for “chamber”), which is the halfway house of all sensory input to the entire
brain cortex except the input for smell. The dorsal thalamus is an extension of
the dorsal horn of our spinal cord, the origin of input fibers from our receptors,
not of output fibers to our muscles.
Nineteenth-century European scientists had labeled this part of the cortex
“motor” when they discovered that electrically stimulating a part of the cortex
produced movements of different parts of the subject’s body. When they mapped
the location of the cortical origins of all these stimulus-produced movements, the
result was a distorted picture of the body, a picture which exaggerated the parts
with which we make fine movements, such as the fingers and tongue. Such a
map is referred to in texts as a “homunculus” (a “little human”).
While Malis, Kruger and I at Yale were looking at the input to the cortex
surrounding the central fissure, Harvard neuro-anatomist L. Leksell was looking
at the output from this cortex. This output had been thought to originate
exclusively in the pre-central motor cortex. Now Leksell found this output to
originate as well from the post-central sensory cortex. Those fibers that originate
in our pre-central cortex were the largest, but the origin of the output as a whole
was not limited to these large fibers.
For me, all of these new discoveries about the input to and the output from
the cortex surrounding the central fissure called into question the prevailing
thesis of the time: that the brain was composed of separate input and output
systems. Today, this thesis is still alive and popularly embodied in what is
termed a “perception-action cycle.” However, in our nervous system, input and
output are in fact meshed. The evidence for this comes not only from the brain
research conducted in my laboratory, as detailed above, but also from discoveries
centered on the spinal cord.
The Reflex
During the early part of the 19th century, two physiologists, one English,
Charles Bell, and the other French, François Magendie, conducted a series of
experiments on dogs, cutting the roots of various nerves such as the sciatic
nerve. These nerve roots are located at the point where the nerves connect to the
spinal cord. Some of the fibers making up these nerves are rooted in the dorsal,
or back, region of the spinal cord, the others in the ventral, or ”stomach,” region.
When the dorsal fibers were cut at the root, the dogs had no difficulty in
moving about but did not feel anything when the experimenters pinched the skin
of their legs with tweezers. By contrast, when the experimenters cut the ventral
roots, the dogs’ limbs were paralyzed, though the dogs responded normally to
being pinched. The results of these experiments are famously enshrined in
neuroscientific annals as the “Law of Bell and Magendie.”
In the decades just before and after the turn of the 19th to 20th centuries, Sir
Charles Sherrington based his explanation of his studies of the spinal cord of
frogs on the Bell and Magendie law. It was Sherrington who developed the idea
of a reflex—an “input-output arc,” as he termed it—as the basis for all our
nervous system processing. Sherrington clearly and repeatedly stated that what
he called an “arc” was a “fiction,” a metaphor that allowed us to understand how
the nervous system worked: Reflexes could combine in a variety of ways. For
instance, when we flex an arm, contraction of the flexion reflex must be matched
simultaneously by a relaxation of the extension reflex; when we carry a suitcase
both flexors and extensors are contracted.
These ideas are as fresh and useful today as they were a hundred years ago
despite the fact that Sherrington’s fiction, of an “arc” as the basis for the reflex,
has had to be substantially revised.
Unfortunately, however, today’s textbooks have not kept up with these
necessary revisions. On several occasions, I have written up these revisions at
the request of editors, only to have the publishers of the texts tell the editors that
the revisions are too complex for students to understand. (I have wondered why
publishers of physics texts don’t complain that ideas, such as Richard Feynman’s
brilliant lectures on quantum electrodynamics, are too complex for students to
understand.)
Sherrington’s “fiction” of a reflex as a unit of analysis of behavior was the
staple of the behaviorist period in psychology. My quarrel is not with the concept
of a reflex but with the identification of “reflex” with its composition as an arc.
The organization of the reflex, and therefore of behavior, is more akin to that of
a controllable thermostat. The evidence for this stems from research performed
in the 1950s that I will describe shortly. The change in conception was from the
metaphor of an “arc,” where behavior is directly controlled by an input, to a
metaphor of a “controllable thermostat,” where behavior is controlled by the
operations of an organism in order to fulfill an aim. This change has fundamental
consequences, not only for our understanding of the brain and of our behavior,
but also for society. Doling out charities is not nearly as effective as educating a
people, setting their aims to achieve competence.
43. Summary diagram of the gamma (γ) motor neuron system. Redrawn after Thompson, 1967.
Biological Control Systems
The history of the invention of the thermostat helps make the biological
control systems more understandable. During the 1920s and 1930s, Walter
Cannon at Harvard University, on the basis of his research, had conceptualized
the regulation of the body’s metabolic functions in terms of a steady state that
fluctuated around a baseline. We get hungry, eat, become satiated, metabolize
what we have eaten, and become hungry once more. This cycle is anchored on a
base which later research showed to be the organism’s basal temperature. He
called this regulatory process “homeostasis.” A decade later, during World War
II, Norbert Wiener, who had worked with Cannon, further developed this idea at
the Massachusetts Institute of Technology, where he used it to control a tracking
device that could home in on aircraft.
After the war, engineers at Honeywell applied the same concept to track
temperature: they created the thermostatic control of heating and cooling
devices. In the 1950s, the concept of a thermostat returned to biology, becoming
the model for sensory as well as metabolic processing in organisms.
In the process of conducting experiments to find out where such tracking—
such controls on inputs—originate, neuroscientists found that, except in vision,
control can be initiated not only from structures in our brain stem but also by
electrical stimulation of higher-order systems within the brain.
Because our visual system is so important to navigation, I set my Stanford
laboratory to work to determine whether or not our brain controls our visual
receptors as had been demonstrated to be the case for muscle receptors, touch,
smell and hearing. My postdoctoral student Nico Spinelli and I tested this
possibility by implanting a micro-electrode in the optic nerve of cats, and then,
at a later date, stimulating the footpads of the awake cats, using the end of a
pencil. One of us would stimulate the cat’s paw while the other monitored the
electrical activity recorded from the cat’s optic nerve. We obtained excellent
responses as shown on our oscilloscope and computer recordings.
We next went on to use auditory stimuli consisting of clicking sounds.
Again we were able to record excellent responses from the cat’s optic nerve. We
were surprised to find that the responses in the optic nerve arriving from tactile
and auditory stimulation came at practically the same time as those we initiated
by a dim flash. This is because the input to the brain from tactile and
auditory stimulation travels much faster than visual stimuli, which have to be
processed by the retina. There is so much processing going on in the fine fibers
of the retina (recall that there is no rapid nerve-impulse transmission within the
retina) that the delay before a visual stimulus reaches the optic nerve equals
the delay incurred by tactile and auditory signals being routed through the
brain. We wondered whether this coincidence in timing might aid lip reading in
the hearing impaired.
When the cat went to sleep, the responses recorded from the optic nerve
ceased. Not only being awake but also not being distracted turned out to be
critical for obtaining the responses. A postdoctoral student, Lauren Gerbrandt,
came to my Stanford laboratory and, excited by what we had found, tried for
over a year to replicate our findings in monkeys. Sometimes he obtained the
responses from the optic nerve; other times he did not. We were upset and
frustrated at not being able to replicate an experimental result that we had
already published.
I urged Gerbrandt to choose some other experiment so that he would have
publishable findings before his tenure in the laboratory was up. He asked if it
would be OK to continue the visual experiments in the evening, and I offered to
assist him. He got his wife to help him bring the monkeys from their home cages
to the laboratory. The results of the experiments were immediately rewarding.
He obtained excellent responses every time, results that I came to witness. But
this run of good luck did not last. His wife became bored just waiting around for
him and asked if it would be all right for her to bring a friend. Now the responses
from the optic nerve disappeared. I came in to witness what was happening and
after a few nights the answer became clear to us: We asked the women, who had
been chatting off and on, to be quiet. The responses returned. Having established
the cause of the variability in the experimental results, we moved the experiment
to the room where the original daytime experiments had been performed. Now
we found good optic nerve responses whenever the adjacent hall was quiet.
When someone in high heels went by, or whenever there was any commotion,
the responses disappeared. We had found the cause of the “artifact.”
By recording the electrical activity of subcortical structures during the
experiment, we were then able to show that the distracting noise short-circuited
cortical control at the level of the thalamus, the halfway house of the visual input
to the cortex. Thus, we had not only demonstrated cortical control over visual
input but had also shown the thalamus to be a necessary brain component
involved in the control of the feedback circuitry.
44. The TOTE servomechanism modified to include feedforward. Note the parallel processing feature of the
revised TOTE.
The Act
I came to understand the distinction between movement and action as a
result of these experiments with monkeys, experiments that were undertaken at
the same time as those in which we stimulated the sciatic nerve. The motor
cortex turned out to be a sensory cortex not just for movement but for action—and an act turned out
to be more of a sensory accomplishment than a particular set of movements.
Behaviorally, the motor cortex is not limited to programming muscle
contractions or movements. This cortex is involved in making actions possible.
An act is a target, an attractor toward which a movement is aimed. I illustrate this
for my classes by closing my eyes, putting a piece of chalk between my teeth,
and writing with it on the blackboard. It matters not whether I use my right hand
or my left, my feet or my teeth—processes in my motor cortex enable what
becomes written on the board. What is written is an achievement that must be
imaged to become activated. Such “images of achievement” are the first steps
leading to action, whether literally, as in walking, or at a higher, more complex
level as in speaking or writing.
The answer to the initial question that started my research on the motor
cortex can be summarized as follows: anatomically, muscles are represented in
the motor cortex; physiologically, movements around joints are represented
there. Behaviorally, our cortical formations operate to facilitate our actions.
How?
For ethologists—zoologists studying behavior, as for instance Konrad
Lorenz and Niko Tinbergen—the study of behavior is often the study of
particular movements, the fixed patterns of a sequence of muscle contractions
made by their subjects. Their interest in neuroscience has therefore been on the
spinal and brain stem structures that organize movements.
By contrast, my interest, stemming from the experiments on the motor
cortex, has been in line with the behaviorist tradition of psychology, where
behavior is defined as an action. An act is considered to be the environmental
outcome of a movement, or of a sequence of movements. B. F. (Fred) Skinner
remarked that the behavior of his pigeons is the record made when they pecked
for food presented to them according to a schedule. These records were mainly
made on paper, and Skinner defined behavior as the paper records that he took
home with him. They were records of responses, and Skinner was describing the
transactions among these responses: he described which patterning of responses
led to their repetition and which patterns led to the responses being dropped out.
The interesting question for me was: What brain processes are entailed in these
transactions?
An anecdote highlights my interest. The United Nations University hosted a
conference in Paris during the mid-1970s. Luria, Gabor and other luminaries,
including Skinner, participated. In his address, Skinner admitted that indeed he
had a theory, something that he had denied up to that time. He stated that his
theory was not a stimulus-response theory nor was it a stimulus-stimulus theory;
his theory was a response-reinforcement theory. Furthermore, he stated,
reinforcement was a process. I felt that he had given the best talk I’d ever heard
him give and went up to congratulate him. Skinner looked at me puzzled: “What
did I say that makes you so happy?” I replied, “That reinforcement is a process.
The process must be going on in the brain.” I pointed to my head. Skinner
laughed and said, “I guess I mustn’t say that again.” Of course he did. In 1989, a
year before he died, he wrote:
Reinforcement Revisited
Skinner’s contribution to the nature of reinforcement in learning theory has
not been fully assimilated by the scientific and scholarly communities: Skinner
and his pupils changed our conception of how learning occurs from being based
on pleasure and pain to a conception based on self-organization, that is, on the
consequences produced by the behavior itself. Pavlov’s classical conditioning,
Thorndike’s “law of effect,” Hull’s “drive reduction” and Sheffield’s “drive
induction”—even the idea that interest is some sort of drive—are all different
from what Skinner’s operant conditioning revealed. In my laboratory, a
colleague and I raised two kittens from birth to demonstrate the self-maintenance
and self-organizing aspects of behavior. We nursed one kitten with milk on its
mother’s breast; we raised a littermate on the breast of a non-lactating cat and
fed it through a stomach tube. After six months, there was practically no
difference in the rate of sucking between the two kittens. Mothers who have tried
to wean their infant from sucking on his thumb or on a pacifier might feel that
such an experimental demonstration was superfluous.
The change in conception of reinforcement from being based on pleasure
and pain to being based on performance per se is important for our
understanding of human learning. Rather than rewarding good behavior or
punishing bad behavior, operant behaviorism suggests that we “educate” (Latin
e-ducere, “lead” or “draw out”) a person’s abilities—that we nourish the
performances already achieved and allow these performances to grow by
leading, not pushing or pulling. In these situations, pleasure is attained by
achievement; achievement is not attained by way of pleasure. I did not have to
look at my score sheets to find out whether my young male monkeys were
learning a task: their erect penises heralded their improvement—and conversely,
when their performance was lagging, so were their penises.
My good students have used a technique that I used in college. We review
and reorganize our notes about once a month, incorporating what we have
learned that month into our previous material. A final revision before the final
exams relieves one of useless night-long cramming. I say useless because
crammed rote learning needs much rehearsal—as in learning multiplication
tables—to make a lasting impression. What has been learned by cramming is
well forgotten by the next week.
More to Be Explained
The results of my experiments on the motor cortex, and the conclusions
based on them, were published in the 1950s while I was still at Yale. Our results
were readily accepted and even acclaimed: John Fulton, in whose laboratory I
conducted these studies, called the paper “worthy of Hughlings Jackson,” the
noted English neurologist whose insights about the functions of the motor cortex
were derived from studying epileptic seizures.
My conclusions were that the removals of motor cortex had produced a
“scotoma of action.” In vision, a scotoma (from the Greek scotos, “darkness”) is
a dark or blind spot in the visual field. In Chapter 6 we reviewed the evidence
that some patients can navigate their blind visual field reasonably well despite
being consciously “blind” within that dark spot or field. The scotoma of action
produced by my removals of the motor cortex had a similar effect: the monkeys’
difficulty was restricted to a particular task, though they could still perform that
task with some loss of skill, as demonstrated in film records of their behavior
and by the longer time it took them to reach the peanut in the box. In my
experiments the “scotoma of action” referred to a particular task rather than to a
particular area of perceived space as in vision—and not to a difficulty with any
particular muscle group or a particular movement.
Today a great deal has been learned regarding the specific targeting of an
action. Much of the experimental and clinical research has been performed on
“visuomotor” achievements such as grasping an object. The brain cortex that has
been shown to be involved lies partly behind and partly in front of the cortex
surrounding the central fissure, the cortex involved in the experiments on
primates reviewed above. This may well be because in primates the central
fissure has moved laterally from its position in carnivores, splitting the
visuomotor related cortex into parietal and frontal parts.
45. (a) Subject in black costume with white tape. (b) Cinematograph of walking. Movement is from left to
right. The frequency is about 20 exposures per sec. Reprinted with permission from N. Bernstein, The Co-
ordination and Regulation of Movements. © 1967 Pergamon Press Ltd.
Eureka!
In Bernstein’s experiments, he had dressed people in black leotards and
filmed them against black backgrounds, performing tasks such as hammering a
nail, writing on a blackboard, or riding a bicycle. He had taped white stripes onto
the arms and legs of the experimental subjects. In later experiments, Bernstein’s
students (and, as previously discussed, Gunnar Johansson in Sweden) placed
white dots on the joints. After developing the film, only waveforms made by the
white tapes could be seen. Bernstein had used a Fourier procedure to analyze
these waveforms and was able to predict each subsequent action accurately!
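To make Bernstein’s procedure concrete, here is a minimal sketch in Python of the same idea (my own reconstruction, not his actual method; the two-component waveform and its frequencies are invented, and only the 20-frames-per-second film rate comes from the figure caption): a periodic joint trajectory is decomposed into its strongest Fourier components, and those components, being periodic, can be extended past the end of the filmed record to predict the next stretch of the movement.

```python
import numpy as np

# Hypothetical wrist-marker waveform sampled at 20 frames per second,
# the film rate given in Bernstein's figure caption. Two superposed
# oscillations stand in for the rhythmic components of, say, hammering.
fs = 20.0
t = np.arange(0, 4, 1 / fs)                      # four seconds of film
x = 1.0 * np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t)

# Fourier analysis of the filmed record.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
strongest = np.argsort(np.abs(X))[-2:]           # the two dominant components

# Prediction: periodic components, once fitted, extend beyond the record.
t_future = np.arange(4, 4.5, 1 / fs)
x_pred = sum(
    (2 / len(x)) * np.abs(X[k])
    * np.cos(2 * np.pi * freqs[k] * t_future + np.angle(X[k]))
    for k in strongest
)
x_true = 1.0 * np.sin(2 * np.pi * 1.5 * t_future) + 0.3 * np.sin(2 * np.pi * 3.0 * t_future)
print("max prediction error:", np.max(np.abs(x_pred - x_true)))  # ~0 for periodic motion
```

For a genuinely rhythmic skill the prediction error stays near zero: the waveform as a whole, not the individual muscle contractions, carries the regularity.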
I was elated. This was clear evidence that the motor system used essentially
the same procedures as did the sensory systems! My seven years’ hunt had
ended. I gave my lecture on the brain’s motor systems the next day and was able
to face Luria with gratitude the following week.
The Particular Go of It
Next, of course, this idea had to be tested. If processing in the motor cortex
were of a Fourier nature, we should be able to find receptive fields of cortical
cells that respond to different frequencies of the motion of a limb.
An Iranian graduate student in the Stanford engineering department, Ahmad
Sharifat, wanted to do brain research. He was invaluable in upgrading and
revamping the computer systems of the laboratory as well as in conducting the
planned experiment.
A cat’s foreleg was strapped to a movable lever so that it could be passively
moved up and down. We recorded the electrical activity of single cells in the
motor cortex of the cat and found many cells that responded primarily, and
selectively, to different frequencies of motion of the cat’s limb. These motor
cortex cells responded to frequencies much as did the cells in the auditory and
visual cortex. The exploration had been successfully completed.
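A minimal sketch of how such a tuning curve can be read out of the recordings (the spike counts below are invented for illustration; the analysis, not the data, is the point):

```python
import numpy as np

# The lever moves the cat's foreleg at each test frequency while the
# cell's spikes are counted over repeated trials (numbers invented).
spike_counts = {          # limb-motion frequency (Hz) -> spikes per 10-s trial
    0.5: [12, 15, 11],
    1.0: [30, 28, 33],
    2.0: [55, 60, 58],
    4.0: [25, 22, 27],
    8.0: [9, 11, 10],
}

freqs = sorted(spike_counts)
mean_rate = np.array([np.mean(spike_counts[f]) for f in freqs])
preferred = freqs[int(np.argmax(mean_rate))]
print(f"preferred frequency: {preferred} Hz")    # this cell is tuned near 2 Hz

# Call the cell frequency-selective if its peak clearly exceeds its mean response.
selectivity = mean_rate.max() / mean_rate.mean()
print(f"selectivity index: {selectivity:.2f}")
```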
A Generative Process
At these higher levels, Llinás brings in sensory processing. Recall that, in
order to account for our ability to perceive images and objects, movements are
necessary: that in vision, nystagmoid movements are necessary to perceiving
images; and that more encompassing movements are necessary for us to perceive
objects. The result of movement is to make some aspects of the sensory input
“hang together” against a background, as in Johansson’s grouping of dots. This
hanging together, or grouping of sensory inputs, is technically called
“covariation.” Llinás brings in covariation among sensory patterns to fill out his
view.
Furthermore, Llinás also describes the encompassing organizations of fixed
action patterns, our movements, as “hanging together” much as do sensory
patterns. As described in Chapter 6, the hanging together of action patterns is
described as “contravariation.” The difference between covariation in sensory
patterns and contravariation in action patterns is that contravariation implies the
anticipatory nature of action patterns.
When covariation and contravariation are processed together, the result
generates our perception of objects (Chapter 6) and our accomplishment of acts.
Technically the “coming together” of covariation and contravariation produces
invariances. Objects are invariant across different profiles—that is, across
different images of the sensory input—while acts are invariant across different
movements that compose an act.
Finally, the research on cortical processing of visuomotor acts
accomplished by Marc Jeannerod, comprehensively and astutely reviewed by him
and Pierre Jacob in Ways of Seeing, provides the specificity required by the
actions guided by their targets, their images of achievement. These experiments
deal with invariants developed by the transactions between actions and the
images upon which they intend to operate.
Viewing acts as invariants composed across movements and their intended
achievements is the most encompassing step we have reached in our
understanding: brain processes that enable meaningful actions—we have moved
from reflex arc, through controllable thermostat, to what can be called a
generative organization of the brain/mind relationship.
This generative organization can be understood as being produced by
“motor inhibition” by analogy to the process of sensory inhibition. In Chapter 5,
I reviewed Békésy’s contribution on sensory inhibition that showed how our
perceptions are projected beyond ourselves. For the generative motor process,
we can invoke motor inhibition to internalize (introject) our images of
achievement, the targets of our actions. When, for instance, we write with our
pens in our notebooks or type on our computer keyboards, we produce an
achievement. This achievement must be internally generated in some fashion,
and motor inhibition can fulfill this role. Massive motor inhibition is produced
by the cellular interactions among the fine-fiber connections in the cerebellar
hemispheres: Purkinje cells release GABA, which produces a feedforward
inhibition. Other cells (basket and stellate) can in turn inhibit the Purkinje cells. Thus
multiple stages of inhibition can sculpt the target—form the achievement.
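The arithmetic of such stacked inhibition can be sketched in a toy model (all weights and drives below are invented; real cerebellar circuitry is far richer): inhibiting the inhibitors releases the output, so two inhibitory stages suffice to shape, rather than merely suppress, a target pattern.

```python
# Toy sketch of the two inhibitory stages named above: Purkinje cells
# inhibit their target nuclei (feedforward GABAergic inhibition), while
# basket and stellate cells inhibit the Purkinje cells themselves.

def relu(x: float) -> float:
    """Firing rates cannot go below zero."""
    return max(0.0, x)

def cerebellar_output(parallel_fiber_drive: float,
                      basket_stellate_drive: float) -> float:
    # Stage 1: basket/stellate interneurons inhibit Purkinje cells.
    purkinje = relu(parallel_fiber_drive - 0.8 * basket_stellate_drive)
    # Stage 2: Purkinje cells inhibit the deep nuclei (tonic drive = 1.0).
    deep_nucleus = relu(1.0 - 0.9 * purkinje)
    return deep_nucleus

for bs in (0.0, 0.5, 1.0):
    print(bs, cerebellar_output(parallel_fiber_drive=1.0,
                                basket_stellate_drive=bs))
# Raising basket/stellate inhibition suppresses Purkinje firing and so
# disinhibits the output: stacked inhibition can sculpt a target pattern.
```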
Speaking
The discovery that our human brain’s motor cortex encodes achievement—
not movement per se—has significant import for our understanding of
language.
One theory of how our speech develops suggests that a baby’s babbling
gradually, through practice, allows the emergence of words and sentences. This
is called the “motor theory of language production.” Speaking, when regarded as
a skill, makes such a theory plausible. However, linguists have expressed a
considerable amount of skepticism, based on their observations of the
development of language in infants.
I vividly remember discussing this issue with Roman Jakobson, dean of
classical linguistics at Harvard, with whom I worked repeatedly in venues as far
apart as Moscow, the Center for Advanced Studies in Palo Alto, and the Salk
Institute in La Jolla, California. Jakobson pointed out to me what is obvious: a
child’s development of his ability to understand language precedes by many
months his ability to speak. Also, once the ability to speak emerges, and a
caretaker tries to mimic, in baby talk, the child’s erroneous grammatical
construction, the child will often try to correct the adult. These observations
indicate that the child has a much better perceptual grasp than a motor
performance ability in the use of language.
The motor theory of language can be reinstated, however, when we
conceive of language as a “speech act,” as John Searle, the University of
California at Berkeley philosopher, has put it. According to my research results
reviewed in this chapter, the motor systems of the brain encode “acts,” not
movements. In this regard, speech acts become achievements, targets that can be
carried out with considerable latitude in the movements involved. Perception of
the target is the essence of the execution of an act, including the act of speaking.
Babies learn to target language before they can comprehensibly speak it. The
outdated “motor theory” of speech needs to be reformulated as an “action
theory” of speech.
Linguist Noam Chomsky and psychologist George Miller each have called
our attention to the fact that we can express (achieve) as speech acts the same
intended meaning in a variety of grammatically correct ways. Chomsky has
developed this observation as indicating that there is a deep, as well as a surface,
structure to language.
I extend Chomsky’s insight to suggest that the variety of languages spoken
by those who know several languages provides evidence for the existence of an
ultra-deep language process. This ultra-deep structure of language processing
occurring in the brain does not resemble the surface expressions of language in
any way. I will consider this view more fully in Chapter 22.
In Summary
Our brain processes are not composed of input-output cycles. Rather, they
are composed of interpenetrating meshed parallel processes: the motor systems
of our brain work in much the same way as do our sensory systems. Our sensory
systems depend on movement to enable us to organize the perception of images
and objects. Conversely, our brain’s motor systems depend upon imaging the
intended achievement of our actions.
Objects remain invariant, constant, across the variety of their profiles—the
variety of their images. Actions remain invariant across the variety of
movements that can produce them. Speech acts, intended meanings, are
invariant not only across a variety of expressions but also across a variety of
languages.
A World Within
Chapter 9
Pain and Pleasure
Wherein the rudiments of the “form within” that initiate pain and pleasure are
shown to derive from stem cells lining the central cavities of the brain. Pain is
shown to be homeostatically controlled. And a much-needed physiological
explanation for the generation of pleasure is developed.
A Membrane-Shaping Process
Even water changes its properties by changing its form. A simple but
revolutionary example of the importance of shape in chemical processing comes
from a suggestion as to how water, when its H2O molecules become aligned,
becomes superconductive. My colleagues and I, as well as others, such as I.
Marshall in his 1989 article “Consciousness and Bose-Einstein Condensates,”
have proposed that the membranes of cells such as those of the stem cells lining
the cavities of the brain can organize the configuration, the shape, of water. The
membranes are made up of phospholipids. The lipid, fatty part of the molecule is
water-repellent and lies at the center of the membrane, giving it structure; the
phosphorus part forms the outer part of the membrane, both on the inside and
outside, and attracts water. The phospholipid molecules line up in parallel to
compose the linings of the membrane. Gordon Shepherd, professor of
physiology at Yale University, has described the water caught in the interstices
of the aligned phosphorus part of the membrane as being “caught in a swamp.”
My colleagues, anesthesiologist Mari Jibu, physicist Kunio Yasue, the Canadian
mathematician Scott Hagen and I ventured to suggest that the water in the
swamp becomes “ordered”; that is, the water molecules line up much as they do
in a meniscus at a surface. Ordered water forms a “superliquid”: superliquids
have superconducting properties. Such properties are not constrained by the
resistance produced by ordinary materials.
In our work, we speculated that these membrane processes might serve as
substrates for learning and remembering. As such, they may do this, in the first
instance, by ordering the membranes involved in the most basic components of
learning and remembering: the processing of pain and pleasure. These are the
membranes of stem cells lining the central cavities of the brain that actually
provide a most effective substrate for experiencing pain and pleasure.
Bailey recalled that he had recently seen a patient who had been shot in the
head. The bullet was lodged in one of his ventricles (“little stomachs”) filled
with cerebrospinal fluid. The patient’s complaint: Sometimes he would wake in
the morning in a foul mood. At other times he would awaken cheerful. He had
learned that he could change his mood by positioning his head: If when he first
arose, he hung his head face down over the side of the bed, he would become
cheerful in a few minutes; when he tilted his head back, his black mood would
return. Sure enough, X rays showed that by tilting his head forward or backward,
the patient could change the location of the bullet, which would “drift” inside the
ventricle.
The question was whether or not to try to remove the bullet. The answer
was no, because the surgery would require cutting into the ventricle to get the
bullet out. Even minute temporary bleeding into the ventricle is extremely
dangerous, usually followed by a long period of unconsciousness and often
death. Bleeding elsewhere in the brain, once stopped, causes little if any injury.
The brain tissue repairs itself much as would such an injury in the skin.
Bailey remarked that the symptoms shown by these two patients indicated
that our ventricles, which are lined with stem cells with their undifferentiated
membranes, could well be sensitive. The nervous system of the embryo develops
from the same layer of cells as does our skin, the surface part of the layer folding
in to become the innermost layer of a tube. It should not, therefore, have
surprised us that this innermost part of the brain might have sensitivities similar
to those of the skin.
With the patient’s permission, we tested this idea in a subsequent surgical
procedure in which one of the ventricles of the brain lay open and therefore did
not have to be surgically entered. When we put slight local pressure on the
exposed ventricle, the patient felt it as an ache in the back of his head; when we
squirted small amounts of liquid, a little warmer or colder than the normal body
temperature, onto the lining of the ventricle, the patient also experienced a
headache. (Pressure and temperature are the two basic sensitivities of our skin.)
We then proceeded with the intended surgical procedure, cutting the pain tract in
the brain stem—a cut that the patient momentarily experienced as a sharp pain in
the side of his head—with excellent postoperative results. (Actually the patient’s
head jumped up when he experienced the pain, a jump that fortunately made
Bucy cut in just the right place. But we decided that next time the actual cut
would be done under general anesthesia.)
Percival Bailey had classified brain tumors on the basis of their
development from this innermost layer of cells lining the ventricles, the
“ependyma,” stem cells that have the potential of forming all sorts of tissue.
Bailey had spent a whole year with me and another resident, peering down a
microscope that was outfitted with side-extension tubes so we could
simultaneously see what he was seeing. He would tell us fascinating stories
about his work in Spain studying with del Río Hortega and how Ramón y Cajal,
Hortega’s chief, was annoyed with Hortega for studying stem cells instead of
nerve cells. In Chapter 2 of my 1971 book Languages of the Brain, and in
Chapter 9 of this book, I summarized what I had learned—bringing to bear what
Bailey had taught us on how our brain can be modified to store experiences as
memories.
The Core-Brain
At this juncture, a brief exploration of functional brain anatomy will help.
The names of the divisions of the nervous system were established over the
years prior to World War II. During these years, the nervous system was studied
by dividing it horizontally, as one might study a skyscraper according to the
floor where the inhabitants work. The resulting scheme carried forward the basic
segmental “earthworm” plan of the rest of the body (in many instances, in my
opinion, inappropriately, as the cranial nerves are not organized according to the
segmental pattern of the rest of the body).
The tube we call the spinal cord enters our skull through the foramen
magnum (Latin for the “big hole”). Within the skull, the brain stem becomes the
hindbrain. As we move forward, the hindbrain becomes the midbrain. Still
further forward (which is “up” in upright humans) is the forebrain. The forebrain
has a part called the thalamus (Latin for “chamber”), which is the way station of
the spinal cord tracts to the brain cortex (Latin, “bark,” as on trees). The most
forward part of the brain stem is made up of the basal ganglia. During the
development of the embryo, cells from the basal ganglia migrate away from the
brainstem to form the brain cortex.
New methods, pursued on three fronts, produced new insights into
what we came to call the core-brain. From the midbrain forward there is actually
a continuous band of cells that had earlier been divided into separate structures.
In the midbrain, the gray matter (brain cells) around the central canal connecting
the ventricles, is called the “periaqueductal gray” because, at that point, the canal
inside the brain stem forms an aqueduct that leads from the brain stem ventricles
(that we found to be so sensitive) to the ventricles of the forebrain (where the
bullet had lodged in Bailey’s patient). Shortly, we’ll see how important this gray
matter is to our understanding of the way we process pain. The periaqueductal
gray extends seamlessly forward into the hypothalamic region (which is the
under part of the thalamus, its upper part being called the dorsal thalamus). The
gray matter of the hypothalamic region, in turn, seamlessly extends forward into
the septal region where the brain’s thermostat is housed.
The sensors that control the variety of behaviors that regulate metabolism
are located in this band of cells bordering the ventricles. The gray matter that
makes up the band regulates and makes possible the modification of a variety of
behaviors such as drinking, eating, motor and sexual activity.
Control is homeostatic; that is, the behavior is turned on and turned off
according to the sensitivity of the sensors. This sensitivity is set genetically but
can be modified to some extent by our experience.
52. (From Brain and Perception, 1991)
53. The median forebrain bundle (afferent and efferent) highlighting the concentric pattern of the brain. (From Brain and Perception, 1991)
The thermostats that control the temperature of our buildings work in this
fashion. A sensor in the thermostat usually consists of two adjacent metal strips
or wires that expand and touch, closing an electrical circuit as the room
temperature increases. When the temperature in the room drops, the metal
shrinks, opening the electrical circuit. Opening and closing the circuit can turn
on an air-conditioning unit or a furnace. The homeostats in the core-brain turn
on, and turn off, behaviors such as drinking and eating, which are controlled by
their respective sensors. Roughly, drinking is controlled by the amount of salt in
the blood and eating is controlled by the amount of sugar in the blood. More on
these topics shortly.
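A minimal sketch of such a homeostat (the thresholds and rates below are invented for illustration) shows the essential behavior: the controlled variable is never held fixed but oscillates between the set points at which the behavior is turned on and turned off.

```python
# Behavior switches on when a sensed variable crosses one threshold and
# off when it crosses another, like the bimetallic strip in a thermostat.

def run_homeostat(level: float, on_above: float, off_below: float,
                  gain_per_step: float, loss_per_step: float, steps: int):
    behavior_on = False
    for _ in range(steps):
        # Sensor: close the "circuit" above one set point, open it below another.
        if level > on_above:
            behavior_on = True       # e.g., blood salt high -> start drinking
        elif level < off_below:
            behavior_on = False      # salt diluted -> stop drinking
        level += gain_per_step       # metabolism keeps pushing the level up
        if behavior_on:
            level -= loss_per_step   # the behavior pushes it back down
        yield level, behavior_on

# Drinking controlled by blood salinity: the level oscillates within the
# band set by the two thresholds, turned on and off, never held fixed.
for salt, drinking in run_homeostat(level=1.0, on_above=1.2, off_below=0.9,
                                    gain_per_step=0.05, loss_per_step=0.12,
                                    steps=20):
    print(f"salt={salt:.2f} drinking={drinking}")
```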
Ordinarily the loci of control over these behaviors are attributed to
processing “centers.” There is separation between these loci of control, but
within the controlling region there is considerable distribution of function. Here
are some examples:
The cells located around the central canal in the hindbrain control
breathing, heart rate and blood pressure. When I arrived at Yale one of the
graduate students wanted to know whether the “respiratory center” on one side
of the hindbrain controlled breathing on the same side or the opposite side of the
body. Under general anesthesia, he made a cut into the “respiratory center” of a
rat’s brain, but nothing happened to the animal’s breathing. Perhaps the cut was
inadequate. A week later he made another, deeper cut. Again nothing happened.
After a week, yet another cut, and still no change in breathing. He then tried
cutting into the breathing “center” on the other side of the brain stem. By this
time he was surprised that the animal had survived so much surgery; his cuts traced
a zigzag path through the length of the hindbrain. Conclusion: control over
breathing is distributed over a region among cells that do not form a “center.”
By using a variety of chemical tagging and electrical stimulation
techniques, brain scientists have found that such distribution of function is
characteristic of the entire band of gray matter surrounding the ventricles. In the
hypothalamic region of rats, stimulation of one point would result in wiggling of
its tongue. Stimulation of an adjacent point led to swallowing, the next adjacent
point to penile erection, and the next to swallowing again. The cells in the
hypothalamic region that regulate the amount we eat are strung out almost to the
midbrain, and those that are involved in regulating our basal temperature extend
forward into the septal region. To talk of “centers” in the brain within which all
cells are engaged in the same process is therefore misleading. Always there are
patches of cells and their branches that are primarily devoted to one function
intermingled to some extent with those devoted to another function. Thus, there
are systems devoted to vision within which there are cells that are tuned to parts
of the auditory spectrum. With respect to the regulation of internal body
functions, cells that serve chewing and swallowing often intermingle with
cells that regulate ovulation and erection. Such an arrangement has the virtue of
providing flexibility when new coalitions among functions are called for.
Odd Bedfellows
We return now to the sensitivities shared by the tissue of the spinal canal
and ventricles with those of the skin. The sensitivities of the skin can be
classified into two sorts: touch and pressure make up one category, while pain
and temperature make up the other. These two categories of sensitivities are
clearly separated in our spinal cord. The nerves that convey the sensations of
touch and pressure run up to our brain through the back part of the cord; those
that convey pain and temperature run through the side of the cord. This
arrangement allows surgeons to cut into the side of the cord in those patients
who are experiencing intractable pain, in order to sever the patients’ “pain and
temperature” fibers without disturbing their sensation of touch. There is no way
to isolate pain from temperature fibers, however. Thus, cutting into the spinal
cord and severing the pain/ temperature nerves eliminates both pain and
temperature sensibility. As noted above, the human body, its vertebrae and
nervous system, is formed in segments, like those of an earthworm; thus
sensibility is lost only in the segments below the cut, not above it. This
segmentation becomes manifest when shingles develop: the itch and pain are
usually restricted to one body segment.
54. Example of core-brain receptor sites in rat brain: striped areas indicate uptake of labelled estrogen
(female sex hormone) molecules. Lateral structure showing uptake is amygdala. (From Stumpf, 1970)
55. Places in a rat’s brain where self-stimulation is elicited. (From Brain and Perception, 1991)
The results of her mapping showed that sites of self-stimulation were
located not only in the tract adjacent to the core ventricle gray matter but also in
what is the rat’s olfactory brain. Further, the amount of effort the rat would
expend (how fast and for how long the rat would push the panel) on self-
stimulating was a function of the three systems that I had recently outlined in a
1954 paper, “Functions of the ‘Olfactory’ Brain.”
The discovery of self-stimulation by Olds and Milner was celebrated as
locating the “pleasure centers” in the brain. When a psychiatrist in New Orleans
electrically stimulated these brain sites in humans, the patient stated that he or
she was experiencing a pleasant feeling. One patient repeatedly professed deep
love for the psychiatrist who had turned on the stimulus—but the feeling was
present only during the stimulation!
By this time I had arrived at Yale, in John Fulton’s Department of
Physiology, where another set of experiments was under way in which an
experimenter turned on —and left on—an electrical stimulus in the same brain
locations as in the Olds and Milner experiments. The rats would turn off this
ongoing electrical stimulus by pushing a panel, but the electrical stimulation
would turn itself on again after a brief pause. In this experiment, the rat would
dutifully keep turning off the stimulation. This was the mirror image of the turn-
on self-stimulation effect. The locations of these turn-off effects were the same as
those that produced the turn-on effect.
I suggested to our graduate student who was doing the experiment that the
stimulation be adjusted so that, in the turn-on experiment, the stimulus stayed on
after being turned on, to see whether the rat would turn it off. It did. And when the
electrical stimulation remained off, the rat turned it on again. Jokingly we called
the graduate student’s thesis “Sex in the Brain!”
On the basis of this result, we can interpret the self-stimulation experiments
as “biasing,” that is, modifying the context within which the naturally occurring
pleasure is processed. Nicki Olds showed that this is an excellent explanation as
she explored the interaction between self-stimulation with food and water intake
and sexual behavior. She showed that “pleasure” is ordinarily self-limiting,
although, under certain conditions, it can become “addictive.” Jim Olds pointed
out that the danger of potential addiction is why there are so many social taboos
that aim to constrain the pursuit of pleasure. (I’ll have more to say about the
conditions that lead to addiction in Chapter 19.)
Ordinarily, pleasure has an appetitive phase that ends in satiety. The finding
of how the brain process works to produce an appetitive and a satiety phase has
important clinical implications. Pleasure is self-limiting unless you trick the
process, as in bulimia, by short-circuiting it: the bulimic person artificially
empties his stomach so that the signals that usually signify satiety, such as a full
stomach and absorption of fats, are disrupted. Is the bulimic’s short-circuiting
process a parallel to that which occurs in self-stimulation of the brain? Or, as in
anorexia, can culture play the role of “keeping the stimulation on” by way of
a person’s self-image, so that the person’s brain is always set to experience only a
“turn-off” mode? We need to find noninvasive ways by which we can change the
settings in the brains of the persons with these eating disorders.
Evidence that further associates pleasure with pain came from experiments
that showed pain to be a process in which the experience of pleasure can be the
antecedent of the experience of pain. The experimental findings that led to this
conclusion are taken up shortly.
To summarize what we have covered so far in this chapter: Our world
within is formed by stimulation of a variety of sensors lining, and adjacent to,
the central canal of the nervous system, a canal in which the cerebrospinal fluid
circulates. In warm-blooded animals, including humans, the sensors are linked
to one another through the regulation of behaviors such as drinking, eating and
motor activity that regulate our basal temperature: thus the explanation for the
odd pairing of pain and temperature in our spinal cord and brain is that the
regulation of temperature is the basis of “how” we come to experience pleasure.
The following are examples of what to me were exciting discoveries of the
sensitivities and their regulation of pleasure and pain by these “core-brain”
sensors.
In Summary
1. The brain is lined with a layer of stem cells that is sensitive to the same
stimuli as is the skin from which they are derived in the embryo: pressure
and temperature—and, in addition, to changes in the salinity of the
cerebrospinal fluid bathing them.
2. Surrounding the stem-cell layer are control systems of brain cells that, when
electrically excited, turn on and turn off behavior. These systems have been
interpreted to provide comfort (pleasure) and discomfort (pain) to the
organism so stimulated, and this interpretation has been confirmed by
verbal reports from humans stimulated in this fashion.
3. A stimulus is felt as painful when any pattern of stimulation is
overwhelming and disorganized.
4. Endogenous chemicals, endorphins, act as protectors against pain. When
activated, these chemicals produce thrills and chills.
5. A stimulus is felt as pleasurable when (in warm-blooded animals) it
activates the metabolic processes anchored in the regulation of temperature.
Finding temperature regulation to be the root of pleasure accounts for the
previously unexplained intimate intermingling of the pain and
temperature tracts in the spinal cord.
Chapter 10
The Frontolimbic Forebrain: Initial Forays
Wherein I describe my initial discoveries that defined the limbic systems of the
brain and the relation of the prefrontal cortex to those systems.
57. This figure, depicting the lateral and inferomedial aspects of a monkey’s brain, shows diagrammatically
the distribution of regions and segments referred to in the text. Dotted stippling denotes frontotemporal;
rostroventral striations denote medial occipitotemporal; vertical striations denote medial parieto-occipital;
horizontal striations denote medial frontoparietal; dorsorostral striations denote medial frontal.
The other important result of our neuronographic studies was that, though
there were major inputs to the hippocampus from the adjacent isocortex, there
were no such outputs from the hippocampus to adjacent non-limbic cortex. The
output from the hippocampus is to the amygdala and accumbens, and through
the subcortical connections of the Papez circuit. MacLean, always adept at
coining names for the results of our experiments, suggested that this one-
directional flow of signals from isocortex to the limbic cortex indicated a
“schizophysiology” in that the “limbic systems,” though open to influence from
the rest of the cortex, worked to some extent independently from the isocortex—
much as James Papez had suggested. Papez was most pleased with our results
when MacLean and I recounted them on a visit to his laboratory.
Anatomical Fundamentals
A brief review of the brain anatomy relevant to the processes organized by
the frontolimbic forebrain can be helpful at this point:
We can visualize the central nervous system as a long tube which is
fabricated during our embryonic development from the same layer of cells that
elsewhere makes up our skin. This tube is lined with stem cells and within the
tube flows the cerebrospinal fluid. The lower part of the tube makes up our
spinal cord. Where the tube enters our skull through a big hole—in Latin,
foramen magnum— the tube becomes our brain stem. The hind part of the brain
stem is called the “hindbrain”; the mid part of the brain stem is the “midbrain”;
and the forward part is the “forebrain.” The most forward part of the forebrain
is made up of the “basal ganglia.”
The basal ganglia can be divided into three components: An upper, a
middle, and a limbic (from the Latin, limbus, “border”). The upper component is
made up of the caudate (Latin, “tailed”) nucleus and the putamen (Latin,
“pillow”). The middle component is the globus pallidus (“pale glob”). The
limbic component is made up of the nucleus accumbens (Latin, “lying down”)
and the amygdala (Greek, “the almond”).
The brain cortex makes up two hemispheres that have developed in the
embryo by a migration of cells from the basal ganglia. Six layers of cells migrate
from the upper basal ganglia to make up most of the cortex technically called the
“iso” (Greek for “uniform”) cortex. A more common name for this expanse of
cortex is “neocortex” (new cortex), but as I described earlier in this chapter, this
more common name is misleading in some of its usages. The remainder of the
cortex is called the “allo” (Greek for “other”) cortex. The allocortex is formed
when cells from the amygdala and other limbic basal ganglia migrate. These
migrations sometimes do not occur at all; when they do, they for the most part
make up a three-layered cortex. Sometimes, however, the three layers are doubled, as
in the cingulate (Latin, “belt”) cortex.
Mistaken Identities
The limbic basal ganglia and their allo-cortical extensions are called
“limbic” because they are located at the medial limbus (Latin, “border”) of the
hemispheres of the forebrain. When I started my research in the late 1940s, this
term was used mainly for the cingulate cortex lying on the upper part of the
inner surface of the brain’s hemispheres. But also, Paul Broca, the noted 19th-
century neurologist, had discerned that the allocortex formed a ring around the
inside border of the brain’s hemispheres. However, until the 1950s, the
allocortex was generally referred to by neuro-anatomists as the “olfactory” or
“smell” brain because it receives a direct input from the nose.
Today, what was then called the “olfactory brain” is called the “limbic
system” and “everyone knows” that our emotions are organized by our limbic
brain. But what constitutes what neuroscientists call the limbic system varies to
some considerable extent, and the relationship of the various parts of the limbic
system to emotion is, at times, considerably distorted. Thus the popular view has
built-in difficulties. The amygdala is only one structure among those loosely
thought of as the limbic forebrain. In fact, originally the amygdala was not
included at all in the limbic circuitry. Rather, two sources converged to be
baptized by Paul MacLean as the “limbic systems.” As noted above, one source
was Paul Broca who had discerned a rim of cortex around the internal edge of
the cerebral hemispheres to be different from the rest of the cerebral mantle. This
cortex appeared paler and was later shown to have only three layers rather than
six. He called this rim of cortex “le grand lobe limbique.”
58. Upper and limbic basal ganglia
In Summary
1. The idea that the neocortex is responsible for organizing our cognitive
reasoning, while the older cortical formations are solely responsible for
generating our baser emotions and motivations, does not hold up. In
primates, including us, the neocortex adjacent to the older cortex is very
much involved in organizing the visceral and endocrine processes that
presumably underlie (or at least accompany) our emotions and motivations.
This finding calls into question the presumed primitive nature of our
emotions and motivations.
2. The olfactory systems of our brain, now called the “limbic systems,” are
composed of two distinct systems. One system is based on the amygdala, a
basal ganglion, the other on the hippocampus.
3. The basal ganglia of the brain can be divided into three constellations: an
upper, a mid and a limbic. This chapter reviewed evidence that the limbic
basal ganglia—the amygdala, accumbens and related structures—are
involved in organizing visceral and endocrine responses presumably critical
to emotional experience and behavior.
4. But direct evidence of the involvement of limbic basal ganglia in emotions
had, as yet, not been obtained. The next chapters describe evidence that
leads to the conclusion that the limbic basal ganglia are involved in
emotions and that the upper basal ganglia—the caudate nucleus and the
putamen—are involved in organizing motivational behavior.
5. The hippocampus, an intimate constituent of the Papez circuit, plays an
important role in the chemical regulation of the pituitary/adrenal/ hormonal
response to stress. Electrical and chemical stimulations showed that there is
a large input from other parts of our cortex to the hippocampus but that its
output goes mostly to other limbic system structures.
6. However, the effects of damage to the Papez hippocampal circuit are much
more drastic on certain aspects of memory than on emotion or motivation.
This poses the question of how the response to stress, and therefore the
Papez circuit, relates to memory and, in turn, how memory relates to
emotion and motivation.
Surprises
The anatomy was done on both the overeating rats and the ones that had
stopped eating. Surprise #1: Hardly any of the lesions that had produced the
overeating were in the expected location (the ventromedial nucleus) in the
hypothalamic region. They were in the vicinity of, but not “in,” the nucleus. This
confirmed, for me, a long-held view that a nucleus isn’t a “center,” a local place,
at all: the cells that make up the “nucleus” are spread over a fairly long distance
up and down the hypothalamic region. This was later ascertained by a stain that
specializes in picking out the cells of this nucleus.
Surprise #2: The electrodes that Brobeck had aimed at the amygdala had
gone far afield—the lesions made with these electrodes were located halfway
between those that had been aimed at the hypothalamic region and those that had
been aimed at the amygdala. Thus, Anand and Brobeck had obtained an
important result: they had discovered a new region, a region that influenced
eating behavior, which they dubbed the “far-lateral” hypothalamic region. The
results were published in the Journal of Neurophysiology with an introductory
note acknowledging my role in the experiment, and they were quoted for many
years to come.
But the story does not end here. Shortly after the publication of this paper,
Philip Teitelbaum, then a young doctoral student at Johns Hopkins, became
intrigued with the “stop eating” effect and got his rats to eat and drink enough to
tide them over the initial period after surgery. He did this by giving the rats sugar
water and chocolate candy. At a meeting, he reported the results as having been
produced by lesions in the far-lateral nucleus of the hypothalamus. I stood
up, noted his excellent work, and went on to point out that the loci of the
electrolytic lesions in the Anand and Brobeck experiments (for which I’d
supervised the anatomy), and presumably in Teitelbaum’s, were located in white
matter consisting of nerve fibers, not of cells, and that there really was no such
“far-lateral hypothalamic nucleus.” At that time we didn’t know where those
nerve fibers originated or ended. Teitelbaum has told me on several occasions
since that he was totally shaken by my comment.
He persisted nonetheless, and we know now, through his efforts and those
of others, that these fiber tracts connect places in the brain stem with the basal
ganglia; that they are an important part of the dopamine system which, when the
system becomes diseased, accounts for Parkinsonism.
Anand established the meaning of these results. He went back to India and
pursued the research he had started at Yale using electrical recordings made from
the regions he had studied there. In his India experiments, he found that as a rat
ate, the electrical activity of its medial region progressively increased while that
of its lateral region progressively decreased. During abstinence, the opposite
occurs: the rat’s lateral region becomes progressively more active while the
medial region becomes progressively more quiescent. His experiments
demonstrated that there is a seesaw, reciprocal relationship: The medial region
serves a satiety process; the lateral region serves an appetitive process. This
reciprocal relationship applies to drinking as well as to eating.
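The seesaw can be sketched in a few lines (the rates and thresholds below are invented; only the reciprocal relationship comes from Anand’s recordings): medial “satiety” activity builds while the rat eats and lateral “appetitive” activity declines, and the relationship reverses during abstinence.

```python
# Toy seesaw of the medial (satiety) and lateral (appetitive) regions,
# with behavior following whichever side of the seesaw dominates.
medial, lateral = 0.2, 0.8          # normalized activity levels (invented)
eating = True
for step in range(40):
    if eating:
        medial += 0.05              # satiety signal builds during the meal
        lateral -= 0.05
    else:
        medial -= 0.04              # the seesaw tips back during abstinence
        lateral += 0.04
    medial = min(max(medial, 0.0), 1.0)
    lateral = min(max(lateral, 0.0), 1.0)
    if eating and medial > 0.8:           # satiety dominates: stop eating
        eating = False
    elif not eating and lateral > 0.8:    # appetite dominates: resume eating
        eating = True
    print(f"step={step:2d} eating={eating!s:5} "
          f"medial={medial:.2f} lateral={lateral:.2f}")
```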
Hit or Run?
The experimental results of our studies of “fighting” and “fleeing” behavior
were equally rewarding. At Yale we set up several colonies of young male
monkeys and selected one of these colonies in which the monkeys clearly
formed a dominance hierarchy. We had developed quantitative measures of
dominance based on procedures used by Rains Wallace, a friend of mine who
was chief of the research institute representing the insurance companies in
Hartford, Connecticut, where I lived at the time. Wallace had found that, in order
to quantify social behavior that is almost always determined by a variety of
factors (in the insurance case, factors involved in injury or death)—one must
choose an “anchor.” The anchor chosen must be the most obvious or the most
easily measurable behavior. Then other behaviors that might influence the factor
we are studying (in our case, dominance) can be rated with respect to the
behavior we have chosen as an anchor. As an anchor in our experiment with the
dominance hierarchy of monkeys, I chose the number of peanuts each monkey
picked up when the peanuts were dropped into their cage one by one. We also
used the order in which each monkey grabbed the peanut, followed by other
“subsidiary” measures such as facial grimace, body posture of threat, and actual
displacement of one monkey by another. We filmed all of this so that we could
study the films frame by frame.
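A sketch of the anchoring procedure (Dave’s name comes from the figure caption below; the other monkeys and all counts are invented): the anchor measure alone fixes the dominance order, and each subsidiary measure is then rated by how well its ranking agrees with the anchor’s.

```python
# The anchor is the number of peanuts each monkey secured; subsidiary
# measures are rated against the anchor rather than scaled on their own.
peanuts_taken = {"Dave": 42, "Zeke": 25, "Riva": 18, "Herby": 9}   # anchor
threat_postures = {"Dave": 11, "Zeke": 14, "Riva": 5, "Herby": 1}  # subsidiary

# Dominance rank comes from the anchor alone.
rank = sorted(peanuts_taken, key=peanuts_taken.get, reverse=True)
print("dominance order:", rank)

def ranks(scores):
    """Map each name to its rank (0 = highest score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i for i, name in enumerate(ordered)}

# Spearman-style rank agreement: does threat posture track peanut dominance?
anchor_rank, threat_rank = ranks(peanuts_taken), ranks(threat_postures)
d2 = sum((anchor_rank[m] - threat_rank[m]) ** 2 for m in peanuts_taken)
n = len(peanuts_taken)
rho = 1 - 6 * d2 / (n * (n**2 - 1))
print(f"rank agreement with anchor: rho = {rho:.2f}")
```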
I removed the amygdalas, in both hemispheres of the brain, of the most
dominant monkey in the colony. As expected, he gradually fell to the bottom of
the hierarchy, but it took about 48 hours for his fall to be completed. Other
monkeys were challenging him, and he seemed not to know how to meet these
challenges.
60. A: dominance hierarchy of a colony of eight preadolescent male rhesus monkeys before any surgical
intervention. B: same as A after bilateral amygdalectomy had been performed on Dave. Note his drop to the
bottom of the hierarchy. (From Pribram, 1962)
Next, I operated on the monkey who had replaced the first as the most
dominant. The result regarding the change in the order of dominance was
essentially the same. To be certain that our results would be generally applicable,
I then operated on the monkey that had originally been third in the hierarchy.
Surprise! He became more aggressive and maintained his dominance. Of course,
my colleagues accused me of sloppy surgery. We had to wait a few years (while
the monkeys were used in other experiments) before an autopsy showed that my
removal of the amygdala was as complete in the third monkey as it had been in
the first two monkeys I had operated upon.
When my Yale collaborators Enger Rosvold and Alan Mirsky and I were later
reviewing our films of the monkeys’ interactions after their surgery, we found
that in the case of the third monkey no challenges to his dominance had
occurred. The next monkey in line—monkey #4—was a peacenik, happy to get
his share of food and water without troubling the other monkeys.
Our Yale dominance experiment has been quoted over the years in many
social psychology texts. The significance of the results became especially
important when the Assistant District Attorney of the State of California was
heard to suggest that removal of the amygdala might be a treatment of choice for
the hardened violent criminals populating our jails. I called him to alert him to
our results—that the criminals might become even more vicious, unless the
postoperative social situation was very carefully established, and that we didn’t
really know how to do this. Luckily, the matter was quickly dropped like the hot
potato it was. It was the mid-1960s and I was concerned that someone might
recommend brain surgery for the “revolutionary” students on our campuses.
Brain surgery does not remove “centers” for aggression, fear, sex, etc.
Ordinarily brain processes enable certain behaviors and, once engaged, help to
organize and reorganize those behaviors.
There is a difference, however, when our brain becomes injured and “scars”
form, for instance when scars produce temporal lobe epilepsy: some patients
occasionally have uncontrollable episodes during which they become violent,
even to the extent of committing rape or murder. Once the seizure is over, the
patient returns to being a docile and productive person. If the patient recalls the
period of his seizure—though most do not—or when evidence of how he
behaved is shown to him, he is terribly upset and remorseful, and will often
request surgical relief if the seizures have become repetitive.
The Fourth F
As we were studying at Yale the effects of amygdalectomy on dominance
and on eating and drinking, similar experiments were being performed at Johns
Hopkins University in the Department of Physiology and in the Army Research
Laboratories at Walter Reed Hospital in Washington, DC. In addition, both of
these research groups were studying the effects of amygdalectomy on sexual
behavior in cats. At Hopkins, there was no evidence of change in sexual
behavior after the surgical removal. By contrast, at the Army Laboratories, as in
Klüver’s original report of his and Bucy’s experiments on monkeys in Chicago,
“hypersexuality” followed the removal of the amygdala in cats.
I paid a visit to my friends in the Department of Physiology at Hopkins and
looked at the results of their surgical removals, which were identical in extent to
mine. I looked at their animals—cats in this case rather than monkeys. The cats
were housed in the dark basement of the laboratory in cages with solid metal
walls. Anyone planning to handle the cats was warned to wear thick gloves.
When the cats were looking out through the front bars of the cage, the
experimenters would blow into their faces to test for taming: the cats hissed.
Attempts to grab the cats resulted in much snarling, clawing and biting.
I then traveled 30 miles from Baltimore to Washington to visit my friends at
the Walter Reed Medical Research Center, where they were also removing the
amygdalas from the brains of cats. The brain surgery had been well carried out
and appeared, in every respect, identical to what I had seen at Hopkins. But the
situation in which these cats were housed was totally different. These cats were
kept in a large compound shared with other cats as well as with other animals—
including porcupines! The animals were playing, wrestling, eating and drinking.
My host, who was in charge of the experiments, approached one of the cats who
cuddled to him. To my consternation, he then tossed the cat into a corner of the
compound, where the cat cowered for a while before rejoining the other animals
in their activities. “See how tame these cats are? The experimenters at Hopkins
must have missed the amygdala.” By now, the cats had resumed their mounting
behavior, including a very patient and forgiving porcupine who kept its quills
thoroughly sheathed while a “hypersexed” tabby tried to climb on top of it.
I wrote up this story of the Baltimore and Washington cats for the Annual
Review of Psychology, again emphasizing that it is the social setting and not the
extent or exact placement of the brain removal that is responsible for the
differing outcomes we find after the surgery.
I was able to add another important facet to the story: experiments that were
done at the University of California at Los Angeles on a set of “Hollywood
cats.” This time, the experimenters had studied normal cats. They were placed in
a large cage that took up most of a room. There were objects in the cage,
including a fence at the rear of the cage. The behavior of the cats was monitored
all day and all night. The UCLA experimenters found what most of us are
already aware of: that cats go prowling at night, yowling and (what we didn’t
know) mounting anything and everything around! After the brain surgery this
nocturnal behavior took place during the day as well. Testosterone levels were
up a bit. However, in our experiments with monkeys at Yale, we had shown that
while testosterone elevations amplify sexual behavior such as mounting once it
has been initiated, these elevated levels do not lead to its initiation and therefore
do not determine how often sexual behavior occurs. Today we have the same result
in humans from Viagra.
The Hollywood cats thus showed that the removal of the amygdalas
influenced the territory, the boundaries of the terrain within which sexual
behavior was taking place. Although an increase in sexuality due to an increase
in hormone level might have played a role in amplifying sexual activity, the
increase in that behavior might also have resulted in the modest increase in
sexual hormone secretion. By far, the more obvious result was a change in the
situation—day vs. night—in which the cats were doing It: For ordinary cats,
night is the familiar time to practice the fourth F. For the cats who had been
operated upon, every situation was equally familiar and/or unfamiliar. It is the
territorial pattern, the boundary which forms the context within which the
behavior occurs, that has been altered by the surgery, not the chemistry that fuels
the behavior.
Basic Instincts
In light of all these results, how were we to characterize the Four Fs? On
one of my several visits with Konrad Lorenz, I asked him whether the old term
“instinct” might be an appropriate label for the Four Fs. The term had been
abandoned in favor of “species-specific behavior” because if behavior is species
specific, its genetic origins can be established. I had noted during my lectures at
Yale that if one uses the term instinct for species-specific behavior, human
language is an instinct. This usage of the word is totally different from its earlier
meaning. Nevertheless, Steven Pinker, professor of linguistics at MIT, has
recently published a most successful book entitled The Language Instinct.
My suggestion to Lorenz was that we might use the term “instinct” for
species-shared behaviors. What makes the studies of birds and bees by
ethologists like Lorenz so interesting is that we also see some of these basic
patterns of behavior in ourselves. Falling in love at first sight has some
similarities to imprinting in geese, the observation that Lorenz is famous for. Sir
Patrick Bateson of Cambridge University, who, earlier in his career, was one of
my postdoctoral students at Stanford, has shown how ordinary perception and
imprinting follow similar basic developmental sequences.
Lorenz agreed that my suggestion was a good one, but felt that it might be a
lost cause in the face of the behaviorist zeitgeist. His view was also expressed by
Frank Beach in a presidential address to the American Psychological Association
in a paper, “The Descent of Instinct.”
Thus, the Four Fs, though they have something to do with emotional (and
motivational) feeling, can be described behaviorally as basic instincts or
dispositions. These dispositions appear to be a form of hedonic (pleasure- and
pain-based) memory that, as we shall shortly see, directs attention in the process
of organizing emotions and motivations. As an alternative to “basic instincts,”
the concept “dispositions” captures the essence of our experimental findings
regarding the Four Fs. Webster’s dictionary notes that dispositions are
tendencies, aptitudes and propensities. Motivations and emotions are
dispositions.
The brain processes that form these hedonically based dispositions are the
focus of the next chapters.
Chapter 12
Freud’s Project
Wherein I take a detour into the last decade of the 19th century to find important
insights into emotional and motivational coping processes.
One evening last week while I was hard at work, tormented with just
that amount of pain that seems to be the best state to make my brain
function, the barriers were suddenly lifted, the veil was drawn aside,
and I had a clear vision from the details of the neuroses to the
conditions that make consciousness possible. Everything seemed to
connect up, the whole worked well together, and one had the
impression that the thing was now really a machine and would soon go
by itself.
—Sigmund Freud, letter to Wilhelm Fliess, October 20, 1895
As I was moving from Yale to Stanford in the late 1950s, I had just
disproved Köhler’s DC theory of the brain as responsible for perception, and I
had arrived at an impasse regarding the “how” of the Four Fs. I had put up a
successful behaviorist front: One of my Yale colleagues, Danny Friedman, asked
me why I’d moved to Stanford. He truly felt that I’d come down a step or two in
Academe by moving there. He remarked, “You are a legend in your own time.”
If that was true, it was probably because I gave up a lucrative career as a
neurosurgeon for the chance to do research and to teach. My initial salaries in
research were one tenth of what I was able to earn in brain surgery. But I was
accomplishing what I wanted through my research: I had been accepted in
academic psychology and had received a tenured academic position, held jointly
in the Department of Psychology and in the Medical School at Stanford.
At that time, “The Farm,” as Stanford is tenderly called, really was that.
What is now Silicon Valley, a place of “concrete” technology, was then Apricot
Valley (pronounced “aypricot”) with miles of trees in every direction. Stanford
Medical School’s reputation was nonexistent: that’s why a whole new faculty,
including me, was put in place when the school was moved southward from San
Francisco to the Stanford campus in Palo Alto. Our first concern at the Medical
School: How will we ever fill the beds at the hospital? A heliport was established
to bring in patients from afar and advertisements were sent out to alert central
and northern California residents that we were “open for business.” How
different it was then from when, 30 years later, I would become emeritus and
leave California to return to the East Coast.
I had succeeded Lashley as head of the Yerkes Laboratory of Primate
Biology in Florida and Robert Yerkes had become my friend during the
negotiations for my appointment. Still, I had encountered insurmountable
problems in trying to obtain university appointments for prospective research
personnel at the Laboratory because of its distance from any university. I had
been equally unsuccessful in moving the laboratory to a friendly university
campus where teaching appointments could be obtained.
I felt that research institutions, unless very large (such as NIH or the Max
Planck), become isolated from the broader scope of inquiry that a university can
provide and thus, over time, become inbred and fail to continue to perform
creatively. I had tried to convince Harvard to accept a plan I had developed to
move the Laboratory to a warehouse in Cambridge, since Harvard had a stake in
the Laboratory; Lashley’s professorship was at Harvard. Harvard faculty had
nominated me for a tenure position three times in three different departments
(psychology, neurology and even social relations), but the faculty
recommendations were not approved by the administration.
All effort in these directions was suddenly halted when Yale sold the
Laboratory to Emory University (for a dollar) without consulting the board of
directors. This was in keeping with the administrative direction Yale was taking:
the Department of Physiology had already shifted its focus from the brain
sciences to biochemistry, and the president had mandated loosening Yale’s ties
to the Laboratory and to other institutions that were mainly supported by outside
funding, a loosening that included withdrawing tenured professorships.
The time was ripe to “Go West, young man,” where other luminaries in
psychology—Ernest (Jack) Hilgard, dean of Graduate Studies, and Robert Sears,
the head of the Department of Psychology, at Stanford—had been preparing the
way.
Hello, Freud
Despite my so-called legendary fame and the administrative
disappointments at Yale and Harvard, the issue that bothered me most, as I was
moving to Stanford, was that I still really didn’t understand how the brain
worked. In discussions with Jerry Bruner, whose house my family and I rented
for that summer while I taught at Harvard, he recommended that I read Chapter
7 of Freud’s Interpretation of Dreams. I also read the recently published
biography of Freud by Ernest Jones, who mentioned that someone
knowledgeable about the brain should take a look at Freud’s hitherto neglected
Project for a Scientific Psychology, the English translation of which had been
published only a few years earlier, in 1954. I read the Project and was surprised
by two important insights that Freud had proposed—insights I describe below.
Resurrection
A decade passed, and I was asked to give the keynote address at the
meeting of the International Psychoanalytic Society in Hawaii. I felt deeply
honored and accepted the invitation. Next, I was asked who might introduce me.
Merton Gill seemed to be the perfect person. I called Gill, now in New York. He
told me that he had moved from Berkeley shortly after we had last talked and
had been depressed and hadn’t given a talk in ten years, but that he felt honored
that I was asking him to introduce me! Gill, the curmudgeon, feeling honored?
The world was turning topsy-turvy. He asked that I send him some of my recent
brain physiology studies so he’d be up to date. I replied that no, I wasn’t going to
present those. What I wanted to do was to present the findings in our book. Did
he still have a copy of the last draft? Search. “Here it is in my files,” he said.
“Have you read it?” I asked him. “No.” “Will you?” “Yes. Call me next week.” I
did, on Tuesday. Gill was ebullient: “Who wrote this?” I said I did. “Brilliant.
Wonderful.” I called back a few days later to be sure I’d actually been talking to
Merton Gill. I had. All was well.
We met in Hawaii. My talk was well received, and Merton and I decided to
resume our long-delayed joint effort. I felt that we needed to re-do the first
chapter, which was about the death instinct—a horrible way to begin a book.
Merton thought that if we could solve what Freud meant by primary and
secondary processes, we would make a major contribution to psychoanalysis and
to psychology. He polished the final chapter, and I went to work on the
“processes” and found, much to my surprise, that they were similar to the laws
of thermodynamics that were being formulated by the eminent physicist Ludwig
Boltzmann in Vienna at the same time that Freud was actively formulating his
ideas summarized in the Project.
Freud’s Ideas
During our work on the Project many more rewarding insights emerged
about Freud’s ideas. Gill was especially pleased to find that Freud had already
addressed what was to become the “superego” in his later works. Freud
traced the “origin of all moral motives” to a baby’s interaction with a
caretaking person who helped the infant relieve the tensions produced by his
internally generated drives. The brain representations of the actions of the
caretaking person are therefore as primitive as those initiated by the drives. It is
the function of the developing ego to bring the caretaking person into contact
with the infant’s drives. Gill had written a paper to that effect even before he had
read the Project, and was therefore pleased to see this idea stated so clearly. The
“superego” is therefore a process during which society, in the role of a
caretaking person, tries to meet and help satisfy drive-induced needs, not to
suppress them. This is contrary to the ordinary interpretation given today, which
holds that the “superego” acts to suppress the “id.”
Another gem was Freud’s description of reality testing: An input is sensed
by a person. The input becomes unconsciously—that is, automatically—
processed in our memory-motive structure. A comparison between the results of
the memory-motive processing and that input is then made consciously before
action is undertaken. This “double loop of attention,” as Freud called it, is
necessary because action undertaken without such double-checking is apt to be
faulty.
62. Stylized representation of the “machine” or “model” of psychological processes presented in the
Project. (From Pribram and Gill, Freud’s “Project” Re-assessed.)
In the 1880s, Helmholtz had already described such a parallel process for
voluntarily moving the eyes. Freud greatly admired Helmholtz’s work and was
overtly trying, in the Project, to model a “scientific psychology” according to
principles laid down by Helmholtz and Mach. In the absence of voluntary reality
testing, action and thought, which for Freud was implicit action, are merely
wishful. Freud defined neuroses as composed of overriding wishes. One can see,
given the rich flavor of the Project, why I became so excited by it.
For almost a half-century I’ve taught a course on “the brain in the context
of interpersonal relations,” and I always begin with Freud. The students are as
surprised and excited as I have been to realize that “cognitive neuroscience,” as
it is called today, has common threads going back to the beginning of the 19th
century—threads that formed coherent tapestries toward the end of that century
in the writings of Sigmund Freud and William James.
Within a year of our decision to complete our book, Freud’s “Project” Re-
assessed, Gill and I were ready to publish. The book immediately received a
few nice reviews from neurologists and psychiatrists, and quite a few really bad
ones from psychoanalysts. Basic Books disappointed us in its distribution, and
my presentations to various psychoanalytic institutes were greeted with little
enthusiasm except at the Washington-Baltimore Institute and the
William Alanson White Institute in New York, where discussions were lively and
informed.
Now, over thirty years after the 1976 publication of our book, and beginning
with the 100th anniversary of the Project in 1995, both have been reactivated:
Psychoanalysis is returning to its neurobiological roots. There are now active
neuro-psychoanalytic groups in Ghent, Belgium, and Vienna, Austria, and at the
University of Michigan at Ann Arbor—and there is also a flourishing
neuro-psychoanalytic institute in New York.
Freud called his Project, which was about the brain, as well as his later
socio-cultural work, a “metapsychology.” The metapsychology was not to be
confused with his clinical theory, which was derived from his work with his
patients. Most psychoanalysts work within Freud’s clinical theory— but that is
another story, some of which is addressed in Chapter 19. The current endeavors
in neuro-psychoanalysis are bringing a new perspective to psychoanalysis and
psychiatry. As Gill and I stated in our introduction to Freud’s “Project” Re-
assessed, Freud already had formulated a cognitive clinical psychology in the
1890s, long before the current swing of academic and practicing psychologists to
cognitivism. But even more interesting to me is the fact that the Project is a
comprehensive, well-formulated essay into cognitive neuroscience a century
before the current explosion of this field of inquiry. Reading Freud and working
with Merton Gill helped me, a half century ago, to reach a new perspective in
my thinking about what the brain does and how it does it.
For a few years, in the mid-1960s, I would present lectures on the insights
and contents of Freud’s Project as my own—only to acknowledge to an
unbelieving audience, during the discussion that followed, that what I had
told them was vintage Freud, 1895.
Choice
Chapter 13
Novelty: The Capturing of Attention
Transfer of Training
I had recently heard a lecture on “transfer of training.” The procedure
consisted in first training an animal or child to choose the larger of two circles
and then presenting two rectangles to see whether the animal (or human) will
again choose the larger of the two without any further training. I used monkeys
that had been operated on some three years earlier—their hair had grown back,
and they were indistinguishable from the monkeys who had not been operated
upon. I then trained all the monkeys, those who had been operated on and those
who had not, to choose the paler of two gray panels set in a background of a
third shade of gray. Once the monkeys were thoroughly trained, I began to
substitute, approximately every fifth trial, two new panels with shades of gray
that were substantially different from the gray of the original panels. A number
of the monkeys chose the lighter shade of gray of the two new panels. Others
chose randomly between the lighter or darker of the two new panels, as if they
had not learned anything.
When I checked the surgical records of the monkeys I had tested, I was
delighted: All the monkeys who chose randomly had been operated on; their
amygdalas had been removed. And all the monkeys who had transferred their
training—what they had learned—to the new task were the control animals who
had never been operated upon. Here was the performance of a task that was
affected by amygdalectomy and that could by no stretch of imagination be
labeled as one of the Four Fs.
I had finished this experiment at Yale just before moving to Stanford. It was
time to reconcile this result with those obtained on the Four Fs. Muriel Bagshaw,
who as a student had assisted me with the “taste” studies, had received her MD
degree at Yale after having a baby and had relocated to the Pediatric Department
at Stanford just at the time I moved my laboratory to the new Stanford Medical
Center. We resumed our collaboration with enthusiasm and immediately
replicated the transfer of training experiment, using circles, rectangles and other
shapes. Once again, the monkeys who had their amygdalas removed were
deficient on tasks that demanded transfer of training, while their memory for
tasks that did not involve transfer of training had remained intact. I surmised that
what might underlie the deficits obtained in the Four F experiments was that the
animals that had had their amygdalas removed failed to transfer their pre-
operative experience to the postoperative situation and failed to accumulate the
results of new experiences that accrued daily.
Growing Up
I was still not completely over my behaviorist period. Though I was
“growing up,” as Lashley had predicted that I someday would, and had done so
to a considerable extent while Miller, Galanter and I were writing Plans and the
Structure of Behavior, I still felt that we were buying into representations and
Plans being stored and reactivated “in the brain”—and I was at a loss as to how
such a process might work. I had disposed of Köhler’s isomorphism and had not
yet adopted Lashley’s interference patterns. But for my conjecture that the
disposition underlying the Four Fs has to do with transfer of training, the issue
was straightforward: Does the deficit in transfer of training indicate that
processes occurring in the limbic basal ganglia allow specific experiences to
become represented in the brain? Lashley (who firmly supported
“representations”) and I had once argued this point while we were strolling down
the boardwalk in Atlantic City. I can still “see” the herringbone pattern of the
boards under our feet every time I think of representations—suggesting that
perhaps he was right.
As I was worrying the issue of representations in the early 1960s, while I
was putting together my laboratory at Stanford, Alexander Luria and Eugene
Sokolov, his pupil, came from Moscow for a weeklong visit. We three had just
attended a conference in Princeton where Sokolov had presented his results,
definitively demonstrating that repeated sensory inputs produce a representation
in the brain—a neuronal model of the input, as he called it. Sokolov had exposed
his human subjects to regular repetitions of tones or lights. Each subject had
demonstrated an “orienting reaction” to the stimulation, an orienting reaction
which Sokolov had measured by changes in the subject’s skin conductance, heart
rate, and the EEG. As Sokolov repeated the stimulation, the subject’s orienting
reaction to that stimulus gradually waned—that is, the subject’s response
habituated. Sokolov then omitted one in the series of stimuli. Leaving out the
“expected” stimulus provoked an orienting reaction to the absence of a stimulus
that was as large as the reaction the initial stimulus had produced! The reaction
to the absence of a stimulus meant that the person now reacted to any change in
the whole pattern of stimulation, a pattern that had become a part of his
memory. Sokolov’s evidence that we construct a specific representation of the
pattern of our experience was incontrovertible. We orient to a change in the
familiar!
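Sokolov’s “neuronal model” lends itself to a compact caricature. The sketch below is an illustration only—an assumed comparator with invented names and numbers, not Sokolov’s own formulation: a stored template drifts toward each incoming pattern, and the orienting reaction is modeled as the mismatch between template and input.

```python
# A minimal comparator caricature of Sokolov's "neuronal model."
# Illustrative only: the orienting reaction is modeled as the mismatch
# between a stored template and the incoming stimulus pattern.

def run_trials(patterns, rate=0.5):
    template = [0.0] * len(patterns[0])
    responses = []
    for pattern in patterns:
        # Orienting reaction: summed mismatch between input and template.
        mismatch = sum(abs(s - t) for s, t in zip(pattern, template))
        responses.append(round(mismatch, 3))
        # Habituation: the template drifts toward the repeated pattern.
        template = [t + rate * (s - t) for s, t in zip(pattern, template)]
    return responses

tone = [1.0, 0.0, 1.0, 0.0]      # a stimulus pattern, repeated
omission = [1.0, 0.0, 0.0, 0.0]  # the "expected" third element left out
print(run_trials([tone] * 10 + [omission]))
```

Run as written, the modeled reaction wanes with repetition (habituation) and jumps again on the omission trial—a response to the absence of a stimulus, which is the shape, if not the substance, of Sokolov’s demonstration.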
But at the same time, my reservations regarding “representations” were
confirmed. At the moment of response to the omitted stimulus, the response was
not determined by the environmental stimulus but by the memory, the neuronal
model, of prior stimulation. As we shall see in the next few chapters, there are
whole brain systems devoted to storing the results of “nonstimulation.” Walter
Freeman, professor of neuroscience at the University of California, Berkeley, has
emphasized the fact that our perceptions are as much determined by our
memories as by the environmental situations we confront. Freeman had to come
to this conclusion because, in his experiments, his animals did not show the
same brain patterns when confronted by the same environmental patterns. As
William James had put it, our experience is like a flowing river, never quite the
same.
Freeman and I are good friends, and I have tremendous respect for his work
and have been influenced by it. But for decades we discussed his findings, and I
was always troubled by the changed brain patterns that appeared in response to
the same stimulus in his experiments. Finally, Walter articulated clearly that it is
the memory, the knowledge attained, that is as much if not more important to
forming a percept as is the stimulating event itself. Sokolov’s “leaving out” an
expected stimulus is a simple demonstration of this finding.
Our situation is like that of the Copernicans—we grope for ways to
articulate the results of our experiments, explanations that at one level we
already “know.” With respect to the results of the experiments on familiarization
and their relevance to emotion, these insights into what constitutes a so-called
“representation” turned out to be even more significant than they had been with
regard to perception.
At Princeton, Sokolov had read his paper in somewhat of a monotone but in
reasonably good English. By contrast, Luria had made his presentation—on the
scan paths that eye movements describe when looking at a picture—with a
flourish. Sokolov seemed unimpressed, twirling his eyeglasses as Luria talked. I
assumed that Sokolov had heard Luria many times, had “habituated” and was
therefore bored. Or was he really just monitoring Luria for the KGB? We were,
after all, at the height of the Cold War.
Arriving at Stanford after several stops at universities on the way, Luria
appropriated my secretary and asked her to do a myriad of tasks for him.
Sokolov, in his black suit and with his balding head, seemed extremely impatient,
confirming my suspicion that he might be KGB. Suddenly, Luria said he was
tired and needed a nap. As we were showing him to his room, Sokolov
whispered to me, “I must see you. It is urgent.” Of course, I understood: he must
be KGB and needed to convey to me some restrictions or warnings about Luria’s
rather notorious behavior. (Unbridled in his everyday transactions, he had been
“rescued” several times by his wife, a chemist in good standing in the Soviet
Union.)
Once Luria was tucked away for his nap, Sokolov heaved a sigh of relief
and exclaimed, “That man is driving me mad. We have stopped exclusively at
camera stores between Princeton and here. I need a shirt. I saw a shopping mall
as we arrived. Could you please, please take me there?” So much for the KGB!
Our own State Department, which had vetted and cleared the Luria-Sokolov
trip to the U.S., was a bit more serious about having me monitor them while
they were in my custody. I was not to take Luria more than 50 miles beyond Stanford;
not to Berkeley and not even to San Francisco! “Et cetera, et cetera, et cetera,” as
the King of Siam said in The King and I.
Government monitoring aside, we managed to have a wonderful week.
Between planning experiments, we went on several (short) trips. I took Luria and
Sokolov to Monterey to see Cannery Row; both of them had read Steinbeck. We
went to a lovely little café with a garden in back, situated on a slight slope.
Luria, who was facing the garden, got up and in his usual brisk manner dashed
off to see it, failing to notice the outline he had left in the sliding glass door he’d
crashed through! His impact was so sudden that only where he hit was the glass
shattered and scattered: the rest of the door was intact. I settled with the café
owner for $30. Luria was surprised that anything had happened: there was not
a mark on him.
Despite government “injunctions,” we did go to San Francisco: Sokolov
had read all the Damon Runyon and Dashiell Hammett stories and had to see all
the sites he remembered—that is, between Luria’s camera store stops. We had an
appointment for Luria and Sokolov to speak at Berkeley and were a bit late for
their talks—but all went smoothly. I never heard from either the State
Department or the KGB about our various visits.
Something New
In 1975, Diane McGuinness, then a graduate student at the University of
London, and I published a review of decades of experiments, performed around
the world, that aimed to show differences in bodily responses to various
emotions. McGuinness and I noted, in our review, that all the physiological
measures that scientists had used to characterize differences in emotional
responses in a subject ended in discovering differences in the subject’s manner
of attending. Donald Lindsley and Horace (Tid) Magoun, working at the
University of California at Los Angeles (UCLA), had proposed an “activation
theory of emotion” based on differing amounts of “attention” being mobilized in
a situation. Thus, the results of the experiments McGuinness and I had reviewed,
including those that Bagshaw and I had performed, were in some respects
relevant to the processing of emotions. But exactly how was, before our review,
far from clear.
McGuinness and I recalled that William James had divided attention into
primary and secondary. This division was later characterized as assessing “what
is it?” (primary) vs. assessing “what to do?” (secondary). Because assessing
“what is it?” interrupts our ongoing behavior, we described it as a “stop”
process. Because assessing “what to do” initiates a course of action or inaction
we may take, we described it as a “go” process. The experiments in my
laboratory had demonstrated that the amygdala and related limbic brain systems
deal with “stop,” while the upper basal ganglia deal with “go” processes.
James’s division of attentional processes into primary and secondary
paralleled his characterization of emotions as stopping at the skin, while
motivations deal with getting into practical relations with the environment. One
popular idea derived from this distinction is that some people respond to
frustration more viscerally and others more muscularly: this difference is called
“anger-in” and “anger-out.” In our culture, women are more prone to respond
with anger-in while men are more prone to anger-out.
These divisions also parallel those made by Freud: primary processes fail to
include reality testing (which depends on a double loop of attentional
processing); reality testing defines secondary processes during which we seek to
get into practical relations with the world we navigate. It is important to note
that, for both James and Freud, assessing “what to do?” as well as assessing
“what is it?” deal with attention. Thus, assessing “what to do?” motivation, is a
disposition, an attitude, not an action. Interestingly, in both French and German,
“behavior” is called comportement and Verhalten—how one holds oneself. In
this sense, attending to “what is it?” also becomes an attitude.
I received an award from the Society of Biological Psychiatry for showing
that the upper basal ganglia (caudate and putamen) organize and assess the
sensory input involved in “what to do?”—that these basal ganglia control more
than the skeletal muscle systems of the body, as is ordinarily believed. The
award was given for my demonstration that electrical stimulations of the basal
ganglia alter receptive field properties in the visual system in both the thalamus
and the cortex. In discussing my presentation of these results, Fred Mettler,
professor of anatomy and neurology at Columbia University, had this to say at a
conference there, during which he gave the introductory and final papers:
Though these experiments were performed in the 1960s and 1970s, they
have not as yet triggered an “orienting reaction” in the habitués of established
neuroscience.
Emotions as Hang-ups
Thus, the Four Fs are expressions of our basic dispositions to eat, to fight,
to flee, to engage in sex. The experiments I’ve just described have shown that
the behavioral expressions of these dispositions are rooted in the basal ganglia.
The upper basal ganglia are involved in starting and maintaining these
behaviors, while the limbic basal ganglia, especially the amygdala in its relation
to the hypothalamic region, regulate satiety: stopping eating and drinking, and
placing social and territorial constraints on fighting and fleeing, and on sex. This
raises a key question: What has “stopping” to do with emotion?
The answer to this question came from electrical stimulations of the medial
hypothalamic region and of the amygdala in rats, cats, monkeys and humans.
When the stimulation of these brain structures is mild, the animal or human
becomes briefly alert. In humans, drowsiness or boredom are briefly interrupted
—only to be quickly reinstated. Moderate stimulation results in greater arousal,
an “opening up,” with evidence of interest in the animal’s surroundings. (In
humans such stimulation results in a smile and even in flirting.) Still stronger
stimulation results in withdrawal and petulance. Strong stimulation
produces panic in humans and what has been called “sham rage” in animals: the
subject of the stimulation lashes out at whatever target is available. Alerting,
interest, approach, withdrawal, panic and emotional outburst are on a
continuum that depends on the intensity of the stimulation!
These results led me to talk about emotions as “hang-ups” because alerting,
interest, withdrawal, panic and outbursts do not immediately involve practical
relations with an animal’s or a person’s environment. The coping with pain and
pleasure is performed “within the skin.” As noted, William James defined
emotion as stopping at the skin. When expressed, emotions don’t get into
practical relations with anything or anyone unless there is another person to
“read” and to become influenced by the expression.
When we are no longer “hung up,” we again become disposed to getting
into practical relations with our environment; that is, we are motivated and our
upper basal ganglia become involved.
Conditioned Avoidance
Popular as well as scientific articles have proclaimed that the amygdala and
hippocampus are “the centers in the brain for fear.” Such pronouncements make
some of us who are teaching feel exasperated. Of course there is a descriptive
base for the popular assertions: Some scientists interpret the function of a
particular brain system from the results of one experimental test or a group of
related tests. When they find evidence such as a freezing posture and increased
defecation in rats, they assume that the rat is “afraid.” Working with primates
makes such assumptions more difficult to come by.
My monkeys, and even dogs, figured out what my experiments that aimed
to test for “fear” were all about. In such experiments, a signal such as turning on
a light indicated to the animal that it needed to jump out of its cage into an
adjacent one to avoid a mildly unpleasant electrical current that would be applied
to the bottom of the cage 4 seconds hence. Animals learn such conditioned
avoidance quickly. Monkeys showed me—by jumping, and then by repeatedly
reaching over and touching the electrified cage they had just jumped out of—
that they knew what was going on. Knowing and
emotionally fearing are not the same.
The dogs came to me with a Harvard graduate student who was to study the
effects of removal of the amygdala in a “traumatic avoidance” experiment. In
this experiment the dogs were to learn to avoid a noxious degree of electric
current that was turned on some seconds after a gate dividing two compartments
was lifted. In an initial experiment I lifted the gate without turning on any
electric current—and the dogs jumped immediately to the other compartment.
Was the “grass greener on the other side of the fence”? The student and I then
substituted a flashing light for the “fence” as a signal that the dogs were to jump
over a low barrier to reach the other compartment. The dogs learned to do this
substitute for a “traumatic” conditioning procedure in fewer than ten trials. (Of
course, as was my policy for the laboratory, I never turned on any traumatic
electric current whatever, at any time.) Dogs like to please and they quickly
figured out what we wanted. In these experiments, both monkeys and dogs
appeared pleased to have a chance to play and seemed cognitively challenged
rather than afraid.
A Story
How we choose the course of the “what is it?” and the “what to do?” is
beautifully illustrated by a story Robert Sternberg loves to tell, based on his
dedication to Yale University, where he teaches and does research.
Two students, one from Yale and the other from Harvard, are hiking alone
in the woods in Yellowstone Park when they come upon a bear. Both students
are interested in bears, but have also heard that bears can be dangerous. The
Harvard student panics. What to do? Climb a tree? Bears can climb trees. Run?
Could the bear outrun us? Hide? The bear can see us. Then he notices that his
friend, the Yalie, has taken his running shoes out of his backpack, has sat
down on a fallen log, and is lacing them up. “Why are you doing that?” the
Harvard student asks. “Don’t you realize that you’ll never outrun that bear
whether you have track shoes on or not?” “I don’t have to outrun him,” replies
the “cortical” Yalie. “I only have to outrun you.”
The bear was a novel experience for the students. Their visceral and
autonomic systems most likely responded to the novelty. The intensity of the
response depended on what they knew of bears in that habitat. The feelings of
interest initiated by the sight of the bear were due not only to the visceral
responses that were elicited; quickly the students’ “what to do?” processes
became involved. Further, at least one of the students entertained a “cortical”
exercise in attitude, an episode of morbid humor—or so the story goes. I can see
them laughing, to the puzzlement of the bear, who wanders away from the scene.
What Is Novelty?
There is a great deal of confusion regarding the perception of novelty. In
scientific circles, as well as in art and literature, much of this confusion stems
from confounding novelty with something that scientists and engineers call
“information.” During a communication, we want to be sure of what is being
communicated. I often don’t quite get what another person is saying, especially
on a telephone, so I ask that the message be repeated, which it is, perhaps in a
slightly different form. My uncertainty is then reduced. In 1949, C. E. Shannon,
working at the Bell Telephone Laboratories, and W. Weaver were able to
measure the amount of reduction of “unsureness,” the amount by which uncertainty
is reduced during a communication. This measure of the reduction in uncertainty
became known as a measure of information. (In 1954, Gabor related this
measure to his wavelet measure, the quantum of information, which describes
the minimum uncertainty that can be attained during a communication.)
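In standard notation—a textbook gloss added here, not part of Shannon’s or Gabor’s own wording—the measure runs as follows. If a receiver entertains alternatives with probabilities \(p_i\), its uncertainty is the entropy

\[
H = -\sum_i p_i \log_2 p_i \quad \text{(in bits)},
\]

and the information conveyed by a message is the amount by which that uncertainty drops:

\[
I = H_{\text{before}} - H_{\text{after}}.
\]

Gabor’s quantum sets a floor under the joint uncertainty of a signal’s timing and frequency, often written \(\Delta t \, \Delta f \ge \tfrac{1}{2}\); the exact constant depends on how the spreads are defined.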
The experiencing of novelty is something rather different. During the early
1970s, G. Smets, at the University of Leuven, performed a definitive experiment
that demonstrated the difference between information (in Shannon’s sense) and
novelty. Smets presented human subjects with a panel upon which he flashed
displays of a variety of characters equated for difficulty in being told apart. The
displays differed either in the number of items or in the arrangement of the
items. The subjects had merely to note when a change in the display occurred.
Smets measured the visceral responses of the human subjects much as we had
done in our monkey studies. Changing the number of items in the display—
demanding a change in the amount of information to be processed, in Shannon’s
terms—produced hardly any change in visceral responses. By contrast,
changing the arrangement of the items into novel configurations (without
changing the number or the items themselves) evoked pronounced visceral
reactions. Novelty is sensed as a change in pattern, a change in configuration, a
change from the familiar.
In presenting the results of Smets’s experiments this past year, one of my
students at Georgetown complained that although she understood that novelty is
different from information, she could not say how. Her classmates agreed that they
would like to know specifically what characterizes “novelty.” I stewed on the problem for three
weeks and on the morning of class, as I was waking up, the answer became
obvious:
Novelty actually increases the amount of uncertainty. This increase is the
opposite of the reduction of uncertainty that characterizes Shannon information.
Novelty, not information, is the food of the novelist as indicated in the quotation
that introduces this chapter.
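In the entropy notation introduced above (again a gloss, not the chapter’s own wording), the answer fits on one line: a communication is informative when it lowers the receiver’s uncertainty, while a novel configuration raises it,

\[
\text{information: } \Delta H < 0, \qquad \text{novelty: } \Delta H > 0.
\]

Shannon’s measure quantifies the first; Smets’s visceral recordings track the second.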
Normally, we quickly habituate to a novel occurrence when it is repeated.
We form a representation that we use as a basis for responding or not
responding, and for how we respond. This representation of what has become
familiar constitutes an episode. Each episode is punctuated by a “what is it?”—an
orienting reaction to an event—that initiates another episode, until yet another
orienting reaction stops that episode and begins a novel one. We can then weave
these episodes, together with others, into a story, a narrative. Unless a particular
episode becomes a part of our “personal narrative,” we fail to remember it.
Constraints
How then are episodes formed from the stream of our experience? The
basic finding—that our stream of consciousness, the amount of attention we can
“pay,” is constrained—comes from knowing that the “what is it?” stops ongoing
experience and behavior. Monkeys with their amygdala removed did not stop
eating or drinking or fleeing or fighting or mounting in circumstances where and
when the normal animals do. After I removed their amygdala, the boundaries,
the where and when, that define a sequence of experience or behavior before a
change occurs, were gone. In the normal animal, as well as in us, those
boundaries define an experienced episode.
I removed the amygdala from young puppies, and we got the same result I had
obtained in monkeys. The puppies who had their amygdala removed did not rush
to eat, as did their intact control littermates, but once they began eating they
went on and on, long after their littermates had become satiated and had left the
food tray. When we tested monkeys who had had their amygdalas removed to
see whether they ate more because they were “hungrier” than their unoperated
controls (or their preoperative selves), we found that the monkeys who had had
their amygdalas removed were actually less eager to eat at any moment (as
measured by the amount eaten over a given time period or by their immediate
reaction to the presentation of larger food portions), but once they began to eat,
they would persist for much longer before quitting the eatery.
When we are in the grip of an intense feeling, such as love, anger,
achievement or depression, it feels as if it will never end, and it pervades all that
we are experiencing. But after a while, feelings change; a stop process has set a
bound, a constraint, on the feeling. Satiety constrains appetites; territoriality
constrains sexuality; cognition constrains fleeing or fighting.
A constraint, however, does not necessarily mean simply a diminution of
intensity. Constraints (Latin: “with-tightening”) may, under specifiable
circumstances, augment and/or prolong the intensity of a feeling. It’s like
squeezing a tube of toothpaste: the greater the squeeze the more toothpaste
comes out. Obsessive-compulsive experiences and behaviors, for instance,
provide examples of constraints that increase intensity.
Constraints are provided for us by the operation of a hierarchical set of our
neural systems, of which the amygdala and other basal ganglia are in an
intermediary position. Beneath the basal ganglia lie the brain stem structures that
enable appetitive and consummatory processes and those that lead to self-
stimulation: the homeorhetic turning on and off of pleasures and pains. At the
top of the hierarchy of brain structures that enable feelings and expressions of
emotion and motivation is the cortex of the frontal lobes.
Labeling our feelings as fear, anger, hunger, thirst, love or lust depends on
our ability to identify and appraise this environmental “what” or “who,” and such
identification depends on secondarily involving some of our other brain systems,
which include the brain cortex.
The chapters hereafter deal with the importance of the cortex in enabling
the fine grain of emotional and motivational feelings and expressions that permit
us to weave such episodes into a complex, discerning narrative.
Epilepsy
Before the advent of EEGs, the diagnosis of temporal lobe epilepsy was
difficult to make. I had a patient who had repeatedly been violent—indeed, he
was wanted by the police. He turned himself in to the hospital and notified the
police of his whereabouts. I debated with myself as to whether he did indeed
have temporal lobe epilepsy or whether he was using the hospital as a way to
diminish the possible severity of his punishment. The answer came late that
night. The patient died in status epilepticus during a major seizure.
Fortunately, most patients with temporal lobe epilepsy do not become
violent. On another occasion, decades later, I was teaching at Napa State
Hospital in California every Friday afternoon. After my lecture, one of the
students accompanied me to my car, and I wished her a happy weekend. She was
sure to have one, she said: she was looking forward to a party the students and
staff were having that evening. The following Friday, after my class, this young
lady and several others were again accompanying me to my car. I asked her how
she had enjoyed the prior weekend’s party. She answered that unfortunately
she’d missed it—she had had a headache and must have fallen asleep. The other
girls immediately contradicted her: they said she’d been at the party all evening
—had seemed a bit out of it, possibly the result of too much to drink. Such
lapses in memory are characteristic of temporal lobe epilepsy. The young lady
was diagnosed and medicated for her problem, which eliminated the seizures.
An oft-quoted set of experiments performed in the early 1960s by S.
Schachter and J. E. Singer at Columbia University showed that experienced
feelings can be divided into two separate categories: the feelings themselves and
the labels we place on those feelings. These experiments were done with
students who were given an exam. In one situation, many of the students (who
had been planted by the experimenter) said how easy the questions were, turned
in their exams early and also pointed out that this was an experiment and would
not count on the students’ records.
Another group of students was placed in the same situation, but these
planted students moaned and groaned, one tore up the test and stomped out of
the room, another feigned tears. Monitors said that the exam was given because
the university had found out it had admitted too many students and needed to
weed some out.
When asked after the exam how each group felt about it, and what their
attitude was toward the experimenters, the expected answers were forthcoming:
the “happy” group thought the exercise was a breeze; the “unhappy” group felt it
was a travesty to be exposed to such a situation.
As part of the experiment, some of the students had been injected with
saline solution and other students with adrenaline (as noted, the adrenal gland is
controlled by the amygdala). Analysis of the results showed that the adrenaline-
injected students had markedly more intense reactions—both pleasant and
“painful”—than did the saline group.
The label we give to a feeling that has been engendered is a function of the
situation in which it is engendered; its intensity is a function of our body’s
physiology—in the case of this experiment, of chemistry: the injection of
adrenaline that influenced the amygdala/adrenal system.
In Summary
1. Motivations get us into practical relations with our environment. They are
the proactive aspect of an action-based memory process mediated by the
upper basal ganglia: the caudate and the putamen.
2. Emotions stop at the skin. Emotions are hang-ups. They stop ongoing
behavior. Emotions are based on attending to novelty, which increases our
uncertainty. Thus, we alert to an unexpected occurrence. We may become
interested in such an occurrence and become “stuck” with this interest.
More often, we become bored, habituated. Habituation, or familiarization,
occurs as the result of a change in hedonic memory processes initiated in
the limbic basal ganglia. Familiarization depends on visceral activation.
Emotions include the prospective aspects of this change in memory.
3. Electrical stimulation of the amygdala of animals and people has produced,
in ascending order of the amount of stimulation (applied randomly),
reactions that range from (a) momentary to prolonged interest, to (b)
approaching conspecifics, as in sexual mounting or flirting, to (c) retreating
from the environment, and to (d) outbursts of panic, which may lead to
attack, labeled “sham rage.”
4. In short, although processing the content of an episode (the familiar) is
highly specific, the response engendered varies along an intensive
dimension reaching from interest to panic and rage.
Chapter 14
Values
Wherein I chart the brain processes that organize the manner in which we make
choices.
Motives like to push us from the past; values try to draw us into the
future.
—George A. Miller, Psychology: The Science of Mental Life, 1962
In 1957, I presented the results of these and related experiments at the 16th
International Congress of Psychology in Bonn, Germany. My text was set in
terms of economic theory—especially as developed by the thesis of “expected
utility” that had been proposed in 1953 by the mathematician John von
Neumann and the economist O. Morgenstern. I had become interested in how we measure
processes such as changes in temperature, and von Neumann and Morgenstern
had an excellent exposition of the issue. Also in their book they talked about
zero-sum games, a topic which became popular during the Cold War with the
Soviet Union. It was during these literary explorations that von Neumann and
Morgenstern’s depiction of the composition of utilities caught my attention as
bearing on my experimental results on preferences. Wolfgang Köhler, friend and
noted psychologist, came up to me after my initial presentation of these results
and said, “Karl, you’ve inaugurated a totally new era in experimental
psychology!”
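Von Neumann and Morgenstern’s thesis has a simple canonical statement—given here in its standard textbook form, not as anything specific to the Bonn address. If a course of action \(A\) leads to outcomes \(x_i\) with probabilities \(p_i\), and \(u\) assigns a utility to each outcome, then

\[
EU(A) = \sum_i p_i \, u(x_i),
\]

and a consistent chooser acts as if maximizing this quantity. Probabilities and utilities enter the formula separately, which is at least congenial to the finding, summarized at the end of this chapter, that estimates of probability, preferences and desirabilities can be carried by separate brain systems.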
I subsequently presented my findings at business schools and departments
of economics from Berkeley to Geneva. During the 1960s and 1970s at Stanford,
I often had more students from the Business School in my classes than from the
psychology and biology departments. But despite this enthusiastic reception
from the economic community, the inauguration of a new era in
neuroeconomics would have to wait a half-century for fulfillment.
Preferences
But the most surprising finding in this series of experiments was that the
monkeys who had their amygdala removed showed no disruption of the order in
which they chose the items presented to them. Their preferences were not
disrupted. At the time when I was running my experiments, experimental
psychologists were discovering a surprising fact: preferences were shown to
depend on perceived differences arranged hierarchically.
In another set of experiments, my colleagues and I were showing that the
removal of the temporal lobe cortex biased (led) the monkeys toward taking
risks, whereas removal of the hippocampal system biased monkeys toward
caution. Preferences were shown to be arranged hierarchically according to
perceived risk: I prefer a Honda to a Cadillac, meat to vegetables, and
vegetables to potatoes. Thus, I was able to relate my results on maintaining
preferences after amygdalectomy to the monkeys’ intact ability to perceive
differences. The amygdala are not involved with preferring meat to potatoes, as
my experiments showed.
By contrast, the intensive character of “desirability” is not arranged
hierarchically, as was shown by Duncan Luce of the University of California at
Irvine in his experiments on expectations. For instance, in his discussions with
me, he noted that desirability, and therefore demand, changes dramatically, not
according to some hierarchy, with changes in the weather (packing for a trip to a
location that has a different climate is terribly difficult). Desirability changes
dramatically with familiarity (what was exciting initially has become boring) and
with recent happenings—as, for instance, when air travel became momentarily less
desirable after the attack on the World Trade Center in New York and airline
earnings fell precipitously. It is practically impossible to rank (hierarchically)
desirability and to try to do so can sometimes get one in trouble: Never, never
attempt to charm your lover by ranking the desirability of his or her
characteristics against someone else’s. We either desire something or we don’t,
and that desire can change abruptly. Damage to the amygdala dramatically alters
a person’s or monkey’s desire for food or any of the other Four Fs, not the
ranking of preferences.
“Consilience”—What It Means
Recently a comprehensive review paper published in Science heralded a
new approach to neuroscience: neuroeconomics. “Economics, psychology and
neuroscience are converging today into a single, general theory of human
behavior. This is the emerging field of neuroeconomics in which consilience, the
accordance of two or more inductions drawn from different groups of
phenomena, seems to be operating.” (Glimcher and Rustichini, 2004)
The review presents evidence that when paradoxical choices are made
(those choices that are not predicted by the objective probabilities, the risks, of
attaining success), certain parts of the brain become selectively activated (as
recorded with fMRI). These studies showed that, indeed, preferences and risks
involve the posterior cortex—a welcome substantiation of my earlier conjecture.
But the review, though mentioning von Neumann and Morgenstern’s theorem,
fails to distinguish clearly between preferences and utilities, the important
distinction that resulted directly and unexpectedly from my brain experiments.
The recent studies reviewed in Science confirmed many earlier ones that
had shown that our frontal cortex becomes activated when the situation in which
our choices are made is ambiguous. This finding helps explain the results,
described earlier, that I had obtained in my fixed-interval and conditioning
experiments: The frontal cortex allows us a flexibility, a flexibility that may
permit us to “pick up an unexpected bargain” which would have been missed if
our shopping (responding) had been inordinately “fixed” to what and where we
had set our expectations.
Another recent confirmation of the way brain research can support von
Neumann and Morgenstern’s utility theory came from psychologist Peter
Shizgal’s experiments using brain stimulation as an effective reward. He
summarized his findings in a paper entitled “On the Neural Computation of
Utility” in the book Well-Being: The Foundations of Hedonic Psychology.
Shizgal discerns three brain systems: “Perceptual processing determines the
identity, location and physical properties of the goal object, [its order in a
hierarchical system of preferences]; whereas . . . evaluative processing is used to
assess what the goal is worth. A third processor . . . is concerned with when or
how often the goal object is available.”
The results of Shizgal’s experiments are in accord with the earlier ones
obtained in my laboratory and are another welcome confirmation. Contrary to
practices in many other disciplines, the scientist working at the interface between
brain, behavior and experience is almost always relieved to find that his or her
results hold up in other laboratories using other and newer techniques. Each
experiment is so laborious to carry out and takes so many years to accomplish
that one can never be sure of finding a “truth” that will hold up during one’s
lifetime.
These recent explorations in neuroeconomics owe much to the work of
psychologist Danny Kahneman of Princeton University, whose extensive
analyses of how choices are made received a Nobel Prize in economics in 2002.
Kahneman set his explorations of how we make choices within the context of
economic theory, explicitly contributing to our understanding of how our
evaluations enable us to make choices, especially in ambiguous situations.
Valuation
The important distinction I have made between preference and utility is
highlighted by experiments performed by psychologists Douglas Lawrence and
Leon Festinger in the early 1960s at Stanford. In these experiments, animals
were taught to perform a task and then, once it had been learned, to continue to
perform it. For example, a male rat must learn to choose between two alleys of a
V-shaped maze. At the end of one of these alleys, which had been painted with
horizontal stripes, is a female rat in estrus. The placement of the female is
switched more or less randomly between the alleys, but the stripes move with
her: the striped alley always leads to her. The rat quickly catches on; he has
developed a preference for the stripes because they always lead him to the
female he desires.
Once the rat knows where to find the female, does he stop? No. He
increases his speed of running the maze, since he needs no more “information”
(in scientific jargon, no further reduction of his uncertainty) in order to make his
choice of paths to reach her. But we can set a numerical “value” on running speed, on how
fast the male runs down the alley. This value measures the level, the setpoint, of
the male’s desire for the female.
Rewards (and deterrents) thus display two properties: they can provide
information—that is, reduce uncertainty for the organism—or they can bias
choices, placing a value on the organism’s preferences.
The experiments performed at Stanford by Festinger and Lawrence showed
that the “laws” that govern our learning and those that govern our performance
are different and are often mirror images of one another. The more often, and the
more consistently, a reward is provided, the more rapid the learning. During
performance, such consistency in reward risks catastrophe: when the reward
ceases, so does the behavior, sometimes to the point that all behavior ceases and
the animal starves. In human communities, such a situation arises during
excommunication: a dependable source of gratification is suddenly withdrawn,
sometimes leading to severe and incurable depression.
A slower and more inconsistent schedule of reward results in slower
learning, but it assures that our behavior will endure during the performance
phase of our experience. This is due to the buildup of the expectation that the
current lack of reward may be temporary.
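This mirror image can be caricatured with a simple rule of thumb. The sketch below is an illustration under an assumed rule, not Lawrence and Festinger’s model: the animal keeps responding during extinction only as long as the current run of unrewarded trials remains plausible under the reward rate it experienced in training. The function name and cutoff are invented for the example.

```python
# Illustrative caricature of why intermittent reward sustains behavior:
# responding persists while a run of non-rewards is still plausible
# under the reward rate learned in training. Parameters are arbitrary.

def extinction_run(learned_reward_prob, plausibility_cutoff=0.01):
    """Trials of continued responding after reward stops entirely."""
    p_miss = 1.0 - learned_reward_prob  # chance of an unrewarded trial
    run = 0
    plausibility = 1.0  # probability of the observed run of misses
    while plausibility >= plausibility_cutoff:
        run += 1
        plausibility *= p_miss  # one more miss in a row
    return run

print(extinction_run(1.0))  # trained on constant reward: quits at once
print(extinction_run(0.3))  # trained on intermittent reward: persists
```

Under constant reward the very first unrewarded trial is already inconsistent with everything the animal has learned, so responding collapses; under intermittent reward a long dry spell still looks temporary—which is just the buildup of expectation described above.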
In Summary
My surprising experimental results, obtained in exploring the effects of
amygdalectomy on expectation, led me to interpret them in terms of economic
theory. I had framed the results of the amygdala experiments on changes in
emotion and attention as indicating that emotions were based on familiarity, a
memory process, and that the prospective aspects of familiarity controlled
attention and thus led to expectations. In turn, expectations were shown to be
composed of three processes that were formed by three separate brain systems:
preference, organized hierarchically, formed by the posterior systems of the
brain; estimations of the probability of reward by the prefrontal systems of the
brain; and desirability, an intensive dimension, formed by the amygdala (and
related systems). This intensive dimension is non-hierarchical and ordinarily is
bounded—a stop process—by the context within which it occurs. The
composition of that context is the theme of the next chapter.
Chapter 15
Attitude
Wherein the several brain processes that help organize our feelings are brought
into focus.
“A wise teacher once quoted Aristotle: ‘The law is reason free from
passion.’ Well, no offense to Aristotle, but in my three years at Harvard
I have come to find that passion is a key ingredient to the study and
practice of law and life.
“It is with passion, courage of conviction, and strong sense of self that
we take our next steps into the world, remembering that first
impressions are not always correct. You must always have faith in
people, and most importantly, you must always have faith in yourself.”
— Elle’s commencement address in the movie Legally Blonde
Efficiency: Remembering What Isn’t
When I first began to remove the amygdala, I did so on one side at a time.
Later, I was able to remove both amygdala in one operation. While the initial
single-sided removal took from four to six hours, my most recent removals of
both amygdala took forty minutes—twenty minutes per side. I have often stood
in wonder when a skilled artisan such as a watchmaker, pianist or sculptor
performs a task with an ease and rapidity that is almost unbelievable.
My experience shows that efficiency in performance develops by leaving
out unnecessary movements. This chapter reviews the evidence that the Papez
Circuit, the hippocampal-cingulate system, has a great deal to do with processing
our experience and behavior efficiently, and how efficiency is accomplished.
One of the Stanford graduate students in my class referred to the
hippocampus as “the black hole of the neurosciences.” It is certainly the case
that more research is being done on the hippocampus than on any other part of
the brain (except perhaps the visual cortex) without any consensus emerging as
to what the function of the hippocampus actually is. Part of the reason for the
plethora of research is that hippocampal tissue lends itself readily to growth in a
Petri dish: such neural tissue cultures are much easier to work with, chemically
and electrically, than is tissue in situ. These studies clarify the function of the
tissue that composes the hippocampus but do not discern its role in the brain’s
physiology or in the organism’s behavior.
Another reason for the difficulty in defining the function of the
hippocampus is that there is a marked difference between the hippocampus of a
rat, the animal with which much of the research has been accomplished, and the
hippocampus of a primate. In the primate, the upper part of the hippocampus has
shriveled to a cord of tissue. The part of the hippocampus that is studied in rats is
adjacent to, and receives fibers from, the part of the isocortex that receives an
input from muscles and skin. Thus, removal of the hippocampus in rats changes
their way of navigating the shape of their space.
What remains of the hippocampus in primates is mainly the lower part. This
part is adjacent to, and receives most of its input from, the visual and auditory
isocortex. Studies of the relationship of the hippocampus to behavior in primates
have therefore focused on changes in patterns of behavior beyond those used in
navigating the shape of space.
Just as in the case of research on the amygdala, almost all research on
damage to the hippocampus in primates, including humans, has converged on the
finding of a very specific type of memory loss: the memory of what is happening
to the patient personally fails to become familiar. Again it is memory, and
familiarity, not the processing of information, or for that matter of emotion, that
is involved.
Fortunately, I have found a way to work through the welter of data that are
pouring into the “black hole” of hippocampal research: I can begin with the
results of research that my colleagues and I performed in my laboratory. The
most important of these was based on the observation that when we navigate our
world, what surrounds the target of our navigating is as important as the target
itself. When I steer my way through an opening in a wall (a door), I am not
aware of the wall—unless there is an earthquake. I am familiar with the way
rooms are constructed and, under ordinary circumstances, I need not heed, need
not attend the familiar. The example of the earthquake shows, however, that
when circumstances rearrange the walls, the “representation” of walls, the
memory, the familiar has been there all along. The experiments we performed in
my laboratory showed that the hippocampus is critically involved in “storing”
that familiar representation.