
The Form Within copyright © 2013 Karl H. Pribram

All rights reserved.


Printed in the United States of America
First Edition

No portion of this book may be reproduced in any fashion, print, facsimile, or electronic, or by any method yet to be developed, without the express written permission of the publisher.

For information about permission to reproduce selections from this book, write
to:

PROSPECTA PRESS
P.O. Box 3131
Westport, CT 06880
www.prospectapress.com

Book production by Booktrix


Cover illustration of “Nested Brain” by Dr. Karl H. Pribram & Katherine
Neville;
Graphic representation by HMS Graphic Design Photography;
Cover design by Barbara Aronica-Buck
Book design by Katherine Neville

Hardcover ISBN: 978-1-935212-80-5


Ebook ISBN: 978-1-935212-79-9
WASHINGTON ACADEMY OF SCIENCES

The Washington Academy of Sciences was incorporated in 1898 as an affiliation of the eight Washington D.C. area scientific societies. The
founders included Alexander Graham Bell and Samuel Langley,
Secretary of the Smithsonian Institution. The number of Affiliated
Societies has now grown to over sixty. The purpose of the Academy has
remained the same over the years: to encourage the advancement of
science and “to conduct, endow, or assist investigation in any
department of science.”

The Academy offers distinguished scientists the opportunity to submit a book to our editors for review of the science therein. The manuscript
receives the same rigorous scientific review accorded articles to be
published in our Journal. If the reviewer(s) determine(s) that the
science is accurate, the book may use the approved seal of the
Academy to demonstrate that it has met the required criteria.

The Academy is particularly proud to award the seal to Dr. Karl Pribram who, in 2010, received the Academy’s award for his
Distinguished Career in Science.
Table of Contents
Prologue: The Quest
Preface: The Form Within
Formulations
Chapter 1: Correlations
Chapter 2: Metaphors and Models
Chapter 3: A Biological Imperative
Chapter 4: Features and Frequencies
Navigating Our World
Chapter 5: From Receptor to Cortex and Back Again
Chapter 6: Of Objects and Images
Chapter 7: The Spaces We Navigate and Their Horizons
Chapter 8: Means to Action
A World Within
Chapter 9: Pain and Pleasure
Chapter 10: The Frontolimbic Forebrain: Initial Forays
Chapter 11: The Four Fs
Chapter 12: Freud’s Project
Choice
Chapter 13: Novelty: The Capturing of Attention
Chapter 14: Values
Chapter 15: Attitude
Chapter 16: The Organ of Civilization
Chapter 17: Here Be Dragons
Re-Formulations
Chapter 18: Evolution and Inherent Design
Chapter 19: Remembrance of Things Future
Chapter 20: Coordination and Transformation
Chapter 21: Minding the Brain
Applications
Chapter 22: Talk and Thought
Chapter 23: Consciousness
Chapter 24: Mind and Matter
Chapter 25: Meaning
Appendix A: Minding Quanta and Cosmology
Appendix B: As Below, So Above
Prologue
The Quest

When Clerk-Maxwell was a child it is written that he had a mania for having everything explained to him, and that when people put him off
with vague verbal accounts of any phenomenon he would interrupt
them impatiently by saying, ‘Yes; but I want to know the particular go
of it!’ Had the question been about truth, only a pragmatist could have
told him the particular go of it. . . . Truths emerge from facts; but they
dip forward into facts again and add to them; which facts again create
or reveal new truth. . . . And so on indefinitely. The facts themselves
are not true. They simply are. Truth is the function of beliefs that start
and terminate among them.
—William James, Pragmatism: A New Name for Some Old Ways of Thinking, 1931
Have you noticed the massive interest shown by the scientific community
in studies of mind and brain? Scientific American is publishing a new journal,
Mind; Psychology Today is headlining studies of the workings of the brain;
Popular Science and Discover are filled with tales of new discoveries that relate mind to brain, and 37,000 neuroscientists are working diligently to make these discoveries possible.
It has not always been this way. When, as an undergraduate at the
University of Chicago in the 1930s, I took a course on the nervous system, we
stopped short of the brain cortex—and what we now know as the “limbic
systems” was dismissed as a most mysterious territory called the “olfactory
brain.”
It was this very lack of knowledge that attracted me: The Form Within is the
story of my adventures in charting this most important world about which we
then knew so little. In the process, the data I gathered and the colleagues with
whom I worked changed what opinions I had brought to the research and added
sophistication to my interpretations. In retrospect what impresses me—and what
I urge my students to appreciate—is the long time required, sometimes decades
of passion, persistence and patience, for an idea to ripen into a body of hard
evidence and a theoretical formulation that I was finally able to communicate
clearly to others.
The Form Within tells the story of my voyage of discovery during almost a
century of research and theory construction. The story is set within the frame of
the discoveries of the previous century, the 19th, which totally revised our
understanding in the Western world of how our brains work. During the 17th
century and earlier, the human brain was thought to process air, often termed
“spirits.” With the advent of the sciences of chemistry and of electricity, a basic
understanding of brain processes was attained that, to a large extent, resembles
our current views. Writings like those of William James and Sigmund Freud,
around the juncture of the two centuries, will be referenced repeatedly in the
following chapters.
My own story has a somewhat different perspective from that which is
currently available in most neuro- and psychological science books and journals.
For example, my interactions with the 20th century “shapers of viewpoints” in
the brain and behavioral sciences—B.F. Skinner and Karl Lashley, experimental
psychologists; Wilder Penfield, neurosurgeon; Arthur Koestler, author; John
Eccles, neurophysiologist and Karl Popper, philosopher—expose the questions
that puzzled us rather than the dogma that has become associated with their
names.
Thus, The Form Within is the story of my quest. It charts a voyage of
discovery through 70 years of breakthroughs in brain and psychological
research. More than a hundred students have obtained their doctoral and
postdoctoral training in my laboratory and gone on to productive and in some
cases outstanding careers. Collaboration with scientists on five continents has
formed and continues to form my views. The voyage is not over: each month
I’ve had to add to, and in some cases revise, this manuscript. But as a work in
progress, The Form Within is to me an inspiring chronicle of a most exciting
voyage through what, often, at the time, has seemed to be a storm of
experimental and theoretical findings. As you will see, the concept of form—the
form within as formative causation—provides a novel and safe vessel for this
journey.
Preface
The Form Within

The first thing we have to say respecting what are called new views . . .
is that they are not new, but the very oldest of thoughts cast into the
mold of these new times.
—Ralph Waldo Emerson, “The Transcendentalist,” 1842
A major thrust in our current changing view of humanity has been our
growing understanding of what our brain does and how it does it. As always, in
times of change, controversies have arisen. Not surprisingly, some of the most
exciting findings, and the theories that are derived from them, are being swept
under the rug. In The Form Within I have enjoyed placing these controversies
within the larger context of complexity theory, framed by two ways of doing
science from the time of classical Greece to the present. Within such a frame I
bring to light findings that deserve our attention which are being ignored, and I
clarify the reasons why they are being ignored.
Critical to any communication of our brain’s complex inner activity and its
interaction with the world we navigate is the concept of “form”: The Form
Within. We are in the midst of an in-form-ation revolution, begun in the latter
half of the 20th century, that is superseding the Industrial Revolution of the 19th
and early 20th centuries. The Industrial Revolution had created a vast upheaval
through the impact of changes that occurred in our material world. The
information revolution that is now under way is changing the very form, the
patterns of how we navigate that world.
This revolution has many of the earmarks of the Copernican revolution of
earlier times. That revolution inaugurated a series of scientific breakthroughs
such as those by Galileo, Newton, Darwin and Freud. Though often unintended,
all of these and related contributions shifted humans from center stage to ever
more peripheral players in the scientific scenario.
The current information revolution redresses and reverses this shift: Once
again science is showing that we humans must be perceived as the agents of our
perceptions, the caretakers of the garden that is our earth, and as actively responsible for our fellow humans. As during the Copernican revolution,
the articulation of new insights is halting, often labored, and subject to revision
and clarification. But the revolution is under way, and the brain and
psychological sciences are playing a central role in its formulation. This
orientation toward the future is what has made The Form Within so interesting
to write.
This frame provides the form in which we communicate scientific research
and theory. Form is the central theme of this book. Form is defined in Webster’s
dictionary as “the essential nature of a thing as distinguished from its matter.”
What has been lacking in the brain sciences is a science-based alternative to
matter-ialism. Form provides such an alternative.
The dictionary continues its definition. “Form” forms two currents: form as
shape and form as pattern. Twentieth-century brain scientists felt comfortable in
their explanations using descriptions of shape but less so with descriptions of
pattern. Matter has shape. Most brain scientists are materialists. By contrast,
communication by way of the transmission of information is constituted of
pattern. Despite paying lip service to describing brain function in terms of
information processing, few classrooms and textbooks make attempts to define
brain processing in the strict sense of measures of information as they are used in
the communication and computational sciences. A shift from the materialism
(explanations in terms of matter) of the Industrial Revolution of the 19th and
early 20th centuries, to this “form-al” understanding of in-form-ation as pattern
is heralding the Communications Revolution today.
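
To make concrete what “measures of information in the strict sense” refers to, here is a minimal sketch in standard notation (the measure is Shannon’s; this particular illustration is mine rather than the book’s): a source that emits symbols x with probabilities p(x) carries, per symbol, the entropy

    H(X) = -\sum_{x} p(x)\,\log_2 p(x) \quad \text{bits,}

so that a fair coin toss conveys exactly one bit. The measure is pure pattern: it counts distinguishable alternatives and says nothing about what any symbol means.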
But measures of information per se are not enough to convey the
impressions and expressions by which we think and communicate. When the
man in the street speaks of information, he is concerned with the meaning of the
information. Meaning is formed by the context, the social, historical and material
context within which we process the information. Thus, in The Form Within I
address meaning as well as information.
The distinction between form as shape and form as pattern has many
ramifications that make up the substance of the arguments and explanations I
will explore throughout this book. For instance, as Descartes was declaring that
there was a difference between “thinking” (as pattern) and “matter” (as shape),
he was also providing us the way to trans-form shape and pattern by bringing
them together within co-ordinates—what we know today as “Cartesian
coordinates.”
The resolution of our question “what does the brain do and how does it do
it” rests on our ability to discern, in specific instances, the transformations, the
changes in coordinates, that relate our brain to its sensory and motor systems and
the ways in which different systems within the brain relate to one another. The
Form Within gives an account of many of these transformations, especially those
that help give meaning to the way we navigate our world.
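
A minimal illustration of what a change of coordinates amounts to (the example is mine, not drawn from the text): a rotation of the plane through an angle \theta re-describes the very same point in a new frame,

    \begin{pmatrix} x' \\ y' \end{pmatrix} =
    \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
    \begin{pmatrix} x \\ y \end{pmatrix},

so that nothing about the point, or the shape to which it belongs, changes; only the pattern of numbers used to locate it does.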

The Cortical Primate


Toward the end of the Last Ice Age—about 10,000 years ago—a new breed
of primates began to leave its mark on Earth. These creatures possessed a large
brain cortex that allowed them to refine—that is, to fine-tune—their experience
by reflecting and acting upon it.
In the autumn of 1998, Katherine Neville and I were granted rare private
invitations to visit the prehistoric painted caves at Altamira, in Spain. We arrived
on a cold, blustery day, and we were happy to enter the shelter of the ancient
caves. Despite the many photos we had seen, nothing prepared us for the power
of these images, hand colored more than 12,000 years ago. I immediately
recalled my colleagues’ attempts to get chimpanzees (who can do so many other
things very well) to make paintings. The difference between the crude swishes
that the chimpanzees lobbed onto flat surfaces and these detailed figures of
gazelles and oxen that adorned the rough stone walls and ceilings in the caves
revealed, in a few seconds, the difference between us humans and other
creatures. By what process had swipes of ochre been refined into such exquisite
painting?

1. Altamira cave painting

What also impressed me was the location of the paintings: Today the floor
where we walked is hollowed out, but when the paintings were made, the ceiling
was so low that no one could stand erect. During prehistoric times, the painters
must have lain on their backs in this restricting passage. Anyone who has tried to
cover a ceiling using a paintbrush has experienced the aches that go along with
doing something from an unnatural position. And by what light were the
paintings executed? There is no evidence in the caves of smoke from torches.
How, in that ancient time, did the artists illuminate those caves? Just how and to
what purpose did they re-form the world of their caves?
Early human beings refined and diversified—formed—their actions to re-form their world with painting and song. Storytellers began to give diverse
meanings to these diverse refinements by formulating legends and myths. In
turn, their stories became accepted as constraints on behavior such as laws
(modes of conduct) and religious injunctions (from Latin, religare, “to bind
together, to tie up”). Today we subsume these forms of cultural and social
knowing and acting under the heading “the humanities.”
Along with stories, early human beings began to formulate records of
diverse observations of recurring events that shaped their lives: the daily cycle of
light and darkness as related to the appearance and disappearance of the sun; the
seasonal changes in weather associated with flooding and depletion of rivers; the
movements of the stars across the heavens and the more rapid movements
conjoining some of them (the planets) against the background of others; the tides
and their relationship to the phases of the moon—and these to the patterns of
menstrual cycles. Such records made it possible to anticipate such events, and
subsequently to check these anticipations during the next cycle of those
recurring events. Formulating anticipation of their occurrence gave meaning to
our observations; that is, they formed predictable patterns. Today we subsume
these patterns, these forms of knowing about and acting on the world we
navigate, under the heading “the sciences.”
There has been considerable dissatisfaction among humanists with the
materialist approach taken by scientists to understanding our human condition.
What has been lacking is a science-based alternative to materialism. Form as
“the essential nature” of our concerns provides us such an alternative.

Form as Shape and Form as Pattern


Both form as shape and form as pattern have roots in early Greek
mathematics. Today we associate form as shape with the name Euclid, who
wrote the first known axiomatic theory (a theory based on a few assumptions
from which propositions are logically deduced). Euclid’s assumptions were
devoted to geometry. We were taught Euclid’s geometry in high school and we
are thus familiar with it. In geometry (that is, earth-metrics) we measure—in the
British system—what is in our Yards as we pace them with our Feet, or in the
case of the height of horses, with our Hands. Geometry is a hands-on way of out-
lining shapes. In Euclid’s geometry, parallel lines do not meet.
By contrast, form as pattern is today associated with the name Pythagoras.
Pythagoras is an elusive figure who is known principally through the writings of
the “school” he founded. We are currently acquainted with Pythagoras through
trigonometry, the study of tri-angles that are archetypes, “forms or patterns from which something develops.” The term “archetypes” is derived from the Greek typos, “a strike or impression,” as in tapping. Pythagoras noted that tapping pieces of metal of different sizes gave rise to different sounds. He later noticed that, in a similar fashion, tapping taut strings of different lengths gave rise to oscillations in the string that appeared as waves. Different sounds were produced by strings of different lengths. A string divided in two gives rise to sounds an octave higher than the original. Measurement of the lengths of strings that give rise to different sounds could be ordered, that is, given ciphers or numbers. From the archetype of waving strings,
music was derived.
The cyclic oscillations of the strings that appear as waves vary in frequency (density) according to the length of the oscillating strings. An insight that seems ordinary to us but has had to
be repeatedly achieved anew is that the wave-forms can also be represented by a
circle: perhaps noted by the Pythagoreans as the form taken by sand spread
lightly on the surface of a drum. Achieving this seminal insight will be
encountered on several occasions in the chapters of The Form Within. But we
come across it first with Pythagoras.
Thus, once the circle form is recognized, numbers can be assigned to
different parts of the circle. Pythagoras did this by forming a tri-angle within the
circle, a triangle one of whose angles rests on a point on the circle. Arranging the
numbers within a cycle as a circle allowed Pythagoras to tri-angulate (as in
trigonometry) a location on the circle and describe the location by its angles (by
way of their sine and cosine). These angles could measure contexts beyond those
that we can lay our hands on: go to the nearest railroad track—the tracks do not obey the Euclidean postulate that parallel lines do not meet. Rather, the tracks appear to meet at an angle near the horizon, which forms a circle.
Pythagoras had therefore two number systems available: the frequency or
density of the wave-forms and the numbers forming each cycle.
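
These two number systems can be restated in modern notation (a sketch in my notation, not the book’s): a string of length L, fixed at both ends and carrying waves at speed v, sounds a fundamental frequency

    f = \frac{v}{2L}, \qquad \text{so halving the length gives} \qquad f' = \frac{v}{2(L/2)} = 2f,

one octave higher. A single cycle of the resulting wave-form can then be wrapped onto a circle by assigning each moment t the angle \theta = 2\pi f t and reading the wave off through its sine and cosine.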
Note again that geometry develops an active, out-lining method of
measurement that is limited in the range being investigated. By contrast,
relationships among numbers engage an in-formative, an imaginative and
inventive proactive procedure (as in trying to assess the meaning of the apparent
convergence of railroad tracks at our horizon). Form as shape and form as
pattern have led to two very different ways of guiding our approach to the world
we navigate.
In the succeeding chapters of The Form Within, I describe a newer, still
different type of formulation: proactive, dynamical “self-organizing” processes
that occur in our brains and lead to progressively refining our observations, and
thus to refining the targets of our actions and the diversity of our perceptions.
This is the very essence of both the humanities and the sciences.
Sounds forbidding? It is. But accompany me on this fascinating and
amazing journey and you may be surprised at how comprehensible the many
facets of this three-pound universe have become today. And how an intimate
knowledge of your brain will affect the way you transform your world.

The Story
Most of The Form Within is organized according to topics and issues of
general interest as viewed from the vantage of those of us working within the
sciences of brain, behavior and mind. However, in the first three chapters, I
address issues such as how to proceed in mapping correlations, a topic totally
ignored in the current surge in using fMRI to localize the relations between brain
and behavior; how I came to use metaphors and models, which is just coming
into focus in doing experiments in psychology; and how I came to describe self-
organizing processes in the fine-fiber webs and the circuitry of the brain.
The main body of the text begins with the topic of perception. Of special
interest is the fact that, contrary to accepted views, the brain does not process
two-dimensional visual perceptions; rather, the brain handles many more
dimensions of perceptual input. This conclusion is reached because I make a
distinction between the functions of the optic array and those of retinal
processing. Furthermore, evidence shows that there is as much significant input
from the brain cortex to the receptors as there is input from the environment.
In a subsequent chapter, I present evidence to show that the brain’s control
of action is by way of “images of achievement,” not just motor programs—and
the resultant support of the currently dismissed “motor theory” of language
acquisition.
And there is more: The neural processing of pleasure, which accounts for
the intertwining of pain- and temperature-transmitting fibers in the spinal cord, is
shown to be straightforward; so is the evidence for how brain processes are
involved in setting values and making choices, as well as how brain processes
compose, through interleaving cognitive with emotional/motivational processes,
the formation of attitudes.
Finally, a series of chapters is devoted to new views on encompassing
issues such as evolution, memory, consciousness, mind/brain transactions, and
the relation between information processing and meaning. These chapters
challenge, by presenting alternatives, the prevailing views presented by
philosophers of science, especially those concerning the relation between brain
and mind. Much of what composes these and other chapters of The Form Within
cannot be readily found in other sources.
The first three chapters that follow this preface are organized, somewhat in
the order that I utilized them, according to three different types of procedure that
I used to study what the brain does and how it does it: 1) Correlations, 2)
Metaphors and Models, and 3) The Biological Imperative.
By correlations I mean discoveries of relationships within the same context,
the same frame of reference (the same coordinate systems). We humans, and our
brains, are superb at discovering correlations, sometimes even when they are
questionable. Such discoveries have abounded during the past two decades, by
way of PET and fMRI image processors, techniques that have provided a
tremendous surge of activity in our ability to make correlations between our
brain, behavioral and mental processes. But correlations do not explain the
processes, the causes and effects that result in the correlations.
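
A toy demonstration of this caveat may help (the sketch is mine, in Python, not an example from the text): two recorded signals driven by a shared hidden process correlate almost perfectly, yet the correlation by itself reveals nothing about the mechanism that produced it.

    import random
    import statistics

    def pearson_r(xs, ys):
        # Plain Pearson correlation coefficient.
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(1)
    # A hidden common cause drives both a "brain" signal and a "behavior"
    # score; neither influences the other directly.
    hidden = [random.gauss(0.0, 1.0) for _ in range(1000)]
    brain = [h + random.gauss(0.0, 0.3) for h in hidden]
    behavior = [h + random.gauss(0.0, 0.3) for h in hidden]

    print(round(pearson_r(brain, behavior), 2))  # high (~0.9), yet mute on mechanism

The high coefficient is real, but it licenses no claim about which process drives which, or through what pathway.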
Metaphors and models, by contrast, are powerful tools that we can use to
guide our explorations of process. We can model a brain process that we are
exploring by doing research on a device, or with a formulation that serves as a
metaphor, and we can then test in actual brain research whether the process that
has been modeled actually works. The Form Within uses telephone
communications, computer programs, and optical-engineered holographic
processes wherever relevant. For the most part our metaphors in the
brain/behavior/mind sciences come from engineering and mathematics.
Brain and behavioral research (such as that obtained through correlation
and the use of metaphors) often develops questions that lead directly to brain
physiological research. My research led to understanding a biological
imperative: the self-organizing property of complex processing.
In subsequent chapters of The Form Within—the topic-oriented chapters—I
have noted the results of the application of the insights obtained as they become
relevant.

In Summary
1. The Prologue and Preface of The Form Within have stressed that we are
experiencing a period of revolution in science akin to that which shaped our
views during the Copernican revolution. We humans are reasserting the
centrality of our viewpoints and our responsibility toward the environment
within which we are housed.
2. This reassessment of our centrality can be formulated in terms of a neglect
of a science based on pattern that has resulted in a science based on matter,
an overarching materialism.
3. Form deals with the essence of what is being observed, not only its matter.
Form comes in two flavors: form as shape and form as pattern.
4. Current science, especially the brain and behavioral sciences, has been
comfortable with explanations in terms of form as the shapes of matter. But
contemporary science is only rarely concerned with form as patterns.
5. The distinction between form as shape and form as pattern is already
apparent in Greek philosophy. Euclid’s geometry, as taught in our high
schools, uses descriptions of form as shapes. Pythagoreans have dealt with
patterns of numbers that describe wave-forms, angles and circles, as in
tonal scales and in trigonometry.
6. In The Form Within, when appropriate, I trace these two different
formulations of research and theory development in the
brain/behavior/experience sciences.
7. Within this context, I develop the theme that the brain/behavior/mind
relationship receives a major new explanatory thrust based on the observed
biological imperative toward self-organization.
Formulations
Chapter 1
Correlations

Wherein I explore the shapes and patterns that inform the brain’s cortex.

I said to them: ‘Let’s work, we’ll think later.’ To have intentions in advance, a project, a message—no, never! You must begin by plunging in. If not, it’s death. You can think—but afterwards, after it’s finished. One form gives rise to another—it flows like water.
—Joan Miró, Selected Writings and Interviews, edited by Margit Rowell, 1998
During the 1940s, I worked at the Yerkes Laboratory of Primate Biology
with its director, Karl Lashley, Harvard professor and zoologist-turned-
experimental-psychologist. Those interactions with Lashley were among the
most fruitful of my career. In this chapter as well as subsequent ones, I discuss
the fruits of our discussions and research.
I had chosen Jacksonville, Florida, to practice neurosurgery because of its
proximity to the Yerkes Laboratory. Lashley had been looking for a
neurosurgeon to help operate on the brains of some of the chimpanzees housed at
the laboratories. We were both delighted, therefore, when, in 1946, Lashley gave
me an opportunity to inaugurate some research.
The time was propitious. Among those working at the laboratory was
Roger Sperry, who later received a Nobel Prize for his experiments dividing the
human brain into left and right hemispheres and was at that time transplanting
the nerves of amphibians. Donald Hebb was there as a postdoctoral student,
writing a book that was to become extremely influential but a bone of contention
between him and Lashley. The laboratory was investigating the effect of the
deprivation of vision in infancy upon later visual effectiveness, a set of studies to
which I made a brief but seminal contribution. Most animals recover almost
completely from temporary loss of vision in infancy, but humans develop a more
permanent disability called “amblyopia ex anopsia” (weakness of vision due to
failure to use the eyes). Neither Lashley nor Hebb had known this at the time,
but Don Hebb immediately made this developmentally produced condition a
centerpiece of his subsequent work.
Lashley was using techniques for testing primate behavior that he had
refined from those already widely used in clinical neurology and those he had
developed in testing rats. In Lashley’s theoretical writings, based on his
experience in animal research, mostly with rats, he was known for being a
proponent of the view that the brain’s cortex operated pretty much as an
undifferentiated unit. On the basis of his research, he had formulated “the laws
of mass action and equipotentiality,” which referred to the findings that the change in problem-solving behavior following brain damage a) is proportional to the amount of tissue damaged and b) is essentially independent of the location of the damage. Gary Boring—another Harvard professor of psychology and the historian of experimental psychology—noted that, inadvertently, Lashley’s view of brain function made it easy for psychologists to ignore the details of brain anatomy and physiology.
Lashley’s view was decidedly different from mine, which was based on my
experience in brain surgery that consisted of removing tumors whose location
had to be found (at a time before EEGs and brain scans) by observing the
specific behavioral changes the tumors had produced.
I had read Lashley’s views about the brain cortex in his 1929 book Brain
Mechanisms and Intelligence, which I had bought second-hand for ten cents. I
doubted that anyone would currently hold such views, even teasing Lashley after
I got to know him by saying that his ideas seemed to be worth only the ten cents
I had paid for them. But Lashley did indeed still hold these views, and he was
ready to defend them. During a decades-long and rewarding friendship, we
debated whether separate behavioral processes such as perception, thought,
attention and volition were regulated by spatially separate brain systems,
“modules” in today’s terminology, or whether all such behaviors were regulated
by patterns of processes distributed throughout the brain.
Two decades later, in 1960, George Miller, Eugene Galanter and I would
articulate my side of the argument in our book Plans and the Structure of
Behavior as follows:

There is good evidence for the age-old belief that the brain has
something to do with . . . . mind. Or to use less dualistic terms, when
behavioral phenomena are carved at their joints, there will be some
sense in which the analysis will correspond to the way the brain is put
together . . . . The procedure of looking back and forth between the two
fields [psychology and neurophysiology] is not only ancient and
honorable—it is always fun and occasionally useful.

By contrast, Lashley noted that

Here is the dilemma. Nerve impulses are transmitted over definite, restricted paths in the sensory and motor nerves and in the central nervous system from cell to cell, through definite intercellular connections. Yet all behavior seems to be determined by masses of excitation, by the form or relations or proportions of excitation within general fields of activity, without regard to particular nerve cells. It is the pattern and not the element that counts. What sort of nervous organization might be capable of responding to a pattern of excitation without limited, specialized paths of conduction? The problem is almost universal in the activities of the nervous system and some hypothesis is needed to direct further research.

My task was set.


Inaugural Experiments
Lashley and I decided to engage in experiments that addressed this issue.
Lashley resected small strips of cortex, a cyto-architectural (cellular
architectural) unit, while I chose to remove, in one session, practically all of a
region that had a non-sensory input from the thalamus.
To anticipate—the results were as might be expected: Lashley was
delighted to show that his resections had no effect whatsoever on the behavior of
his monkeys, while my resections produced various sensory-related effects that
had to be analyzed anatomically, neurophysiologically, and behaviorally by
further experimentation.
I began my research by asking what might be the functions of the so-called
silent areas of the brain cortex. Both in the clinic and in the experimental
laboratory vast areas of cortex failed to yield their secrets. When tumors or
vascular accidents damaged these regions, no consistent results were observed,
and when electrical excitation was applied to the regions, no results at all were
obtained.
The clinical evidence especially was puzzling. Sometimes, cognitive
deficits were obtained, and these were attributed to associations that occurred
between sensory inputs or between sensory inputs and higher-order brain
processes. The deficits were always sensory specific: for instance, visual, tactile,
or auditory (called “aphasia” when restricted to speech). As von Monakow
pointed out in 1914, this raised the problem as to whether such deficits were
always due to simultaneous involvement of the “silent” cortex and the sensory-
specific cortex, or whether they could be produced by damage to the silent
cortex alone. I set out to resolve this issue by working with selective removals of
the silent cortex in monkeys without damaging the primary sensory input to the
brain.
The term “association” was given to these silent cortical areas during the
1880s by Viennese neurologists because, in patients, damage, when it was
extensive, resulted in cognitive disturbances that were not obtained when
damage was restricted to regions that received one or another specific sensory
input. The Viennese neurologists were intellectually aligned with British
“associationist” philosophy. The British philosophers and, consequently, the
Viennese neurologists, claimed that higher-order mental processes such as
cognition are patched together in the brain by association from lower-order
sensory processes such as vision, audition, tactile and muscle sensibilities. This
view is still dominant in British and American neuroscience and philosophy.
Against the “associative” view is the fact that the cognitive disturbances are
most often selectively related to one or another sensory modality. This
observation led Sigmund Freud to coin the term “agnosia” (failure to know) as a
modifier for such cognitive disturbances: for instance, visual agnosia, tactile
agnosia. Freud did not share the associationist view. He proposed an alternative,
an analysis of differential processing, in his classical book On Aphasia.

The Intrinsic Cortex of the Brain


The only way to resolve whether cognitive difficulties could arise from
damage that is restricted to the silent cortex was to work with animals, work
enabling me to control the extent of removal and test the animals over a period
of years. The similarity of the monkey brain to the human made the monkey an
ideal choice for this research.
Monkeys have an added advantage: once a part of their brain has been
removed, the rest of the brain becomes reorganized and reaches a stable
condition after a few months. The monkeys can then be tested daily for years in
that stable condition, allowing us to pursue a full exploration of the condition the
monkey and his brain are now in. The animals are precious and we treat them as
we would treat any treasured creature.
The results of my initial program were immediately successful. With my
collaborators, I was able to show that parts of the silent cortex are indeed
responsible for organizing sensory-specific cognitive functions, without any
involvement of adjacent primary sensory systems. Monkeys have a sufficient
reach of cortex that lies between the sensory receiving regions to allow careful
removal without inadvertent damage to these regions or the tracts reaching them
from sensory receptors. My experimental results showed that indeed sensory-
specific deficiencies were produced by resections of cortex that had no direct
connections with sensory receptors or motor controls. I labeled these parts of the
cortex “intrinsic” (following a nomenclature introduced for the thalamus by Jerzy Rose, professor at the Johns Hopkins University) and “extrinsic” (those parts of the cortex that had “direct” connections with receptors and effectors). It
is now generally accepted that processing in parts of the “intrinsic” cortex is
sensory specific, while in other parts a variety of inter-sensory and sensory-
motor processes are organized.

My success was due to my bringing to bear neurosurgical techniques that I had learned to apply in humans, techniques that
allowed me to gently make extensive removals of cortex without
inadvertent damage to underlying and adjacent tissue. I obtained
dramatic changes in behavior: the monkeys were markedly impaired
when they had to make choices despite their unimpaired performances
on tests of perception. I then restricted the extent of removal and
demonstrated that the impairment of choices is not a “global
cognitive” difficulty but is related to one or another sense modality.
Today, these more restricted regions are commonly known as making
up the “sensory-specific association cortex” (an oxymoron).

A brilliantly conceived study performed on humans at the Weizmann Institute in Israel has supported the conclusions I obtained with monkeys.
Subjects were shown a variety of videos and fMRIs were recorded while they
were watching. No response was asked for. Rafael Malach concludes:

Examining fMRI activations under conditions where no report or introspective evaluation is requested—such as during an engaging
movie—reveals a surprisingly limited activation . . . the very same
networks which are most likely to carry high order self-related
processes appear to shut off during intense sensory-motor perception.

Local networks in the posterior visual cortex were the only ones activated
by a visual percept. Simple visual stimulation engaged the extrinsic occipital
region; greater involvement, as in viewing movies, engages the intrinsic visual
systems of the temporal lobe. (Malach, in his presentation, used the terms
“extrinsic” and “intrinsic,” which I had used decades earlier.) A fronto-parietal
spread of activation occurs with self-related processing such as motor planning,
attentional modulation and introspection.

Mapping the Brain


Mapping the brain’s relationship to “faculties of mind” had been a favorite
pastime of neurologists since the 19th century, which had spawned the heyday of
phrenology.
It all began with the observations of the Viennese neuro-anatomist Franz Joseph Gall (1758–1828). Gall performed autopsies on patients who had displayed
ailments that could possibly be related to their brains and correlated the brain
pathology he found to the disturbed behavior. He then mapped his resulting
correlations and the maps became ever more intricate as his observations
multiplied. This situation was complicated by decades-long popular fascination
during which observations of the location of bumps on the head were substituted
for dissections of the brain at autopsy.
Although I had been convinced by my own neuro-surgical training and
experience that useful correlations could be made relating specific behavioral
disturbances to specific brain systems, these 19th- and early 20th-century maps
of brain-behavioral relationships seemed ridiculous. Cognitive faculties,
emotional attributes and elementary movements were assigned locations in the
brain cortex in a more or less haphazard fashion.

Reporting the Results of My Research


My approach to mapmaking was different. I was trying to pin down what
part of the temporal lobes that Paul Bucy (with whom I took my first residency
in neurosurgery) had removed was responsible for the monkeys’ failure to make
choices among simple visual stimuli. I realized that in order to do the temporal
lobe surgery effectively, my maps must specify what it is that is being mapped—
such as the connections from the thalamus, the halfway house of sensory input to
our brain’s cortex. Otherwise, the results of my surgery would continue to be the
same hodgepodge of “localization” as that based upon brain damage in humans,
which was then still in vogue.
A map of the major highways of the United States looks considerably
different from one of the identical terrain that shows where mineral resources are
located. Likewise, the results of damaging a part of the brain cortex would differ
if one had disrupted a part of the “highways” (the connections within the brain)
versus the “mineral resources” (the sensory input to the brain).
In reporting the results of my research I therefore began by presenting a
series of brain maps. I did not have the resources or the money to prepare slides,
so I drew the maps with colored crayons on window shades. (The maps shown
here are more sophisticated than the ones I actually drew. Unfortunately, those
early maps disintegrated after several decades, and I have not been able to find
photographs of them.) One window shade presented what we knew of the
endings in the cortex of the sensory input through the thalamus. Another window
shade presented the fine architecture of the arrangement of cells in layers of the
cortex. Still another noted the results of brain electrical stimulation. Finally, after
the initial experiments were completed, I was able to show the effects of
experimental surgical removals of cortex on specific behavioral tests.
2. Cytoarchitectural map

3. Mapping regions of the brain according to the effects of experimental surgical removals on visual choices
4. Map of the functions of the cortex derived from clinical studies

5. Map of the anatomical output from the cortex


6. Map of the input to the cortex from the thalamus

Presenting the Results of Experiments


I arrived at each lecture site with my five window shades, found places in
the lecture hall to hang them, and pulled the maps down when I needed them.
The high point of this display was reached during a symposium at the 1950
meeting of the American Psychological Association at Penn State University.
symposium was chaired by Don Hebb, by now the chairman of the Department
of Psychology at McGill University in Montreal. It was a time when interest
among establishment experimental psychologists in brain research had reached a
nadir, to the point where they had even abolished the Division of Comparative
and Physiological Psychology of the American Psychological Association.
Being good friends, Hans Lukas Teuber from the Department of Neurology
at New York University, Harry Harlow from the University of Wisconsin and I
as participants therefore decided to attack each other as bluntly as we could with
the hope of creating some interest in the audience.
Much to our surprise, more than 800 psychologists filled the lecture hall,
even sitting on the steps. The participants of the symposium each made a
presentation; I started the symposium and was roundly attacked first by Harlow
for “using only one or two problems to test my monkeys,” then by Teuber, who
merely stated: “Just like a neurosurgeon, to hog all the space available (my
window shades) only to show that we really haven’t come very far since the days
of phrenology.” When it came my turn to criticize Harlow, I noted that we at
Yale at least knew how to distinguish the front of the brain from the back; and
after Teuber had given a most erudite and very long talk, quoting every bit of
research that had been performed over the past century, I noted that he seemed
not to know the literature, having missed the seminal volume on the functions of
the motor cortex (edited by my neurosurgical mentor Paul Bucy, who shortly
became head of the Department of Neurosurgery at Northwestern University).
The audience had quickly caught the spirit in which we were making our
attacks, and the lecture hall rang with laughter. A considerable number of
influential experimental psychologists have told me that their interest in brain
research stemmed from that symposium.
I published my critique of 19th-century mapping procedures in a 1954
paper titled “Toward a Science of Neuro-psychology: Method and Data.” By this
time, I felt that brain mapmaking, even when carefully based on sustainable
correlations, was not sufficient to constitute a science. Just because we can show
by electrical recording or by clinical or surgical damage that a select place in the
brain is involved in color vision does not mean that it is the or even a “center”
for processing color. We need to establish by behavioral techniques (for instance,
tests of form vision) what else those cells in that location in the brain are doing
—and then we must also show, through carefully controlled physiological
experiments (for instance, by electrical stimulation or electrode recording), how
these particular parts of the brain do what they do.

Beyond Correlation
Correlating an individual’s experience or his test behavior with a restricted
brain location is a first step, but is not enough to provide an understanding of the
brain-behavior-experience relationship. Currently, I am finding it necessary to
stress this caveat once again with the advent of the new non-invasive brain-
imaging techniques that are mostly being used to map correlations between a
particular human experience or a particular test behavior with particular
locations in the brain.
Unfortunately, many of the mistakes made during the 19th century are
being made again in the 21st. On the brain side, finding “mirror neurons”
matters much to materialists who feel that matter has now been shown to mirror
our feelings and perceptions. But, as David Marr (now deceased) of the
Massachusetts Institute of Technology pointed out, all we do with such
microelectrode findings is remove the problem of understanding a process—in
this case of imitation and empathy—from ourselves to the brain. We still are no
further in understanding the “how” of these processes.
At the behavioral level of inquiry there is a similar lack of understanding.
There is almost no effort being made to classify the behaviors that are being
correlated with a particular locus in the brain. What do the behaviors have in
common that are being correlated with activity in the cingulate gyrus (or in the
anterior insular cortex)? And how are these behaviors grounded in what we
know about the physiology of these parts of the brain?
When relations of commonalities between behavior and parts of the brain
are noted, they are often mistaken. “Everyone knows the limbic systems of the
brain deal with our emotions.” This would be a nice generalization—except for
the observation that when damage to the human limbic system occurs, a certain
kind of memory, not human emotion, is impaired.
And further—much has been made of the fact that the amygdala deal with
“fear.” Fear is based on pain. The threshold for feeling pain remains intact after
the removal of the amygdala. Fear is also based on memory of pain. Is memory
the contribution of the amygdala to fear? If so, what other memory processes do
the amygdala serve? Chapters 13 and 14 explore these issues.
Among other issues, the chapters of The Form Within recount the research
accomplished during the early part of the second half of the 20th century that
provides answers to these issues—answers that were forgotten or swept under the
rug by subsequent generations of investigators.
Mapping correlations, even when done properly, is only a first step in
reaching an understanding of the processes involved in relating brain to behavior
and to mind— correlations do not tell us “the particular go” of a relationship.
As David Hume pointed out in the 18th century, the
correlation between a cock crowing and the sun rising needs a Copernican
context within which it can be understood. Here in The Form Within, I shall
provide the context within which the correlations revealed by the technique of
brain imaging need to be viewed in order to grasp the complex relationships
between our experience, our behavior and our brain. To do this, I have had to
provide a richer understanding of the relationship between our actions and the
brain than simply making maps, which is the epitome of exploring form as
shape. Sensitized by my discussions with Lashley, without overtly articulating
what I was doing in my research, I thus undertook to explore form as pattern.
Chapter 2
Metaphors and Models

Wherein I trace my discovery of holographic-like patterns of brain processing.

There is still no discipline of mathematical idea analysis from a cognitive perspective, namely to apply the science of mind to mathematics—no cognitive science of mathematics.

A discipline of this sort is needed for a simple reason: Mathematics is deep, fundamental, and essential to the human experience. As such, it is crying to be understood. It has not been.

It is up to cognitive science and the neurosciences to do what mathematics itself cannot do—namely apply the science of mind to human mathematical ideas.
—George Lakoff and Rafael Núñez, Where Mathematics Comes From
An Early Observation
In the late 1930s, the University of Chicago neurophysiologist Ralph
Gerard demonstrated to those of us who were working in his lab that a cut made
in the brain cortex did not prevent the transmission of an electrical current across
the cut, as long as the severed parts of the brain remained in some sort of
contact. I was fascinated by the observation that a crude electrical stimulus could
propagate across severed nerves. Gerard suggested that perhaps his experimental
observation could account for our perceptions, but I felt that our perceptual
experience was more intricate, more patterned, and so needed a finer-grain
description of electrical activity to account for it. I discussed this conjecture with
Gerard and with my physics professor, asking them whether anyone had
observed an extended fine-grain electrical micro-process. Gerard did not know
of such a micro-process (nor did my physics professor), but he emphasized that
something holistic—something more than simple connections between neurons
—was important in understanding brain function.
Later, in the 1950s at Yale University, along with my colleagues, Walter
Miles and Lloyd Beck, I was pondering the neural substrate of vision. While still
at Chicago, I had written a medical school thesis on retinal processing in color
sensation, under the supervision of the eminent vision scientist Stephen Polyak.
In that paper I had suggested that beyond the receptors of the retina, the next
stage of our visual processing is anatomically organized to differentiate the two
or three colors to which our retinas are sensitive into a greater number of more
restricted spectral bandwidths. Now, at Yale, Miles, Beck and I were frustrated
by our inability to apply what I thought I had learned at Chicago about color to
an understanding of vision of patterns, of form. I remember saying to them,
“Wouldn’t it be wonderful if we had something like a spectral explanation for
processing black-and-white patterns?”

Brain Shapes: Gestalts


Not long afterward, the famous German Gestalt psychologist Wolfgang
Köhler told me about his “direct current” hypothesis as the basis for cortical
processing in vision. Here was the holistic “something” which I could relate to
the results of Gerard’s neurophysiological experiment. Köhler and his fellow
Gestalt psychologists believed that instead of working through the connections
between neurons, our perceptions are reflected in electrical fields in the brain
that actually have a geometrical shape resembling the shape of perceived objects.
This view is called “geometric isomorphism” (iso = same and morph = shape).
The Gestalt psychologists believed, for example, that if one is looking at a
square—a geometrical shape—the electrical fields in one’s brain also have a
square geometrical shape. Our brain representations will literally “picture” the
significant environment of the organism, or at least caricature the Euclidean
shape of the object while we are observing it.
In the mid-1950s, Köhler enthusiastically demonstrated his electrical fields
to me and to my PhD students Mort Mishkin from McGill University in
Montreal (later to become head of the Laboratory of Neuropsychology at the
National Institutes of Health) and Larry Weiskrantz from Harvard (later to
become head of the Department of Experimental Psychology at Oxford
University) during one of his visits to my laboratory at the Institute of Living, a
mental hospital in Hartford, Connecticut. Köhler showed us just how the
putative direct current (DC) electrical activity could anatomically account for
visual and auditory perception.
Köhler would drive from Swarthmore College in Pennsylvania, where he
was teaching, to my laboratory. We would put monkeys in a cage and take them
to MIT, where we had access to the necessary auditory-stimulus control
equipment to do our experiments. At MIT he and I were recording changes in
direct current fields recorded from the auditory cortex of monkeys while we
briefly turned on a loudspeaker emitting a tone.
On other occasions we would meet in New York City to work with patients
who were being operated upon for clinical reasons and who gladly gave their
consent to serve as subjects. In these experiments, Köhler and I would display a
piece of reflective white cardboard in front of the subjects’ eyes while we made
DC electrical recordings from the back of their brains, where the cortex is
located that receives the nerve tracts from the eye. Indeed, as Köhler had
predicted, we found that both monkey brains and human brains responded with a
DC shift when we turned on a tone or displayed a white cardboard.

More Experiments
Somewhat later (1960), another of the students in my laboratory (Robert
Gumnit, an undergraduate at Harvard) performed a similar experiment to the one
that Köhler and I had done on monkeys. In this new experiment, we used a series
of clicking sounds and again demonstrated a DC shift, recording directly from
the surface of the auditory cortex of cats as shown in the accompanying figure.
Thus we successfully verified that there is a DC shift in the primary receiving
areas of the brain when sensory receptors are stimulated. Furthermore, we
demonstrated that the DC shift is accompanied by a shift from a slow to a fast
rhythm of the EEG, the recording of the oscillatory electrical brain activity.
Previously, human EEG recordings had shown that such a shift in the frequency
of the EEG occurs when a person becomes attentive (for instance, by opening
the eyes to view a scene, after having been in a relaxed state with eyes closed).

A Neurophysiological Test
In order to explore directly Köhler’s hypothesis that DC brain activity is
involved in perception, I created electrical disturbances by implanting an irritant
in the visual cortex of monkeys. To create the irritation, I filled small silver discs
with “Amphojel,” aluminum hydroxide cream, a popular medication used for
peptic ulcers at the time. I then placed a dozen of the discs onto the visual cortex
of each of the monkeys’ two hemispheres. Amphojel (or alternatively,
penicillin), when applied to the brain cortex, produces electrical disturbances
similar to those that initiate epileptic seizures—large slow waves and “spikes”—
a total disruption of the normal EEG patterns. I tested the monkeys’ ability to
distinguish very fine horizontal lines from fine vertical lines. I expected that,
once the electrical recordings from the monkeys’ visual cortex indicated that
electrical “seizures” had commenced, the monkeys’ ability to distinguish the
lines would be impaired or even totally lost.
Contrary to my expectation, the monkeys were able to distinguish the lines
and performed the task without any deficiency. They were able to “perceive”—
that is, to distinguish whether the lines were vertical or horizontal—even
though the normal pattern of electrical waves recorded from their brains had
been disrupted. Therefore, “geometric isomorphism”—the matching of the
internal shape of electrical brain activity to an external form—did not explain
what was happening in the brain, at least with respect to visual perception.
I told Köhler of my results. “Now that you have disproved not only my
theory of cortical function in perception, but everyone else’s as well, what are
you going to do?” he asked. I answered, “I’ll keep my mouth shut.”
And indeed, since I had no answer to explain my results, when, two years
later, I transferred from Yale to Stanford University, I declined to teach a course
on brain mechanisms in sensation and perception. I had to wait a few years for a
more constructive answer to the problem.

What the Experiments Did Show


Although the experiments I had conducted had effectively demonstrated
that the shape of what we see is not “mirrored” by the shape of electrical activity
in the brain, as Köhler had believed, my research did show something very
surprising and intriguing: When the brain electrical activity was disrupted by
epileptic seizures, it took the monkeys seven times longer than usual to learn the
same tasks. However, once the animals started to learn, the slope of the learning
curve was identical to the rapidity of learning by normal monkeys. Only the
commencement of learning had been delayed. The abnormally long delay, which
also occurs in mentally retarded children, may be due to an unusually long time
taken to create the proper background brain activity necessary for learning to
start. (More on this shortly.)

6. Direct Current Field responses from auditory area in response to auditory stimulation. (A) Shift in
response to white noise. (B) Shift in response to tone of 4000 Hertz. (C) Shift in response to white noise
returning to baseline before end of auditory stimulation. (D) Response to 50 clicks/sec. returning to baseline
before end of stimulation. (From Gumnit, 1960.)

My finding came about as a consequence of the fact that not all of the
implantations of aluminum hydroxide cream had produced an epileptic
focus. My testing, prior to implantation, of the animals that did not show a
seizure pattern was therefore wasted. So, I began to implant the epileptogenic
cream before commencing testing, and to limit the testing to those monkeys who
had already developed the electrical seizure pattern. Thus I was inadvertently
testing the monkeys who had developed seizure patterns for their ability to learn
as well as their ability to perceive.
The delay in learning had to be examined further. So, in the late 1950s, I
devised, along with John Stamm, a postdoctoral fellow in my Hartford lab,
another series of experiments in which we imposed a DC current across the
cortex from surface to depth. We found that a cathodal (negative-going) current
delayed learning while an anodal (positive-going) current enhanced it.
What we had demonstrated in these experiments was that direct currents in
the brain cortex influence attention and learning, but not perception.

DC Currents and States of Awareness


More recently, during the 1980s, one of the postdoctoral fellows in my
laboratory at Stanford, when he returned to his laboratory in Vienna, Austria,
demonstrated a large downward shift in the DC current recorded from the scalp
during sleep. This was of great interest to me because Sigmund Freud had
described such a shift in his 1895 paper “A Project for a Scientific Psychology.”
I assume that one or more of his colleagues, using galvanometers, had made
such a demonstration, but there is no actual record that such experiments had
been done.
Even more recently, a large upward shift in the DC current was recorded
from Sufi and other Islamic subjects while they were in a trance that made it
possible for them to painlessly put ice picks through their cheeks and to perform
other such ordinarily injurious feats, not only without pain but also without
bleeding or tissue damage. These same individuals when not in a trance respond
with pain, bleeding and tissue damage as we all do when injured.
Scientific exploration had come up with some most unexpected rewards:
Looking for brain correlates of perception proved to be a dead end but revealed
surprising results that challenge us to understand the brain processes operating
during attention, learning and states of awareness.
As Gerard had argued during my Chicago days, direct (DC) currents must
be important for something, but as I had surmised, they are too global to account
for the fine structure of the patterns we experience and remember. What we
needed for perception and memory was something more like micro-currents.
Fortunately, those micro-currents were to appear in a fascinating context during
the next decade.

Perception Revisited
It was not until the early 1960s, a few years into my tenure at Stanford, that
the issue of brain processes in perception would confront me once again. Ernest
(Jack) Hilgard, my colleague in psychology at Stanford, was preparing an update
of his introductory psychology text and asked me to contribute something about
the status of our current knowledge regarding the role that brain physiology
played in perception. I answered that I was very dissatisfied with what we knew.
I described the experiments, conducted both by myself and others, that had
disproved Köhler’s belief that perception could be ascribed to direct current
brain electrical fields shaped like (isomorphic with) envisioned patterns.
Further, I noted that David Hubel and Torsten Wiesel had recently shown
that elongated stimuli, such as lines and edges, were the most successful shapes
to stimulate neurons in the primary visual receiving cortex. They had suggested
that perception might result from something like the construction of stick figures
from these elementary sensitivities. However, I pointed out, this proposal
wouldn’t explain the fact that much of our perception involves patterns produced
by shadings and textures—not just lines or edges. For me, the stick-figure
concept failed to provide a satisfactory explanation of how our brain organizes
our perceptual process. Hilgard asked me to give this important issue some
further thought. We met again about a week later, but I still had nothing to offer.
Hilgard, ordinarily a very kind and patient person, seemed just a bit peeved, and
declared that he did not have the luxury of procrastination. He had to have
something to say in his book, which was due for publication. He asked me to
come up with a viable alternative to the ones I had already disproved or so
summarily dismissed.

Engaging Lashley
I could see only one alternative, one that provided a middle way between
the direct currents of Köhler and the lines and stick figures of Hubel and Wiesel.
Years earlier, during the 1940s, when I had been involved in research at the
Yerkes Laboratory, its director Karl Lashley had suggested that interference
patterns among wave fronts in brain electrical activity could organize not only
our perceptions but also our memory. Lashley, a zoologist, had taken as his
model for patterning perception those developmental patterns that form a fetus
from the blastula stage to the fully developed embryo. Already, at the turn of the
twentieth century, the American zoologist Jacques Loeb had suggested that “force
lines” guide such development. At about the same period, A. Goldscheider, a
German neurologist, had wondered if such force lines might also account for
how the brain forms our patterns of perceptions.
Lashley’s theory of interference patterns among wave fronts of electrical
activity in the brain had seemed to suit my earlier intuitions regarding patterned
(as opposed to DC) brain electrical activity. However, Lashley and I had
discussed interference patterns repeatedly without coming up with any idea of
what micro-wave fronts might look like in the brain. Nor could we figure out
how, if they were there, they might account for anything at either the behavioral
or the perceptual level.
Our discussions became somewhat uncomfortable with regard to a book
that Lashley’s postdoctoral student, Donald Hebb, was writing at the time.
Lashley declined to hold a colloquium to discuss Hebb’s book but told me
privately that he didn’t agree with Hebb’s formulation. Hebb was proposing that
our perception is organized by the formation of “cell assemblies” in the brain.
Lashley could not articulate exactly what his objections were: “Hebb is correct
in all his details, but more generally, he’s just oh so wrong.” Hebb was deeply
hurt when Lashley refused to write a preface to his book—which nonetheless
came to be a most influential contribution to the brain/behavioral sciences.
In hindsight, it is clear that Hebb was addressing brain circuitry, while
Lashley, even at such an early date, was groping for an understanding of the
brain’s deeper connectivity and what that might mean in the larger scheme of
psychological processes.
With respect to my colleague Jack Hilgard’s problem of needing an answer
about brain processes in perception in time for his publication date, I took his
query to my lab group, outlining both my objections to Köhler’s DC fields and to
Hubel and Wiesel’s stick figures. I suggested that since there were no other
alternatives available right now, perhaps Lashley’s proposal of interference
patterns might be worth pursuing. Not totally tongue-in-cheek, I noted that
Lashley’s proposal had the advantage that we could not object to something
when we had no clue of what was involved.

Microprocessing at Last
Within a few days of my encounters with Jack Hilgard, a postdoctoral
fellow in my laboratory, Nico Spinelli, brought us an article by the eminent
neurophysiologist John Eccles that had appeared in 1958 in Scientific American.
In this article, Eccles pointed out that, although we could only examine synapses
—junctions between brain cells—one by one, the branching of large fiber axons
into fine fibers just as they are approaching a synapse makes it necessary for us
to regard the electrical activity occurring in these fine-fibered branches as
forming a wave front. This was one of those “aha” moments scientists like me
are always hoping for. I immediately recognized that such wave fronts, when
reaching synapses from different directions, would provide exactly those long-
sought-after interference patterns Lashley had proposed.
Despite my excitement at this instant realization, I also felt like an utter
fool. The answer to my years of discussion with Lashley about how such
interference patterns among micro-waves could be generated in the brain had all
the time been staring us both in the face—and neither of us had had the wit to
see it.

Brain Waves, Interference Patterns and Holography


Fortunately for my self-respect (and only because “P” comes before “S” on
the mailing list), I received my next issue of Scientific American before Nico
Spinelli received his. In that issue, two optical engineers at the University of
Michigan, Emmett Leith and Juris Upatnieks, described how the recording of
interference patterns on photographic film had tremendously enhanced the
storage capacity and retrieval of images. Images could now readily be recovered
from the “store” by a procedure (described below) that had actually been
developed by the Hungarian-born physicist Dennis Gabor almost two decades
earlier. Gabor had hoped to increase the resolution of electron microscopy.
had called his procedure “holography.”
Gabor’s mathematical holographic procedure, which Leith and Upatnieks had
later realized optically, was a miraculous answer to the very question
Jack Hilgard had posed to me about perception just a few weeks earlier.
Everything in a pattern that we perceive—shading, detail and texture—could be
accomplished with ease through the holographic process.
Russell and Karen DeValois’s 1988 book Spatial Vision as well as my own
1991 book Brain and Perception would provide detailed accounts of each of our
own experimental results and those of others who, over the intervening decades,
used Gabor’s mathematics to describe the receptive fields of the visual system.
Unfortunately, at the time Hilgard’s textbook revision was due, these
insights were as yet too untested to provide him with the material he needed. But
making visible the processing of holographic images showed me the critical role
mathematics would play in our ability to grasp and interpret the complexity of
the brain/mind relationship.
8. Interference patterns formed by wave fronts of synapses

What is the Holographic Process?


Leith and Upatnieks actualized (made visible) Gabor’s mathematical holographic
procedure, but other striking examples of this process are visible to us in nature
every day. A holographic process is nothing more than an interference pattern
created by the interaction of two or more waves. For example, if you throw a
pebble into a pond, a series of concentric circles—waves—will expand outward.
If you drop two pebbles side-by-side, two sets of waves will expand until they
intersect. At the point where the circles of waves cross, they reinforce or
diminish each other, creating what we call an interference pattern. A remarkable
aspect of these interference patterns is that the information of exactly when and
where the pebbles hit the pond is spread throughout the surface of the water—
and that the place and moment that the pebbles had hit the pond can be
reconstructed from the pattern. We accomplish this by simply reversing the
process: if we filmed it, we run the film backward; or, if we used Gabor’s
mathematics, we simply invert the transformation. Thus, the location and time of
the point of origin can always be recovered from the interference pattern.
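The reversibility can be made concrete in a few lines of Python—a minimal sketch in which NumPy’s Fourier routines stand in for the pond, and the grid size and pebble positions are arbitrary:

    import numpy as np

    # A "pond" of 256 x 256 points with two "pebbles" (point impulses).
    pond = np.zeros((256, 256))
    pond[100, 80] = 1.0   # first pebble
    pond[150, 170] = 1.0  # second pebble

    # The Fourier transform spreads each pebble's identity across the
    # whole plane, much as expanding ripples spread across the pond.
    ripples = np.fft.fft2(pond)

    # Inverting the transformation recovers exactly where the pebbles struck.
    recovered = np.fft.ifft2(ripples).real
    print(np.argwhere(recovered > 0.5))  # -> [[100  80] [150 170]]

Nothing about the pebbles is lost in the spread-out record; the inverse transform returns every impulse to its point of origin.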



Try It Yourself
You can simply and directly demonstrate how the holographic process
works by using an ordinary slide projector. Project the image of a slide on a large
white screen and then remove the lens of the slide projector. Now you see
nothing on the screen but a big bright white blur of scattered light. But if you
take a pair of ordinary reading glasses and hold them in the array of light
between the projector and the screen, you will see two images appear on the
screen wherever you place the two lenses of your glasses. No matter where you
hold the reading glasses in the scatter field of the projected light, the two lenses
will create two distinct images of your original slide on the screen. If four lenses
are used, four images will appear on the screen. A hundred lenses would produce
a hundred images. The “information,” the pattern contained in the original
image, in the absence of the lens, becomes spread, scattered. The lens restores
the image. As the demonstration using the projector indicates, lenses such as
those in our eyes or in the slide projector perform a transformation on the
“information” that is scattered in the light: patterns that seem to be irretrievably
annihilated are in fact still present in every portion of the scattered light, as the
evidence from the use of the eyeglass lenses demonstrates.
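The same demonstration can be run numerically. In the following sketch—illustrative only, with a Fourier transform standing in for the lensless scatter and a bright square standing in for the slide—even a small window of the scattered field, transformed back, yields a blurred but recognizable version of the original:

    import numpy as np

    # A stand-in "slide": a bright square on a dark background.
    slide = np.zeros((128, 128))
    slide[48:80, 48:80] = 1.0

    # Removing the lens: the transform "scatters" the pattern so that
    # every region of the field carries information about the whole.
    scattered = np.fft.fftshift(np.fft.fft2(slide))

    # A "lens" applied to only a small window of the scattered field
    # still restores the picture, though with less detail.
    window = np.zeros_like(scattered)
    window[54:74, 54:74] = scattered[54:74, 54:74]  # keep a 20 x 20 patch
    restored = np.fft.ifft2(np.fft.ifftshift(window)).real

    # The center of the square stays bright; the far corner stays dark.
    print(restored[64, 64], restored[4, 4])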
Gabor based his holographic procedure on transforming the space and time
coordinates of our experience, within which we navigate every day of our lives,
into a spectrum of interference patterns. My hope from the earliest days of my
research had now been realized: that we might one day achieve an understanding
of form in terms of spectra akin to that which we had for our perception of
spectra of color.
But the form that the answer to my hopes took, and the broader implication
for our understanding of how we navigate our universe (as indicated in
Appendix A), was and still is to me, totally surprising.
The spectral processing of color and the spectral processing of patterns
must become meshed in the brain. Russ and Karen DeValois, on the basis of
their research, have been able to construct a single neurological network that can
process both color and pattern, depending on how that network is addressed. The
addressing can be initiated either by the visual input or from a higher-order brain
process. Their network accounts for the observation that we rarely see color
apart from form.
Chapters 4 to 8 tell the story of how the scientific community has dealt with
the insights derived from these new ways of thinking, ways derived from the
metaphors and models that have been outlined in the current chapter.
10. A complete diagram of the proposed stage 3. (From Russ and Karen DeValois, Spatial Vision.)
Chapter 3
A Biological Imperative

Wherein I describe the experimental results that provided substance to the self-
organizing processes in the brain.

In 1957 it was possible to say . . . ‘these considerations also lead us to
the suggestion that much of normal nervous function occurs without
impulses but is mediated by graded activity.’ . . . I referred to Gerard
(1941) who influenced me most in this view which remained for a long
time ignored in the conventional orthodoxy. I propose that a ‘circuit’ in
our context of nervous tissue is an oversimplified abstraction. . . . that,
in fact there are many types of signals and forms of response, often
skipping over neighbors—and acting upon more or less specific
classes of nearby or even remote elements. Instead of the usual terms
‘neural net’ or ‘local circuit’ I would suggest we think of a neural
throng, a form of densely packed social gathering with more structure
and goals than a mob.
—T. H. Bullock, 1981
Brain Patterns
My 1971 book Languages of the Brain begins:

‘I love you.’ It was spring in Paris and the words held the
delightful flavor of a Scandinavian accent. The occasion was a
UNESCO meeting on the problems of research on Brain and Human
Behavior. The fateful words were not spoken by a curvaceous blonde
beauty, however, but generated by a small shiny metal device in the
hands of a Swedish psycholinguist.
The device impressed all of us with the simplicity of its design.
The loudspeaker was controlled by only two knobs. One knob altered
the state of an electronic circuit that represented the tensions of the
vocal apparatus; the other regulated the pulses generated by a circuit
that simulated the plosions of air puffs striking the pharynx.
Could this simple device be relevant to man’s study of himself?
Might not all behavior be generated and controlled by a neural
mechanism equally simple? Is the nervous system a “two knob” dual
process mechanism in which one process is expressed in terms of
neuroelectric states and the other in terms of distinct pulsatile
operators on those states? That the nervous system does, in fact,
operate by impulses has been well documented. The existence of
neuroelectric states in the brain has also been established, but this
evidence and its significance to the study of psychology has been slow
to gain acceptance even in neurophysiology. We therefore need to
examine the evidence that makes a two-process model of brain
function possible.

What needed to be established is that brain patterns are formed by
interactions between: 1) a formative “web” of dynamic states that involves the
fine branches of brain cells (called “dendrites,” Latin for “small branches”) and
their connections (membranes, glial cells, chemical synapses and electrical
junctions) and 2) “circuits” composed of large fibers (axons) that operate in such
a way as a) to sample the web, and b) to convey the resulting samples from one
brain region to another and also convey samples from sensory receptors to the
brain and from the brain to glands and muscles.
In a sense, the web composes the form of a process, while the circuits
address that form, enabling contemplation or action.
The Retina: Key to Formative Brain Processing
In contrast to the rather straightforward use of the experimental technique
of microelectrode recording to understand the form of processing in the circuits
of the brain, the small diameter of the dendritic branches makes the electrical
recordings of the web a real challenge. Nonetheless, techniques were developed
early on, in the 1950s, that allowed us to map the receptive fields that are formed
in the dendritic web. How these mappings are made is described in the following
sections.
An anecdote highlights the issue: In the early 1970s, I attended an
International Physiological Congress where the keynote speaker was the noted
British scientist A. L. Hodgkin. A decade earlier, he had received the Nobel
Prize, with A. F. Huxley, for the elegant mathematical description of the results
of their experiments on the generation of nerve impulses.
Hodgkin began his address with the question: “What does one do after
winning the Nobel Prize?”
He had thought a while about this, he said, and realized that he had not
become a neurophysiologist merely to describe how a membrane generates a
nerve impulse. No, he was interested in how the brain works—how it regulates
the way the body operates, and how it makes our conscious experience possible.
What followed his realization, he told us, was a brief period of depression: alas,
he was not a brain surgeon and didn’t have the tools to address the problems that
initially had challenged him. (His prize-winning experiments were actually done
on the giant nerve fibers of the squid.) One morning, however, Hodgkin had
awakened with a promising insight: God, he realized, had put a piece of brain
outside the skull, just so that physiologists such as himself could study it without
having to drill holes to get at it!
That piece of “brain” is the retina of the eye. The retina, like the brain
cortex, is made up of layers of nerve cells with the same fine-fibered profusion
of branches. These fine fibers are present between the layers that reach from the
receptors inward, and they also make up a feltwork, a web, within each of these
layers. Hodgkin went to work on the retina at once. He told us that he was
surprised and amazed by what he found—or rather by what he didn’t find: Over
the next decade, in his many studies, he hardly ever encountered a nerve
impulse, except at the very final stage of processing—the stage where those
nerves with large diameters, large axons, generate signals that are transmitted to
the brain. Therefore, everything we “see” is first processed by the fine fibers of
the retina before any nerve impulses are actually generated.
11. The Fine-Fibered Neuro-Nodal Web in the Retina

Scanning electron micrograph showing the arrangement of nerve fibers in the retina of Necturus. Fibers
(dendrites) arise in the inner segment and course over the outer segment of a cone. Note that points of
contact do not necessarily take place at nerve endings. (From Lewis, 1970.)

The situation that Hodgkin found in the retina is identical to what occurs in
the web of dendrites in the brain cortex: processing occurs in the fine fibers, and
these processes compose and form the patterns of signals relayed by axons, the
large nerve trunks.

Mapping the Neuro-Nodal Web


In the 1950s, well before Hodgkin had begun his work on the retina, I
shared a panel at a conference held in 1954 at MIT with Stephen
Kuffler of the Johns Hopkins University. In his presentation, Kuffler declared
that the “All or None Law” of the conduction of nerve impulses had to be
modified into an “All or Something Law”: That the amplitude and speed of
conduction of a nerve impulse is proportional to the diameter of the nerve fiber.
Thus the speed of conduction and size of a nerve impulse at the fine-fibered
presynaptic termination of axon branches (as well as in the fine-fibered
postsynaptic dendrites) is markedly reduced, necessitating chemical boosters to
make connections (or a specialized tight junction to make an ephaptic electrical
connection).
Kuffler focused my thinking on the type of processing going on in the fine-
fibered webs of the nervous system, a type of processing that has been largely
ignored by neuro-scientists, psychologists and philosophers.
Kuffler went further: he devised a technique that allowed us, for the first
time, to map what is going on in these fine-fibered webs of dendrites in the
retina and the brain. Kuffler noted that in the clinic, when we test a person’s
vision, one of our procedures is to map his visual field by having him tell us
where a spot of light is located. The visual field is defined as that part of the
environment that we can see with a single eye without moving that eye. When
we map the shape of the field for each eye, we can detect changes due to such
things as retinal pathology, pituitary tumors impinging on the place where the
optic nerves cross, strokes, or other disturbances of the brain’s visual pathways.
Kuffler realized that, instead of having patients tell us when and where they
see a spot, we could—even in an anesthetized animal—let the neurons tell us.
The signal we use to identify whether a neuron recognizes the spot is the change
in pattern of nerve impulses generated in an axon. The “receptive field” of this
axon is formed by its fine-fibered branches and corresponds precisely, for the
axon, to what the “visual field” is for a person. By this procedure, the maps we
obtain of the axon’s dendritic receptive field vary from brain cell to brain cell,
depending upon what kind of stimulus (e.g., a spot, a line, a grating) we use in
order to elicit the response: as in the case of whole brain maps, the patterns
revealed by mapping depend on the procedure we use to display the micro-maps.
Kuffler’s initial work with receptive fields, followed by that of others in
laboratories in Sweden and then all over the world, established that the receptive
fields of those axons originating in the retina appear like a bull’s-eye: some of
the retina’s receptive fields had an excitatory (“on”) center, while others had an
inhibitory (“off”) center, each producing a “complementary” surrounding
circular band.
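Such a bull’s-eye field is conventionally modeled as a “difference of Gaussians”: a narrow excitatory center minus a broader inhibitory surround. The sketch below follows that convention (the widths and grid are illustrative, not fitted to any recorded cell) and shows why the field answers strongly to a small central spot yet barely at all to diffuse light:

    import numpy as np

    # An "on"-center receptive field: narrow excitatory center minus
    # broader inhibitory surround (difference of Gaussians).
    x = np.linspace(-3.0, 3.0, 121)
    xx, yy = np.meshgrid(x, x)
    r2 = xx**2 + yy**2

    center = np.exp(-r2 / (2 * 0.3**2)) / (2 * np.pi * 0.3**2)
    surround = np.exp(-r2 / (2 * 1.0**2)) / (2 * np.pi * 1.0**2)
    receptive_field = center - surround

    # Diffuse light drives center and surround alike and nearly cancels;
    # a small spot on the center meets almost pure excitation.
    print(receptive_field.sum())    # near zero: diffuse light cancels
    print(receptive_field[60, 60])  # large and positive: a central spot excites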
When he used color stimuli, the centers of those receptive fields which
were formed by a response to one specific color—for instance, red—were
surrounded by a ring that was sensitive to its complementary (also called its
opponent) color—in the case of red: green; centers responding to yellow were
surrounded by rings sensitive to blue, and so forth. Having written my medical
student’s thesis on the topic of retinal processing of color stimuli, I remember
my excitement in hearing of Kuffler’s experimental results. For the first time this
made it possible for us to establish a simple classification of retinal processing of
color and to begin to construct a model of the brain’s role in organizing color
perception.
In addition, we learned that in more central parts of the brain’s visual
system, in the halfway house of the paths from the retina to the cortex—the
thalamus (“chamber,” in Latin)—the receptive field maps have more rings, each
ring alternating, as before, between excitation and inhibition—or between
complementary colors.
However, in initial trials at the brain’s cortex, the technique of moving a
spot in front of an eye at first failed to produce any consistent response. It took
an accident—a light somewhat out of focus that produced an elongated stimulus—
to provide excellent and interesting responses: neurons were selective of the
orientation of the stimulus; that is, different orientations of the elongated “line”
stimulus activated different neurons.
Furthermore, for those neurons that responded, the map of the receptive
field became elongated, with some fields having inhibitory “flanks,” like
elongated rings of the bull’s-eye, while others without such flanks looked more
like flat planes. We’ve all learned in high school geometry that by putting
together points we can make lines and putting together lines at various
orientations we can compose stick figures and planes. Perhaps, thought the
scientists, those brain processes that involve our perceptions also operate
according to the rules of geometry that we all had learned in school.
The scientific community became very excited by this prospect, but this
interpretation of our visual process in terms of shape, as is the case with regard
to brain maps, though easy to understand, fails to explain many aspects of the
relationship of our brain to our experience.
In time, another interpretation, which I shall explore in the next chapters of
The Form Within, would uncover the patterns underlying these shapes. A group
of laboratories, including mine, would provide us a more encompassing view of
processing percepts and acts by the brain.

The Two Processes That Form Patterns in Our Brains


To review briefly: The circuits of our brain consisting of large nerve fibers,
axons, have sufficient girth to sustain the transmission of nerve impulses
produced by complete depolarization—the discharge of the electrical
polarization—of their membranes. The nerve impulses, acting like sparks, can
travel along the nerve over long distances.
The nature of processing in circuits of the brain has been well documented
and is therefore well established within the neuroscience community.
Neuroscientists have focused on nerve impulse transmission in large nerve fibers
because, due to the available experimental tools over the past century, this aspect
of brain function has been more accessible to us and has yielded results that are
easy to interpret and to understand. (To anticipate a discussion later in this
chapter: circuits are of two sorts, flexible short-range assemblies and fixed long-
range connections.)
By contrast, the fine fibers composing the web—the branches or dendrites
of nerve cells—cannot sustain nerve impulse conduction the way axons can. The
electrical activity of dendrites rarely “sparks” but oscillates between excitation
and inhibition; thus, dendritic activity occurs locally, forming patches or nodes
of synchronized oscillating activity.
The dendritic patches, the nodes, are not only produced by electrical and
chemical activity at the junctions (synapses and ephapses) that connect the fine
fibers, but also independently of what is going on at these junctions: Dendrites
have been found to secrete chemical modulators in conjunction with the glia or
“glue” cells that surround them. The patches of activity that are carried on within
the fine fibers in our brain thus form a neuro-nodal web.
Unlike the longer-range activity of large nerve fibers, the coordinating
influence of dendrites on neighboring regions of the brain results only from a
local spread of excitation. The fact that dendritic processing is restricted to such
a local patch posed a serious problem for me back in the mid-1960s when I first
suggested its importance. How could the sort of global brain activity that is
necessary to our thought, perception and memory be achieved from such isolated
activities in small patches? Surprisingly, I found the resolution to my quandary
in astrophysics.

12. The Dendritic Web

13. The Cortical Chem-Web Demonstrated by Staining Acetylcholinesterase Receptors
14. The Neuro-Nodal Web

Astrophysicists need to be able to explore large reaches of sky, but
telescopes could capture only a small portion at any one viewing. They finally
solved this problem by converting the images obtained through the telescope
using a mathematical procedure, the Fourier transformation—the procedure that
Gabor used to invent holography—that will be referred to again and again in
much of the rest of this book. The procedure works by Fourier-transforming
adjacent images obtained by the telescope, patching them together, and
performing another transformation, the reverse of the initial one, on the
patched whole to give a usable image of large expanses of sky. This same
procedure is now used routinely in hospital image processing such as PET
scans and fMRI.
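The heart of that routine—turning correlation into simple multiplication in the Fourier domain—can be shown in a few lines. This is a minimal sketch: the “tiles” are random numbers rather than telescope images, and the offset between them is imposed artificially:

    import numpy as np

    # Two overlapping "sky" tiles, the second shifted relative to the first.
    rng = np.random.default_rng(0)
    tile_a = rng.random((64, 64))
    tile_b = np.roll(tile_a, shift=(5, 12), axis=(0, 1))

    # Correlation by way of the Fourier transform: multiply one spectrum
    # by the complex conjugate of the other, then transform back.
    spectrum = np.fft.fft2(tile_a) * np.conj(np.fft.fft2(tile_b))
    correlation = np.fft.ifft2(spectrum).real

    # The peak of the correlation reveals the offset needed to register
    # the two tiles into one mosaic.
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    print(peak)  # -> (59, 52), i.e. a shift of (-5, -12) modulo 64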
When the formative processing web provided by the dendrites has to be
accessed by the rest of the brain, axons relay the results of these interactions to
other parts of the brain or to receptors and effectors in the body.
The neuroscience community as a whole has until recently tended to
disregard completely what is going on in the fine-fibered branches, the dendrites,
and to ignore the importance of what goes on within these webs, as
illustrated above in the neuro-nodal and chem-webs. This despite statements
such as those by George Bishop, professor at Washington University in St.
Louis and considered during the 1950s to be the dean of neurophysiologists,
who repeatedly stated that processing in the nervous system is a function
of its fine fibers; the results of processing are conveyed to other parts of the
nervous system and to sensory and motor systems by circuits of axons. Another
influential neuroscientist, Ted Bullock, professor at the University of California
at San Diego, on the basis of his research experience, put it even more bluntly
and detailed what is actually occurring in the fine-fibered processes, as described
in the quotation that inaugurated this chapter.
His description of neural transactions as “a densely packed social gathering
in which connections often skip neighbors and act upon more or less specific
classes of nearby or even remote elements” fits what computer scientists
describe as the makings of self-organizing systems.
Happily, in the past few years the scene has at last begun to change, as
heralded, for example, by an excellent review by R. Douglas Fields, in the
June/July 2006 issue of Scientific American Mind, of circuit versus web-like
processing in the brain cortex.

Flexible Assemblies and Executive Circuits


The dendritic formative process is directly addressed by the circuitry of the
brain, organized along two very different principles: 1) by local circuits to form
flexible cell assemblies that, in turn, are addressed 2) by long-range “executive”
circuits of larger axons.
The short-range flexible assemblies directly sample the neuro-nodal web in
an organization called “heterarchical” by which the members connected interact
on an equal level. These heterarchical organizations are the basis of self-
organizing procedures in the formation of complex systems.
The long-range, more or less fixed connections form a “hierarchy”; that is,
each level of organization controls a lower level.
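The contrast between the two principles can be caricatured as two toy connectivity matrices (five abstract elements, purely illustrative; not a model of any actual circuit): in a heterarchy every member can address every other as an equal, while in a hierarchy each level addresses only the level beneath it.

    import numpy as np

    # Heterarchy: five peers, each connected to all the others.
    heterarchy = np.ones((5, 5)) - np.eye(5)

    # Hierarchy: element 0 controls 1 and 2, which control 3 and 4.
    hierarchy = np.zeros((5, 5))
    hierarchy[0, [1, 2]] = 1
    hierarchy[1, 3] = 1
    hierarchy[2, 4] = 1

    # Influence is symmetric among peers, one-way down a chain of command.
    print(np.array_equal(heterarchy, heterarchy.T))  # True
    print(np.array_equal(hierarchy, hierarchy.T))    # False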
In practice, the distinction between flexible assemblies and long-range
circuits is the distinction between gray and white matter in the cortex. I have
made fairly extensive removals limited to gray matter (performed with a
specially designed suction apparatus that would not sever the white fibers) with
only minor effects on behavior. When, however, the removals invade the white
matter underlying the cells and their short-range connections that compose the
gray matter, extensive deficits in behavior result. (The distinction between
resections of gray and white matter came to my attention when evaluating the
effects of lesions of parts of the prefrontal cortex. My resections were carefully
limited to gray matter; others were not so careful, so their resections invaded the
white matter connecting other parts of the prefrontal cortex, making their results
uninterpretable.)
Once again, the long-range, more or less fixed connections form a
“hierarchy,” that is, each level of organization controls a lower level.
The two principles, heterarchy and hierarchy, appear to be ubiquitous
wherever the members of an organization with equal potential for interaction
become organized.
Warren McCulloch, a lifelong friend and one of the founders of cybernetics,
the study of control systems, used to enjoy telling the following story regarding
the risk of making decisions that are dependent on heterarchical controls:
After the World War I Battle of Jutland, in which many British ships were
disastrously sunk, the British and American navies changed from a hierarchical
control organization, in which every decision had to be referred to the admiralty,
to a heterarchical organization in which control and the formation of a relevant
organization was vested in whoever had information relevant to the context of
the situation. During World War II, two Japanese air squadrons simultaneously
attacked an American fleet in the South Pacific from different directions. These
attacks were spotted by different sections of our fleet, the information was
relayed to the relevant sections of our fleet and, as was the practice, they were
taken as commands. As a result, the two sections of our fleet steamed off in
separate directions to deal with the separate oncoming attacks, leaving the
admiral on his centrally located ship completely unprotected. Fortunately, both
Japanese attacking forces were routed, leaving the fleet intact. McCulloch liked
to suggest that this story might serve as a parable for when we are of two minds
in a situation in which we are facing an important decision.
I have learned even more about the nature of human heterarchical and
hierarchical self-organizing systems from Raymond Trevor Bradley. Bradley did
his doctoral dissertation at Columbia University in the 1970s, observing several
communes and meticulously documenting the way each member of such a
closely articulated community engaged each of the other members. Bradley
(Charisma and Social Structure: A Study of Love and Power, Wholeness and
Transformation, 1987) found that each commune was self-organized by a
heterarchical structure made up of personal emotional transactions as well as a
hierarchical structure that guided the motivations of the overall commune as a
whole. Further, the hierarchy was composed of heterarchical cliques that were
often connected by a single bond between a member of one clique and a member
of another.
Those communes that remained stable over a number of years were
characterized by a balanced expression between heterarchical and hierarchical
processing. Bradley’s results held for all communes whether they had strong
leadership, temporary leadership, or no consistent leadership. If the motivational
hierarchy overwhelmed the emotional heterarchy, the communes tended to be
short-lived. Likewise, if the emotional structure overwhelmed the motivational
structure, the commune also failed to survive.
The striking parallels between social and brain processing brought Bradley
and me together, and we have collaborated (e.g., Pribram, K. H. and R. T.
Bradley, The Brain, the Me and the I; in M. Ferrari and R. Sternberg, eds., Self
awareness: Its Nature and Development, 1998), in several brain/behavior/mind
studies over the years.
These parallels—parallels that compose General Systems Theory—make a
good starting point for inquiry. They form the foundation of our understanding
of the relationship between our world within and the world we navigate—the
contents of Chapters 9 to 17.
At the same time, the parallels tell us too little. Just because a brain system
and a social system have similar organizations does not tell us “the particular
go,” the “how” of the relation between brains and societies. For understanding
the “how” we must know the specific trans-formations (changes in form) that
relate the various levels of inquiry to each other—brain to body and bodies to
various forms of society. I take up the nature of transformations, what it takes to
transform one pattern into another, in Chapters 18 and 19.
Chapter 4
Features and Frequencies

Wherein I juxtapose form as shape and form as pattern as they are recorded from
the brain cortex and describe the usefulness and drawbacks of each.

We live in a sea of vibrations, detecting them through our senses and
forming impressions of our surroundings by decoding information
encrypted in these fluctuations.
—Ovidiu Lipan, “Enlightening Rhythms,” Science, 2008

It sometimes appears that the resistance to accepting the evidence that
cortical cells are responding to the two-dimensional Fourier
components of stimuli [is due] to a general unease about positing that
a complex mathematical operation similar to Fourier analysis might
take place in a biological structure like cortical cells. It is almost as if
this evoked, for some, a specter of a little man sitting in a corner of the
cell huddled over a calculator. Nothing of the sort is of course implied:
the cells carry out their processing by summation and inhibition and
other physiological interactions with their receptive fields. There is no
more contradiction between a functional description of some
electronic component being a multiplier and its being made up of
transistors and wired in a certain fashion. The one level describes the
process, the other states the mechanism.
—Russell DeValois and Karen DeValois, Spatial Vision, 1988
In this chapter I detail for the visual system the difference between the two
modes of scientific exploration discussed in earlier chapters. One of these modes
explores form as shape, while the other explores form as pattern. Features can be
thought of either as building blocks that determine the shape of our environment
or as components of perceived patterns that we, as sentient, conscious beings,
choose to focus on.
Within the research community the contrast between these two views of
perception was wittily joked about as pitting “feature creatures” against
“frequency freaks.” Feature creatures were those who focused on “detectors”
within our sensory systems, of features in our external environment—a purely
“bottom-up” procedure that processes particulars into wholes. Frequency freaks,
by contrast, focused on similarities between processes in the visual mode and
those in the auditory (and incidentally the somatosensory) mode. For the
frequency freaks, the central idea was that our visual system responds to the
spectral, frequency dimension of stimulation very much as the auditory system
responds to the oscillations that determine the pitch and duration of sound. The
approach of the frequency freaks, therefore, is “top-down”—wholes to
particulars. Within the neuroscience community, the feature creatures still hold
the fortress of established opinion, but frequency freaks are gaining ascendance
with every passing year.

An Early Intuition
My initial thought processes and the research that germinated into this
section of The Form Within reinforced my deeply felt intuition that the “feature
detection” approach to understanding the role of brain processes in organizing
our perceptions was of limited explanatory value.
Michael Polanyi, the Hungarian-British physical chemist and philosopher of
science, defined an intuition as a hunch one is willing to explore and test. (If this
is not done, it’s just a guess.) Polanyi also noted that an intuition is based on tacit
knowledge; that is, something one knows but cannot consciously articulate.
Ignacio Matte Blanco, a Chilean psychoanalyst, enhanced this definition by claiming that “the
unconscious” is defined by infinite sets: consciousness consists of being able to
distinguish one set of experiences from another.
The “feature detection” story begins during the early 1960s, when David
Hubel and Torsten Wiesel were working in Stephen Kuffler’s laboratory at the
Johns Hopkins University. Hubel and Wiesel had difficulty in obtaining
responses from the cortical cells of cats when they used the spots of light that
Kuffler had found so effective in mapping the responses in the optic nerve. As so
often happens in science, their frustration was resolved accidentally, when the
lens of their stimulating light source went out of focus to produce an elongation
of the spot of light. The elongated “line” stimulus brought immediate results: the
electrical activity of the cat’s brain cells increased dramatically in response to the
stimulus and, importantly, different cells responded selectively to different
orientations of the elongated stimulus.
During the late 1960s and early 1970s, excitement was in the air: like Hubel
and Wiesel, many laboratories, including mine, had been using microelectrodes
to record from single nerve cells in the brain. Visual scientists were able to map
the brain’s responses to the stimuli they presented to the experimental subject.
The choice of stimulus most often determined the response they would obtain.
Hubel and Wiesel settled on a Euclidean geometry format for choosing their
visual stimulus: the bull’s-eye “point” receptive fields that characterized the
retina and geniculate (thalamic) receptive fields could be shown to form lines
and planes when cortical maps were made. I referred to these results in Chapter
3 as making up “stick figures,” and noted that more sophisticated attempts at giving
them three dimensions had essentially failed.
Another story, more in the Pythagorean format, and more compatible with
my intuitions, surfaced about a decade later. In 1970, just as my book Languages
of the Brain was going to press, I received a short reprint from Cambridge
University visual science professor Fergus Campbell. In it he described his
pioneering experiments that demonstrated that the form of visual space could
profitably be described not in terms of shape, but in terms of spatial frequency:
the frequency of patterns of alternating dark and white stripes whose fineness
could be varied. This mode of description was
critical in helping to explain how the extremely fine resolution of a scene is
possible in our visual perception. Attached to this reprint was a note from
Fergus: “Karl—is this what you mean?” It certainly was!!
After this exciting breakthrough, I went to Cambridge several times to
present my most recent findings and to get caught up with those of Campbell and
of visual scientist Horace Barlow, who had recently moved back to Cambridge
from the University of California at Berkeley, where we had previously
interacted: he presenting his data on Cardinal cells (multiple detectors of lines)
and I presenting mine on the processing of multiple lines, to each other’s lab
groups.
Campbell was showing that not only human subjects but also brain cells
respond to specific frequencies of alternating stripes and spaces. These
experiments demonstrated that the process that accounts for the extremely fine
resolution of human vision is dependent on the function of the brain cortex.
When the Cambridge group was given its own sub-department, I sent them a
large “cookie-monster” with bulging eyes, to serve as a mascot.
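A toy version of such a tuning measurement takes only a few lines—an illustrative sketch in which the “cell” is a Gabor function (a sinusoid under a Gaussian envelope), with parameters invented for the demonstration rather than fitted to any recorded neuron:

    import numpy as np

    # A model cortical cell: a sinusoid at its preferred spatial frequency
    # under a Gaussian envelope (a Gabor function).
    x = np.linspace(-np.pi, np.pi, 256)
    preferred = 4.0  # cycles per window
    gabor = np.exp(-x**2 / 2) * np.cos(preferred * x)

    # Probe the cell with gratings of different stripe frequencies.
    for freq in (1.0, 2.0, 4.0, 8.0):
        grating = np.cos(freq * x)
        response = abs(np.dot(gabor, grating))
        print(f"{freq:4} cycles: response {response:8.2f}")
    # The response peaks sharply at the preferred frequency of 4 cycles.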
Once, while I was visiting Fergus’s laboratory, Horace Barlow came running
down the hall exclaiming, “Karl, you must see this, it’s right up your alley.”
Horace showed me changes produced on a cortical cell’s receptive field map by
a stimulus outside of that brain cell’s receptive field. I immediately exclaimed,
“A cortical McIlwain effect!” I had remembered (not a bad feat for someone
who has difficulty remembering names) that an effect by this name had
previously been shown to occur at the retina. Demonstrating such global
influences on receptive fields indicated that the fields were part of a larger brain
network or web. It took another three decades for this finding to be repeatedly
confirmed and its importance more generally appreciated by the neuroscience
community.
These changes in the receptive fields are brought about by top-down
influences from higher-order (cognitive) systems. The changes determine
whether information or image processing is sought. Electrical stimulation of the
inferior temporal cortex changes the form of the receptive fields to enhance
information processing—as it occurs in communication systems. By
contrast, electrical stimulation of the prefrontal cortex changes the form of the
receptive fields to enhance image processing—as it is used in imaging techniques
such as PET scans and fMRI, and in making correlations, as with the fast Fourier
transform (FFT). The changes are brought about by altering the width of the
inhibitory surround of the receptive fields. Thus both information and image
processes are achieved, depending on whether communication and computation,
or imaging and correlations, are being addressed.
15. Cross sections of thalamic visual receptive fields (n) showing the influence of electrical stimulation of
the inferior temporal cortex (i) and the frontal cortex (f)

Encounter at MIT
In the early 1970s, I was invited to present the results of these experiments
at the Massachusetts Institute of Technology. The ensuing discussion centered
not on the results of my experiments but—ironically, it being MIT—on my use
of computers. David Hubel stated that the use of computers would ruin visual
science by taking the research too far from a hands-on, direct experience
necessary to understanding the results of observation. I retorted that in every
experiment, in the mapping of every cell, I had to use the direct hands-on
mapping before I could engage the computer to quantify my observations. I
stated that in order to obtain the results that I had shown, I needed stable
baselines—baselines that could only be statistically verified by the use of a
computer. Only by using these baselines, could I show that the changes produced
by the electrical stimulation were reliable.
Then my host, Hans-Lukas Teuber, professor and chair of the Department
of Psychology at MIT, rose to support Hubel by saying that the use of computers
would also ruin behavioral psychology. He was an old friend of mine (our
families celebrated each Thanksgiving together), so I reminded him that my
laboratory had been completely computerized since 1959, when I’d first moved
to Stanford. That alone had made it possible to test 100 monkeys a day on a
variety of cognitive (rather than conditioning) tasks. I pointed out that he should
be pleased, since he was a severe critic of operant conditioning as a model for
understanding psychological processing.
I came away from the MIT lecture disappointed by the indifference to the
remarkable and important experimental results obtained both at Cambridge and
in my laboratory, a disappointment that persisted as the feature approach to
visual processing became overwhelmingly dominant during the 1990s.

Feature Detection vs. Feature Extraction


Within the feature approach, two very different procedures can be followed.
In one, the emphasis is on detecting some element in the sensory input, a
bottom-up direction of inquiry; in the other, the feature is extracted from a
myriad of sensory and cognitive processes in a top-down fashion. Extraction
involves identification of the feature, essentially a creative process.
Different cells in the brain’s visual cortex differ in their response to the
orientation, to the velocity or to the acceleration of movement of a line. We, as
visual creatures, can analyze any scene into lines and edges; however, the maps
of the brain’s receptive fields were initially interpreted not as analyzers of an
entire vista but as brain cells that were “detecting” lines. A detector, as the word
implies, needs to be uniquely sensitive to that which is being detected. A Geiger
counter, fitted to detect radioactivity, would be useless if it were sensitive to
everything in an environment including auditory noise, aromas, changes in
visual brightness, or the contours of a shape.
The feature detection approach had an initial advantage in exploring form
as Euclidean shapes. Maps of the receptive fields recorded from visual pathways
yielded an “irresistible” progression: the maps change from a circular shape in
the retina and the thalamus to an elongated shape at the cortex. An important
aspect of the maps was that they were elicited only by stimuli of specific
orientations (horizontal, vertical and all angles in between). These oriented
elongated shapes—interpreted as lines and edges—could then be considered to
be a two-dimensional “stick-figure” process representing features of our visual
environment. But, as is well known, lines need not actually be present for us to
perceive a shape.

16. The Kanizsa Triangle

For example:
Contrary to what was becoming “established,” we showed in my laboratory
that the supposed “Geiger counter” cells demonstrated sensitivities to many
stimuli far beyond their sensitivity just to the presentation of lines. Some of these
cells also responded to select bandwidths of auditory stimulation. All of them
responded to changes in luminance (brightness), while other cells responded also to
color. As described in Chapter 2, the Berkeley brain scientists Russell and Karen
DeValois, on the basis of their decades-long experimental results, have
constructed a model showing that the same network of cells can provide for our
experience both of color and of form: When connected in one fashion,
processing leads to the experience of form; when connected in another fashion,
our experience is of color. The fact that a single network can accomplish both is
important because we rarely experience color apart from form.
Perhaps more surprising was our finding that some of these cells in the
primary visual sensory receiving cortex, when recorded in awake problem-
solving monkeys, respond to whether the monkey’s response to a visual stimulus
had been rewarded or not—and if so, whether the monkey had pressed one panel
or the other in order to receive that reward.

A Show of Hands
To illustrate how such a network might function, I ask the students in my
class to do the following: “All those who are wearing blue jeans, please raise
your hand. Look at the distribution of hands. Now lower them. Next, all those
wearing glasses, please raise your hands. Look at the distribution of hands and
note how different it is from the distribution produced by those wearing blue
jeans.” Again, “All girls, raise your hands; then all boys.” We quickly observe
that each distribution, each pattern, is different. The features—blue jeans,
glasses, gender—were coded by the various patterns of raised hands, a pattern
within a web, not by the characteristic of any single individual person.
Brain cells are like people when you get to know them. Each is unique. All
features are distributed among the group of individuals, but not all the
individuals share all the features. Also, as in the saying “birds of a feather flock
together,” brain cells sharing the same grouping of features tend to form clusters.
Thus, one cluster will have more cells that are sensitive to color; another cluster
will have more cells that are sensitive to movement. Still, there is a great deal of
overlap of features within any cluster. People are the same in many respects. For
instance, some of us have more estrogen circulating in our bodies while others
have more testosterone. But all of us have some of each.
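The classroom demonstration translates directly into a toy population code (purely illustrative; the “students” and their attributes are invented for the example):

    import numpy as np

    # Five "students" (or cells); the columns are attributes that no
    # single member owns uniquely: jeans, glasses, female.
    attributes = np.array([
        [1, 0, 1],
        [1, 1, 0],
        [0, 1, 1],
        [1, 1, 1],
        [0, 0, 0],
    ])

    # Each top-down query ("raise your hand if ...") reads out a
    # different pattern across the very same population.
    for j, feature in enumerate(["jeans", "glasses", "female"]):
        print(f"{feature:>8}: {attributes[:, j]}")

No single row identifies a feature; each feature is the pattern of raised hands across the whole group.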
With this arrangement of “features as patterns,” the question immediately
arises as to who asks the question, as I did with my students: Who is wearing
blue jeans? This question is known as an example of a top-down approach to
processing, whereas feature detection is a bottom-up approach. In my laboratory,
my colleagues and I spent the better part of three decades establishing the
anatomy and physiology of the routes by which top-down processing occurs in
the brain. Together with the results of other laboratories, we demonstrated that
electrical stimulation of the sensory-specific “association” regions of the brain
cortex changed the patterns of response of sensory receptors to a sensory input.
Influence of such stimulation was also shown to affect all the “way stations” that
the sensory input traversed en route to the sensory receiving cells in the cortex.
Even more surprising was the fact that the effects of an auditory or touch
stimulus would reach the retina in about the same amount of time that it takes for
a visual stimulus to be processed. Tongue in cheek, I’ve noted these results in the
phrase: “We live in our sensory systems.” The experimental findings open the
important question as to where all our processing of sensory input begins, and
what is the sequence of processing. That question will be addressed shortly and
again in depth when we analyze frames and contexts.

What Features Are Featured?


But meanwhile, to return for a moment to the notion of line detectors: the
visual cortex cannot be processing straight lines or edges per se because the
input from the retina is not received in the form of a straight line. As Leonardo
da Vinci showed in the 15th century, the retina is a hemisphere. A perceived line
must therefore be abstracted from a curve. When you fly from New York to
London you travel via Newfoundland, Iceland and Greenland in an arc that
describes the shortest distance between two points on the globe. On a sphere, the
shortest distance between two points is not a straight line but an arc of a great
circle. Riemann developed a geometry that applies to such curved surfaces. The
brain’s visual processing, therefore, must operate according to Riemannian
principles and must be based on Pythagorean arcs, angles and cones, not on the
points and lines of Euclidean geometry.
Experimental psychologist James Gibson and his students at Cornell University
have taken this aspect of vision seriously, but their work needs to be
incorporated by brain scientists in their account of visual sensory processing.
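The arithmetic behind that flight path is the standard great-circle (haversine) formula; in this short sketch the coordinates for the two cities are approximate:

    import math

    def great_circle_km(lat1, lon1, lat2, lon2, radius=6371.0):
        # Haversine formula: shortest distance between two points on a sphere.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
        return 2 * radius * math.asin(math.sqrt(a))

    # New York to London: the shortest route arcs north over Newfoundland,
    # rather than following a "straight" line of constant latitude.
    print(great_circle_km(40.7, -74.0, 51.5, -0.1))  # roughly 5,570 km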

A Symposium in the Roman Tradition


I was able to bring the attention of the neuroscience community to the issue
raised during the encounter at MIT and to the spatial frequency formulation of
visual receptive fields at a convention of the Society for Neuroscience held in
Minneapolis (in the mid-1970s). Mortimer Mishkin—a friend and former
doctoral and postdoctoral student of mine during the 1950s—had, in his capacity
of program chairman, asked me to organize a special symposium covering
microelectrode studies in visual neuroscience. The symposium was to be special
in that it would be held in the Roman tradition: the symposiasts arguing their
respective views over flasks of wine!
The society was still of a manageable size (it has since grown to 36,000
members) and attendance at this particular symposium was restricted to an
audience of 300. Wine and cheeses were served to all.
Symbolically (both from a political and a geographical standpoint) I placed
Russell and Karen DeValois from the University of California at Berkeley to my
left, and, to my right, Horace Barlow from Cambridge University and David
Hubel from Harvard. Hubel, together with Torsten Wiesel, also of Harvard, had
recently received the Nobel Prize for their work on “feature detection,” and I
was pleased that he had graciously consented to participate in what was meant to
be a challenging session.
I asked David to go last in our series of presentations, hoping that, if
allowed to imbibe enough wine, he would be sufficiently mellow to welcome a
discussion of any disagreements with his views on the neural substrate of
perception. My plan worked.
Russell and Karen DeValois presented their data, which provided solid
evidence for a frequency description of the pattern of receptive fields in the
visual cortex. They also presented critical evidence against the formulation that
line or edge detection of shapes best describes those receptive fields. Horace
Barlow then presented evidence for how groups of cells in the brain extract a
feature from the multiple features that characterize a visual scene, as opposed to
the view that any single brain cell responds solely and uniquely to a particular
feature such as a line.
Finally, I introduced David Hubel. In my introduction, I noted three things:

1. that David was a modest person, who would claim that he hadn’t
understood the mathematical aspects of the DeValois presentation—but I
would not let him get away with this because he had taught high school
math before going to medical school;
2. that, in my experience, each cortical neuron encodes several features of our
visual environment, that each neuron is like a person with many attributes,
and thus our ability to recognize features had to depend on patterns
displayed by groups of neurons for further processing;
3. that, as just presented, Karen and Russ DeValois had done critical
experiments showing that the visual cortical cells were tuned to spatial
frequency rather than to the shapes of “lines” or “edges.”

Hubel immediately announced to the audience, “Karl is right in everything he says about cortical cells, but I have never seen anything like what Russell
DeValois is talking about.” Russ DeValois responded, “You are welcome to
come to our laboratory at any time and see the evidence for yourself.” Hubel
replied, “But I wouldn’t believe it if I saw it.” I chimed in, “David, I didn’t
realize you were a flatlander.” The symposium was on.
Hubel went on to point out that although cells might respond to many
inputs, he was convinced they responded more to what he called a “critical
sensory stimulus.” As a moderator I felt that I had said enough and didn’t
challenge him on this, but as demonstrated in my laboratory and in many others,
although his statement is correct for the receptive fields in further processing
stations (where the so-called “grandmother cells” are located), this
characterization is incorrect for receptive fields in the primary sensory receiving
cortex.
Hubel did admit that his views on feature detection had received little
confirmation over the decades since he had first proposed them, but the
identification of so-called “grandmother” cells that responded especially well to
higher-order configurations, such as hands and (grandmothers’) faces, kept his
faith in the feature detector view alive.
The audience and I felt amply rewarded by the exchange. Issues had been
clearly engaged in a friendly and supportive fashion. This was science at its best.
Several issues had been aired. One, highlighted in Barlow’s presentation, is,
of course, that feature extraction—as described earlier in this chapter by the
metaphor of a show of hands—is eminently plausible but should not be confused
with feature detection.
A second issue, highlighted by the DeValoises and explored in the next chapter on sensory processing, involves the frequency domain.

Green Hand Cells


Finally, Hubel’s faith in “grandmother cells” was well placed. As we
proceed from the primary sensory receiving cortex to the temporal lobe of the
brain, we find receptive fields that are “tuned” to an ever “higher order,” that is,
to ever more complex stimuli. The receptive fields of these “grandmother cells”
are nodes in an assembly that is capable of representing features as higher-order
properties abstracted from lower-order properties. Thus, we can come to
understand how a property like “blue jeans” in the metaphor I’ve used can
become abstracted from the population of persons who wear them. Sales
counters at retail stores handle blue jeans; cells in the brain’s temporal lobe
respond to pictures of grandmother faces and other sensory images such as
flowers and hands.
Cells responding primarily to pictures of hands were the first “grandmother
cells” to be discovered. Charles Gross at Princeton University was the first to
record them. I asked Gross, who had attended a course I was teaching at Harvard
and as a consequence became a student of Larry Weiskrantz at Cambridge and
Oxford, whether the face cells responded to stimuli other than faces. They did,
but not as vigorously (as noted by Hubel in our symposium). However, in my laboratory, we unexpectedly found that, in one monkey that had had considerable experience choosing a green rather than a red set of stripes, a cell’s activity went wild when the monkey was shown the two attributes combined as a green hand!
In experiments in my laboratory, as had also been found by Gross, the amount of activity elicited from these “hand cells” was almost as
great when we visually presented broad stripes to the monkey. We presented
stripes because they could provide the basis for a frequency interpretation of
how the cells might process a hand.
We can derive two important conclusions from the results of these
experiments:

1. Higher-order processing locations in the brain enable us to make distinctions among complex stimuli.
2. Our ability to make such distinctions can be learned.

Seeing Colors
James and Eleanor Gibson of Cornell University, using purely behavioral
techniques, had also reached the conclusion that higher-order distinctions are
learned. The Gibsons reported their results in a seminal publication in which
they showed that perceptual learning is due to progressive differentiation of
stimulus attributes, not by forming associations among them.
Taking Matte Blanco’s definition that our conscious experience is based on
making distinctions, as described in the opening paragraphs of this chapter, this
line of research indicates how our brain processes, through learning, make
possible progressively greater reaches of conscious experience.
The processes by which such distinctions are achieved were addressed at
another meeting of the Society for Neuroscience—this one in Boston in the
1980s. Once again, David Hubel gave a splendid talk on how brain processes
can entail progressively greater distinctions among colors. He noted that, as we
go from receptor to visual cortex and beyond, a change occurs in the coordinates
within which the colors are processed: that processing changes from three colors
at the receptor to three opponent pairs of colors at the thalamus and then to six
double-opponent pairs at the primary sensory cortex. Still higher levels of
processing “look at the coordinates from an inside vantage” to make possible the
experiencing of myriads of colors. A change in coordinates is a transformation, a
change in the code by which the form is processed.
Transformations are the coin of making ever-greater distinctions,
refinements in our conscious experience. After his presentation in Boston, I
spoke with David Hubel and said that what he had just done for color was
exactly what I was interested in doing for form. David said he didn’t think it
would work when it came to form. But of course it did. The prodigious
accomplishments of the 1970s and 80s form the legacy of how visual angles at
different orientations can be developed into a Pythagorean approach to pattern vision. (Hubel and I have never had the opportunity to discuss the
issue further.)

Form as Pattern
A different path of experimentation led me to explore the view of form as
pattern. In my earlier research, while examining the effects of cortical removals
on the behavior of monkeys, I had to survey extensively the range of responses to stimuli that were affected by removing a particular area of the brain cortex.
Thus, during the 1960s, when I began to use micro-electrodes, I used not only a
single line but two lines in my receptive field experiments. The relationship of
the lines to one another could thus be explored. Meanwhile, other laboratories
began to use multiple lines and I soon followed suit.
Multiple lines of different width and spacing (that is, stripes) produce a
“flicker” when moving across the visual field. We experience such a flicker
when driving along a tree-lined road: the rapid alternation of such light and
shadow stripes can actually incite a seizure in seizure-prone people. The rapidity
of the flicker, the speed of the alternation between light and dark can be
described mathematically in terms of its frequency.
When we moved such stripes before the eye of a subject in our experiments,
we could change the response of the subject’s cortical receptive field by
changing the frequency of the stimulation in two ways: by altering the speed at
which we moved the stimulus, or by changing the width of the stripes and the
spaces between them.
Mathematically, this relationship among the angles on the retina that result
from stimulation by stripes is expressed as degrees of arc of the portion of a
circle. A circle, as noted earlier, represents a cycle. Degrees of arc are measures
of the density, the rapidity of oscillations between light and dark in the cycle.
Thus, the oscillations are the spectral transform of the spatial widths of the
alternating light and dark stripes. It follows that changes in the width of the light
and dark stripes will change the frequency spectrum of the brain response.
Those of us who used multiple lines in our experiments have described our
results in terms of “spatial frequency.” Frequency over space, as distinct from frequency over time, can be demonstrated, for example, if we visually scan the
multiple panels that form a picket fence or similar grating. When we visually
scan the fence from left to right, each panel and each gap in the fence projects a
pattern of alternating bright and shadow regions onto the curve of our
retina. Therefore, each bright region and each shadow covers an angle of the
curve of our retina. The “density” of angles is defined by how close the
stimulating bright and dark regions (the width of the fence panels and their
spacing) are to each other.
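
To make the relation between stripe width and spatial frequency concrete, here is a minimal numerical sketch in Python (the grating length and bar widths are arbitrary values of my own choosing, not taken from any experiment described here). It builds two square-wave gratings and shows that the dominant peak of the Fourier spectrum shifts when the bar width changes:

```python
import numpy as np

def grating(n_samples, bar_width):
    """Square-wave grating: alternating bright (1) and dark (0) bars,
    each bar_width samples wide."""
    x = np.arange(n_samples)
    return ((x // bar_width) % 2).astype(float)

def dominant_spatial_frequency(pattern):
    """The frequency (cycles per sample) with the largest Fourier
    amplitude, the mean (zero-frequency) level having been removed."""
    spectrum = np.abs(np.fft.rfft(pattern - pattern.mean()))
    freqs = np.fft.rfftfreq(pattern.size)
    return freqs[np.argmax(spectrum)]

n = 1024
wide = grating(n, bar_width=64)    # broad stripes -> low spatial frequency
narrow = grating(n, bar_width=8)   # fine stripes  -> high spatial frequency

print(dominant_spatial_frequency(wide))    # 1/128 cycles per sample
print(dominant_spatial_frequency(narrow))  # 1/16  cycles per sample
```

Halving the width of the bars doubles the dominant spatial frequency: it is the width and spacing of the panels, not any single “line” in the stimulus, that sets the frequency to which a tuned cell would respond.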

17. Low- and High-Frequency Gratings


18. Receptive Field and Contours

In our hearing, within the acoustical domain, we have long been accustomed to thinking in terms of the temporal frequency of a sound
determining its pitch. In seeing, we now had demonstrated that a visual pattern,
an image, is determined in the same fashion by its spatial frequency. “Wow! The
eye works just like the ear.” This insight— that all senses might function in a
similar mode—was a most dramatic revelation. My long quest for a way
to understand the processing of the visual as radiation, similar to the processing
of color, was at last realized.

Waves: A Caveat
In the early 1960s, when the holographic formulation of brain processes
was first proposed, one of the main issues that had to be resolved was to identify
the waves in the brain that constituted a hologram. It was not until I was writing
this book that the issue became resolved for me. The short answer to the
question as to what brainwaves are involved is that there are none.
For many years David Bohm had to chastise me for confounding waves
with spectra, the results of interferences among waves. The transformation is
between space-time and spectra. Waves occur in space-time as we have all
experienced. Thus the work of Karen and Russell DeValois, Fergus Campbell
and the rest of us did not deal with the frequency of waves but with the
frequency of oscillations— the frequency of oscillations expressed as wavelets,
as we shall see, between hyperpolarizations and depolarizations in the fine-fiber
neuro-nodal web.
Waves are generated when the energy produced by oscillations is
constrained as in a string—or at a beachfront or by the interface between wind
and water. But the unconstrained oscillations do not produce waves. Tsunamis
(more generally solitons) can be initiated off the east coast of Asia. Their energy
is not constrained in space or time and so is spread over the entire Pacific Ocean;
their effect is felt on the beaches of Hawaii. But there are no perceptible waves
over the expanse of the Pacific Ocean in between. The oscillations that “carry”
the energy can be experienced visually or kinesthetically, as at a beach beyond
the breakers where the water makes the raft bob up and down in what is felt as a
circular motion. In the brain, it is the constraints produced by sensory inputs or
neuronal clocks that result in various frequencies of oscillations that we record in
our ERPs and EEGs.

Critical Tests
Fergus Campbell provided an interesting sidelight on harmonics, the
relationships among frequencies. The test he provided was critical to viewing the perception
of visual form as patterns of oscillation. When we hear a passage of music, we
hear a range of tones and their harmonics. A fundamental frequency of a tone is
produced when a length of a string is plucked. When fractions of the whole
length of the string also vibrate, harmonics of the fundamental frequency are
produced. Thus, in a passage of music, we hear the fundamental tones with all of their harmonics. Surprisingly, we also hear the fundamental tones
when only their harmonics are present. This is called experiencing the “missing
fundamental.”
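
The effect is easy to reproduce numerically. In the sketch below (Python; the sample rate, the 200 Hz fundamental and the choice of harmonics are illustrative assumptions of mine, not Campbell’s actual stimuli), a tone is built from harmonics 2 through 5 only, yet the waveform still repeats at the period of the absent fundamental, which is the periodicity we hear:

```python
import numpy as np

fs = 48000                # sample rate (Hz); arbitrary choice
f0 = 200.0                # the "missing" fundamental (Hz); arbitrary choice
t = np.arange(fs) / fs    # one second of time samples

# Sum harmonics 2 through 5 of f0; the fundamental itself is omitted.
tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))

# The waveform nevertheless repeats every 1/f0 seconds (240 samples):
period = int(fs / f0)
print(np.allclose(tone[:period], tone[period:2 * period]))  # True
```

Nothing in the signal oscillates at 200 Hz, yet the pattern formed by the interfering harmonics recurs at exactly that rate.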
I experienced another exciting moment when, during a visit to Cambridge in the 1970s, Fergus Campbell demonstrated that in vision, as in audition,
we also experience the “missing fundamental.” Yes, the eye is like the ear.
The results of two other sets of experiments are critical to demonstrating the
power of the frequency approach to analyzing visual patterns. Russell and Karen
DeValois and their students performed these experiments at the University of
California at Berkeley. In one set, they mapped an elongated receptive field of a
cortical neuron by moving a single line in a particular orientation across the
visual field of a monkey’s eye. (This form of mapping had been the basis of the
earlier interpretation that cells showing such receptive fields were “line
detectors.”) The DeValoises went on to widen the line they presented to the
monkey until it was no longer a line but was now a rectangle: The amount of the
cell’s response did not change! The cell that had been described as a line detector
could not discriminate—could not resolve the difference—between a line and a
rectangle.
In this same study, the experimenters next moved gratings with the same
orientation as that of the single line across the visual field of the monkey. These
gratings included a variety of bar widths and spacing (various spatial
frequencies). Using these stimuli, they found that the receptive field of the cell
was tuned to a limited bandwidth of spatial frequencies. Repeating the
experiment, mapping different cortical cells, they found that different cells
responded not only to different orientations of a stimulus, as had been found
previously, but that they were tuned to different spatial frequencies: the
ensemble of cortical cells had failed to discriminate between a line and a
rectangle but had excellent resolving power in distinguishing between bandwidths of frequencies.

19. The missing fundamental: note that the patterns do not change: the bottom, as you view it, is the pattern
formed with the fundamental missing

For the second set of experiments the DeValoises elicited a receptive field
by moving a set of parallel lines (a grating) across the monkey’s receptive field.
As noted earlier, one of the characteristics of such fields is their specificity to the
orientation of the stimulus: a change in the orientation of the lines will engage
other cells, but the original cell becomes unresponsive when the orientation of
the grating is changed.
Following their identification of the orientation selectivity of the cell, the
DeValoises changed their original stimulus to a plaid. The pattern of the original
parallel lines was now crossed by perpendicular lines. According to the line
detector view, the cell should still be “detecting” the original lines, since they
were still present in their original orientation. But the cell no longer responded as
it had done. To activate the cell, the plaid had to be rotated.
The experimenters predicted the amount of rotation necessary to activate
the cell by scanning each plaid pattern, then performing a frequency transform
on the scan using a computer program. Each cell was responding to the
frequency transform of the plaid that activated it, not detecting the individual
lines making up the plaid. Forty-four experiments were performed, and in 42 of
them the predicted amount of rotation came within two minutes of arc; in the
other two experiments the results came within one degree of predicting the
amount of rotation necessary to make the cell respond. Russell DeValois, a rather
cautious and reserved person, stated in presenting these data at the meeting of
the American Psychological Association that he had never before had such a
clear experimental result. I was delighted.
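
The logic of that prediction can be illustrated with a small simulation (Python). The image size, frequencies and orientations below are invented for illustration; the sketch demonstrates only the principle at stake, namely that the orientation information lives in the Fourier transform of the whole pattern rather than in its component lines:

```python
import numpy as np

def grating(size, kx, ky):
    """Sinusoidal grating whose luminance varies along the direction
    (kx, ky), in whole cycles per image (integers avoid spectral
    leakage). The stripes themselves run perpendicular to (kx, ky)."""
    y, x = np.mgrid[0:size, 0:size] / size
    return np.sin(2 * np.pi * (kx * x + ky * y))

def peak_orientations(image, n_peaks=2):
    """Orientations (degrees, mod 180) of the strongest Fourier peaks."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    c = image.shape[0] // 2
    spec[c, c] = 0.0                        # ignore the mean (DC) term
    angles = []
    for _ in range(n_peaks):
        fy, fx = np.unravel_index(np.argmax(spec), spec.shape)
        angles.append(round(np.degrees(np.arctan2(fy - c, fx - c)) % 180.0, 1))
        spec[fy, fx] = 0.0                  # remove this peak...
        spec[2 * c - fy, 2 * c - fx] = 0.0  # ...and its mirror twin
    return sorted(angles)

# A plaid: two superposed gratings modulating at about 36.9 and 126.9 degrees.
plaid = grating(256, 16, 12) + grating(256, -12, 16)
print(peak_orientations(plaid))  # [36.9, 126.9]
```

Rotating the plaid rotates the spectral peaks by the same amount, so the rotation needed to bring a peak into a cell’s preferred orientation-and-frequency band can be computed in advance; that, in essence, is what the DeValoises’ predictions accomplished.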

20. Tuning functions for two representative striate cortex cells. Plotted are the sensitivity functions for bars
of various widths (squares) and sinusoidal gratings of various spatial frequencies (circles). Note that both
cells are much more narrowly tuned for grating frequency than for bar width. (From Albrecht et al., 1980,
copyright 1980, AAAS. Reprinted by permission.)

Signs of the Times


An excellent example of the strengths and limitations of currently accepted
views regarding the feature detection versus the frequency views is provided by
a Scientific American article (April 2007) entitled “The Movies in Our Eyes,”
authored by Frank Werblin and Botond Roska. The article describes a series of
elegant experiments performed by the authors, who recorded the electrical
activity of ganglion cells of the retina of rabbits. It is the ganglion cells that give
rise to the axons that relay to the brain the patterns developed during retinal
processing of radiant energy. The authors had first classified the ganglion cells
into 12 categories depending on their receptive fields, their dendritic connections
to earlier retinal processing stages. These connections were demonstrated by
injecting a yellow dye that rapidly spread back through all of the dendrites of the
individual ganglion cell under investigation.

21. Orientation tuning of a cat simple cell to a grating (squares and solid lines) and to checkerboards of
various length/width ratios. In A the responses are plotted as a function of the edge orientation; in B as a
function of the orientation of the Fourier fundamental of each pattern. It can be seen that the Fourier
fundamental but not the edge orientation specifies the cell’s orientation tuning (From K. K. DeValois et al.,
1979. Reprinted with permission.)

The authors recorded the patterns of signals generated in each of the different types of ganglion cell by a luminous square and also by the presentation
of a frame of one of the author’s illuminated faces. They then took the resulting
patterns and programmed them into an artificial neural network where they were
combined by superposition. The resulting simulation was briefly (for a few
milliseconds) displayed as an example of a retinal process, a frame of a movie,
sent to the brain. In sequence, the movies showed a somewhat fuzzy depiction of
the experimenter’s face and how it changed as the experimenter talked for a
minute.
In their interpretation, the experimenters emphasized the spatial aspect of
the representation of the face and the time taken to sequence the movies. They
also emphasized the finding that the retina does a good deal of preprocessing
before it sends a series of partial representations of the photic input to the brain
for interpretation: “It could be that the movies serve simply as elementary clues,
a kind of scaffolding upon which the brain imposes constructs” (p. 74). Further:
“ . . . the paper thin neural tissue at the back of the eye is already parsing the
visual world into a dozen discrete components. These components travel, intact
and separately, to distinct visual brain regions—some conscious, some not. The
challenge to neuroscience now is to understand how the brain interprets these
packets of information to generate a magnificent, seamless view of reality” (p.
79).
However, the authors ignore a mother lode of evidence in their own
findings. The electrical activity, both excitatory and inhibitory, that they record
from the ganglion cells, even from the presentation of a square, shows up as
wavelets! This is not surprising since the input to the ganglion cells is essentially
devoid of nerve impulses, being mediated only by way of the fine-fiber neuro-
nodal web of the prior stages of processing—a fact that is not mentioned in the
Scientific American paper. Wavelets are constrained spectral representations.
Commonly, in quantum physics, in communication theory and in neuroscience
the constraint is formed in space-time. Thus, we have Gabor’s quanta of
information.
Werblin and Roska speak of “packets of information” but do not specify
whether these are packets of scientifically defined (Shannon) information or a
more loosely formulated concept. They assume that the space-time “master
movie” produced by the retinal process “is what the brain receives.” But this
cannot be the whole story because a considerable amount of evidence has shown
that cutting all but 2% (any 2%) of the optic nerve does not disturb an animal’s
response to a visual stimulus. There is no way that such an ordinary master
movie could be contained as such in just 2% of any portion of the optic tract and
simply repeated in the other 98%. Some sort of compression and replication has
to be involved.
Given these observations on Werblin and Roska’s experimental results and
the limitations of their interpretations, their presentation can serve as a major
contribution to the spectral view of visual processing as well as to the authors’ own
interpretation in terms of a feature view. The spectral aspect of their data that
they ignore is what is needed to fill out their feature scaffolding to form what
they require to compose the seamless view of the world we navigate.

22. The Retinal Response to a Rectangle


23. Activity of a Second Ganglion Cell Type

Revisiting Geometry and Trigonometry


The strengths and weaknesses of the feature and the frequency (spectral)
views of brain processes in perception provide a fresh look at the way science
works. The experiments that led to the feature view showed the importance of
the orientation of the stimulus. The experiments upon which the frequency view
is based showed that the cortical cells were not detectors of stimulus “elements”
but were responsive to interrelations among them. Furthermore, the frequency
approach demonstrated that the power and richness of a spectral harmonic
analysis could enable a full understanding of neural processing in perception.
Unfortunately, those holding to the feature approach have tended to ignore
and/or castigate the frequency approach and the results of its experiments. There
is no need to exclude one or the other approach because the results of harmonic
analysis can be translated into statistics that deal with features such as points;
and into vectors that deal with features such as oriented lines.
There is a good argument for the ready acceptance of the feature detection
approach. It is simple and fits with what we learned when taking plane geometry
in high school. But the rules of plane geometry hold only for flat surfaces, and
the geometry of the eye is not flat. Also, the rules of plane geometry hold only
for medium distances, as a look at the meeting of parallel railroad tracks quickly
informs us. A more basic issue has endeared brain scientists to the feature
detection approach they have taken: the rules of plane geometry begin with
points, and 20th century science insisted on treating events as points and on working with the statistical properties of combinations of points, which can be represented by
vectors and matrices. The frequency approach, on the other hand, is based on the
concept of fields, which had been a treasured discovery of the 19th century.
David Marr, Professor of Computational Science at MIT, was a strong
supporter of the feature view. But he could not take his simulations beyond two
dimensions— or perhaps two and a half dimensions. Before his untimely death
from leukemia, he began to wonder whether the feature detector approach had
misled him and noted that the discovery of a cell that responded to a hand did
not lead to any understanding of process. All the discovery did was shift the
scale of hand recognition from the organism to the organ, the brain.

24. The Receptive Fields of Seven Ganglion Cell Types Arranged Serially

As I was writing the paragraphs of this chapter, it occurred to me that the resolution to the feature/frequency issue rests on the distinction between research
that addresses the structure of circuits vs. research that addresses processes at the
deeper neuro-nodal level of the fine-fibered web. The circuits that characterize
the visual system consist of parallel channels, the several channels conveying
different patterns of signals, different features of the retinal process. (My
colleagues and I identified the properties of x and y visual channels that reach
the cortex as identical to the properties of simple and complex cortical cells.)
Thus, feature extraction, the creative process of identifying features, can be
readily supported.
The frequency approach uses the signals recorded from axons to map the
processes going on in the fine-fibered dendritic receptive fields of those axons. This process can generate features as they are recorded from axons. Taking the frequency approach does not entail a non-quantitative representation of the
dendritic receptive fields, the holoscape of the neuro-nodal process. Holoscapes
can be quantitatively described just as can landscapes or weather patterns. The
issue was already faced in quantum physics, where field equations were pitted
against vector matrices during the early part of the 20th century. Werner
Heisenberg formulated a comprehensive vector matrix theory and Erwin
Schrödinger formulated a comprehensive set of wave functions to organize what
had been discovered. There was a good deal of controversy as to which
formulation was the proper one. Schrödinger felt that his formulation had greater
representational validity (Anschaulichkeit). Heisenberg, like Niels Bohr, was more interested in the multiple uses to which the formulation could be put, an interest that later became known as the Copenhagen interpretation of quantum theory, which has been criticized as being conceptually vacant. Schrödinger
continued to defend the conceptual richness of his approach, which Einstein
shared. Finally, to his great relief, Schrödinger was able to show that the matrix
and wave equations were convertible, one into the other.
I shared Schrödinger’s feeling of relief when, three quarters of a century
later, I was able to demonstrate how to convert a harmonic description to a
vector description using microelectrode recordings taken from rats’ cortical cells
while their whiskers were being stimulated. My colleagues and I showed (in the
journal Forma, 2004) how vector representations could be derived from the
waveforms of the receptive fields we had plotted, but also showed that the vector
representation, though more generally applicable, was considerably
impoverished in showing the complexity of what we were recording, when
compared with the surface distributions and contour maps from which the
vectors were extracted.
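
To give the flavor of how a vector can be extracted from a tuning curve, here is a minimal sketch (Python). The spike counts are fabricated for illustration, and the method shown, vector averaging over doubled angles from circular statistics, is a generic technique of the same kind rather than the exact cosine-regression procedure of the Forma paper:

```python
import numpy as np

# Hypothetical tuning curve: spikes per recording interval at each
# stimulus orientation (degrees). Values invented for illustration.
orientations = np.array([0, 30, 60, 90, 120, 150], dtype=float)
spikes = np.array([12, 25, 48, 60, 41, 18], dtype=float)

# Orientation repeats every 180 degrees, so double the angles before
# vector-averaging (the standard circular-statistics device).
theta = np.radians(2 * orientations)
z = np.sum(spikes * np.exp(1j * theta))

preferred = np.degrees(np.angle(z)) / 2 % 180  # direction of the vector
strength = np.abs(z) / spikes.sum()            # 0 = untuned, 1 = perfectly tuned

print(f"preferred ~ {preferred:.0f} degrees, tuning strength {strength:.2f}")
```

The whole tuning curve is thereby compressed into two numbers, a preferred orientation and a strength. That compactness is precisely the sense in which the vector description, though broadly applicable, is impoverished compared with the full surface distributions and contour maps.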
25. The Holoscape
26. An example for the three-dimensional representation of the surface distribution and associated contour
map of the electrical response to buccal nerve stimulation. The surface distributions were derived by a
cubic interpolation (spline) procedure. The contour maps were abstracted from the surface distributions by
plotting contours in terms of equal numbers of spikes per recording interval (100 msec). The buccal nerve
stimulation whisked the rat’s whiskers against teflon discs whose textures (groove widths that determined
the spatial frequency of the disc) varied from 200 to 1000 microns and whose grooves were oriented from 0
to 180 degrees. Separate plots are shown for each frequency of buccal nerve stimulation: 4Hz, 8Hz, and 12
Hz (top to bottom).
27. The cosine regressions of the total number of spikes for the orientation of the stimulus for the 4 Hz
condition are shown.
28. Vectors

The use of both the harmonic and the vector descriptions of our data reflects the difference between visualizing the deep and surface structures of brain processing. Harmonic descriptions richly demonstrate the holoscape of receptive field properties, the patches—the nodes—that make up the neuro-nodal web of
small-fibered processing. The vector descriptions of our data describe the
sampling by large-fibered axons of the deep process.

In Summary
The feature detection approach to studying how our brain physiology
enables our perceptions outlines a Euclidian domain to be explored. This
approach, though initially exciting to the neuroscience community, has been
shown wanting: research in my laboratory and that of others showed that the
receptive fields of cells in the brain’s visual cortex are not limited to detecting
one or another sensory stimulus.
Instead, research in my laboratory, and that of others, showed that a feature,
relevant to a particular situation, is identified by being extracted from a set of
available features.
The frequency approach, based on recording the spectral dimensions of arcs
of excitation subtended over the curvature of the retina, is essentially
Pythagorean. Deeper insights into complex processing are obtained, insights that
can provide the basis for identifying features.
Neither approach discerns the objects that populate the world we navigate,
the subject of Chapter 6.
Navigating Our World
Chapter 5
From Receptor to Cortex and Back Again

Wherein I trace the transformation of perceived patterns as they influence the brain cortex and as the brain cortex influences them.

If you throw two small stones at the same time on a sheet of motionless
water at some distance from each other, you will observe that around
the two percussions numerous separate circles are formed; these will
meet as they increase in size and then penetrate and intersect each
other, all the while retaining as their respective centers the spots struck
by the stones. And the reason for this is that the water, though
apparently moving, does not leave its original position. . . . [This] can
be described as a tremor rather than a movement. In order to
understand better what I mean, watch the blades of straw that because
of their lightness float on the water, and observe how they do not
depart from their original positions in spite of the waves underneath
them.

Just as the stone thrown in the water becomes the center and causes
various circles, sound spreads in circles in the air. Thus every body
placed in the luminous air spreads out in circles and fills the
surrounding space with infinite likeness of itself and appears all in all
and all in every part.
—Leonardo da Vinci, Optics

Enough has now been said to prove the general law of perception,
which is this, that whilst part of what we perceive comes through our
senses from the objects before us another part (and it may be the
larger part) always comes out of our own head.
—William James, Principles of Psychology, 1890
Lenses: How Do We Come to Know What Is Out
There?
In the early 1980s the quantum physicist David Bohm pointed out that if
humankind did not have telescopes with lenses, the universe would appear to us
as a holographic blur, an emptiness from which all forms, such as objects, have disappeared. Only recently
have cosmologists begun to arrive at a similar view. String theoreticians, the
mathematician Roger Penrose and Stephen Hawking (e.g., in A Universe in a
Nutshell) have, for their own reasons, begun to suspect that Bohm was correct, at
least for exploring and explaining what we can about the microstructure of the
universe at its origin and horizon. But the horizon itself is still a spatial concept,
whereas Bohm had disposed of space entirely as a concept in his lens-less order.
With respect to the brain, I have extended Bohm’s insight to point out that,
were it not for the lenses of our eyes and similar lens-like properties of our other
senses, the world within which we navigate would appear to us as if we were in
a dense fog—a holographic blur, an emptiness within which all form seems to
have disappeared. As mentioned in Chapter 2, we have a great deal of trouble
envisioning such a situation, which was discovered mathematically in 1948 by
Dennis Gabor in his effort to enhance the resolution of electron microscopy.
Today, thanks to the pioneering work of Emmett Leith in the early 1960s, we now
have a palpable realization of the holographic process using laser optics.

Holography
Leith’s procedure was to shine a beam of coherent (laser) light through a
half-silvered angled mirror that allowed some of the beam to pass straight
through the mirror, and some of it to be reflected from its surface. The reflected
portion of the laser light was beamed at right angles to become patterned by an
object. The reflected and transmitted beams were then collected on a
photographic plate, forming a spectrum made by the intersections of the two
beams. Wherever these intersections occur, their amplitudes (heights) are either
reinforced or diminished depending on the patterns “encoded” in the beams. A
simple model of the intersections of the beams can be visualized as being very
much like the intersections of two sets of circular waves made by dropping two
pebbles in a pond, as described by Leonardo da Vinci.
When captured on film, the holographic process has several important
characteristics that help us understand sensory processing and the reconstruction
of our experience from memory. First, when we shine one of the two beams onto
the holographic film, the patterns “encoded” in the other beam can be
reconstructed. Thus, when one of the beams has been reflected from an object,
that object can again be viewed. If both beams are reflected from objects, the
image of either object can be retrieved when we illuminate the other.
An additional feature of holography is that one can retrievably store huge
amounts of data on film simply by repeating the process with slight changes in
the angle of the beams or the frequency of their waves (much as when one
changes channels on a television set). The entire contents of the Library of
Congress can be stored in 1 cubic centimeter in this manner.
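
A one-dimensional numerical sketch of Leith-style off-axis holography makes the record-and-reconstruct cycle concrete (Python; the carrier frequency, the bandwidth and the “object” are arbitrary assumptions of mine). The “film” stores only the intensity of the summed beams, yet re-illuminating it with the reference beam recovers the object wave:

```python
import numpy as np

n = 1024
x = np.arange(n)
rng = np.random.default_rng(0)

# Band-limited "object" wave, standing in for light scattered by an object.
spec = np.zeros(n, dtype=complex)
k = 50                                  # object bandwidth, in FFT bins
spec[:k] = rng.normal(size=k) + 1j * rng.normal(size=k)
obj = np.fft.ifft(spec)

# Reference beam: a tilted plane wave (carrier at 0.25 cycles per sample).
ref = np.exp(2j * np.pi * 0.25 * x)

# The "film" records only the intensity of the interference pattern:
# |ref + obj|^2 = 1 + |obj|^2 + ref*conj(obj) + conj(ref)*obj
hologram = np.abs(ref + obj) ** 2

# Re-illuminate the film with the reference beam. The |ref|^2 * obj term
# lands at baseband; the unwanted terms sit near the carrier frequencies
# and are filtered away.
reilluminated = ref * hologram
rec_spec = np.fft.fft(reilluminated)
rec_spec[k:] = 0                        # keep only the object's band
recovered = np.fft.ifft(rec_spec)

print(np.allclose(recovered, obj))      # True: the object wave is back
```

Because the tilt of the reference beam shunts the unwanted terms to different carrier frequencies, object and artifacts separate cleanly; that angular separation was the heart of Leith’s advance over earlier in-line holography.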
Shortly after my introduction to Emmett Leith’s demonstration of
holography, I was invited to a physics meeting held in a Buddhist enclave in the
San Francisco Bay area. I had noted that the only processing feature that all
perceptual scientists found to be non-controversial was that the biconvex lens of
the eye performed a Fourier transform on the radiant energy being processed.
Geoffrey Chew, director of the Department of Physics at the University of
California at Berkeley, began the conference with a tutorial on the Fourier
relationship and its importance in quantum physics. He stated that any space-
time pattern that we observe can be transformed into spectra that are composed
of the intersections among waveforms differing in frequency, amplitude, and
their relation to one another, much as Leonardo da Vinci had observed.
My appetite for understanding the Fourier transformation was whetted and
became amply rewarded.

Fourier’s Discovery
Jean Baptiste Joseph Fourier, a mathematical physicist working at the
beginning of the 19th century, did an extensive study of heat, that is, radiation
in the infrared, below the red end of the visible spectrum. The subject had military
significance (and thus support) because it was known that various patterns of
heat could differently affect the bore of a cannon. Shortly thereafter, the field of
inquiry called “thermo-dynamics” became established to measure the dissipation
of heat from steam engines: the amount of work, and the efficiency (the amount
of work minus the dissipation in heat) with which an engine did the work would
soon be major interests for engineers and physicists. The invention of various
scales that we now know as “thermo-meters”—Fahrenheit, Celsius and Kelvin—
resulted, but what interested Fourier was a way to measure not just the amount of heat but its spectrum. The technique he came up
with has been discovered repeatedly, only to have slipped away to be
rediscovered again in a different context. We met this technique when we
discussed the Pythagoreans, but both Fourier and I had to rediscover it for
ourselves. The technique is simple, but if Fourier and other French
mathematicians had difficulty with it, it may take a bit of patience on your part
to understand it.
What is involved is looking at spectra in two very different ways: as the
intersection among wave fronts and as mapped into a recurring circle.
To analyze the spectrum that makes up white light, we use a prism. What
Fourier came up with to analyze the spectrum of heat can be thought of as
analyzing spectra with a “clock.” The oscillations of molecules that form heat
can be displayed in two ways: 1) an up-and-down motion (a wave) extended
over space and time, or 2) a motion that goes round and round in a circle as on a
clockface that keeps track of the daily cycle of light and darkness.
Fourier, as a mathematician, realized that he could specify any point on a
circle trigonometrically, that is by triangulating it. Thus he could specify any
point on the circumference of the circle, that is, any point on an oscillation by its
sine and cosine components. The internal angle of the triangle that specifies the
point is designated by the cosine; the external angle, by the sine.
When he plotted these sine and cosine components on the continuous up-
and-down motion extended over space and time, the sine wave and the cosine
wave are displaced from one another by a quarter of a cycle. Where the two waveforms
intersect, those points of intersection are points of interference (reinforcement or
diminution of the height of oscillations) just as at the intersection of Leonardo’s
waves created by pebbles dropped into a pool. The heights of the points of
intersection are numbers that can be used to calculate relations between the
oscillations making up the spectra of heat. In this manner Fourier digitized an
analogue signal.
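
In modern notation (standard textbook form, not Fourier’s own), the two pictures just described, a point going round a clockface and a recurring up-and-down wave, are tied together as follows:

```latex
% A point on the unit circle at angle \theta is fixed by its cosine
% and sine components:
(x, y) = (\cos\theta,\ \sin\theta)

% Fourier's claim: a recurring pattern f(t) with period T can be rebuilt
% from such circular components taken at whole-number multiples of the
% fundamental frequency (the Fourier series):
f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}
       \left[ a_n \cos\!\left(\frac{2\pi n t}{T}\right)
            + b_n \sin\!\left(\frac{2\pi n t}{T}\right) \right]

% Each pair of coefficients is obtained by projecting f onto the
% corresponding cosine and sine components:
a_n = \frac{2}{T}\int_0^T f(t)\cos\!\left(\frac{2\pi n t}{T}\right)dt,
\qquad
b_n = \frac{2}{T}\int_0^T f(t)\sin\!\left(\frac{2\pi n t}{T}\right)dt
```

The list of pairs (a_n, b_n) is the spectrum; the numbers read off at the intersections of the sine and cosine curves are what these coefficients are computed from.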
Thinking back and forth between the two ways by which waveforms can be
plotted is not easy. It took Fourier, a brilliant mathematician, several decades to
work out the problem of how to represent the measure of heat in these two ways.
One of my “happier times” was reading, in the original French and in English
translations, Fourier’s stepwise groping toward this solution. My reason for
being so happy was that I had tried for myself to understand his solution for over
a decade and found not only that my solution was correct, but that Fourier had
gone through the steps toward the solution much as I had. Further, I have just
learned that quantum physicists are rediscovering this analytic “trick”: they
speak of “modular operators” which divide the processes under investigation
into clock-like repetitious segments. I asked my colleagues, “Like Fourier?” and
received a nod and smile: I had understood.
29. Representing Oscillations as Waveforms and as Circular Forms

Meanwhile, as he was also active in politics during a most turbulent period in French history, Fourier had been put in jail and released several times before
being appointed head of a prefecture. When Napoleon came to power, he soon
recognized Fourier’s genius and took him to Egypt. Although French
mathematics was the best in Europe, something might still be learned from the
Arabs, whose mathematics had flowered some thousand years earlier.

Fourier the Man


In addition to being given the opportunity to learn something about Arab
mathematics firsthand, Fourier was put in charge of recording and describing
Egyptian cultural artifacts for the expedition; he and his team made sketches of
the architecture, sculpture and painting. In many instances, these sketches
remain the only record we have of Egyptian monuments that have subsequently
been destroyed. Fourier spent the rest of his life preparing these sketches for
publication and providing financial and intellectual support for young
Egyptologists such as his protégé Champollion, famed for deciphering Egyptian
hieroglyphics by using the Rosetta stone (brought back to Europe by the same
expedition).
On his return to Paris, Fourier first presented his “Analytic Theory of Heat”
that had attained some maturity during his tour in Egypt. Within two weeks of
this presentation, Laplace countered with a refutation showing that Fourier had
not worked out an acceptable proof of his theoretical formulation. Laplace was
correct in his critique of Fourier’s proof: it took many years before a plausible
proof was worked out. But the overriding importance of the tool that Fourier had
devised for the measurement of the spectrum of heat made it possible for him to
continue his mathematical explorations.
Fourier persisted. He had been appointed head of his local prefecture and
paid for the publication of his further work using the salary of his office.
Meanwhile, a great many scientists and engineers were using his technique to
solve their problems in a variety of disciplines. After a decade, when Fourier
was awarded a prestigious prize, Laplace began to support the publication of his
work and his election to the French Academy of Sciences.
Over nearly two centuries, Fourier’s theorem has become widely regarded
as a most useful formula for measuring and predicting recurring patterns—from
those characterizing bull and bear markets in the stock exchange to possible
meltdowns in nuclear reactors. Richard Feynman, the noted Nobel laureate in
physics, stated in his lectures (Feynman, Leighton and Sands, 1963) that
Fourier’s theorem is probably the most far-reaching principle of mathematical
physics.
As expressed by my colleagues at the University of California at Berkeley,
professors of physiology Russell and Karen DeValois, there is every reason to
expect this over-arching formulation to also play a key role in how we can
understand sensory processes: “Linear systems analysis originated in a striking
mathematical discovery by a French Physicist, Baron Jean Fourier, in 1822 . . . [,
which] has found wide application in physics and engineering for a century and
a half. It has also served as the principal basis for understanding hearing since its
application to audition by Ohm (1843) and Helmholtz (1877). The successful
application of these procedures to the study of visual processes has come only in
the last two decades” (DeValois and DeValois, Spatial Vision, 1988).

FFT: The Fast Fourier Transform


The mathematical technique that Fourier developed was published in final
form in 1822 as Théorie Analytique de la Chaleur (Analytical Theory of Heat).
Today, the once- controversial formula is the basis of the theory known as the
“Fourier series”—and with its inverse, the “Fourier transformation”—provides a
method whereby we can reduce any perceived configuration, any pattern, no
matter how complex, into a series of numbers representing intersections among
waveforms. Further, any such series can be restored to its original configuration
by performing the transform again, using its inverse.
What Fourier’s method provides us so beautifully is the ability to
decompose a configuration, as we experience it, into a series of component parts.
For instance, the first component might take in a broad sweep of a scene, of a
sound, or of an EEG, and describe it by a low frequency. The next component
may then focus on somewhat narrower aspects of the same configuration, and
describe it by a somewhat higher frequency. Each subsequent component
resolves the pattern into higher and higher frequencies. A series of a dozen
components usually succeeds in representing the experienced space-time
configuration in sufficiently fine grain to restore it faithfully when the inverse
transform is performed. (Recall that performing the Fourier transformation and then its inverse returns the original configuration.) The successive components in the Fourier
equation accomplish an increasing resolution, providing a finer grain, such as
texture, to the configuration.
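
This progressive refinement is easy to watch numerically. The sketch below (Python; the test pattern is an arbitrary invention of mine) rebuilds a one-dimensional pattern from its first 1, 4, 12 and 64 Fourier components and reports how the error shrinks:

```python
import numpy as np

# An arbitrary "configuration": a broad sweep plus fine, edgy detail.
n = 512
t = np.linspace(0, 1, n, endpoint=False)
signal = np.sin(2 * np.pi * t) + 0.5 * np.sign(np.sin(2 * np.pi * 7 * t))

spectrum = np.fft.rfft(signal)

for m in (1, 4, 12, 64):
    truncated = np.zeros_like(spectrum)
    truncated[:m + 1] = spectrum[:m + 1]   # keep components 0..m only
    approx = np.fft.irfft(truncated, n)
    rms = np.sqrt(np.mean((signal - approx) ** 2))
    print(f"first {m:2d} components: rms error {rms:.3f}")
```

The first component captures the broad sweep; each later one adds a finer grain, and by a dozen or so the coarse structure of the pattern is essentially in place.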
Fourier’s pioneering method has led to many modifications of his insight,
modifications that are grouped under the heading of orthogonal transformations,
that is, transformations whose components do not interact with each other. In
addition, non-linear modifications have been devised, such as that developed by
Norbert Wiener, professor of mathematics at the Massachusetts Institute of
Technology, for analyzing the EEG.
Furthermore, the goal of analyzing patterns into a series of component parts
can now be accomplished by statistics in which each component in the series is
represented by an “order.” Fourth order (that is, four terms in the series) must be
used to reach the resolution necessary to represent a line or the texture of a
visual scene. Statistics, based on enumerating items or things, are usually
relatively easy to manipulate in calculations. Thus, statistical techniques are
taught to every student who aspires to use or to understand the fruits of scientific
enterprise. However, there are drawbacks to the teaching of statistics to the
exclusion of other analytical techniques: in the next chapter I discuss how
statistical analysis hides the forms behind the manipulations, the symmetries
better understood in spectral terms.
This deeper understanding is one reason why frequency aficionados
continue to tout the spectral domain. In addition, there is a most useful practical
application: transforming into the spectral domain (and back out again) makes
the computation of correlations, the basis of “imaging,” simpler, whether by
brain or computer. (The procedure uses “convolution,” a form of multiplication
of the spectral transformations of what needs to be correlated.)
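
The practical point is simple to verify (Python; the two arrays are random stand-ins for any patterns to be correlated). Correlating by way of the transform, multiplying one spectrum by the conjugate of the other and transforming back, gives the same answer as correlating directly:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=256)
b = rng.normal(size=256)

# Direct circular cross-correlation: for each shift, a sum of products.
direct = np.array([np.dot(a, np.roll(b, -s)) for s in range(a.size)])

# Spectral route: multiply the conjugate transform of one pattern by
# the transform of the other, then transform back.
spectral = np.fft.ifft(np.fft.fft(a).conj() * np.fft.fft(b)).real

print(np.allclose(direct, spectral))  # True
```

For patterns of length N the direct route costs on the order of N^2 operations, the spectral route on the order of N log N, which is why imaging pipelines profit from working in the transform domain.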
I remember how we cheered the achievement of a computer program that
would perform the Fourier transformation, the Fast Fourier Transform (FFT),
while I was at the Center for Advanced Studies in the Behavioral Sciences at
Stanford, California, in 1958–59. As a result, its use has become commonplace
in medical image processing as in PET (positron emission tomography) scans
and fMRI (functional Magnetic Resonance Imaging.) These successes raise the
question: Might biological sensory functions also be based on spectral image
processing? The next sections show that, not only can the Fourier theorem be
usefully applied, but that its application resolves some longstanding problems in
the understanding of the neurobiology of visual perception.
A Retinal Image?
James Gibson, renowned for his work on sensory perception at Cornell
University, was adamant in his rejection of the widely accepted notion that an
image of a scene is displayed on the retina by the optics of the eye, a view taken
seriously by most scientists. This view suggests that the eye is like a classical
camera that is constructed so that the lens sharply focuses the scene onto the film
to produce a two-dimensional spatial image. The retina is conceived to be like
the film in the camera. This analogy would mean that our three-dimensional
perceptual experience must be constructed from this two-dimensional “retinal
image.”
Gibson pointed out that this conception is faulty if only because there is
constant movement in the relation between our eyes and the scene before us. His
students also regarded as significant the hemispheric shape of our retina: their
sophisticated view is that retinal excitation is described by cones of the
Riemannian geometry of curved space, not by Euclidian points and lines. So
why hasn’t their view been more generally accepted?
For me, the problem arose because, while at the University of Chicago, I
had seen a retinal image, an image that appeared on the retina of the eye of an
ox. Still, I felt that the Gibsonians were on to something important in placing
their emphasis on (Pythagorean three-dimensional triangular) cones of light
subtending degrees of arc on the retina. My discussions with Gibson did not help
to solve the problem, because he insisted that all of the “information” (that is, the
multidimensional patterns) was already there in the scene, and all the brain’s
visual system needed to do was to “resonate” to that information. I agreed in
principle but argued that we needed to know “the particular go” of how such
resonances would become established. Gibson had proposed a relational,
ecological approach to studying the information contained in the scene before
us; I countered that we need an internal as well as an external ecology to explain
the visual processing of the scene. Whenever I tried to bring brain facts into the
discussion, Gibson laughed and turned off his hearing aid. I accused him of
being the radical behaviorist of perception. It was all great fun, but did not
resolve the issue.

The Aha Experience


Resolution of the question “Is there a retinal image?” came in a flash while
I was crossing a busy street in Edmonton. I had been asked to present the
MacEachron lectures at the University of Alberta, Canada, lectures which were
subsequently (1991) published as a book, Brain and Perception. The fourth
lecture concerned processing by the optical structures of the eye. I had explained
to my audience that the lens of the eye performs a Fourier transform: on one side
of our lens is a space-time image as we experience it; on the other side of the
lens is a holographic-like spectral distribution. If indeed there is a space-time
retinal image, as I had seen in the ox eye, the input to the eye must be spectral.
This explanation fit nicely with David Bohm’s physics: as I mentioned earlier,
Bohm had noted that if we did not have lenses, the universe would appear to us
as a hologram.
However, Gibson’s intuition regarding a retinal image, and my uncertainty
about it, remained unresolved. As my hosts and I were crossing a very busy
street after the lecture, discussing the issue, the answer suddenly became clear
and we stopped short in the middle of the street: The retina is not a film. The
retina is a multilayered structure that is sensitive to single quanta, photons of
visible radiation. Quantum properties are multidimensional. Gibson may have
been wrong in emphasizing the idea that there is no image at all, but he was
correct in his intuition regarding visual processing as multidimensional from the
start.
The optical image is, as Gibson pointed out, actually an optical flow. In
addition, the image is somewhat “blurred,” diffracted due to the gelatinous
properties of the lens of the eye. (Technically, this blurring is called an “Airy
disk.”) Thus the optical image that appears at the surface of the retina is
somewhat spectral in configuration. The retinal quantum process completes the
transformation into the spectral domain.
Separating the optical image (or flow, the moving image) from the retinal
quantum process is in tune with Gibson’s intuition, which can now be developed
in its entirety as follows:

a. Radiant energy (such as light and heat) becomes scattered, spectral and
holographic, as it is reflected from and refracted by objects in the universe.
b. The input to our eyes is (in)formed by this spectral, holographic radiant
energy.
c. The optics of our eyes (pupil and lens) then perform a Fourier
transformation to produce a space-time flow, a moving optical image much
as we experience it (see the formula sketched just after this list).
d. At the retina, the beginning of another transformation occurs: the space-
time optical image, having been somewhat diffracted by the gelatinous
composition of the lens, becomes re-transformed into a quantum-like
multidimensional process that is transmitted to the brain.
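
The sense in which a lens “performs a Fourier transform,” invoked in step (c), is a standard result of Fourier optics (given here in textbook form; the symbols are the conventional ones, not drawn from this book): for monochromatic light of wavelength λ, a thin converging lens of focal length f maps the light field U₀ in its front focal plane into its Fourier transform in its back focal plane,

```latex
U_f(u, v) \;=\; \frac{1}{i\,\lambda f}
  \iint U_0(x, y)\,
  \exp\!\left[-\,\frac{2\pi i}{\lambda f}\,\bigl(x u + y v\bigr)\right] dx\,dy
```

Each spatial frequency in the input thus arrives at its own position (u, v) in the focal plane, which is why a spectral, holographic distribution on one side of the lens corresponds to a space-time image on the other.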
Gabor’s Quantum of Information
The processes of the retinal transformation, and those that follow, are not
accurately described by the Fourier equation. I discussed this issue with Dennis
Gabor over a lovely dinner in Paris one evening, and he was skeptical of
thinking of the brain process solely in Fourier terms: “It is something like the
Fourier but not quite.” At the time, Gabor was excited about showing how the
Fourier transform could be approached in stages. (I would use his insights later
in describing the stages of visual processing from retina to cortex.) But Gabor
could not give a precise answer to why it was not quite how the brain process
worked.
Gabor actually did have the answer—but he must have forgotten it or felt
that it did not apply to brain processing. I found, after several years’ search, that
the answer “to not quite a Fourier” had been given by Gabor himself when I
discovered that J. C. R. Licklider, then at the Massachusetts Institute of
Technology, had referred in the 1951 Stevens’ Handbook of Experimental
Psychology to a paper by Gabor regarding sensory processing.
Missing from our Paris dinner conversation was Gabor’s development of
his “elementary function” which dated to an earlier interest. He had published
his paper in the IEE, a British journal. I searched for the paper in vain for several
years in the American IEEE until someone alerted me to the existence of the
British journal. The paper was a concise few pages which took me several
months to digest. But the effort was most worthwhile:
Before he had invented the Fourier holographic procedure, Gabor had
worked on the problem of telephone communication across the transatlantic
cable. He wanted to establish the maximum amount a message could be
compressed without losing its intelligibility. Whereas telegraphy depends on a
simple on-off signal such as the Morse code, in a telephone message that uses
language, intelligibility of the signal depends on transmission of the spectra that
compose speech. The communicated signals are initiated by vibrations produced
by speaking into the small microphones in the phone.
Gabor’s mathematical formulation consisted of what we now call a
“windowed” Fourier transform, that is, a “snapshot” (mathematically defined as
a Hilbert space) of the spectrum created by the Fourier transform which
otherwise would extend the spectral analysis to infinity. Neither the transatlantic
cable nor the brain’s communication pattern extends to infinity. The Gabor
function provides the means to place these communication patterns within
coordinates where the spectrum is represented on one axis and space-time on the
other. The Fourier transform lets us look at either space-time or spectrum; the
Gabor function provides us with the ability to analyze a communication within a
context of both space-time and spectrum.
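
In modern notation (again textbook form rather than Gabor’s original symbols), the elementary function is an oscillation viewed through a Gaussian window, and it is special because it attains the minimum of the joint uncertainty of the two coordinates:

```latex
% Gabor's elementary function: a sinusoid of frequency f_0 seen through
% a Gaussian window centered on t_0:
g(t) \;=\; \exp\!\left(-\frac{(t - t_0)^2}{2\sigma^2}\right)
           \exp\!\bigl(2\pi i f_0 t\bigr)

% Its spread in time and its spread in frequency trade off against one
% another; the Gaussian window achieves the minimum of
\Delta t\,\Delta f \;\ge\; \frac{1}{4\pi}
```

Each such elementary signal, Gabor’s “logon” or quantum of information, occupies one minimal cell in the joint space-time and spectrum coordinates just described.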

From Retina to Brain


Licklider, in his 1951 Handbook of Experimental Psychology article,
suggests that:

If we could find a convenient way of showing not merely the amplitudes of the envelopes but the actual oscillations of the array of
resonators, we would have a notation of even greater generality and
flexibility, one that would reduce under certain idealized assumptions
to the spectrum and under others to the wave form. The analogy to the
position-momentum and energy-time problems that led Heisenberg in
1927 to state his uncertainty principle . . . has led Gabor (1946) to
suggest that we may find the solution [to the problem of sensory
processing] in quantum mechanics.

30. Cross section of the eye showing biconvex lens

31. Biconvex lens processing wave fronts

32. Airy Disc

Note that Licklider is focusing on the oscillations of resonators (the hair cells of the inner ear) and that he clearly distinguishes spectrum from space-time
wave forms as is performed by the Fourier procedure. The analogy to quantum
processes is prescient.
Licklider stated that the problems regarding hearing that he had discussed
might find a resolution in Gabor’s treatment of communication. That resolution
is given by the Gabor elementary function, a “windowed,” constrained Fourier transformation. For me, the Gabor function provided the missing description for understanding the transformation that is taking place not only in the auditory but
also in the visual process between the retina and the brain’s cortex.
As has often been noted, revolutions in science require many more steps
than are apparent when we are only aware of the final result: We move forward,
only to drop back a few paces when we forget what we have discovered. This is
true for individuals as well as for the community. I have on several occasions
found that I have backed into old ways of thinking that I thought I had
abandoned. My rediscovery of Gabor’s elementary function, which he himself
had momentarily forgotten while we were in Paris, proved to be a great leap
forward in how to think about sensory processing in the brain.
The example Gabor used was taken from music: the pitch of a single tone is
dependent on the frequency of vibration; the duration of the tone is measured in
time. The complete mathematical description of the tone must therefore include
both its frequency (and in case of harmonics, the spectrum) and its duration.
Gabor’s transformation provides simultaneous space-time and frequency
coordinates, whereas, when using Fourier’s transformation, we must choose
between space-time or a spectrum of frequencies. The non-spectral coordinate in
the Gabor function includes both space and time because a message not only
takes time to be transmitted but also occupies space on the cable.
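
Gabor’s central result can be put in a single line. If Δt measures the effective duration of a signal and Δf the effective width of its spectrum (in the usual root-mean-square convention), then

\[ \Delta t\, \Delta f \;\geq\; \frac{1}{4\pi} \]

and the Gaussian-windowed elementary signals are precisely those that achieve the minimum: they occupy the smallest possible cell in the joint space-time/spectrum coordinates just described.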
In brain science, we also use spatial as well as temporal frequency
(spectral) coordinates to describe our findings. In addition, when we deal with a
spectrum of frequencies per se, we often substitute the term “spectral density” to
avoid a common confusion that Gabor pointed out in his paper: non-
mathematicians, even physicists, often think of frequencies as occurring only in
time, but when mathematicians deal with vibrations as in the Fourier transform,
frequencies are described as composing spectra and devoid of space and time.
In Gabor’s pioneering 1946 paper, he found the mathematics for his
“windowed” Fourier transform in Werner Heisenberg’s 1925 descriptions of
quantum processes in subatomic physics. In reference to Heisenberg’s use of
mathematics to describe subatomic quantum processes, Gabor called his
elementary unit a “quantum of information.” When, therefore, in the early
1970s, my laboratory as well as brain scientists the world over began to find that
the receptive fields of cortical cells in the visual and tactile systems could be
described as two-dimensional Gabor functions, we celebrated the insight that
windowed Fourier transforms describe not only the auditory but other sensory
processes as well.
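
For readers who want to see such a receptive field concretely, the following is a minimal sketch, in Python, of a two-dimensional Gabor function: a sinusoidal grating windowed by a Gaussian envelope. The parameter names (wavelength, theta, sigma) are illustrative choices of mine, not those of any particular laboratory’s fitting procedure.

import numpy as np

def gabor_2d(size=64, wavelength=8.0, theta=0.0, sigma=6.0, phase=0.0):
    # Coordinate grid centered on the middle of the patch.
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    # Rotate coordinates so the grating runs at orientation theta.
    u = x * np.cos(theta) + y * np.sin(theta)
    # Gaussian envelope: the "window" that constrains the oscillation.
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    # Sinusoidal carrier: the spectral, oscillatory part.
    carrier = np.cos(2.0 * np.pi * u / wavelength + phase)
    return envelope * carrier

Plotted as a gray-scale map, the result resembles the middle row of the Daugman figure: alternating bright and dark bands that fade away from the center.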

33. Ohm and Helmholtz’s Piano Model of Sensory Processing

The form of communication, a mental process, and the form of the construction of cortical receptive fields, a material physical process, could now
be described by the same formalism. At this level of inquiry, an identity is
established between the operations of mind and of brain. This identity has led
ordinary people, as well as scientists and philosophers, to talk as if their brain
“thinks,” “chooses,” “hurts,” or is “disorganized.” In the language of philosophy,
this is termed misplaced concreteness: It is we the people who do the thinking,
choosing, hurting, and who become discombobulated. So it is important to
specify the level, the scale of inquiry, at which there is an identity of form (at the
receptive field level in the brain) and where mind (communication) and body
(brain) are different. More on this in Chapters 22–24.
Implications
Gabor warned against thinking that the “quantum of information” means
that communication is actually going on at the quantum scale. I have also
cautioned against assuming that the receptive fields we map in the sensory
cortex are quantum-scale fields. But when communication is wireless (or uses
optical fibers) the medium of communication is operating at the quantum scale.
The same may be true for the brain. I noted earlier that the retina is sensitive to
individual quanta of radiation. Therefore, for the past decade or more my
colleague, Menas Kafatos and I have explored the variety of scales at which
brain processes that mediate the organization of receptive fields could actually
be quantum-scale and not just quantum-like.
Thus, Mari Jibu and Kunio Yasue, an anesthesiologist and a physicist from Notre Dame Seishin University in Okayama, Japan, and I have suggested that
the membranes of the fine fibers, the media in which the receptive fields are
mapped, may have quantum-scale properties. The membranes are made up of
phospholipids, oscillating molecules that are arranged in parallel, like rows of
gently waving poplar trees. The lipid, fatty water-repellent part makes up the middle, the trunks of the “trees,” and the phosphorus water-seeking parts make
up the inside and outside of the membrane. These characteristics suggest that
water molecules become trapped and aligned in the phosphorus part of the
membrane. Aligned water molecules can show super-liquidity and are thus
super-conductive; that is, they act as if they had quantum-scale properties.

34. Logons, Gabor Elementary Functions: Quanta of Information


35. Top row: illustrations of 2-D receptive field profiles of cat simple cells. Middle row: best-fitting 2-D Gabor wavelet for each neuron. Bottom row: residual error of the fit, which for 97% of the cells studied was indistinguishable from random error in the Chi-squared sense. From Daugman, J. G. (1990).

Another interesting consequence of Gabor’s use of the “quantum of information” is the very fact that the form of quantum theory becomes extended
to communication. My colleagues in quantum physics at George Mason
University in Northern Virginia claim that this means that the laws of quantum
physics actually cross all scales from the subatomic to cosmic. My reaction has
been: of course, since much of what we know comes to us via our senses and
thus through the “lenses” that process spectra such as those named “light” and
“heat.” But it also means that if quantum laws operate at every scale, then
despite their weirdness, these laws should have an explanation at an everyday
scale of inquiry. My suggestion is that weirdness originates in the coordinates
within which quantum laws are formulated: the creation of a computational
region that has spectrum for one axis and space-time for the other. Conceptually,
explanations seesaw from spectra (and more often their origins in waveforms) to
space-time and particles. We can sort out the weirdness by coming back to the
Fourier transform that clearly distinguishes space-time explanations from
holographic-spectral explanations.

Holographic Brainwaves?
At a recent conference I was accosted in a hallway by an attendee who
stated that he had great trouble figuring out what the brain waves were that
constituted the dendritic processing web. I was taken aback. I had not thought
about the problem for decades. I replied without hesitation, “Brain waves! There
aren’t any. All we have is oscillations between excitatory and inhibitory post-
(and pre-) synaptic potential changes.” I went on to explain about Gabor
“wavelets” as descriptive of dendritic fields and so forth.
It’s just like water. When the energy of a volley of nerve impulses reaches
the “shore” of the retina the pre-synaptic activation forms a “wave-front.” But
when activity at the brain cortex is mapped we obtain wavelets, Gabor’s “quanta
of information.” The relationship between the interference patterns of spectra
and the frequencies of patterns of waves is straightforward, although I have had
problems in explaining the relationship to students in the computational
sciences. Waves occur in space-time. Spectra do not. Spectra result from the
interference among waves—and, by Fourier’s insightful technique, the locus of
interference can be specified as a number that indicates the “height” of the wave
at that point. The number is represented by a Gabor function (or other wavelet).
Frequency in space-time has been converted to spectral density in the transform
domain.
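
For reference, the transform pair itself can be written (in one common convention) as

\[ G(f) \;=\; \int_{-\infty}^{\infty} g(t)\, e^{-2\pi i f t}\, dt, \qquad g(t) \;=\; \int_{-\infty}^{\infty} G(f)\, e^{\,2\pi i f t}\, df \]

The forward transform carries a space-time waveform g into its spectrum G; the inverse carries the spectrum back into space-time. A wavelet arises when the first integral is windowed so that the analysis no longer extends over all of space-time.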
Wavelets are not instantaneous numbers. As their name implies wavelets
have an onset slope and an offset slope. Think of a musical tone: you know a
little something about where it comes from and a little as to where it is leading.
Note that we are dealing with dendritic patches limited to one or a small
congregation of receptive fields—like the bobbing of a raft in the water beyond
the shore. There was a considerable effort made to show that there is no overall
Fourier transform that describes the brain’s relationship to a percept—but those
of us who initially proposed the holographic brain theory always described the
transformation as limited to a single or a small patch of receptive fields. By
definition, therefore, the Fourier transformation is constrained, windowed,
making the transformed brain event a wavelet.
We now need to apply these formulations to specific topics that have
plagued the study of perception—needlessly, I believe. The concept that the
optics of the eye create a two-dimensional “retinal image” is such a topic.

Gabor or Fourier?
The form of receptive fields is not static. As described in the next chapter,
receptive field properties are subject to top-down (cognitive) influences from
higher-order brain systems. Electrical stimulation of the inferior temporal cortex
changes the form of the receptive fields to enhance Gabor information
processing—as it occurs in communication systems. By contrast, electrical stimulation of the prefrontal cortex changes the form of the receptive fields to enhance Fourier image processing—as it is used in imaging techniques such as PET scans and fMRI, and in computing correlations with the FFT.
The changes are brought about by altering the width of the inhibitory surround
of the receptive fields. Thus both Gabor and Fourier processes are achieved,
depending on whether communication and computation or imaging and
correlations are being addressed.

And Back Again: Projection


We experience the world in which we navigate as being “out there,” not
within our sense organs or in our brain. This phenomenon is called “projection.”
I experienced an interesting instance of projection that anyone can repeat in his
own bathroom. I noted that the enlarging mirror (concave 7X) in my bathroom
reflected the ceiling light, not at the mirror’s surface as one would expect, but
instead, the image of the light was hovering in mid-air above the mirror. The
image seemed so real that I tried to grasp it, to get my hand behind it, only to
catch thin air.
Our experience of waveforms, as they create images reflected by mirrors or refracted by prisms or gratings, often seems equally “hard to grasp.” But the
power that generates such waveforms can be experienced in a very real way, as
in a tsunami sweeping ashore. My research has been motivated by the
excitement of “getting behind” the images we experience—to try to understand
how the experience is created. Cosmologists spend their days and nights trying
to “get behind” the images they observe with their telescopes and other sensing
devices. Brain scientists are, for the most part, not yet aware of the problem.
In order to understand the process by which we perceive images, we need to
regard vision as akin to other senses, such as hearing. Harvard physiologist and
Nobel laureate Georg von Békésy began his classical experiments, summarized
in his 1967 book Sensory Inhibition, at the Hungarian Institute for Research in
Telegraphy in 1927. His initial experiments were made on the basilar membrane
of the cochlea, the receptor surface of the ear. Békésy began by examining how
the arrival of pressure waves becomes translated into a neural code that reaches
our brain. “The results suggested similarities to Mach’s law of [visual]
brightness contrast. At the time my attempt to introduce psychological processes
into an area that was held to be purely mechanical met with great opposition on
the part of my associates in the Department of Physics.
. . . And yet I continued to be concerned with the differences between
perceptual observations and physical measurements. My respect for
psychological observation was further enhanced when I learned that in some
situations, as in the detection of weak stimuli, the sense organs often exhibit
greater sensitivity than can be demonstrated by any purely physical procedure.”
(Ernst Mach was a venerated Viennese scientist/philosopher active at the turn of
the 20th century.)
When I met Békésy at Harvard, he was frustrated by the small size of the
cochlea and was searching for as large a one as he could get hold of. He knew
that I was working (in Florida) with porpoises at the time, and he asked me to
bring him a porpoise cochlea if I came across one of the animals that had died. I
did, but Békésy wanted something still larger.
Békésy had noted at the time—just as we would later come to view the eye
as similar to the ear—that the skin was also like the ear in many respects. (The
mammalian ear is derived from the lateral line system of fish, a system that is
sensitive to vibrations transmitted through water.) Therefore, Békésy set out to
make an artificial ear by aligning five vibrators that he could tune to different
frequencies. He placed this artificial cochlea on the inside surface of the forearm,
and he adjusted the vibrations so that the phase relations among the vibrations
were similar to those in a real cochlea. What one felt, instead of the multiple
frequencies of the five vibrators, was a single point of stimulation. Békésy
showed that this effect was due to “sensory inhibition;” that is, the influence of
the vibrators on the skin produced “nearest neighbor” effects, mediated by the
network of nerves in the skin, that resulted in a single sensation. Changing the
phase relations among the vibrators changed the location on the forearm where
one felt the point.
Subsequently, in my laboratory at Stanford, a graduate student and I set out to examine whether the brain cortex responded to such individual vibratory
stimuli or to a single point as in the sensation produced in Békésy’s experiment.
We did this by recording from the brain cortex of a cat while the animal’s skin
was being stimulated by several vibrators. Békésy, who had moved to Hawaii
from Harvard (which belatedly offered him a full professorship after he received
his Nobel Prize) served on my student’s thesis committee; he was delighted with
the result of the experiment: The cortical recording had reflected the cat’s
sensory and cortical processing—presumably responsible for the cat’s
perceptions—not the physical stimulus that had been applied to the skin on the
cat’s lips.
On another occasion, still at Harvard, Békésy made two artificial cochleas
and placed them on his forearms. I was visiting one afternoon, and Békésy, full
of excitement, captured me, strapping the devices on my arms with great
enthusiasm. He turned on his vibrators and adjusted the phase relations among
the vibrations within each of the “cochleas” so that I experienced a point
sensation on each arm. Soon, the point sensations began to alternate from arm to
arm: when I felt a point in one arm, I did not feel it in the other. Interesting.
Békésy had concluded from this experiment that sensory inhibition must
occur not only in our skin but somewhere in the pathways from our skin to our
brain, a conclusion confirmed by the results of our cat experiments.
But this was not all. Békésy asked me to sit down and wait a while as the
point sensations kept alternating from one arm to the other. I began reading some
of his reprints. Then, suddenly, after about ten minutes, I had the weirdest
feeling: The sensation of a point had suddenly migrated from my arms and was
now at a location in between them, somewhat in front of me. I was fascinated.
I asked, how could this be? How can I feel, have a tactile sensation, outside
of my body? Békésy replied with a question: “When you do surgery, where do
you feel the tissue you are touching with your forceps and scalpel?” “Of course,”
I answered, “out there where the tissue is—but the forceps and scalpel provide a
solid material connection.” We all have this experience every time we write with
pen or pencil, feeling the surface of the paper through the mediation of the
implement.
But we also hear sounds without any obvious solid material connection
with the source that produces them: for instance, the loudspeakers in a stereo
system. I once heard the sound so realistically coming out of a fireplace flanked
by two stereo-speakers that I actually looked up the chimney to see if there was
another speaker hidden up there.
There are important advantages to our ability to project our perceptions
away from our bodies in this fashion. In another experiment in which he set out
to test the value of our stereo capability, Békésy plugged his ears and used only
one loudspeaker, strapped to his chest, to pick up sounds in his environment. He
tried to cross a street and found it almost impossible because he could not
experience an approaching car until it was almost literally upon him. The impact
of the vibrations on his chest from the approaching sound of a car hit him so
suddenly that he clutched his chest and was nearly bowled over.
We may experience the world we navigate as being “out there,” but the
apparatus that makes our experience possible is within our skin. My eyes and
brain enable me to see—but the location of what I see is projected out, away
from and beyond those bodily structures that make my seeing possible. Békésy’s
experiments demonstrated that the laws of projection create an anticipatory frame, a
protective shell that surrounds us and allows us to choose an action before an
oncoming event overwhelms us.
36. Localization of Perception Between Fingers

While reading Békésy’s notes on his experiments, I saw that he had scribbled some equations next to the text.
I asked him about them, and he answered that they were some new-fangled
math that he did not fully understand as yet. I felt that, even if he understood the
equations, I certainly would not. To my surprise, that understanding was to come
later: they were the equations describing Gabor’s application to sensory
processing of the mathematics of quantum physics.
My encounters with Békésy were among the most rewarding in my career:
they greatly enhanced my appreciation of the issues to be faced in deciphering
how we navigate our world.
The experimental results presented here showed that the perceptual process
is far from a simple input of “what’s out there” to the brain. Rather, by means of
the transformations produced by receptors, structures such as objects are formed
from a holographic-like background within which every-“thing” is distributed
“everywhere and everywhen.” The receptor then transforms these “objects”—in
the case of vision by virtue of the quantum nature of retinal processing—into a
constrained holographic-like process, a set of wavelets that Dennis Gabor called
“quanta of information.” Békésy’s experiments, as well as my own,
demonstrated that, in turn, these brain processes influence receptors as much as
does the input from the-world-out-there that we navigate. More on this in the
next chapter.

In Summary
In this chapter I summarize the hard evidence for why we should pay
attention to the transformations that our receptors and sensory channels make—
rather than thinking of these channels as simple throughputs. This evidence
provides a story far richer than the view of the formation of perception held by
so many scientists and philosophers.
Furthermore, when we separate the transformations produced by our lenses
and lens-like processes from those produced by our retinal processing, we can
demonstrate that the brain processing of visual perception is actually
multidimensional. For other senses I presented the evidence that “sensory
inhibition” serves a similar function. Thus there is no need for us to deal with the
commonly held view that brain processes need to supplement a two-dimensional
sensory input.
Finally, I place an emphasis on the data that show that perception has the
attribute of “projection” away from the immediate locus of the stimulation. We
often perceive objects and figures as “out there,” not at the surface of the
receptor stimulation.
A corollary of projection is introjection. I thus now venture to suggest that
introjection of perceptions makes up a good deal of what we call our “conscious experience.”
Chapter 6
Of Objects and Images

Wherein I distinguish between objects and images and the brain systems that
process the distinction.

Thus, by our movements we find it is the stationary form of the table in space, which is the cause of the changing image in our eyes. We
explain the table as having existence independent of our observation
because at any moment we like, simply by assuming the proper
position with respect to it, we can observe it.
—Hermann von Helmholtz, Optics, 1909/1924

As Lie wrote to Poincaré in a letter in 1882: ‘Riemann and v. Helmholtz proceeded a step further [than Euclid] and assumed that
space is a manifold or collection of numbers. This standpoint is very
interesting; however, it is not to be considered definitive.’ Thus, Lie
developed the means to analyze rigorously how points in space are
transformed into one another through infinitesimal transformations
—that is, continuous groups of transformations.
—Arthur I. Miller, Imagery in Scientific Thought, 1984
Movements
The end of the previous chapters left feature creatures with stick figures and
frequency freaks with a potential of wavelets. Stick figures are two-dimensional,
but the world we navigate is four-dimensional. Nor is that world composed of wavelets; it is filled with objects. Early on I had the intuition that somehow movement might be
key to unveiling the process by which we are able to perceive objects in a four-
dimensional world. My intuition was based on the observation that adjacent to—
and sometimes overlapping with—those areas of the brain that receive a sensory
input to the brain, there are areas that, when electrically stimulated, produce
movements such as turning the head and eyes, perking the ears, and moving the
head and limbs. Such movements are mediated by our brain’s circuitry, its
surface structure.
But how would our perception of objects be achieved through movement?
While I was teaching at Georgetown, two of my undergraduate students provided the route by which I came to a first step toward answering this question.
One student kept insisting that he wanted to know “the how, the particular go of
it.” Another student in a different class had a condition called “congenital
nystagmus.” Nystagmus is the normal oscillation of our eyeballs. In people such
as my student, the oscillation is of much greater amplitude and is therefore quite
noticeable. I had had several patients with this harmless “symptom,” and I told
my student that she had nothing to worry about. This turned out to be a bigger
worry to her: she no longer had an “ailment” that made her special. During class,
while I was lecturing on another topic, from time to time I also looked at this
student’s big brown oscillating eyes. I began to have the feeling that those eyes
were trying to tell me something. Shortly, while still lecturing, the first part of
the answer came: I recalled that the ability to perceive at all depends on
oscillatory movements.
Experiments have shown us that when the oscillation of the eyes is
nullified, vision disappears within 30 seconds. In the experiments, a small mirror
was pasted on the test subject’s sclera, the white part of the eyeball. This is not
as uncomfortable as it might seem. The sclera is totally insensitive to touch or
pain; it is when the cornea, the transparent window in front of the iris and pupil of our eye, is injured, that it hurts so much. A figure is then projected onto the mirror and
reflected from that mirror onto a surface displayed in front of the subject. Thus,
(when corrected for the double length of the light path to and from the eye) the
figure displayed on the surface in front of the subject mirrors the excursion of
the eyeball. Shortly, the subject experiences a fading of the figure, which then
disappears within about 30 seconds. Hence, as this experiment clearly
demonstrates: no oscillatory movement, no image.
These same results have been repeatedly obtained in experiments involving
other senses. Fine oscillations of our receptors or of the stimulating environment
around us are needed in order for us to see, to feel by touch, and to hear. These
oscillations—vibrations measured in terms of their frequencies—allow us to
experience images in the space-time that surrounds us.
The nystagmoid oscillations of the eyes act like a pendulum. The arc that a
pendulum describes moves back and forth across a point where the pendulum
comes to rest when it stops swinging. Current mathematical descriptions call
such a point an “attractor.” (Mathematically, a point attractor is a point upon which the trajectories of a system of oscillations converge over time.) Thus the oscillations of the eyes form a point attractor in the
neuro-nodal deep structure of fine fibers that can then serve as a pixel. Pixels are
the units from which the images on your television screen are formed.
Forming a point attractor, a pixel, from oscillations is a transformation.
The transformation converts the frequency of oscillations in the spectral domain
to points in space-time. In short, the transformation is described by the (inverse)
Fourier equation. It is the oscillatory movements, because of their pendulum-like
character, that transform a distributed holographic-like brain process to one that
enables imaging. Converting from space-time to frequencies and back to space-
time is the procedure that is used to produce images by PET scans and fMRIs in
hospitals.
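
A minimal computational sketch of that round trip, using the fast Fourier transform in NumPy (a stand-in for the machinery inside a scanner, not a model of brain tissue), shows that nothing is lost along the way:

import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))             # a stand-in space-time pattern

spectrum = np.fft.fft2(image)            # space-time -> spectral domain
recovered = np.fft.ifft2(spectrum).real  # spectral domain -> space-time

# The pattern survives the round trip to within rounding error.
assert np.allclose(image, recovered)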

What’s the Difference?


Our experience of the world is initially with objects, not images. The
distinction between objects and images is fundamental to our understanding of
how we perceive the world around us. We can look at objects from various
perspectives to perceive various profiles, images, of the objects, but the imaging
is secondary to the perception of objects. To perceive objects, either they or we
make large-scale movements. Helmholtz made this distinction well over a
century ago: he stated that objects are forms that remain unchanged despite
movement; images are those perceptions that change with movement.
To demonstrate the distinction between objects and images, I conduct a
simple demonstration. I have a volunteer, a student or a person attending a
conference, close her eyes and extend her hand palm up. I then tap her palm with
a key. I ask what she experiences. Ordinarily, the answer is “a poking on my
palm.” Next I put the key into her palm and ask her to close her hand, moving her fingers. Again I ask what she is experiencing. Invariably, the answer is “a key.”
Poking is imaged as happening “to me”; when the volunteer is moving her hand
and fingers, she experiences an object “out there.”
In the auditory mode, the distinction between image and object is the basis
of the difference between music and speech. We hear music; we speak speech.
Music is centered on the ear; language is centered on the tongue (lingua in
Latin). For instance, tones are images, profiles, which change as changes take
place in the context within which they occur. Such tones, including vowel
sounds, can be described as Gabor functions that embody what has come before
and anticipate what is yet to come. By contrast, our perception of consonants is
that they are like objects in that they remain stable over a wide range of
changing contexts. Simulations that have been designed with the intent to
produce gradual transitions between consonants such as “b” and “p” or “d” and
“t” show that we do not perceive such gradations in sound. Technically, this
phenomenon is known as “categorical perception.”
As higher-order constructions are composed, both music and language
contain images and objects. Still, the distinction between being touched by
music and actively forming speech is upheld because, at least in untrained
subjects, musical images, tunes, tend to be processed primarily with the right
hemisphere of the brain while ordinary language is more likely to be processed
by the left hemisphere.
Another example of the distinction between our perception of images and
of objects is provided by the sun’s radiant energy that warms and feeds us. When
that radiant energy affects us, we “name” it light or heat, but it is not such when
it occurs in outer space. It takes a considerable amount of active instrumentation
by telescopes and interferometers to demonstrate that radiation exists as such.
Physicists often use terms like “heat” or “light” to describe radiations of
different wavelengths—a shortcut that can lead to confusion, not only for
nonscientists, but, as well, even for the physicists who have used these labels. By
contrast, in psychophysics we use the term “luminance” to describe radiation
that is physically measured and therefore objective. We use the term
“brightness” to describe and measure what we perceive. Radiant energy must fall
on a receptor to be perceived as light or heat just as the oscillations of water that
mediate its energy must strike the shore to become the breakers with the strength
to topple us. In both cases—water and radiation—the patterns that make up the
perceived objects—breakers and light/heat—are considerably different from the
oscillating energies offshore or out in space.
An Act of Faith
We are able to perceive objects as the result of relative movement of
ourselves within the world we navigate. Thus, objects can be regarded as
constructions by our brain systems that are computing the results of those
movements.
When I use the term “construction,” I do not mean that what we navigate is
not a real world. When I hit my shin on the bed rail, I don’t doubt, no matter how
sleepy I may be, that the navigable world of bed rails is real. This lack of doubt
forms a belief that is based on our early experience.
I first understood how a belief is involved in our perception of an objective
world while at a conference held at Esalen, the famous West Coast spa and
retreat. The situation was somewhat bizarre, as the theme of the conference was
brain anatomy. During the conference, we were introduced to some of the New
Age exercises. One of these involved all of us stodgy scientists being
blindfolded and going around the campus feeling bushes, flowers, and one
another’s faces. It was a totally different experience from our usual visual
explorations and resulted in my becoming aware of the distinctiveness of
perceptions derived from different sensory experiences.
It immediately occurred to me that as babies, we experience our mothers
and others around us in different ways with each of our various sensory systems.
Each of these experiences is totally different. As we quickly learned at Esalen,
our visual experience of a person is very different from our auditory experience
of that person, and different still from our experience of touch or of taste!
Somewhere along the line, as we process these various inputs, we make a leap of
faith: all these experiences are still experiences of the same “someone.” In the
scientific language of psychology, this is called the process of “consensual
validation,” validation among the senses.
As we grow older, we extend this faith in the unity of our experiences
beyond what our various sensory explorations tell us, and we include what
others tell us. We believe that our experiences, when supported by those of
others, point to a “reality.” But this is a faith that is based on disparate
experiences.
The Irish philosopher Bishop George Berkeley became famous for
raising questions regarding this faith. He suggested that perhaps God is fooling
us by making all of our experience seem to cohere, that we are incapable of
“knowing” reality. Occasionally other philosophers have shared Berkeley’s view,
which is referred to in philosophy as “solipsism.” I suggest that, distinct from the
rest of us, such philosophers simply didn’t make that consensual act of faith
when they were babies.
An extreme form of consensuality is a psychological phenomenon called
“synesthesia.” Most of us share such experiences in milder form: red is a hot
color, blue is cool. Some thrilling auditory and/or visual experiences actually
make us shiver. Those who experience synesthesia have many more such
sensations—salty is cool; peppery is hot—to a much greater degree.
The usual interpretation is that connections between separate systems are
occurring in their brains, “binding” the two sensations into one—connections
that the rest of us don’t make. Instead, it is as likely that synesthesia is due to a
higher-order abstraction, similar to the process previously described where I
asked those students to raise their hands if they wear glasses and also blue jeans.
Eyeglasses and blue jeans are conjoined in their response. In my laboratory at
Stanford, we found that many receptive fields in the primary visual cortex
respond not only to visual stimuli but also to limited bandwidths of auditory
stimulation. Still other conjunctions within a monkey’s receptive field encoded
whether the monkey had received a reward or had made an error. Other
investigators have found similar cross-modal conjunctions, especially in motor
cortexes of the brain. A top-down process that emphasizes such cross-modal
properties could account for synesthesia such as the experience of bright sounds
without recourse to connecting, “binding” together such properties as hearing
and seeing from two separate brain locations.
Our faith arising from consensual agreement works well with regard to our
physical environment. Once, when I was in Santiago, Chile, my colleague, the
University of California philosopher John Searle, rebutted the challenge of a
solipsist with the statement that he was happy that the pilot of the plane that
brought him through the Andes was not of that persuasion. The pilot might have
tried to take a shortcut and fly right into a mountain since it “really” didn’t exist.
When it comes to our social environment, consensuality breaks down more
often than not. Ask any therapist treating both wife and husband: one would
never believe these two inhabited the same world. The film Rashomon portrays
non-consensuality beautifully. In later chapters, I will examine some possibilities
as to how this difference between social and physical perceptions comes about.
I am left with an act of faith that states that there is a real world and that our
brain is fitted to deal with it to some extent. We would not be able to navigate
that real world if this were not so. But at the same time, well fitted as our brains
are, our experience is still a construction of that reality and can therefore become
misfitted by the operation of that brain. It is the job of brain/behavioral science to
lay bare when and how our experience becomes fitting.
In the section on communication we will look at what has been
accomplished so far. From my standpoint of a half-century of research, both
brain and behavioral scientists have indeed accomplished a great deal. This
accomplishment has depended on understanding how the brain makes it possible
for us to more or less skillfully navigate the world we live in. This in turn
requires us to understand brain functions in relation to other branches of science.
Thus, a broad front of investigators has developed under the heading of “cognitive neuroscience.” This heading does not do justice to the breadth of
interest, expertise and research that is being pursued. For perception, the topic of
the current section of this book, the link had to be made primarily with physics.
In this arena, theory based on mathematics has been most useful.

An Encounter
Some years ago, during an extended period of discussion and lecturing at
the University of Alberta, I was rather suddenly catapulted into a debate with the
eminent Oxford philosopher Gilbert Ryle. I had just returned to Alberta from a
week in my laboratory at Stanford to be greeted by my Canadian hosts with:
“You are just in time; Gilbert Ryle is scheduled to give a talk tomorrow
afternoon. Would you join him in discussing ‘The Prospects for Psychology as
an Experimental Science’?” I was terrified by this “honor,” but I couldn’t pass
up the opportunity. I had read Ryle’s publications and knew his memorable
phrase describing “mind” as “the ghost in the machine.” I thought, therefore, that
during our talk we would be safely enveloped in a behaviorist cocoon within
which Ryle and I could discuss how to address issues regarding verbal reports of
introspection. But I was in for a surprise.
Ryle claimed that there could not be any such thing as an experimental
science of psychology—that, in psychology, we had to restrict ourselves to
narrative descriptions of our observations and feelings. He used the example of
crossing a busy thoroughfare filled with automobiles, while holding a bicycle.
He detailed every step: how stepping off a curb had felt in his legs, how moving
the bicycle across cobblestones had felt in his hands, how awareness of on-
coming traffic had constrained his course, and so forth. His point was that a
naturalist approach, using introspection, was essential if we were to learn about
our psychological processes. I agreed, but added that we needed to supplement
the naturalist procedure with experimental analyses in order to understand “how”
our introspections were formed. Though I countered with solid results of many
experiments I had done on visual perception, I was no match for an Oxford don.
Oxonians are famed for their skill in debating. Ryle felt that my experimental
results were too far removed from experience. I had to agree that in
experimentation we lose some of the richness of tales of immediate events.
About ten minutes before our time was up, in desperation, I decided to do
just what Ryle was doing: I described my experimental procedure rather than its
results. I compared the laboratory and its apparatus to the roadway’s curbs and
pavements; I enumerated the experimental hazards that we had to overcome and
compared them with the cars on the road; and I compared the electrodes we were
using to Ryle’s bicycle. Ryle graciously agreed: he stated that he had never
thought of experiments in this fashion. I noted that neither had I, and thanked
him for having illuminated the issue, adding that I had learned a great deal about
the antecedents—the context within which experiments were performed. The
audience rose and cheered.
This encounter with Gilbert Ryle would prove to be an important step in my
becoming aware of the phenomenological roots of scientific observation and
experiment. Ryle’s “ghost” took on a new meaning for me; it is not to be
excised, as in the extremes of the behaviorist tradition; it is to be dealt with. I
would henceforth claim that: We go forth to encounter the ghost in the machine
and find that “the ghost is us.”

Object Constancy
As noted at the beginning of this chapter, immediately adjacent to—or in
some systems even overlapping— the primary sensory receiving areas of the
brain, are brain areas that produce movement of the sensing elements when
electrically excited. Adjacent to and to some extent overlapping this movement-
producing cortex are areas that are involved in making possible consistent
perceptions despite movement. My colleagues and I, as well as other
investigators, have shown these areas to be involved in perceiving the size and
color as well as the form of objects as consistent (called “size, color and object
constancy” in experimental psychology).
In our experiments we trained monkeys to respond to the smaller of two
objects irrespective of how far they were placed along an alley. The sides of the
alley were painted with horizontal stripes to accentuate its perceived distance.
The monkeys had to pull in the smaller of the objects in order to be able to open
a box containing a peanut. After the monkeys achieved better than 90% correct performance, I removed the
cortex surrounding the primary visual receiving area (which included the
visuomotor area). The performance of the monkeys who had the cortical
removals fell to chance and never improved over the next 1000 trials. They
always “saw” the distant object as smaller. They could not correct for size
constancy on the basis of the distance cues present in the alley.
The Form(ing) of Objects
Our large-scale movements allow us to experience objects. Groundbreaking
research has been done by Gunnar Johansson in Uppsala, Sweden, and James
Gibson at Cornell University in Ithaca, New York. Their research has shown
how objects can be formulated by moving points, such as the pixels that we
earlier noted were produced by nystagmoid oscillations of the eyes. When such
points move randomly with respect to each other, we experience just that—
points moving at random. However, if we now group some of those points and
consistently move the group of points as a unit within the background of moving
points—voilà! We experience an object. A film made by Johansson and his
students in the 1970s demonstrates this beautifully. The resulting experience is
multidimensional and sufficiently discernible that, with a group of only a dozen
points describing each, the viewer can tell boy from girl in a complex folk dance.
Though some of the film is made from photographs of real objects,
computer programs generated most of it. I asked Johansson whether he used the
Fourier transform to create these pictures. He did not know but said we could
ask his programmer. As we wandered down the hall, he introduced me to one
Jansen and two other Johanssons before we found the Johansson who had created
the films. I asked my question. The programmer was pleased: “Of course, that’s
how we created the film!”
But the Johansson film shows something else. When a group of points, as
on the rim of a wheel, is shown to move together around a point, the axle, our
perception of the moving points on the rim changes from a wave form to an
object: the “wheel.” In my laboratory we found, rather surprisingly, that we
could replace the “axle” by a “frame” to achieve the same effect: a “wheel.” In
experimental psychology this frame effect has been studied extensively and
found to be reciprocally induced by the pattern that is being framed.
This experimental result shows that movement per se does not give form to
objects. Rather, the movements must cohere, must become related to one another
in a very specific manner. Rodolfo R. Llinás provided me with the mathematical
expressions of this coherence in personal discussions and in his 2001 book, I of
the Vortex: From Neurons to Self. My application of his expressions to our
findings is that the Gabor functions that describe sensory input cohere by way of
covariation; that the initial coherence produced by movements among groups of
points is produced by way of contravariation (an elliptical form of covariation);
and that the final step in producing object constancy is the “symmetry group”
invariance produced by a merging of contravariation with covariation.
Symmetry Groups
Groups can be described mathematically in terms of symmetry. A
Norwegian mathematician, Sophus Lie, invented the group theory that pertains
to perception. The story of how this invention fared is another instance of slow
and halting progress in gaining the acceptance of scientific insights. As
described in the quotation at the beginning of this chapter, early in the 1880s
Hermann von Helmholtz, famous for his scientific work in the physiology of
vision and hearing, wrote to Henri Poincaré, one of the outstanding
mathematicians of the day. The question Helmholtz posed was “How can I
represent objects mathematically?” Poincaré replied, “Objects are relations, and
these relations can be mapped as groups.” Helmholtz followed Poincaré’s advice
and published a paper in which he used group theory to describe his
understanding of object perception. Shortly after the publication of this paper,
Lie wrote a letter to Poincaré stating that the kind of discrete group that
Helmholtz had used would not work in describing the perception of objects—
that he, Lie, had invented continuous group theory for just this purpose. Since
then, Lie groups have become a staple in the fields of engineering, physics and
chemistry and are used in everything from describing energy levels to explaining
spectra. But only recently have a few of us returned to applying Lie’s group
theory to the purpose for which he created it: to study perception.
My grasp of the power of Lie’s mathematics for object perception was
initiated at a UNESCO conference in Paris during the early 1970s. William
Hoffman delivered a talk on Lie groups, and I gave another on microelectrode
recordings in the visual system. We attended each other’s sessions, and we found
considerable overlap in our two presentations. Hoffman and I got together for
lunch and began a long and rewarding interaction.
Symmetry groups have important properties that are shown by objects.
These groups are patterns that remain constant over two axes of transformation.
The first axis assures that the group, the object, does not change as it moves
across space-time. The second axis centers on a fixed point or stable frame, and
assures that changes in size—radial expansion and contraction—do not distort
the pattern.
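
One way to write these two axes formally (a sketch in standard notation, not Lie’s own) is, for a pattern of points x in the plane:

\[ \mathbf{x} \;\mapsto\; \mathbf{x} + \mathbf{t} \qquad \text{and} \qquad \mathbf{x} \;\mapsto\; s\,(\mathbf{x} - \mathbf{x}_0) + \mathbf{x}_0, \quad s > 0 \]

where the vector t translates the pattern across space-time and the scalar s expands or contracts it radially about the fixed point x₀. An object, on this account, is a pattern whose perceived relations among points remain invariant under the continuous group generated by such transformations.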

What We Believe We Know


To summarize: We navigate a world populated by objects. Objects maintain
their form despite movements. What vary are images—that is, profiles of objects
that change with movement. But even our perceptions of images depend on
motion.
Minute oscillations of the relation between receptor and stimulus are
necessary for any perceptual experience to occur. These oscillations describe
paths around stable points—called “point attractors.” Point attractors are the
result of transformations from the oscillations that describe the sensory input in
the spectral domain, to space-time “pixels” from which images are composed.
When points become grouped by moving together with respect to other
points, the group is perceived as a form—that is, an object. The properties of
objects, the invariance of their perceived form over changes in their location and
apparent size are described by a particular kind of group called a “symmetry
group.”
A more encompassing picture of how we navigate our world can be
obtained by considering a corollary of what I have described so far. Recall that,
in introducing his suggestion to Helmholtz that he use groups, Poincaré noted
that “objects are relations.” Thus, Poincaré’s groups describe relations among
points, among pixels within a frame, a context that occurs in space and time.
Such contexts are bounded by horizons, the topic of the next chapter.
Chapter 7
The Spaces We Navigate and Their Horizons

Wherein I distinguish a variety of spaces we navigate and discern the boundaries at several of their horizons.

When it was time for the little overloaded boat to start back to the
mainland, a wind came up and nudged it onto a large rock that was
just below the surface of the water. We began, quite gently, to sink. . . .
All together we went slowly deeper into the water. The boatman
managed to put us ashore on a huge pile of rocks. . . .

The next day when I passed the island on a bus to Diyarbakir there,
tied to the rock as we had left it, was a very small, very waterlogged
toy boat in the distance, all alone.
—Mary Lee Settle, Turkish Reflections: A Biography of a Place, 1991
Navigating the Spaces We Perceive
Have you considered just how you navigate the spaces of your world?
Fortunately, in our everyday living, we don’t have to; our brain is fine-tuned to
do the job for us. Only when we encounter something exceptional do we need to
pay attention. When we are climbing stairs, we see the stairs with our eyes. The
space seen is called “occulocentric” because it centers on our eyeballs, but it is
our body that is actually climbing the stairs. This body-centered space is called
“egocentric.” The stairs inhabit their own space: some steps may be higher than
others; some are wider, some may be curved. Stairs are objects, and each has its
own “object-centered” space. Space, as we navigate it, is perceived as three-
dimensional. Since we are moving within it, we must include time as a fourth
dimension. We run up the stairs, navigating these occulocentric, egocentric, and
object-centered spaces without hesitation. The brain must be simultaneously
computing within 12 separate coordinates, four dimensions for each time-space. Cosmologists are puzzling over the multidimensionality of “string” and “brane”
theory: why aren’t scientists heeding the 12 navigational coordinates of brain
theory?
Now imagine that the stairs are moving. You are in a hurry, so you climb
these escalator stairs. You turn the corner and start climbing the next flight of
stairs. Something is strange. The stairs seem to be moving but they are not, they
are stationary. You stumble, stop, grab the railing, then you proceed cautiously.
Your occulocentric (visual) cues still indicate that you are moving, but your
egocentric (body) cues definitely shout “no.” The brain’s computations have
been disturbed: occulocentric and egocentric cues are no longer meshed into a
smoothly operating navigational space. It is this kind of separation—technically
called “dissociation”— that alerts us to the possibility that the three types of
space may be constructed by different systems of the brain.

Eye-Centered and Body-Centered Systems


While in practice in Jacksonville, Florida, I once saw a patient who showed
an especially interesting dissociation after he’d been in an automobile accident.
Every so often, he would experience a brief dizzy spell, and when it was over, he
experienced the world as being upside down. This experience could last
anywhere from a few minutes to several hours, when it was ended by another
dizzy spell, after which his world turned right side up once more. This patient
was an insurance case, and I had been asked by the insurance company to
establish whether he was faking these episodes. Two factors convinced me that
he was actually experiencing what he described: his episodes were becoming
less frequent and of shorter duration; and his main complaint was that when the
world was upside down, women’s skirts stayed up around their legs, despite
gravity!
In the short time of this office visit I had failed to ask the patient what
proved to be a critical question: Where were your feet when the world was
upside down? Decades later I had the opportunity to find the answer to this
question.
The story begins more than a hundred years ago when George Stratton, an
early experimental psychologist, performed an experiment at Stanford
University. He had outfitted himself with spectacles that turned the world upside
down, just as the world had appeared to my patient. However, after continuously
wearing these glasses for a week during his everyday activities, Stratton found
that his world had turned right side up again.
Poincaré had stated that objects are relations; therefore, the perceived
occulocentric space, which is made up of objects, is relational, something like a
field whose polarity can change. Thus, “up” versus “down” is, in fact, a
relationship that adjusts to the navigational needs of the individual. With respect
to my question as to where one’s feet are located in such an upside-down/down-
side-up world, why not repeat Stratton’s experiment and ask whether
occulocentric and egocentric spaces become dissociated? My opportunity arose
when two of my undergraduate students at Radford University in Virginia
volunteered to repeat the experiment. One would wear the prism-glasses; the
other would guide him around until his world was again negotiable, right side
up.
Once the perceptions of the student wearing the glasses had stabilized, I had
my chance to ask the long-held question: Where are your feet? “Down there,”
my student said and pointed to his shoes on the ground. I placed my hand next to
his and pointed down, as he had, and he saw my hand as pointing down.
Then I stepped away from him and placed my hand in the same fashion,
pointing to the ground as I saw it. He, by contrast, saw my hand as pointing up!
As I brought my hand closer to his body, there was a distance at which he
became confused about which way my hand was pointing, until, when still closer
to his body, my hand was perceived as pointing in the same direction as he had
originally perceived it when my hand was next to his.
The region of confusion was approximately at the distance of his reach and
slightly beyond.
Occulocentric and egocentric spaces are therefore dissociable, and each of
these is also dissociable from object-centered spaces. After brain trauma,
patients have experienced “micropsia,” a condition in which all objects appear to
be miniscule, or “macropsia” where objects appear to be oversized. In these
conditions, occulocentric and egocentric spaces remain normal.
Conceived as fields, it is not surprising that the objects in these spaces are
subject to a variety of contextual influences. Some of these influences have been
thoroughly investigated, such as the importance of frames in object perception;
others, such as those in the Stratton experiment, have been thoroughly documented, but no explanation has been given as to how such a process can occur, nor has an explanation been given for micropsia and macropsia.
I will venture a step toward explanation in noting that the Fourier theorem
and symmetry group theory offer, at the least, a formal scaffolding for an
explanation. The brain process that transforms the sensory input from the space-
time domain optical image to the spectral domain at the primary visual cortex,
and back to the space-time domain by way of movement, actually should end up
with up-down mirror images of the occulocentric space. This is mathematically
expressed in terms of “real” and “imaginary” numbers—that is, as a real and a
virtual image. Ordinarily we suppress one of these images—probably by
movement, but exactly how has not been studied as yet.
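
The holographic analogy makes the mirror pair itself less mysterious (an analogy, I stress, not a demonstrated cortical mechanism). When a hologram is recorded by interfering an object wavefront O with a reference wavefront R, the recorded intensity is

\[ |R + O|^2 \;=\; |R|^2 + |O|^2 + R^{*}O + R\,O^{*} \]

and on reconstruction the last two terms yield twin images, one real and one virtual, proportional to O and to its complex conjugate O*. Optical practice must suppress or separate one twin; the suggestion here is that perception faces, and solves, a comparable problem.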
We noted in the previous chapter that symmetry groups have the
characteristic that the group, the object, remains invariant across expansion and
contraction. This means that we ordinarily adjust for what, in a camera, is the
zoom of the lens. In my laboratory, my colleagues and I were able to change the
equivalent of the zoom of receptive fields in the visual cortex by electrical
stimulation of other parts of the cortex and of the basal ganglia of the brain.
Patients experienced such a change in zoom during surgery at the University of
California at San Francisco when their brains were electrically stimulated in the
frontal and in the parieto-temporal regions of their cortex. Perhaps the
adjustment of the zoom, which in our experiments was effected through lateral
inhibition, becomes disturbed in patients who experience micropsia or
macropsia.
Occulocentric space has been studied more thoroughly than the “spaces” of
other senses, primarily because painters have been interested in portraying it.
Painters needed to portray a three-dimensional perspective on a two-dimensional
surface and to take into account the variety of constancies we perceive for
objects, depending on the distance they are from us. This brings us to the topic
of horizons, a topic that shows how our perceptions depend not only on the brain
systems that are involved but also on how these systems become tuned by the
culture in which we have been raised.
Nature and Nurture
At the dedication ceremonies of Brandeis University, Abraham Maslow,
Wolfgang Köhler and I were discussing perception. Köhler insisted that the way
we perceive is inborn. “If learning is involved, it isn’t perception,” he declared.
With regard to perception, Köhler was what is referred to in psychology as a
“nativist.” Today, it is hard to remember how entrenched this view was; so much
evidence has since been obtained that indicates just how what we learn
influences how we perceive. Much of this shift in viewpoint is due to the
research and writings of Donald Hebb, which drew attention to how much our
current perceptions depend upon what we have previously learned.
Hebb’s contribution came exactly at the right moment. During the mid-
1950s, sitting at my dining room table, Jerome Bruner wrote a landmark paper
entitled “The New Look in Perception.” Among other experiments, in this paper,
Bruner recounted the demonstration that how large we perceive a particular coin
to be—a quarter, for instance—depends on how wealthy or poor we are. Such an
everyday experience attests to how much the context of our perception—in this
case, what we have learned about the value of a coin—influences how we
perceive it.
These insights were anticipated in anthropology and sociology, where
studies of peoples of different cultures showed marked differences in their
perceptions. For instance, in Somalia there is no appreciation for the color red
but a great differentiation of the spectrum that we ordinarily classify as green.
The brain processes reflecting such differences are now accessible to study with
fMRI and other brain-imaging techniques.
The argument can still be made that sensitivity to context is inherited. This
is largely correct. In fact, the whole discourse on what is inherited and what is
learned takes on a new significance when inheritance is considered as a potential
rather than a full-blown established capacity received at birth. The Nobel
laureate and behavioral zoologist Konrad Lorenz made this point with respect to
learning: he pointed out that different species have different potentials for
learning this or that skill.
Sandra Scarr, professor of psychology at the University of Virginia, has
gone a step further in suggesting a test for determining “how much” of a skill is
inherited: she has shown that, to the extent to which a perception or skill comes
easily—that is naturally—to us, it is, to that extent, inherited. Thus, our ability to
speak is highly inborn, but our ability to read is markedly less so. Though we all
talk, whether we learn to speak English or Chinese depends on the culture in
which we grow up. How fluent we may eventually become in Chinese or in
English also depends on learning, which in turn depends on the stage of
maturation of our brain when the learning is undertaken.

Mirrors
How we perceive becomes especially interesting, almost weird, when we
observe ourselves and the world we navigate in mirrors. Like author and
mathematician Lewis Carroll in the experiments he often demonstrated to
children, I have all my life been puzzled by mirrors. How is it that within the
world of mirrors right and left are reversed for us but up and down are not? How is
it that when I observe myself in a mirror, my mirror image is “in the mirror,” but
the scene behind me is mirrored “behind the mirror?”
Johannes Kepler proffered an answer to these questions in 1604. Kepler
noted that a point on the surface of a scene reflects light that becomes spread
over the surface of the mirror in such a way that, when viewed by the eye, the
rays form a cone whose apex lies behind the mirror, symmetrical to the original
point. His diagram illustrates
this early insight.
I realized the power of the virtual image while reading about Kepler’s
contribution, lying on a bed in a hotel room. The wall at the side of the bed was
covered by a mirror—and “behind” it was another whole bedroom! An even
more potent and totally confusing experience occurred while I was visiting a
famous “hall of mirrors” in Lucerne, Switzerland. Speak of virtual horizons! I
saw my colleagues and myself repeated so many times at so many different
distances and in so many different directions that I would certainly have become
lost in the labyrinth of images had I not held onto an arm of one of my
companions.
On this occasion, I was given a reasonable “explanation” of the how of
mirror images: an account presented in Scientific American some time ago,
which was summarized for me thus: ordinarily we
have learned to look forward from back to front—so when we are looking in a
mirror we see what is in front of us, behind the mirror, as if we are coming at the
virtual image from behind it.
37. Kepler’s diagram of the reflection of a point (S) on a plane mirror (M—M’) as seen (I) as a virtual
image by a person’s eyes

Support for this explanation comes from an experience many of us have had
when we have tried to teach someone else how to tie a necktie (especially a bow
tie). If you face the person and try to tie the tie, the task is awkward if not
impossible. But if you stand behind the person, look forward into a mirror, and
tie his tie, the job is easy.
The above observations were made on a plane mirror. A concave mirror
presents another wonderful phenomenon. I have a 7X concave mirror in my
bathroom. Held close to my face, it shows my reflection “in the
mirror.” But one day I noted that the bathroom’s ceiling light was floating in
midair, suspended over the bathroom sink. Investigating the source of this
image, I found that when I covered the mirror with my hand, the image disappeared.
The examples presented to us by mirrors show that what we see depends on
how we see. The next sections describe some of what we now know about the
“how” of seeing.

Objects in the World We Navigate


How we perceive objects is dependent on brain systems that process
invariance, that is, the property of a perception to be maintained despite relative
movement between the perceived object and the perceiver. An example is size
constancy. In my Stanford laboratory, during the 1970s, my colleagues and I
performed the following experiment with monkeys: They were taught to pull a
string that was attached to a box containing a peanut. There were two such
boxes, one large and one small. The boxes rolled along a rail within an alley
whose sides were painted with broad horizontal stripes that enhanced the feeling
of distance. Each box could be decoupled from its string so that its placement
could readily be exchanged. The monkeys were trained to pull in the box that
appeared to them to be the larger. They pulled in the larger box when it was
close by and also when it was way up the alley. “Larger” was larger to them at
both distances.

38. Image (I) produced from scene (S) reflected by a concave mirror

We then removed a part of each monkey’s cortex just in front of the primary
visual areas that receive the retinal input and those that regulate the oscillatory
movements of the eyes. The monkeys behaved normally on all visual tests
except one: now they would pull in the smaller box when it was close while the
larger box was far down the alley. At that distance, the larger box made a smaller
optical image on the monkeys’ retina. The intact monkeys were able to
compensate for distance; the monkeys that had been operated upon could not.
When I tell my students about this experiment, I point out that their heads
look the same size to me whether they sit in the front or the back row of the
room. The students in front did not seem to suffer from hydrocephalus (water-
on-the-brain); the ones in back most likely were not microcephalic.
Constancy in our perception holds within a certain range, a range that varies
as a function of the distance being observed and the experience of the observer. The
range at which a particular constancy ceases forms a horizon, a background
against which the perception becomes recalibrated. A classroom provides such a
“horizon.” But out on a highway, automobiles all look pretty much the same size
over a much longer range of distances. What
determines the horizon in this case is the layout of the road and its surroundings.
On a straight road in rolling hills, where distances are truncated by the lay of the
land, cars may look small compared to what they might look like at the same
distance on a U curve, where the entire road is visible. When there is an
unfamiliar gap in visibility, multiple horizons are introduced. The placement of
horizons where constancy breaks down is a function of the context within which
an observation is occurring.
Professor Patrick Heelan, who helps teach my Georgetown class at times,
has pointed out that when painters try to depict three dimensions onto a two-
dimensional surface, size constancy is limited to a certain range. Beyond that
range, other laws of perspective seem to take hold. Heelan is a theoretical
physicist and philosopher of science whose specialty is identifying how painters
construct perspectives that allow us to “see” three dimensions when a scene is
displayed on a two-dimensional surface.
On the basis of his 30-year study of paintings and of how we perceive the
cosmos, Heelan has concluded that overall visual space appears to us as
hyperbolic. For instance, as I mentioned earlier, when we observe objects by
looking into a concave mirror—the kind that has a magnification of 7X—objects
close to the mirror are magnified in it, but things far away seem to be projected
in front of the mirror between ourselves and it—and those objects appear to us
smaller than they are when we look at them directly.
Leonardo da Vinci observed that the optics of the eye and the surface of the
retina do not compose a flat plane but a hemisphere; thus the physiological
process that forms our perceptions cannot be a two-dimensional Euclidean one.
Rather, this physiological process forms the perceived configuration
of a concave, hyperbolic space. There is a change in the amount of concavity depending
upon the distance that is being observed: the closer we focus, the more the lens
of the eye bulges, and thus there is greater concavity. As our focus becomes
more distant, the lens flattens somewhat, which results in less concavity. These
changes are like changing the magnification of the concave mirror from 7X to
5X and 3X.
Heelan has proposed that a description of cosmic space as hyperbolic
accounts for a good part of the “moon illusion,” which occurs with the sun
as well. When the moon, or sun, is at the earth’s horizon, it appears
enormous. Atmospheric haze can add a bit more to this illusion. But as the moon
rises, away from our experienced earth-space defined by its horizon, it appears to
be much smaller. This change in perception is identical to what we experience
when we look at the 7X concave mirror.
Furthermore, when my collaborator Eloise Carlton modeled the brain
process involved in viewing the rotation of figures developed by Roger Shepard,
my colleague at Stanford, the process consisted of mapping a geodesic path—
the shortest route around a sphere—as in crossing the Atlantic Ocean from New
York to London via Iceland. Mathematically, this is represented by a Hilbert
space of geodesics, a Poincaré group, which is similar to the mathematics
used by Heisenberg to describe quantum structures in physics and by Gabor in
describing the minimum uncertainty in telephone communications. It is the
description of processing by receptive fields that we have become familiar with
throughout these chapters.
39. Diagram of a horizontal plane at eye level with two orthogonals AB and CD, to the frontal plane AOC
(O is the observer): A’B’ and C’D’ are the visual transforms of AB and CD in a finite hyperbolic model of
visual space. M’N’ marks the transition between the near and the distant zones. Contours are modeled on
computed values. The mapping is not (and cannot be made) isomorphic with the hyperbolic shapes, but
some size and distance relationships are preserved.

These and similar observations raise the issue of how much of what we
perceive is due to our perceptual apparatus and how much is “really out there.”
Ernst Mach, the influential early 20th-century Viennese physicist, mathematician
and philosopher, whose father was an astronomer, worried all his life about the
problem of how much of the apparent size of the planets, stars, and other visual
phenomena was attributable to the workings of our perceptual apparatus and
how much was “real.” Mach’s legacy has informed the visual arts and
philosophers dealing with perception to this day.
40. Frontal planes: composite diagram in four sectors: 1. physical frontal plane (FP); visual transform of
FP at distance d (near zone: convex); and 4. visual transform of FP at distance 10d (distant zone: concave).
d is the distance to the true point for the hyperbolic model. Contours are modeled on computed values for a
finite hyperbolic space.

41. A. Diameter of the pattern subtends an angle of 90 degrees with the viewer. B. The diagram is not a set of
images to be read visually, but a set of mappings to be interpreted with the aid of the transformation
equations: the mappings are not (and cannot be made) isomorphic with the hyperbolic shapes, but some
size relationships are preserved in frontal planes (or what physically are frontal planes).
In keeping with the outline presented in the Preface, the perceptions we
hold are a consequence of the territory we explore. Within our yards, and within
the context of portraying our living spaces onto a two-dimensional surface,
Euclidian propositions suffice. When we come to perceive figures that rotate in
space and time, higher-order descriptions become necessary. It is these higher-
order descriptions that hold when we look at what is going on in the brain. The
retina of our eye is curved, and this curvature is projected onto the cortical
surface. Thus, any process in the sensory and brain systems that allows us to
perceive an object’s movement must be a geodesic: a path around a globe.

The Horizons of Our Universe


Current dogma in cosmology holds that our observable universe began with
a “big bang” and is exponentially expanding toward—who knows what? But this
is not the only possible interpretation of our observations: Roger Penrose
proposes that the technique of “conformal rescaling,” which he has so
successfully applied to configuring tiles and other perceptual formations, can be
profitably applied to configuring the form of our observed universe. In doing so,
Penrose discerns not a big hot bang at the cosmological origin, but “a pattern
like a gentle rain falling on a still pond, each raindrop making ripples which
expand and intersect each other.” This description is almost word for word the
description Leonardo da Vinci gave some centuries ago and is essentially a
description of a holographic process. A rescaling at the horizon of the observed
cosmos arrives at a similar holographic pattern.
Two conclusions can be drawn from Penrose’s reinterpretation of
cosmology:

1. It is our interpretation of observations that transforms a perceptual
occurrence into a form compatible with the world we navigate. This
interpretation is dependent on our human ability afforded by our brain
processes.
2. Behind the form we experience as the world we navigate is another, a
potential form that enfolds space, time and (efficient) causality: a
holographic form.
Chapter 8
Means to Action

Wherein I distinguish between the brain systems that process muscular
contractions, movements, and the actions that give meaning to navigating our
world.

The function of the sensory input giving rise to reflex activity . . . is
there to modulate the ongoing activity of the motor network in order to
adapt the activity (the output signal) to the irregularities of the terrain
over which the animal moves. . . . This sensory-motor transformation
is the core of brain function, that is, what the brain does for a living.
—Rodolfo Llinás, I of the Vortex, 2001
At Issue
Navigating the world we inhabit takes a certain amount of skill. The skills
we have considered so far are those that allow us to move our sensory receptors
in such a way that we can have meaningful perceptions of the world. Philosopher
Charles Peirce, being a pragmaticist, noted that what we mean by “meaning” is
what we mean to do. To account for our phenomenal experiences, I have added
to his statement that meaning also includes the following: what we mean by
meaning is what we mean to experience. In both cases “meaning” concerns
actions. Brain research performed in my laboratory, and in those of others, has
shown us that an act is not as simple as it may seem.
In the early 1950s, I came to study those brain processes that are involved
in the control of action. I was faced with a century-long tradition of controversy:
Were movements—or were muscles—represented in the brain’s motor cortex?
Anatomical evidence indicated that fibers from our muscles, and even parts of
those muscles, could be traced to the cortex. This kind of representation is also
found in the sensory cortex, where specific locations of receptors can be traced
to specific locations in the sensory cortex. We call such cortical re-presentations
of the geometry of receptors and effectors in humans “homunculi,” a term which
is discussed in this chapter.
A different view of representation emerges when we electrically stimulate
the motor cortex: now we observe contractions of muscle groups. Which groups
are activated by this stimulation depends on the position of the limb before
stimulation of the cortex is begun. For example, on one occasion, at the
Neuropsychiatric Institute of the University of Illinois at Chicago, Percival
Bailey and Paul Bucy were electrically stimulating the cortex of a patient, while
I sat beside the patient, reporting the resulting movement of this patient’s arm.
Exactly the same stimulation at the same location produced a flexion of the
patient’s forearm when his arm was extended, but it produced a partial extension
and rotation when his arm started out being flexed. The variations in movement
are centered on joints and are produced by transactions among connections
converging on the point of stimulation of the patient’s motor cortex.
These differences resulting from variations in experimental technique were
not recognized as such; instead, they led to that just-mentioned century-long
conflict: “Are muscles or movements represented in the brain’s motor cortex?”
My interest in this controversy was ignited quite by accident. It took me years to
resolve it for myself, yet part of the answer is simple: anatomically, muscles, and
even parts of muscles, are connected to the cortex. Physiologically, by virtue of
interactions among cortical nerve cells, functions are represented. But my
experiments led to yet another level of explanation: behaviorally, actions are
represented. The story of how I came to this realization, and how I then reached
an explanation of it, is another of those adventures that make discovery so richly
rewarding.
In 1950, my colleagues at Yale and I were looking at cortical responses
obtained when peripheral nerves were stimulated. At the time, oscilloscopes
were not yet available for physiological studies. Fortunately, one of my friends,
Harry Grundfest at Columbia University, had gotten the Dumas Company
interested in implementing his design for an oscilloscope, and after waiting for
about a year, I was the recipient at Yale of the second such instrument (the first
went to Grundfest at Columbia, of course). It took my collaborators and me a
week or so to find out how to make the oscilloscope work, but finally Leonard
Malis, a fellow neurosurgeon with a talent for engineering, helped initiate our
first experiment. While he and one of the graduate students, Lawrence Kruger,
were setting up, I was in the basement testing the behavior of monkeys in
another experiment. When I came on the scene, we had absolutely gorgeous
responses on the oscilloscope face: inch-high “spikes” obtained from the brain
whenever we stimulated the sciatic nerve of the monkey.

A Direct Sensory Input to the Motor Cortex


Once over the peak of my elation, I asked where the electrodes had been
placed on the brain. The polite answer I received was “On the cortex, you
dummy.” I asked where on the cortex—and went to see for myself. What was
left of my elation was punctured: the electrodes had been placed on the pre-
central gyrus—the motor cortex! The motor cortex was supposed to be an output
cortex, not one receiving an input from the sciatic nerve. At the time, and even
today, the established idea has been that the nervous system functions much as a
reflex arc: an input to a processing center from which an output is generated. I
held this same view at the time, so I was understandably upset: These spikes
must reflect some sort of artifact, an artificial, irrelevant byproduct of the
experimental procedure.
While still observing these spikes, I immediately phoned two friends,
both neurophysiologists and experts on stimulating the brain cortex, one at Johns
Hopkins University and the other at the University of Wisconsin: Clinton
Woolsey and Wade Marshall. Both stated that they had seen this result
repeatedly in their experiments using more primitive recording techniques.
Woolsey added that he had even published a footnote in the Proceedings of the
American Physiological Society noting that these “spikes” must be artifacts—
that is, not a real finding but an unwanted byproduct of the mechanics (such as a
short circuit in the electrical stimulating apparatus) of the procedure. I found this
strange: An artifact that is always present and in several laboratories? We were
all puzzled.

42. Diagram of essential connections of the motor system

Wade Marshall came to the laboratory to see for himself what we were
observing. He felt that the “artifact” might be due to a spread of current within
the cortex, starting from the sensory cortex to include the motor cortex.
Together, we used a technique called “spreading cortical depression” that is
induced by gently stroking the cortex with a hemostat. Using this procedure,
within a few seconds any neighboring cortically originated activity ceases. The
electrical stimulation of the sciatic nerve continued to produce a local brain
response, though the response was now of considerably lower amplitude than it
had been before stroking the cortex. The remaining response was of subcortical
origin, carried by nerve fibers reaching the cortex from below; prior to our
stroking, the cortex itself had been amplifying the response.
So, I asked: If the input from the sciatic nerve is real, what might be the
pathways by which its signals reach the cortex? Larry Kruger decided that this
would provide him an exciting thesis topic. So, I surgically removed a monkey’s
post-central cortex—the part of the brain known to receive the sensory input
from the sciatic nerve—and therefore a possible way station to the adjacent pre-
central motor cortex. The “artifact” was still there.
Next I removed a hemisphere of the monkey’s cerebellum, a portion of the
primate brain that was known to receive a sciatic nerve input and to provide a
pathway to the pre-central motor cortex. Again, there was no change in the
“artifact”—the spikes were still there. Finally, I surgically removed both the
post-central cortex and the cerebellar hemisphere. Still no change in the response
in the motor cortex to sciatic stimulation!
During these experiments, we discovered another puzzling fact. It seemed
plausible that some input to the motor cortex might originate in the muscles. But
we found, in addition, that stimulation of the sciatic nerve fibers originating in
the skin also produced a response in the motor cortex.
Larry Kruger settled the entire issue during his thesis research by tracing
the sciatic input directly to the motor cortex via the same paths in the brain stem
that carried the signals to the post-central sensory cortex.
The research that I’ve summarized in these few paragraphs took more than
five years to complete. So we had plenty of time to think about our findings.

What Must Be
What we call the motor cortex is really a sensory cortex for movement. This
conclusion fits another anatomical fact that has been ignored by most brain and
behavioral scientists. Notable exceptions were neurosurgeon Wilder Penfield and
neuroscientist Warren McCulloch, whose maps of the sensory cortex did include
the pre-central region. Their reason was that the anatomical input to the motor
cortex arrives there via the dorsal thalamus (dorsal is Latin for “back”; thalamus is
Greek for “chamber”), which is the halfway house of all sensory input to the entire
brain cortex except the input for smell. The dorsal thalamus is an extension of
the dorsal horn of our spinal cord, the origin of input fibers from our receptors,
not of output fibers to our muscles.
Nineteenth-century European scientists had labeled this part of the cortex
“motor” when they discovered that electrically stimulating a part of the cortex
produced movements of different parts of the subject’s body. When they mapped
the location of the cortical origins of all these stimulus-produced movements, the
result was a distorted picture of the body, a picture which exaggerated the parts
with which we make fine movements, such as the fingers and tongue. Such a
map is referred to in texts as a “homunculus” (a “little human”).
While Malis, Kruger and I at Yale were looking at the input to the cortex
surrounding the central fissure, Harvard neuro-anatomist L. Leksell was looking
at the output from this cortex. This output had been thought to originate
exclusively in the pre-central motor cortex. Now Leksell found this output to
originate as well from the post-central sensory cortex. The fibers that originate
in our pre-central cortex are the largest, but the origin of the output as a whole
is not limited to these large fibers.
For me, all of these new discoveries about the input to and the output from
the cortex surrounding the central fissure called into question the prevailing
thesis of the time: that the brain was composed of separate input and output
systems. Today, this thesis is still alive and popularly embodied in what is
termed a “perception-action cycle.” However, in our nervous system, input and
output are in fact meshed. The evidence for this comes not only from the brain
research conducted in my laboratory, as detailed above, but also from discoveries
centered on the spinal cord.

The Reflex
During the early part of the 19th century two physiologists, one Scottish,
Charles Bell, and the other French, François Magendie, conducted a series of
experiments on dogs, cutting the roots of various nerves such as the sciatic
nerve. These nerve roots are located at the point where the nerves connect to the
spinal cord. Some of the fibers making up these nerves are rooted in the dorsal,
or back, region of the spinal cord, the others in the ventral, or “stomach,” region.
When the dorsal fibers were cut off at the root, the dogs had no difficulty in
moving about but did not feel anything when the experimenters pinched the skin
of their legs with tweezers. By contrast, when the experimenters cut the ventral
roots, the dogs’ limbs were paralyzed, though the dogs responded normally to
being pinched. The results of these experiments are famously enshrined in
neuroscientific annals as the “Law of Bell and Magendie.”
In the decades just before and after the turn of the 19th to 20th centuries, Sir
Charles Sherrington based his explanation of his studies of the spinal cord of
frogs on the Bell and Magendie law. It was Sherrington who developed the idea
of a reflex—an “input-output arc,” as he termed it—as the basis for all our
nervous system processing. Sherrington clearly and repeatedly stated that what
he called an “arc” was a “fiction,” a metaphor that allowed us to understand how
the nervous system worked: Reflexes could combine in a variety of ways. For
instance, when we flex an arm, contraction of the flexor muscles must be matched
simultaneously by relaxation of the extensors; when we carry a suitcase,
both flexors and extensors are contracted.
These ideas are as fresh and useful today as they were a hundred years ago
despite the fact that Sherrington’s fiction, of an “arc” as the basis for the reflex,
has had to be substantially revised.
Unfortunately, however, today’s textbooks have not kept up with these
necessary revisions. On several occasions, I have written up these revisions at
the request of editors, only to have the publishers of the texts tell the editors that
the revisions are too complex for students to understand. (I have wondered why
publishers of physics texts don’t complain that ideas, such as Richard Feynman’s
brilliant lectures on quantum electrodynamics, are too complex for students to
understand.)
Sherrington’s “fiction” of a reflex as a unit of analysis of behavior was the
staple of the behaviorist period in psychology. My quarrel is not with the concept
of a reflex but with the identification of “reflex” with its composition as an arc.
The organization of the reflex, and therefore of behavior, is more akin to that of
a controllable thermostat. The evidence for this stems from research performed
in the 1950s that I will describe shortly. The change in conception was from the
metaphor of an “arc,” where behavior is directly controlled by an input, to a
metaphor of a “controllable thermostat,” where behavior is controlled by the
operations of an organism in order to fulfill an aim. This change has fundamental
consequences, not only for our understanding of the brain and of our behavior,
but also for society. Doling out charities is not nearly as effective as educating a
people, setting their aims to achieve competence.

How It All Came About


The evidence that produced such a radical shift in the way we view our
place in the world was gathered during the 1950s. Stephen Kuffler, working in
John Eccles’s laboratory in Canberra, Australia, had shown that about one third
of the fibers in the ventral roots of peripheral nerves—the roots that convey
signals to muscles from the spinal cord—end, not in contractile muscle fibers,
as had been believed, but in the sensory receptors of the muscles. These are the
receptors that respond to stretching of the muscle to which they are attached. The
receptor is therefore activated not only to external stimulation that results in
stretching the muscle but also to signals coming from the spinal cord. This
means that the brain can influence the muscle receptor.
Thus there is no simple way to gauge whether the receptor stimulation is
the consequence of input to our receptors from outside our bodies or from input
to the receptor originating inside the body. When we tense our muscles while we
are standing against a strong wind, the muscle receptors are influenced in the
same way as when we are tense while taking a test. Only some higher-order
determination of the origin of the tension would tell us whether the tension was
produced externally or internally.
During the remainder of the 1950s, scientists demonstrated this same sort of
internal control of receptor function over all receptors except those involved in
vision. Powerful internal top-down controls originating in the brain were
identified and their effect on the operations of the receptors in the skin, the nose
and the ear were studied. We came to know that the body is more than a receiver;
its receptors are adjustable to its needs. George Miller, Eugene
Galanter and I decided to write a book that showed the relevance of this revised
view of receptor function to the field of psychology. Plans and the Structure of
Behavior is based on the theme that the unit of behavior is not a reflex arc but an
input to a “test-operate-test-exit” sequence, a TOTE, which is also the unit that
makes up a computer program.

43. Summary diagram of the gamma (γ) motor neuron system. Redrawn after Thompson, 1967.
Biological Control Systems
The history of the invention of the thermostat helps make the biological
control systems more understandable. During the 1920s and 1930s, Walter
Cannon at Harvard University, on the basis of his research, had conceptualized
the regulation of the body’s metabolic functions in terms of a steady state that
fluctuated around a baseline. We get hungry, eat, become satiated, metabolize
what we have eaten, and become hungry once more. This cycle is anchored on a
base which later research showed to be the organism’s basal temperature. He
called this regulatory process “homeostasis.” A decade later, during World War
II, Norbert Wiener, who had worked with Cannon, further developed this idea at
the Massachusetts Institute of Technology, where he used it to control a tracking
device that could home in on aircraft.
After the war, engineers at Honeywell applied the same concept to track
temperature: they created the thermostatic control of heating and cooling
devices. In the 1950s, the concept of a thermostat returned to biology, becoming
the model for sensory as well as metabolic processing in organisms.
In the process of conducting experiments to find out where such tracking—
such controls on inputs—originate, neuroscientists found that, except in vision,
control can be initiated not only from structures in our brain stem but also by
electrical stimulation of higher-order systems within the brain.
Because our visual system is so important to navigation, I set my Stanford
laboratory to work to determine whether or not our brain controls our visual
receptors as had been demonstrated to be the case for muscle receptors, touch,
smell and hearing. My postdoctoral student Nico Spinelli and I tested this
possibility by implanting a micro-electrode in the optic nerve of cats, and then,
at a later date, stimulating the footpads of the awake cats, using the end of a
pencil. One of us would stimulate the cat’s paw while the other monitored the
electrical activity recorded from the cat’s optic nerve. We obtained excellent
responses as shown on our oscilloscope and computer recordings.
We next went on to use auditory stimuli consisting of clicking sounds.
Again we were able to record excellent responses from the cat’s optic nerve. We
were surprised to find that the responses in the optic nerve arriving from tactile
and auditory stimulation came at practically the same time as those we initiated
by a dim flash. This is because the input to the brain from the tactile and
auditory stimulation is much faster than the visual stimuli that have to be
processed by the retina. There is so much processing going on in the fine fibers
of the retina (recall that there is no rapid transmission by nerve impulse within the
retina) that the delay before visual stimulation directly reaches the optic nerve is
equal to the delay produced by processing of tactile and auditory stimuli in the
brain. We wondered whether this coincidence in timing might aid lip reading in
the hearing impaired.
When the cat went to sleep, the responses recorded from the optic nerve
ceased. Not only being awake but also not being distracted turned out to be
critical for obtaining the responses. A postdoctoral student, Lauren Gerbrandt,
came to my Stanford laboratory and, excited by what we had found, tried for
over a year to replicate our findings in monkeys. Sometimes he obtained the
responses from the optic nerve; other times he did not. We were upset and
frustrated at not being able to replicate an experimental result that we had
already published.
I urged Gerbrandt to choose some other experiment so that he would have
publishable findings before his tenure in the laboratory was up. He asked if it
would be OK to continue the visual experiments in the evening, and I offered to
assist him. He got his wife to help him bring the monkeys from their home cages
to the laboratory. The results of the experiments were immediately rewarding.
He obtained excellent responses every time, results that I came to witness. But
this run of good luck did not last. His wife became bored just waiting around for
him and asked if it would be all right for her to bring a friend. Now the responses
from the optic nerve disappeared. I came in to witness what was happening and
after a few nights the answer became clear to us: We asked the women, who had
been chatting off and on, to be quiet. The responses returned. Having established
the cause of the variability in the experimental results, we moved the experiment
to the room where the original daytime experiments had been performed. Now
we found good optic nerve responses whenever the adjacent hall was quiet.
When someone in high heels went by, or whenever there was any commotion,
the responses disappeared. We had found the cause of the “artifact.”
By recording the electrical activity of subcortical structures during the
experiment, we were then able to show that the distracting noise short-circuited
cortical control at the level of the thalamus, the halfway house of the visual input
to the cortex. Thus, we had not only demonstrated cortical control over visual
input but had also shown the thalamus to be a necessary brain component
involved in the control of the feedback circuitry.

Controlling the Thermostat


The change from viewing the fundamental unit of behavior as an arc to
viewing it as a thermostat-like feedback inspired George Miller, Eugene Galanter
and me to write Plans and the Structure of Behavior (1960), a book still regarded
as seminal to the cognitive revolution in psychology. An anecdote suggests why
this is so.
I presented the Sechenov lectures at the University of Moscow during the
Soviet period in the 1970s. I. M. Sechenov, for whom the lectures were named,
had done for Russian brain science what Sherrington had done in establishing
the reflex as the basic unit of behavior. During the discussion period after my
lecture, I was asked, “Professor Pribram, did we hear you correctly, did you say
you did not believe that the reflex exists?” I answered without hesitation, “No, I
did not say that. I presented evidence that the organization of the reflex is not an
arc but is more like that of a thermostat.”
After the session was over, my host and friend, Alexander Romanovitch
Luria, the famous Soviet neuropsychologist, put his arm around my shoulder
and said, “Good work, Karl. If you had said that you did not believe in the
reflex, you would never have been welcome in the Soviet Union again!” What
Luria, Leontiev, the head of Psychology at the Russian Academy of Sciences,
and I were trying to accomplish was to move Soviet science from a view based
exclusively on Pavlovian conditioning to a stance in which control operations
informed and formed the brain, behavior and society. This view had been
developed into a field of study called “cybernetics.” Some years later—in 1978
—Leontiev, in his welcoming address to the International Psychological
Congress in Moscow, framed our ideas as being based on reflections rather than
on reflexes.
Cybernetics (the art of steering) deals with the way we navigate our world.
The brain is conceived as the organ that enables us to steer a steady course that
we have decided to set for ourselves. Contrary to the tenets of mid-20th-century
psychology in the Soviet Union, Europe and elsewhere, we are not totally at the
mercy of our environment. It is not an input-output world that we are
conditioned to travel. We choose. The world is meaningful because we mean to
choose where, when and how we navigate.
My books Plans and the Structure of Behavior (1960) and Languages of the
Brain (1971) were huge successes in the Soviet Union. Once, the lady in charge
of the floor in the hotel where I was staying said she had to see me rather
urgently. I wondered what infraction of the many rules I had been guilty of. “No,
no. All is well. Would you please autograph my copy of Languages of the Brain,
which I enjoyed so much, before I go off duty?”
By contrast, in America, Plans and the Structure of Behavior received
“interesting” reviews. One reviewer remarked that we had written the book in
California and had probably suffered sunstroke. Another reviewer was more
generous: “Most people do not become senile until they are at least 60,” he
stated. “These authors aren’t even 40.”
But one reviewer was helpful: Roger Brown, a noted Harvard linguist,
pointed out that our purpose was to show that humans were not just robots at the
mercy of the environment but sentient organisms, at least somewhat in control of
their fate. Brown indicated that we had perhaps taken a first step, but had not
succeeded in our mission.
George Miller, Gene Galanter and I had thought that by modeling human
planning on a second metaphor (the first being the thermostat)—computer
programming—we had done more than take a first step: Both plans and
programs are created before they are actually implemented, and both run their
course unable to change as a consequence of being interrupted. If interrupted,
they must be started again from the beginning. But the sensitivity of a great
pianist who covers over a small mistake even as it occurs, or of an
accomplished actor who can cover not only his own missed cues but those of
other actors as well, is not a characteristic of computer programs.
Accomplished humans do not have to start their performances all over again
when some small mistake can be covered by a change in the specifics of the
performance. Something was indeed still missing from our proposal that plans
are identical to computer programs.

Controlling the Thermostatic Control


Over the next decade, a few of us—Hans Lukas Teuber, professor of
psychology at MIT; Horst Mittelstaedt of the Max Planck Institute at Munich;
Ross Ashby of the University of London; and I—discussed this problem on
many occasions, and we finally discovered the missing ingredient.
A home thermostat would be too inflexible if it did not have a knob or
wheel to set the thermostat’s control (technically its set-point) to the temperature
we desire. A controllable thermostat allows us to regulate the temperature of a
room according to our needs. When the sun sets, the walls of our houses become
colder. We begin to radiate the warmth of our bodies toward those walls instead
of having the walls radiate warmth to us. We can change the setting of our
thermostat according to our need. A control such as this operates by adjusting
the separation between two wires that close a circuit when they touch. Closing
the circuit (a digital—off or on—implementation) turns on the heater or air-
conditioner. Ordinarily the separation between wires is maintained and regulated
by the amount of heat in the room because the wires expand when heated. The
control wheel or knob adds a second factor working in parallel with the heat
sensitivity of the wires: an “external” factor—the control wheel—operates in
parallel to an “internal” factor, the amount of heat in the room.
Roger Sperry and Lucas Teuber named this additional control a “corollary
discharge.” To be effective, the controlling process must slightly precede, that is,
anticipate, the initiation of movement. Soon, such
anticipatory controls were found in the brain. Microelectrode recordings showed the
corollary discharge to originate in the frontal cortex, coincident with
the initiation of voluntary eye movements. The controls operate on a group of
cells in the brain stem that are adjacent to those involved in actually moving the
eyes. Shortly, another experimental result showed that the initiation of all
voluntary movement is anticipated by a “readiness potential”—that is, by
activity on the medial surface of the frontal cortex just in front of the motor
cortex. Using direct current recording, an even earlier electrical change was
recorded in the more anterior parts of the frontal lobe.

44. The TOTE servomechanism modified to include feedforward. Note the parallel processing feature of the
revised TOTE.

The neuroscience community had thus progressed from conceiving of the
elementary unit of behavior as a reflex arc to thinking of it as a thermostat-like,
programmable process that is controllable from separate parallel sources. Even
the biological homeostatic process, the inspiration for the thermostat, is now
known to be rheostatic (rheo is Greek for “flow”), a programmable, adjustable
process.
Each step of this progression in our thinking took about a decade. But as we
shall soon see, even these steps were not enough to fully account for our intrinsic
abilities not only to navigate but also to construct the world we inhabit. Now we
are finally ready to examine the brain systems whose study brought about these
changes in our views.

Muscles, Movements and Actions


I return therefore to the controversy that initiated my interest in the motor
cortex of the brain: Are muscles or movements represented in the pre-central
motor cortex? To find out, I devised a belt with a cuff within which one arm of a
monkey could be restrained. Four monkeys so outfitted were each trained to
open the lid of a box. The lid had a slot through which a metal loop protruded. A
large nail attached to a leather thong was passed through the loop. The monkey
had to pull the nail out of the loop and raise the lid in order to obtain a peanut. I
timed the duration of the sequence of responses, and my students and I filmed
the movements with a time-lapse camera that allowed us to inspect the
movements in detail.
After each monkey had reached a stable performance, I surgically removed
the entire motor cortex on the side opposite to the hand the monkey was using. I
expected that the sequence of movements would be severely disturbed. It was
not. All monkeys showed some clumsiness—it took over twice as long for them
to retrieve the peanuts—but our film showed no evidence of
muscle weakness or any impairment in the sequences of their muscle
contractions (that is, of their movements).
As a control procedure, we had originally filmed the monkeys in their home
cage, climbing the sides of the cage and grabbing food away from other
monkeys. Frame-by-frame analysis of these same acts performed after surgery
showed no change in patterns or timing of movements, nor any muscle
weakness.
With two additional monkeys, I performed the same experiment after they
had been trained to open the box with each hand separately. From these
additional monkeys I removed the appropriate cortex on both sides of the brain
—both times with the same result.
I need to note again an important point about the surgery. The cortex was
removed with a fine tube through which suction could be applied. The amount of
suction was controlled by a small thumbhole near the hand-held end of the tube. The
gray matter, composed of the cell bodies of neurons, can be removed with this
instrument without damaging the underlying stiffer white matter, which is
composed of nerve trunks that connect different parts of the cortex to each other
and to subcortical structures. The damage produced by removing only the gray
matter is local, whereas damage to the white matter produces more serious
disabilities because more distant regions become involved. The brain damage
produced in patients by strokes or accidents always includes the brain’s white
matter.

The Act
I came to understand the distinction between movement and action as a
result of these experiments with monkeys, experiments that were undertaken at
the same time as those in which we stimulated the sciatic nerve. The motor
cortex turned out to be a sensory cortex for action; and an act turned out to
be more of a sensory accomplishment than a particular set of movements.
Behaviorally, the motor cortex is not limited to programming muscle
contractions or movements. This cortex is involved in making actions possible.
An act is a target, an attractor toward which a movement is intended. I illustrate this
for my classes by closing my eyes, putting a piece of chalk between my teeth,
and writing with it on the blackboard. It matters not whether I use my right hand
or my left, my feet or my teeth—processes in my motor cortex enable what
becomes written on the board. What is written is an achievement that must be
imaged to become activated. Such “images of achievement” are the first steps
leading to action, whether literally, as in walking, or at a higher, more complex
level as in speaking or writing.
The answer to the initial question that started my research on the motor
cortex can be summarized as follows: anatomically, muscles are represented in
the motor cortex; physiologically, movements around joints are represented
there. Behaviorally, our cortical formations operate to facilitate our actions.
How?
For ethologists—zoologists studying behavior, as for instance Konrad
Lorenz and Niko Tinbergen—the study of behavior is often the study of
particular movements, the fixed patterns of a sequence of muscle contractions
made by their subjects. Their interest in neuroscience has therefore been on the
spinal and brain stem structures that organize movements.
By contrast, my interest, stemming from the experiments on the motor
cortex, has been in line with the behaviorist tradition of psychology, where
behavior is defined as an action. An act is considered to be the environmental
outcome of a movement, or of a sequence of movements. B. F. (Fred) Skinner
remarked that the behavior of his pigeons is the record made when they pecked
for food presented to them according to a schedule. These records were mainly
made on paper, and Skinner defined behavior as the paper records that he took
home with him. They were records of responses, and Skinner was describing the
transactions among these responses: he described which patterning of responses
led to their repetition and which patterns led to the responses being dropped out.
The interesting question for me was: What brain processes are entailed in these
transactions?
An anecdote highlights my interest. The United Nations University hosted a
conference in Paris during the mid-1970s. Luria, Gabor and other luminaries,
including Skinner, participated. In his address, Skinner admitted that indeed he
had a theory, something that he had denied up to that time. He stated that his
theory was not a stimulus-response theory nor was it a stimulus-stimulus theory;
his theory was a response-reinforcement theory. Furthermore, he stated,
reinforcement was a process. I felt that he had given the best talk I’d ever heard
him give and went up to congratulate him. Skinner looked at me puzzled: “What
did I say that makes you so happy?” I replied, “That reinforcement is a process.
The process must be going on in the brain.” I pointed to my head. Skinner
laughed and said, “I guess I mustn’t say that again.” Of course he did. In 1989, a
year before he died, he wrote:

There are two unavoidable gaps in the behavioral account: one
between the stimulating action of the environment and the response of
the organism and one between consequences and the resulting change
in behavior. Only brain science can fill those gaps. In doing so it
completes the account; it does not give a different account of the same
thing.

Reinforcement Revisited
Skinner’s contribution to the nature of reinforcement in learning theory has
not been fully assimilated by the scientific and scholarly communities: Skinner
and his pupils changed our conception of how learning occurs from being based
on pleasure and pain to a conception based on self-organization, that is, on the
consequences produced by the behavior itself. Pavlov’s classical conditioning,
Thorndike’s “law of effect,” Hull’s “drive reduction” and Sheffield’s “drive
induction”—even the idea that interest is some sort of drive—are all different
from what Skinner’s operant conditioning revealed. In my laboratory, a
colleague and I raised two kittens from birth to demonstrate the self-maintenance
and self-organizing aspects of behavior. We nurtured one kitten with milk on its
mother’s breast; we raised a littermate on the breast of a non-lactating cat and
fed the kitten through a stomach tube. After six months, there was practically no
difference in the rate of sucking between the two kittens. Mothers who have tried
to wean their infant from sucking on his thumb or on a pacifier might feel that
such an experimental demonstration was superfluous.
The change in conception of reinforcement from being based on pleasure
and pain to being based on performance per se is important for our
understanding of human learning. Rather than rewarding good behavior or
punishing bad behavior, operant behaviorism suggests that we “educate” (Latin
e-ducere, “lead” or “draw out”) a person’s abilities— that we nourish the
performances already achieved and allow these performances to grow by
leading, not pushing or pulling. In these situations, pleasure is attained by
achievement; achievement is not attained by way of pleasure. I did not have to
look at my score sheets to find out whether my young male monkeys were
learning a task: their erect penises heralded their improvement—and conversely,
when their performance was lagging, so were their erections.
My good students have used a technique that I used in college. We review
and reorganize our notes about once a month, incorporating what we have
learned that month into our previous material. A final revision before the final
exams relieves one of useless night-long cramming. I say useless because
crammed rote learning needs much rehearsal—as in learning multiplication
tables—to make a lasting impression. What has been learned by cramming is
well forgotten by the next week.

More to Be Explained
The results of my experiments on the motor cortex, and the conclusions
based on them, were published in the 1950s while I was still at Yale. Our results
were readily accepted and even acclaimed: John Fulton, in whose laboratory I
conducted these studies, called the paper “worthy of Hughlings Jackson,” the
noted English neurologist whose insights about the functions of the motor cortex
were derived from studying epileptic seizures.
My conclusions were that the removals of motor cortex had produced a
“scotoma of action.” In vision, a scotoma (from the Greek scotos, “darkness”) is
a dark or blind spot in the visual field. In Chapter 6 we reviewed the evidence
that some patients can navigate their blind visual field reasonably well despite
being consciously “blind” within that dark spot or field. The scotoma of action
produced by my removals of the motor cortex had a similar effect: the monkeys’
difficulty was restricted to a particular task, though they could still perform that
task with some loss of skill, as demonstrated in film records of their behavior
and by the longer time it took them to reach the peanut in the box. In my
experiments the “scotoma of action” referred to a particular task rather than to a
particular area of perceived space as in vision—and not to a difficulty with any
particular muscle group or a particular movement.
Today a great deal has been learned regarding the specific targeting of an
action. Much of the experimental and clinical research has been performed on
“visuomotor” achievements such as grasping an object. The brain cortex that has
been shown involved lies somewhat behind and in front of the cortex
surrounding the central fissure, the cortex involved in the experiments on
primates reviewed above. This may well be because in primates the central
fissure has moved laterally from its position in carnivores, splitting the
visuomotor-related cortex into parietal and frontal parts.

The How of Action


Despite the acclaim given to the results of my early experiments, they gave
me a great deal of trouble: I could not envision a brain process that would
encode an act rather than a movement. Lashley had noted the problem we must
face when he described the instinctive behavior of a spider weaving a web.
Essentially, the same pattern is “woven” in various environments that may be as
diverse as the regular vertical posts on our porch or the forked branches of trees.
Technically, this phenomenon is called “motor equivalence” and the issue
troubled us because we could not imagine any storage process in the brain that
might result in virtually identical behaviors under many diverse circumstances.
In humans, motor equivalence is demonstrated when we write the same message
on a horizontal or on a vertical surface, or even in the sand with the toes of our
left foot. Many of the actual movements involved are different.
Some seven years passed after the publication describing our experiments
on the motor cortex. During that time, I continued to be at a total loss for an
explanation. I kept searching for a clue as to how a “scotoma of action” might
have been produced by my experiments.
A series of fortunate circumstances provided the answer. In the mid-1970s,
while I was lecturing at the University of Alberta, interaction with the
university’s professors provided a most enriching experience. These discussions
were enhanced by talks about realism with Wolfgang Metzger, an influential
psychologist from Germany who sharpened my views on how our brains
contribute to navigating the reality of the world we perceive. Much of this
chapter is framed within the conclusions I reached at that time.
As noted in Chapter 5, James Gibson, professor at Cornell University
famous for his experiments and ideas on visual perception, visited Alberta for
prolonged periods while I was there, and we discussed in depth a number of
issues regarding his program that came to be called “ecological perception.”
Gibson was calling attention to various aspects of the environment, such as
horizons and shadows, that determine our perceptions. In terms of Metzger’s
realism, I agreed with all that Gibson was saying; but at the same time I also
insisted that specific brain processes were necessary to “construct” what we
perceive: that ecology extended inside the organism as well as outside.
There was good reason for Gibson’s stance. Cornell had been the central
bastion of introspective psychology in America. Introspective psychology was
founded on the technique of asking people what they saw, heard, felt or thought.
Behaviorism reacted against introspective psychology because, relying on the technique of asking for answers, psychologists had great difficulty confirming those answers across subjects, or even within the same subject. This was no way to construct a science of psychology. Gibson, who had
been a pupil of E. B. Titchener, the towering figure and promulgator of
introspection, had rebelled against introspection just as thoroughly as had those
psychologists who were studying behavior. Gibson’s experimental approach to
perception displayed figures on oscilloscope screens and specified the changes in
the display that resulted in changes of perception. Shortly, we will see the power
of this approach in resolving the issue about the very functions of the motor
cortex that were puzzling me.

A Fortuitous Return to the Motor Cortex


It was Sunday morning of my last week of lectures in Alberta, and I had
been asked to talk about the “motor systems” of the brain. I agreed and added
that there were some problems that perhaps the participants could help me
resolve.
I’m an early riser. What to do on a Sunday morning? I started to make notes
for my lecture: “The Motor Cortex.” Twenty minutes later I threw the blank
sheet into my wastebasket. Another blank sheet: “The Motor Systems of the
Brain.” Another twenty minutes. Into the basket. After an hour or so, I gave up.
As I was leaving in a week, I could usefully pack books that I would not need
anymore and ship them home.
While packing, I came across a translation from Russian of a book by
Nikolas Bernstein, whose work had been highly recommended to me by
Alexander Romanovich Luria at a dinner in Paris. I had skimmed the Bernstein
book at least twice and had found it uninteresting. With regard to brain function,
Bernstein’s insight was that whatever the brain process, it was nothing like the
process of our experience. Not exactly a new thought for me. But I was to see
Luria soon—so what could I say to him? I sat down to read the book once more
and tried to pay special attention to the behavioral experiments that Bernstein
had carried out. There might be something there that I had missed.

45. (a) Subject in black costume with white tape. (b) Cinematograph of walking. Movement is from left to
right. The frequency is about 20 exposures per sec. Reprinted with permission from N. Bernstein, The Co-
ordination and Regulation of Movements. © 1967 Pergamon Press Ltd.

Eureka!
In Bernstein’s experiments, he had dressed people in black leotards and
filmed them against black backgrounds, performing tasks such as hammering a
nail, writing on a blackboard, or riding a bicycle. He had taped white stripes onto
the arms and legs of the experimental subjects. In later experiments, Bernstein’s
students (and, as previously discussed, Gunnar Johansson in Sweden) placed
white dots on the joints. After developing the film, only waveforms made by the
white tapes could be seen. Bernstein had used a Fourier procedure to analyze
these waveforms and was able to predict each subsequent action accurately!
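The logic of Bernstein’s procedure can be sketched in a few lines of Python. The sketch below is illustrative only (the marker trace is synthetic, and every number in it is invented); it shows the core idea: decompose a periodic movement record into a few Fourier components, then re-synthesize those components beyond the end of the record, which, because sinusoids repeat, amounts to predicting the next stretch of the movement.

import numpy as np

fs = 20.0                                 # ~20 film exposures per second
t = np.arange(0, 4.0, 1.0 / fs)           # four seconds of "recorded" movement
rng = np.random.default_rng(0)
# Synthetic marker trace: a fundamental plus one harmonic, plus a little noise
angle = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t + 0.7)
angle += 0.05 * rng.normal(size=t.size)

spectrum = np.fft.rfft(angle)             # Fourier analysis of the waveform
freqs = np.fft.rfftfreq(angle.size, d=1.0 / fs)
keep = np.argsort(np.abs(spectrum))[-3:]  # the three strongest components

# Each kept component is periodic, so re-synthesizing the components on a
# longer time axis extends the waveform past the record: a prediction.
t_ext = np.arange(0, 5.0, 1.0 / fs)       # one second beyond the recorded data
predicted = np.zeros_like(t_ext)
for i in keep:
    amp = (1 if i == 0 else 2) * np.abs(spectrum[i]) / angle.size
    predicted += amp * np.cos(2 * np.pi * freqs[i] * t_ext + np.angle(spectrum[i]))

fit_err = np.sqrt(np.mean((predicted[: angle.size] - angle) ** 2))
print(f"RMS fit over the recorded four seconds: {fit_err:.3f}")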
I was elated. This was clear evidence that the motor system used essentially
the same procedures as did the sensory systems! My seven years’ hunt had
ended. I gave my lecture on the brain’s motor systems the next day and was able
to face Luria with gratitude the following week.
The Particular Go of It
Next, of course, this idea had to be tested. If processing in the motor cortex
were of a Fourier nature, we should be able to find receptive fields of cortical
cells that respond to different frequencies of the motion of a limb.
An Iranian graduate student in the Stanford engineering department, Ahmad
Sharifat, wanted to do brain research. He was invaluable in upgrading and
revamping the computer systems of the laboratory as well as in conducting the
planned experiment.
A cat’s foreleg was strapped to a movable lever so that it could be passively
moved up and down. We recorded the electrical activity of single cells in the
motor cortex of the cat and found many cells that responded primarily, and
selectively, to different frequencies of motion of the cat’s limb. These motor
cortex cells responded to frequencies much as did the cells in the auditory and
visual cortex. The exploration had been successfully completed.
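A hedged sketch of the kind of analysis such an experiment implies follows; the spike counts are simulated, and the tuning widths, rates and preferred frequencies are hypothetical. The point is only the method: move the limb at several frequencies, count each cell’s spikes, and ask whether the cell fires preferentially at one frequency.

import numpy as np

rng = np.random.default_rng(1)
drive_freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # limb-movement frequencies, Hz

def mean_rate(preferred, f, trials=20):
    """Simulated firing rate: Gaussian tuning on log frequency, Poisson noise."""
    tuning = 30.0 * np.exp(-((np.log2(f) - np.log2(preferred)) ** 2) / 0.5) + 5.0
    return rng.poisson(tuning, size=trials).mean()

for preferred in (1.0, 4.0):                       # two hypothetical cells
    rates = np.array([mean_rate(preferred, f) for f in drive_freqs])
    peak = drive_freqs[np.argmax(rates)]
    print(f"cell tuned near {preferred} Hz: rates {rates.round(1)}, peak at {peak} Hz")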

Fixed Action Patterns


My experimental program had resolved the issue of what the primate motor
cortex does. I had also been able to show how the doing, the action, is
accomplished. But, as yet, I had not created a precise formal description of what
actions do for us. The solution to this problem came from the work of Rodolfo
Llinás, professor of neurophysiology at the medical school of New York
University.
Sherrington had had the wisdom to note that his “reflex arc” was a fiction, a
wisdom not shared in current texts. One reason for Sherrington’s insightful
approach might have been a critique by the English neurologist Graham Brown
(no relation to Roger Brown). Brown observed that dogs showed no difficulty in
walking despite having their dorsal roots severed. In today’s terms, their
locomotion appeared to be “preprogrammed.”
Rodolfo Llinás, in his book I of the Vortex, expresses a view
complementary to mine regarding what the brain does. Llinás takes the view
expressed by Graham Brown, instead of that proposed by Sherrington as his
starting point. Brown had emphasized the fact that cutting the sensory roots of
peripheral nerves leaves an animal’s movements intact. Llinás calls the units of
behavior based on Brown’s insight “fixed action patterns.” The virtue of such
patterns is that they anticipate “the next step” as they become actualized. As I
walk, my next step is anticipated as I lift my foot during the current step. This
aspect of behavior cannot be accounted for by Sherrington’s reflex arc but can
readily be handled by the change of metaphor to the controllable thermostat.
But fixed action patterns encounter their own set of difficulties: I helped
fund a series of experiments in which sensory roots were cut over a much greater
extent than in Graham Brown’s experiments. In those experiments, though gross
movement was still possible, it was seriously impaired and learning a skill was
absent. At a larger scale, fixed action patterns by themselves do not account for
our behavior.

A Generative Process
At these higher levels, Llinás brings in sensory processing. Recall that, in
order to account for our ability to perceive images and objects, movements are
necessary: that in vision, nystagmoid movements are necessary to perceiving
images; and that more encompassing movements are necessary for us to perceive
objects. The result of movement is to make some aspects of the sensory input
“hang together” against a background, as in Johansson’s grouping of dots. This
hanging together, or grouping of sensory inputs, is technically called
“covariation.” Llinás brings in covariation among sensory patterns to fill out his
view.
Furthermore, Llinás also describes the encompassing organizations of fixed
action patterns, our movements, as “hanging together” much as do sensory
patterns. As described in Chapter 6, the hanging together of action patterns is
described as “contravariation.” The difference between covariation in sensory
patterns and contravariation in action patterns is that contravariation implies the
anticipatory nature of action patterns.
When covariation and contravariation are processed together, the result
generates our perception of objects (Chapter 6) and our accomplishment of acts.
Technically the “coming together” of covariation and contravariation produces
invariances. Objects are invariant across different profiles—that is, across
different images of the sensory input—while acts are invariant across different
movements that compose an act.
Finally, the research on cortical processing of visuo-motor acts
accomplished by Marc Jeannerod and comprehensively and astutely reviewed in
Ways of Seeing by him and Pierre Jacob, provides the specificity required by the
actions guided by their targets, their images of achievement. These experiments
deal with invariants developed by the transactions between actions and the
images upon which they intend to operate.
Viewing acts as invariants composed across movements and their intended
achievements is the most encompassing step we have reached in our
understanding: brain processes that enable meaningful actions—we have moved
from reflex arc, through controllable thermostat, to what can be called a
generative organization of the brain/mind relationship.
This generative organization can be understood as being produced by
“motor inhibition” by analogy to the process of sensory inhibition. In Chapter 5,
I reviewed Békésy’s contribution on sensory inhibition that showed how our
perceptions are projected beyond ourselves. For the generative motor process,
we can invoke motor inhibition to internalize (introject) our images of
achievement, the targets of our actions. When, for instance, we write with our
pens in our notebooks or type on our computer keyboards, we produce an
achievement. This achievement must be internally generated in some fashion,
and motor inhibition can fulfill this role. Massive motor inhibition is produced
by cellular interactions among fine-fiber connections in the cerebellar hemispheres: Purkinje cells release GABA, which produces a feedforward inhibition; other cells (basket and stellate) can in turn inhibit the Purkinje cells. Thus
multiple stages of inhibition can sculpt the target—form the achievement.
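How stacked inhibition can “sculpt” is easy to show in a toy computation; the sketch below is schematic (the kernel, gain and stage count are invented and carry none of the cerebellum’s actual circuitry). Each stage subtracts a smoothed, surround-like copy of its input, and two such stages turn a broad excitatory drive into a much narrower peak.

import numpy as np

x = np.exp(-np.linspace(-3, 3, 31) ** 2)   # a broad excitatory drive

def inhibit(signal, width=5, gain=0.8):
    """Subtract a local average: feedforward inhibition in miniature."""
    kernel = np.ones(width) / width
    surround = np.convolve(signal, kernel, mode="same")
    return np.clip(signal - gain * surround, 0.0, None)

stage1 = inhibit(x)        # first inhibitory stage (e.g., basket/stellate cells)
stage2 = inhibit(stage1)   # second stage (e.g., Purkinje output)

for name, s in (("input", x), ("one stage", stage1), ("two stages", stage2)):
    print(f"{name:10s} peak={s.max():.2f} width@half-max={(s > s.max() / 2).sum()} bins")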

Imitation and Imagination


In our book Plans and the Structure of Behavior, we refer to several
different sorts of Plans. Some we call “tactics.” John Searle has called these
“intentions-in-action.” In primates, including humans, such tactics are controlled
by systems involving the “motor and sensory cortex” of the pre- and post-central
gyrus (discussed in Chapter 5). We also identified the anterior, prefrontal cortex as being involved in the initiation of longer-range “strategies,” Searle’s
“prior intentions.” Current brain-imaging techniques are validating these earlier
conclusions that were based on evidence obtained from brain injuries and
electrical recordings and stimulations. Intentions are attractors, targets—
“images of achievement.”
For the past decade, scientists have been documenting the existence of
“mirror processing” in the brain. These experiments have shown that many of
the same brain cells and brain regions are active under several related
circumstances: when an action is undertaken; when that action is imagined but
not undertaken; and when the person or animal is watching someone else
perform the same act. These results are extremely important for comprehending
how we understand each other—for empathy and for our ability to anticipate
each other’s intentions.
Initially these experimental results were restricted to Broca’s area—the part
of the frontal lobe of the brain that was believed to be involved in the expression
of speech— and to a separate area in the inferior parietal (back) part of the brain.
But as my colleagues and I had demonstrated back in the 1960s, by comparing
the arrangement of thalamo-cortical connections between carnivores and
primates, these two parts of the brain, front and back, are really one, and have
been split apart in the primate brain by the intrusion of the primary sensory-
motor cortex that surrounds the central fissure. More important, mirror
processing has been shown to occur in other parts of the frontal cortex,
depending on what is being mirrored or imagined—for instance, a skill or an
emotional attitude.
As with most of today’s brain research, such important results fill an aspect
of the brain/behavior/experience gap by way of correlations. In themselves, they
do not provide a possible “mechanism” for how these brain processes operate.
The evidence (reviewed in the present chapter) that leads to the conclusion that
acts are based on environmental targets does provide such an explanation.
Intentions, imitations and imaginings all share the attribute of projection; that is,
they adapt our actions to “realities” in the environment by actively selecting and
incorporating sensory input. The research that explored the basis for “images of
achievement” therefore goes a step further than the discovery of “mirror
neurons” in specifying a possible (probable?) process by which such “mirrors”
can be achieved.
When I first visited the Soviet Union, I imagined what living in a
communist country would be like. I found, much to my surprise, that the Soviets
actually dealt with “filthy lucre” (money) and even had savings accounts! This
and other, less dramatic surprises changed my way of trans-acting the daily process of
getting from one place to another, of obtaining revenues from translations of my
books, and of paying bills at restaurants, incorporating the financial aspects of
the Soviet system into my actions. After a while, I could imagine and project in
further transactions what my hosts would be requiring. Note that this sequence is
not a perception-action sequence; it is an incorporation of perception to modify
ongoing action and an incorporation of action into an ongoing perception.
The attractor, the target, the achievement, in these situations is imagined. In
other situations the target is imitating another person as babies do when playing
peek-a-boo. The Fourier-like process, in the richness of its processing and
storage capacity, furnishes a viable (neuro-nodal) medium within which the brain
can proficiently organize the skills that form our actions, imaginings and
imitations.

Speaking
The discovery that our human brain’s motor cortex encodes achievement—
not movement per se—has significant import for our understanding of
language.
One theory of how our speech develops suggests that a baby’s babbling
gradually, through practice, allows the emergence of words and sentences. This
is called the “motor theory of language production.” Speaking, when regarded as
a skill, makes such a theory plausible. However, linguists have expressed a
considerable amount of skepticism, based on their observations of the
development of language in infants.
I vividly remember discussing this issue with Roman Jakobson, dean of
classical linguistics at Harvard, with whom I worked repeatedly in venues as far
apart as Moscow, the Center for Advanced Studies in Palo Alto, and the Salk
Institute in La Jolla, California. Jakobson pointed out to me what is obvious: a
child’s development of his ability to understand language precedes by many
months his ability to speak. Also, once the ability to speak emerges, and a
caretaker tries to mimic, in baby talk, the child’s erroneous grammatical
construction, the child will often try to correct the adult. These observations
indicate that the child has a much better perceptual grasp than a motor
performance ability in the use of language.
The motor theory of language can be reinstated, however, when we
conceive of language as a “speech act,” as John Searle, the University of
California at Berkeley philosopher, has put it. According to my research results
reviewed in this chapter, the motor systems of the brain encode “acts,” not
movements. In this regard, speech acts become achievements, targets that can be
carried out with considerable latitude in the movements involved. Perception of
the target is the essence of the execution of an act, including the act of speaking.
Babies learn to target language before they can comprehensibly speak it. The
outdated “motor theory” of speech needs to be reformulated as an “action
theory” of speech.
Linguist Noam Chomsky and psychologist George Miller each have called
our attention to the fact that we can express (achieve) as speech acts the same
intended meaning in a variety of grammatically correct ways. Chomsky has
developed this observation as indicating that there is a deep, as well as a surface,
structure to language.
I extend Chomsky’s insight to suggest that the variety of languages spoken
by those who know several languages provides evidence for the existence of an
ultra-deep language process. This ultra-deep structure of language processing
occurring in the brain does not resemble the surface expressions of language in
any way. I will consider this view more fully in Chapter 22.
In Summary
Our brain processes are not composed of input-output cycles. Rather, they
are composed of interpenetrating meshed parallel processes: the motor systems
of our brain work in much the same way as do our sensory systems. Our sensory
systems depend on movement to enable us to organize the perception of images
and objects. Conversely, our brain’s motor systems depend upon imaging the
intended achievement of our actions.
Objects remain invariant, constant, across the variety of their profiles—the
variety of their images. Actions remain invariant across the variety of
movements that can produce them. Speech acts, intended meanings, are
invariant not only across a variety of expressions but also across a variety of
languages.
A World Within
Chapter 9
Pain and Pleasure

Wherein the rudiments of the “form within” that initiate pain and pleasure are
shown to derive from stem cells lining the central cavities of the brain. Pain is
shown to be homeostatically controlled. And a much-needed physiological
explanation for the generation of pleasure is developed.

We’re having a heat wave,
A tropical heat wave,
The temperature’s rising,
It isn’t surprising
She certainly can can-can.
—Marilyn Monroe singing “Heat Wave” in There’s No Business Like Show
Business, 1954. Words and music by Irving Berlin.
So far, I have dealt with our direct relationship with the world we navigate.
Targets and intentions characterize this relationship, which is executed by output
to muscles and mediated by input from sensory receptors. I have also noted that
this relationship occurs within a context. The context comes in two forms: the
inner world of our bodies and the social world of communication and culture.
The questions asked in this chapter deal with the inner world of our bodies:
What is the brain physiology of pain? And of persistent suffering? What is the
brain physiology of pleasure? There were few answers to these questions when I
began my research. In 1943, I asked Percival Bailey, the classifier of brain
tumors with whom I held a fellowship, what I might read. His brief
answer was: “There is nothing.”
In the mid-1950s, Jerry Bruner invited me to teach a session in his Harvard
class. He started the class by asking me what my research goals were. I answered
that I would like to know the brain underpinnings of emotion as clearly as (I
thought I knew) those that organize our perceptions. I had already begun the
relevant research at Yale, but those results had not become integrated into my (or
the establishment’s) understanding. For the most part the terms “feelings,”
“emotions,” and “motivations” were used interchangeably. “Pain” was identified
with mild electric shock that produced aversive behavior and “pleasure” with
morsels of food that evoked approach. The neurophysiology of pain was in
doubt, and no one even dared think about what the brain physiology of pleasure
might be.
I did pay heed to William James’s visceral theory of emotions despite
Walter Cannon’s and Karl Lashley’s critiques; I discussed the theory with Wilder
Penfield, who stated that he had occasionally produced heart-rate changes when
he electrically stimulated the region of the anterior insula of the brain. I knew of
von Economo’s and also of Papez’s proposals that the hippocampal-cingulate
cortex circuit was anatomically suited to deal with the circularity and persistence
of emotions. And there were the experimental results of Paul Bucy and Heinrich
Klüver, who showed that bilateral removal of the entire temporal lobe produced
tame monkeys. But during the mid-1940s, when my research began, none of this
had been put together in any orderly way—nor had we evidence to test the
anatomical proposals of von Economo and Papez, or to take the Klüver-Bucy
findings further.
The following chapters chart the voyage of discovery that I undertook to
clarify, for myself and for the scientific and lay community as a whole, what the
relationship between feelings, emotions, motivations, pain and pleasure might
be. The course was not always straightforward: there were storms and calms. In
order to recount these in any readable fashion, I start with pain and pleasure.
The stories that have unfolded about how these discoveries were made, and
to which I have had the opportunity to contribute, are as exciting as those that I
read as a child. Paul de Kruif’s Microbe Hunters told about the discoveries made
by scientists who were my father’s contemporaries and whose work helped him
classify bacteria and fungi. Also the explorations of Admiral Byrd, whose book
on Antarctica I read at least a dozen times when I was ten years old, made me
wonder what comparable territory was left for me to explore: that territory
turned out not to be “out there,” but within our heads.
I was especially pleased, therefore, when Paul MacLean, who had been my
colleague at Yale, dedicated to me the translation into English he had inspired, of
a book written by Ramón y Cajal, one of the eminent pioneers to describe the
shape of brain cells. The dedication read: “To Karl Pribram, Magellan of the
Brain.”
The inner world within which targets and intentions are pursued is referred
to as “motivation.” Sometimes the term refers simply to the fact that we are alive
and eagerly navigating our world. A more important use of the term
“motivation,” which is developed in the next chapter, is akin to its use in music
where “motif” describes a pattern that forms the theme that unites the entire
enterprise.
In describing the brain organizations that form our world within, I will
follow the theme, the “motif” that permeates this book: I compare contexts that
are shaped with those that are patterned. Essentially, the “world within” is
“shaped” by its chemistry; brain processing provides its “patterns.”

Shaping the World Within


Organic chemistry is a chemistry of shapes. Bonds, shaped by valences
(possible combinations), connect parts of molecules whose form determines their
function. Large molecules, such as proteins, can change configuration with the
displacement of an atom, such as hydrogen, to alter the protein’s function.
As a prelude to tackling the brain processes that deal with pleasure and
pain, I will describe a most interesting proposal regarding shape in
“membranes,” the substrate on which the patterns of pain and pleasure operate.
As in the case of perception and action, we are faced with a surface and a deep
process: but for the world within it is the chemistry that furnishes the deep
process and the patterns dealing with feelings and choice that form the surface
structure.
46. Changes in the conformation of an amino acid under different contexts provided by H2O. (From Languages of the Brain, 1971)

A Membrane-Shaping Process
Even water changes its properties by changing its form. A simple but
revolutionary example of the importance of shape in chemical processing comes
from a suggestion as to how water, when its H2O molecules become aligned,
becomes superconductive. My colleagues and I, as well as others, such as I.
Marshall in his 1989 article “Consciousness and Bose-Einstein Condensates,”
have proposed that the membranes of cells such as those of the stem cells lining
the cavities of the brain can organize the configuration, the shape, of water. The
membranes are made up of phospholipids. The lipid, fatty part of the molecule is
water repellant and lies at the center of the membrane, giving it structure; the
phosphorus part forms the outer part of the membrane, both on the inside and
outside, and attracts water. The phospholipid molecules line up in parallel to
compose the linings of the membrane. Gordon Shepherd, professor of
physiology at Yale University, has described the water caught in the interstices
of the aligned phosphorus part of the membrane as being “caught in a swamp.”
My colleagues, anesthesiologist Mari Jibu, physicist Kunio Yasue, the Canadian
mathematician Scott Hagen and I ventured to suggest that the water in the
swamp becomes “ordered;” that is, the water molecules line up much as they do
in a meniscus at a surface. Ordered water forms a “superliquid”: superliquids
have superconducting properties. Such properties are not constrained by the
resistance produced by ordinary materials.
In our work, we speculated that these membrane processes might serve as
substrates for learning and remembering. As such, they may do this, in the first
instance, by ordering the membranes involved in the most basic components of
learning and remembering: the processing of pain and pleasure. These are the
membranes of stem cells lining the central cavities of the brain that actually
provide a most effective substrate for experiencing pain and pleasure.

47. Phospholipid membrane of dendrites (From Neurobiology by Gordon M. Shepherd)

Core-Brain Pattern Sensors


In describing the context within which we navigate our world, scale is an
important consideration. Overall, at a large scale, as long as we are alive, as
animals we strive to move. Movement has consequences. Some consequences
make us move more, some make us move less. Movement itself uses energy, and
one of the ways we can sense the depletion of energy is that we feel hungry.
When we are hungry, we move more; having eaten, we move less.
We describe changes in the depletion and restoration of our energy in terms
of the body’s metabolism. Tracking metabolism requires sensors, sensitive to
various metabolic patterns. Some rather exciting experiences introduced me to
these sensors; they line the most interior parts of the brain.
In the early 1940s, as his resident in neurosurgery, I was assisting Paul
Bucy and his mentor, Percival Bailey (who had worked for a decade and a half
with Harvey Cushing, the first neurosurgeon in America) at the Neuropsychiatric Institute of the University of Illinois at Chicago to perform a delicate surgical procedure. Under local anesthesia, we had opened the back of our
patient’s skull to gain access to the auditory nerve with the purpose of curing her
Ménière’s disease, a persistent ringing of the ear accompanied by occasional
vertigo (spinning dizziness). Cutting into the auditory nerve reduces and often
entirely abolishes the intolerable symptoms. The brain stem (an extension of the
spinal cord that enters into the skull) and part of the cerebellum were exposed to
view. During surgery it is customary to keep the exposed brain wet by squirting
it with a dilute salt solution. The concentration of salt is the same as that in the
body fluids, including the cerebrospinal fluid that bathes the brain and spinal
cord.
On this occasion, every time we squirted the liquid onto the brain, the patient, who was awake, would complain. The brain itself is insensitive; only on
the rare occasions when it becomes necessary to pull on blood vessels does the
patient feel uncomfortable. So we were surprised that the patient had felt
anything at all. We thought perhaps there was an aberrant blood vessel in the
way, but there wasn’t. Squirt. Again the patient complained. Another squirt and
the patient felt nauseated: we certainly did not want her to retch, which would
put pressure on the exposed brain and might dislocate parts of it.
The squirt was clearly the cause of her discomfort. We asked our scrub
nurse whether she was certain that the liquid she had given us to squirt was the
correct one. She assured us that it was: she had personally found doubly distilled
pure sterile water for the surgery. “DISTILLED WATER! How could you?”
Unless the correct amount of salt is in the solution, distilled water is an irritant to
delicate tissue such as the stem-cell membranes lining the ventricles. Recall that
these membranes are dependent for their function on the ordering of their
“water,” which is imbedded in the appropriate saline solution.
The offending distilled water was quickly replaced, and the surgery
proceeded without any further hitch and with excellent postoperative results.
Bucy, Bailey and I sat down after the operation and discussed what had
happened. The brain is insensitive; the water might have caused a bit of local
swelling if we had persisted in bathing the brain with it but only if we had
removed a fine layer of covering tissue, the pia. Inadvertently, we had discovered
that the inside lining of the brain is sensitive. So often in science discoveries are
made serendipitously, by chance. Actually, I have always felt that the real
purpose of an experimental laboratory is not so much to “test conjectures or
hypotheses” but to furnish the opportunity to make unexpected and unintended
observations.

Bailey recalled that he had recently seen a patient who had been shot in the
head. The bullet was lodged in one of his ventricles (“little stomachs”) filled
with cerebrospinal fluid. The patient’s complaint: Sometimes he would wake in
the morning in a foul mood. At other times he would awaken cheerful. He had
learned that he could change his mood by positioning his head: if, when he first
arose, he hung his head face down over the side of the bed, he would become
cheerful in a few minutes; when he tilted his head back, his black mood would
return. Sure enough, X rays showed that by tilting his head forward or backward,
the patient could change the location of the bullet, which would “drift” inside the
ventricle.
The question was whether or not to try to remove the bullet. The answer
was no, because the surgery would require cutting into the ventricle to get the
bullet out. Even minute temporary bleeding into the ventricle is extremely
dangerous, usually followed by a long period of unconsciousness and often
death. Bleeding elsewhere in the brain, once stopped, causes little if any injury.
The brain tissue repairs itself much as would such an injury in the skin.
Bailey remarked that the symptoms shown by these two patients indicated
that our ventricles, which are lined with stem cells with their undifferentiated
membranes, could well be sensitive. The nervous system of the embryo develops
from the same layer of cells as does our skin, the surface part of the layer folding
in to become the innermost layer of a tube. It should not, therefore, have
surprised us that this innermost part of the brain might have sensitivities similar
to those of the skin.
With the patient’s permission, we tested this idea in a subsequent surgical
procedure in which one of the ventricles of the brain lay open and therefore did
not have to be surgically entered. When we put slight local pressure on the
exposed ventricle, the patient felt it as an ache in the back of his head; when we
squirted small amounts of liquid, a little warmer or colder than the normal body
temperature, onto the lining of the ventricle, the patient also experienced a
headache. (Pressure and temperature are the two basic sensitivities of our skin.)
We then proceeded with the intended surgical procedure, cutting the pain tract in
the brain stem—a cut that the patient momentarily experienced as a sharp pain in
the side of his head—with excellent postoperative results. (Actually the patient’s
head jumped up when he experienced the pain, a jump that fortunately made
Bucy cut in just the right place. But we decided that next time the actual cut
would be done under general anesthesia.)
Percival Bailey had classified brain tumors on the basis of their
development from this innermost layer of cells lining the ventricles, the
“ependyma,” stem cells that have the potential of forming all sorts of tissue.
Bailey had spent a whole year with me and another resident, peering down a
microscope that was outfitted with side-extension tubes so we could
simultaneously see what he was seeing. He would tell us fascinating stories
about his work in Spain studying with del Río Hortega and how Ramón y Cajal,
Hortega’s chief, was annoyed with Hortega for studying stem cells instead of
nerve cells. In Chapter 2 of my 1971 book Languages of the Brain, and in
Chapter 9 of this book, I summarized what I had learned—bringing to bear what
Bailey had taught us on how our brain can be modified to store experiences as
memories.

The Core-Brain
At this juncture, a brief exploration of functional brain anatomy will help.
The names of the divisions of the nervous system were established over the
years prior to World War II. During these years, the nervous system was studied
by dividing it horizontally, as one might study a skyscraper according to the
floor where the inhabitants work. The resulting scheme carried forward the basic
segmental “earthworm” plan of the rest of the body (in many instances inappropriately, in my opinion, since the cranial nerves are not organized according to that segmental pattern).
The tube we call the spinal cord enters our skull through the foramen
magnum (Latin for “big hole”). Within the skull, the brain stem becomes the
hindbrain. As we move forward, the hindbrain becomes the midbrain. Still
further forward (which is “up” in upright humans) is the forebrain. The forebrain
has a part called the thalamus (Latin for “chamber”), which is the way station of
the spinal cord tracts to the brain cortex (Latin, “bark,” as on trees.) The most
forward part of the brain stem is made up of the basal ganglia. During the
development of the embryo, cells from the basal ganglia migrate away from the
brainstem to form the brain cortex.

A New View of the Brain


After World War II, techniques became available to study the nervous
system from inside out. Most skyscrapers have a core that carries pipes for water
and heat, a set of elevators, and an outer shell where people work. At UCLA
Horace (Tid) Magoun and Don Lindsley studied the midbrain by electrically
destroying its core. In Montreal, Wilder Penfield and Herbert Jasper studied the
core of the thalamus by electrically stimulating it. Concurrently, at Yale, in John
Fulton’s laboratory, using neurosurgical techniques, I was able to reach the inner
aspects of the cortex, exploring them by electrical and chemical stimulation, and
by making surgical removals.

50. The segmental pattern of the human body


51. The segmental pattern of the spinal cord and peripheral nerves

These new methods, pursued on three fronts, produced new insights into
what we came to call the core-brain. From the midbrain forward there is actually
a continuous band of cells that had earlier been divided into separate structures.
In the midbrain, the gray matter (brain cells) around the central canal connecting
the ventricles is called the “periaqueductal gray” because, at that point, the canal
inside the brain stem forms an aqueduct that leads from the brain stem ventricles
(that we found to be so sensitive) to the ventricles of the forebrain (where the
bullet had lodged in Bailey’s patient.) Shortly, we’ll see how important this gray
matter is to our understanding of the way we process pain. The periaqueductal
gray extends seamlessly forward into the hypothalamic region (which is the
under part of the thalamus, its upper part being called the dorsal thalamus.) The
gray matter of the hypothalamic region, in turn, seamlessly extends forward into
the septal region where the brain’s thermostat is housed.
The sensors that control the variety of behaviors that regulate metabolism
are located in this band of cells bordering the ventricles. The gray matter that
makes up the band regulates and makes possible the modification of a variety of
behaviors such as drinking, eating, motor and sexual activity.
Control is homeostatic; that is, the behavior is turned on and turned off
according to the sensitivity of the sensors. This sensitivity is set genetically but
can be modified to some extent by our experience.
52. (From Brain and Perception, 1991)

53. The median forebrain bundle (afferent and efferent), highlighting the concentric pattern of the brain. (From Brain and Perception, 1991)

The thermostats that control the temperature of our buildings work in this
fashion. A sensor in the thermostat usually consists of two adjacent metal strips
or wires that expand and touch, closing an electrical circuit as the room
temperature increases. When the temperature in the room drops, the metal
shrinks, opening the electrical circuit. Opening and closing the circuit can turn
on an air-conditioning unit or a furnace. The homeostats in the core-brain turn
on, and turn off, behaviors such as drinking and eating, which are controlled by
their respective sensors. Roughly, drinking is controlled by the amount of salt in
the blood and eating is controlled by the amount of sugar in the blood. More on
these topics shortly.
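The on/off logic of such a homeostat can be written down in a few lines. This is a minimal sketch (the thresholds, rates and units are invented); only the hysteresis, turning on above one threshold and off below another, mirrors the two-strip thermostat just described, here applied to drinking driven by blood salt.

def simulate(steps=40):
    salt = 1.00                    # blood salt, arbitrary units; 1.00 = set point
    drinking = False
    log = []
    for _ in range(steps):
        if not drinking and salt > 1.05:
            drinking = True        # the circuit "closes": drinking turns on
        elif drinking and salt < 0.98:
            drinking = False       # the circuit "opens": drinking turns off
        salt += -0.03 if drinking else 0.01   # drinking dilutes the blood
        log.append((round(salt, 3), drinking))
    return log

for salt, drinking in simulate()[:15]:
    print(f"salt={salt:5.3f}  drinking={'on' if drinking else 'off'}")

Run for enough steps, the simulated behavior cycles on and off around the set point, which is the signature of homeostatic regulation.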
Ordinarily the loci of control over these behaviors are attributed to
processing “centers.” There is separation between these loci of control, but
within the controlling region there is considerable distribution of function. Here
are some examples:
The cells located around the central canal in the hindbrain control
breathing, heart rate and blood pressure. When I arrived at Yale one of the
graduate students wanted to know whether the “respiratory center” on one side
of the hindbrain controlled breathing on the same side or the opposite side of the
body. Under general anesthesia, he made a cut into the “respiratory center” of a
rat’s brain, but nothing happened to the animal’s breathing. Perhaps the cut was
inadequate. A week later he made another cut; deeper. Again nothing happened.
After a week, yet another cut, and still no change in breathing. He then tried
cutting into the breathing “center” on the other side of the brain stem. By this
time he was surprised that the animal had survived so much surgery; his cuts led
a zigzag path through the length of the hindbrain. Conclusion: control over
breathing is distributed over a region among cells that do not form a “center.”
By using a variety of chemical tagging and electrical stimulation
techniques, brain scientists have found that such distribution of function is
characteristic of the entire band of gray matter surrounding the ventricles. In the
hypothalamic region of rats, stimulation of one point would result in wiggling of
its tongue. Stimulation of an adjacent point led to swallowing, the next adjacent
point to penile erection, and the next to swallowing again. The cells in the
hypothalamic region that regulate the amount we eat are strung out almost to the
midbrain, and those that are involved in regulating our basal temperature extend
forward into the septal region. To talk of “centers” in the brain within which all
cells are engaged in the same process is therefore misleading. Always there are
patches of cells and their branches that are primarily devoted to one function
intermingled to some extent with those devoted to another function. Thus, there
are systems devoted to vision within which there are cells that are tuned to parts
of the auditory spectrum. With respect to the regulation of internal body
functions, processes such as chewing and swallowing often intermingle with
cells that regulate ovulation and erections. Such an arrangement has the virtue of
providing flexibility when new coalitions among functions are called for.

Odd Bedfellows
We return now to the sensitivities shared by the tissue of the spinal canal
and ventricles with those of the skin. The sensitivities of the skin can be
classified into two sorts: touch and pressure make up one category, while pain
and temperature make up the other. These two categories of sensitivities are
clearly separated in our spinal cord. The nerves that convey the sensations of
touch and pressure run up to our brain through the back part of the cord; those
that convey pain and temperature run through the side of the cord. This
arrangement allows surgeons to cut into the side of the cord in those patients
who are experiencing intractable pain, in order to sever the patients’ “pain and
temperature” fibers without disturbing their sensation of touch. There is no way
to isolate pain from temperature fibers, however. Thus, cutting into the spinal
cord and severing the pain/temperature nerves eliminates both pain and
temperature sensibility. As noted above, the human body, its vertebrae and
nervous system, is formed in segments, like those of an earthworm; thus
sensibility is lost only in the segments below the cut but not above it. This
segmentation becomes manifest when shingles develop: the itch and pain are
usually restricted to one body segment.
54. Example of core-brain receptor sites in rat brain: striped areas indicate uptake of labelled estrogen
(female sex hormone) molecules. Lateral structure showing uptake is amygdala. (From Stumpf, 1970)

Pain and temperature—what odd bedfellows! The story of how I came to understand how these fellows became bedded together follows in the remainder
of this chapter. To begin, brain scientists were puzzled for a long time as to how
we sense pain—because we possess no receptors that are specific to sensing
pain. For touch, pressure, cold and hot there are specialized nerve endings that
act as receptors—but even for these sensations the specialized receptors are
unnecessary: we can feel these categories of sensation in parts of the body where
such specialized receptors are absent. John Paul Nafe, an eminent professor of
physiology working at Florida State University during the mid-20th century,
suggested, on good evidence, that the nerve endings in the skin could be
stimulated in two ways: the first is produced by depressing the skin, hence
stretching the skin’s nerve net laterally, which gives rise to the sense of touch or
pressure, depending on the magnitude of the stretch. A second gradient from
depth to surface—established by contraction and dilation of the skin’s blood
vessels—provides the input for sensing temperature. Nafe proposed that when
the depth-to-surface gradient is activated from outward to inward, as when
radiation comes from a heat source such as the sun, we feel warm; conversely, as
we radiate out to our environment, activating the gradient from inward to
outward, we feel cool.
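Nafe’s proposal lends itself to a toy encoding; the thresholds and sign conventions below are invented for illustration and make no claim about actual skin physiology. Lateral stretch codes touch versus pressure by its magnitude, and the direction of the depth-to-surface gradient codes warm versus cool.

def nafe_sensation(lateral_stretch, depth_gradient):
    """lateral_stretch >= 0; depth_gradient > 0 means flow from outside in."""
    if lateral_stretch > 0.5:
        mechanical = "pressure"        # large stretch of the skin's nerve net
    elif lateral_stretch > 0.0:
        mechanical = "touch"           # slight stretch
    else:
        mechanical = None
    if depth_gradient > 0:
        thermal = "warm"               # radiation arriving from a heat source
    elif depth_gradient < 0:
        thermal = "cool"               # the body radiating out to the environment
    else:
        thermal = None
    return mechanical, thermal

print(nafe_sensation(0.2, 0.0))        # light indentation: ('touch', None)
print(nafe_sensation(0.8, -0.3))       # strong press while radiating heat out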
But at extremes the distinction between sensing pain and sensing
temperature disappears: with exposure to extreme hot or cold environmental
situations, feelings of pain and even pleasure merge.
For example: I had an experience shared by many Arctic and Antarctic
explorers. I was trekking with my companion to my cottage in deep drifts of
snow, having had to park my car about a mile away from home. I get asthma in
cold weather, and I was wheezing away. I soon felt tired and decided to rest for a
moment, so I lay down in the snow. What a relief! I soon felt nice and warm and
cozy despite an external temperature well below zero, and would have dropped
off to sleep if my companion had not noticed that I had stopped. She kicked me
and got me up and going.
At the extremes, our experience of warmth and cold become
indistinguishable. Possibly this effect is related to the control of pain by our
endorphins (endogenous morphine-like substances), the same chemicals that
account for “second wind” during extended exertion. I’ll have more to say on
this topic shortly.

The Search for “Pain in the Brain”


Where, within the brain, do the spinal cord nerve tracts that transmit our
sensations of pain and temperature end up? One tract ends in the parietal lobe of
the brain, the same place where the nerves that transmit our sensations of touch
and pressure end up. Knowing this, in a few cases surgeons removed the cortex
of the parietal lobes in an attempt to rid patients of intractable pain. This failed to
work. On the other hand, when the most forward parts of the frontal lobes were
removed or cut into, as in the lobotomy procedure, the intrusive persistence of a
patient’s pain, the suffering, always disappeared.
During the late 1940s, I came to the conclusion that persistent sensations of
pain/temperature must reach the frontal lobes by way of a tract separate from the
one that reached the parietal lobe. I therefore undertook a series of studies (the
last of which was published in 1976) to discover, not only the pathways by
which pain sensations reached the frontal cortex of our brain, but how that cortex
controlled our experience of pain. An immediate difficulty to be faced was the
fact that I did not use painful stimulation in my animal experiments. I resolved
this by assuming that, if I studied temperature, pain’s odd bedfellow, I would be
a long way toward unlocking the secret of frontal lobe control of pain.
Pain is not a single sensation, even at the skin. There is “fast” pain and
“slow” pain. Fast pain is discrete as when we feel a pinprick. We can tell where
we were pricked and when the prick occurred. By contrast, slow pain is hard to
locate in either place or time. It takes us longer to feel the slow pain, thus the
name “slow.” The difference between fast and slow pain is due to a difference in
the size of the nerve fibers that transmit the sensation from the skin to the spinal
cord. The nerve fibers that transmit fast pain are large; the nerve fibers that
transmit slow pain are very small. Interestingly, the small nerve fibers also
mediate temperature sensations, fibers that become even more intertwined with
pain fibers in the spinal cord.
The difference between the two types of pain is classically demonstrated in
patients who have a form of syphilis called tabes dorsalis, which can become
manifest long after the initial infection. These patients have lost their fast pain
but retained their slow pain because their large nerve fibers have been damaged
where they enter the spinal cord.
The distinction between fast and slow pain is maintained in the brain stem
and the brain: the fast pain tracts end in the parietal lobe of the brain, which left
unsettled the issue as to whether slow pain reaches the frontal lobe. A clue as to
where these slow pain-related systems might be located is that in the brain stem,
surrounding the connection between the ventricles, lies a system (the
periaqueductal gray) that is sensitive to morphine-like substances, the
endorphins, which suppress the experience of pain.
My research therefore had to concentrate on finding evidence that forebrain
systems might be involved in the experiencing of pain—and to use the relation
between pain and temperature as the tool for such a search.
To home in on the parts of the brain that might be involved in sensing temperature, I initially trained monkeys that had been used in a variety of other studies. I
taught the monkeys to choose the colder of two test tubes to receive a peanut. I
placed the test tubes either in buckets of ice or of hot water, randomly varying
the cold and the hot between trials. I found that monkeys with damage to the
bottom of the frontal lobe and the medial surface of the temporal lobe were
impaired in making the choice. However, the failure to choose was not equally
severe in all the monkeys and making the choice took a variable period of time.
With the help of a postdoctoral student and others with technical expertise
in the laboratory—in the acknowledgments in the publication we noted that
since the experiment had taken over five years to complete, the whole laboratory
had had a hand in its execution at one time or another—we set out to repeat my
earlier experiments with better control over our stimuli: a metal plate was cooled
or heated between trials in a random order. The monkeys were trained to press
one or the other of two panels: one if the plate felt cold, the other if the plate felt
warm. As a control, an identical visual task was set up in which the monkeys
were shown a “+” or a “#” and had to press one panel when they saw a + and the
other panel when they saw the #. We then placed electrodes on the surface of the
parietal lobe and on the bottom of the frontal cortex and the medial part of the
temporal lobe—the areas that had given promise of being involved in the earlier
study.
The experiment gave excellent results: when we disrupted brain function by
stimulating the frontal and temporal cortex and the relevant pathways to these
parts of the brain, the monkeys completely failed to make the temperature choice
but performed perfectly on the visual one.
They again performed well on the temperature choice when the stimulation
was turned off. Nor did stimulation of the parietal lobe have an effect on the
performance of the monkeys, a result that supported the findings of other studies
that had shown that only very fine differences in temperature—not gross
differences as used in our study—were affected by surgical resection of the
parietal cortex of monkeys. We concluded that perhaps slow pain, as indicated
by its odd bedfellow temperature, reached the frontal cortex and not the parietal
cortex of the brain.
While I was attaining some insight into the brain systems entailed in
controlling the persistence of intractable “slow pain”—the pain that is alleviated
by frontal lobe surgery—postdoctoral fellows in Don Hebb’s department at
McGill University were discovering what constitutes the essential brain process
involved in our experience of pleasure.

Pleasure: Self-Stimulating the Brain


In the 1950s, Peter Milner and James Olds, working in Don Hebb’s
laboratory at McGill in Montreal, Canada, stumbled on an exciting experimental
result. They wanted to study whether they could influence the rate of learning by
electrical stimulation of a part of the brain stem. The rat was to learn to push a
hinged platform placed on the floor of its cage whenever the brain stimulus was
turned on. Within a few minutes the experimenters were surprised to see that the
rat was pushing the platform without the experimenters turning on the electrical
stimulation to its brain. A short circuit had occurred that hooked up pushing the
platform with the electrical stimulation. As Milner and Olds were searching for
the short circuit, they noticed that the rat was pushing the platform more and
more often. Serendipitously, they had discovered that the rat would push a
platform in order to electrically stimulate its brain. The brain stimulation was
“rewarding,” and perhaps felt “pleasurable,” to the animal.
When Milner and Olds looked at the brain of the rat to see where they had
implanted their electrode, they found that it was much farther forward than they
had planned. Instead of being in the brain stem, the electrode had been placed in
a tract that courses along the sides of the centrally placed ventricle of the brain.
A few more experiments showed this tract and the surrounding tissue to be the
locations from which electrical self-stimulation of the brain could be obtained.
Some months after Milner and Olds made their discovery, Jim Olds, his
wife, and I were having lunch together in Montreal. I took the opportunity to
suggest that it would be important to map all of the brain sites from where self-
stimulation of the brain could be obtained. I was eager to know whether these
sites might include sites in the olfactory (by now called the limbic) system of the
brain. Jim, who had obtained his degree in sociology from Harvard and was
delving into brain science for the first time, said that he was not up to such a
tedious task. His wife Nicki, however, shared my view that the mapping would
be important. She had a doctorate in philosophy and in the history of science and
she was eager to get into the laboratory. With Jim’s and my encouragement, she
volunteered to do the experiments.

55. Places in a rat’s brain where self-stimulation is elicited. (From Brain and Perception, 1991)
The results of her mapping showed that sites of self-stimulation were
located not only in the tract adjacent to the core ventricle gray matter but also in
what is the rat’s olfactory brain. Further, the amount of effort the rat would
expend (how fast and for how long the rat would push the panel) on self-
stimulating was a function of the three systems that I had recently outlined in a
1954 paper, “Functions of the ‘Olfactory’ Brain.”
The discovery of self-stimulation by Olds and Milner was celebrated as
locating the “pleasure centers” in the brain. When a psychiatrist in New Orleans
electrically stimulated these brain sites in humans, the patient stated that he or
she was experiencing a pleasant feeling. One patient repeatedly professed deep
love for the psychiatrist who had turned on the stimulus—but the feeling was
present only during the stimulation!
By this time I had arrived at Yale, in John Fulton’s Department of
Physiology, where another set of experiments was under way in which an
experimenter turned on—and left on—an electrical stimulus in the same brain
locations as in the Olds and Milner experiments. The rats would turn off this
ongoing electrical stimulus by pushing a panel, but the electrical stimulation
would turn itself on again after a brief pause. In this experiment, the rat would
dutifully keep turning off the stimulation. This was the mirror image of the turn-
on self-stimulation effect. The location of these turn-off effects was the same as
those that produced the turn-on effect.
I suggested to our graduate student who was doing the experiment that the
stimulation be adjusted so that, in the turn-on experiment, the stimulus stays on
after being turned on, to see if the rat would turn it off. It did. And when the
electrical stimulation remained off, the rat turned it on again. Jokingly we called
the graduate student’s thesis “Sex in the Brain!”
On the basis of this result, we can interpret the self-stimulation experiments
as “biasing,” that is, modifying the context within which the naturally occurring
pleasure is processed. Nicki Olds showed that this is an excellent explanation as
she explored the interaction between self-stimulation with food and water intake
and sexual behavior. She showed that “pleasure” is ordinarily self-limiting,
although, under certain conditions, it can become “addictive.” Jim Olds pointed
out that the danger of potential addiction is why there are so many social taboos
that aim to constrain the pursuit of pleasure. (I’ll have more to say about the
conditions that lead to addiction in Chapter 19.)
Ordinarily, pleasure has an appetitive phase that ends in satiety. The finding
of how the brain process works to produce an appetitive and a satiety phase has
important clinical implications. Pleasure is self-limiting unless you trick the
process, as in bulimia, by short-circuiting it: the bulimic person artificially
empties his stomach so that the signals that usually signify satiety, such as a full
stomach and absorption of fats, are disrupted. Is the bulimic’s short-circuiting
process a parallel to that which occurs in self-stimulation of the brain? Or, as in
anorexia, can culture play the role of "keeping the stimulation on" by way of a person's self-image, so that the person's brain is always set to experience only a "turn-off" mode? We need to find noninvasive ways by which we can change the
settings in the brains of the persons with these eating disorders.
Evidence that further associates pleasure with pain came from experiments
that showed pain to be a process in which the experience of pleasure can be the
antecedent of the experience of pain. The experimental findings that led to this
conclusion are taken up shortly.
To summarize what we have covered so far in this chapter: Our world
within is formed by stimulation of a variety of sensors lining, and adjacent to,
the central canal of the nervous system, a canal in which the cerebrospinal fluid
circulates. In warm-blooded animals, including humans, the sensors are linked
to one another through the regulation of behaviors such as drinking, eating and
motor activity that regulate our basal temperature: thus the explanation for the
odd pairing of pain and temperature in our spinal cord and brain is that the
regulation of temperature is the basis of “how” we come to experience pleasure.
The following are examples of what were, to me, exciting discoveries about the sensitivities of these "core-brain" sensors and about how those sensors regulate pleasure and pain.

Brain and the Regulation of Thirst


I remember that while I was in college, I asked my father what makes us thirsty, other than dryness of the mouth and tongue. How do we know immediately just how much to drink, how many swallows to take to quench our thirst—for instance, after a tennis match?
The discovery of how the brain regulates thirst is one of those delightful
sagas that occasionally intrude among the more disciplined scientific enterprises.
There is a disorder known as diabetes insipidus in which the person urinates
huge amounts—and, as a consequence, is continually thirsty. This disorder is
caused by an imbalance in the secretion of a pituitary hormone. The question
arises: What ordinarily controls the secretion of this hormone?
In the late 1880s, my father had done his medical school thesis on the
control of the pituitary gland by cells in the hypothalamic region. When I first
found this out, I was surprised that the hypothalamic control over the pituitary
secretions was already known. But the specific manner of how this control
operated was not known even in the mid-1940s. After World War II, Pier
Anderson and his colleagues at the Technical University of Stockholm in
Sweden devised a simple experiment that revealed the “how” of the
hypothalamic control of the pituitary gland in the production of thirst. Using a
goat, they placed a small tube into the ventricle just above the pituitary gland. In
this location, a network of blood vessels connects the hypothalamic region with
the pituitary gland. Anderson took a pinch of table salt and poured it down the
tube. The goat rushed to a nearby fountain and drank and drank and drank and
would have burst its belly had not Anderson been prepared to put another tube
into its stomach. Further research showed that the amount of drinking was
directly proportional to the concentration of salt in the ventricle.
I had to see this for myself, so on a visit to Stockholm I made an
appointment with Anderson. There was the fountain in the center of a square
surrounded by stables. I made friends with one of the goats who had a tube in his
tummy. A minute amount of salt was inserted into another small tube in the
goat’s head. The goat and I trotted off together to the fountain and the goat drank
and drank, with the water pouring out of the tube in his stomach. Many carefully
done experiments showed the “how” of the thirst process and how the exact
quantity of water drunk is determined by the proportion of salt in the sensors
lining the ventricle surrounding the tissue above the pituitary. Here was a simple
demonstration of the sensitivity and the power of control that the core-brain
sensors exert over our behavior—and, I believe, over our feelings, if my reading
of the goat’s urgency in getting to the fountain is correct.

Masochism: A “Thirst” for Pain


Another major event in my understanding of brain function came with the
discovery of the endorphins. Surprise: pain turned out to be processed in much
the same way as pleasure. As I previously noted, we have no specific receptors
for pain. But pain can be produced by excessive stimulation, of whatever sort,
even by a very bright light or by an overly loud sound. This is true also of
excessive stimulation of the tracts in our spinal cord and brain stem that relay
signals from our body and face. Pain accompanies excessive stimulation whenever that stimulation overwhelms our ordinarily ordered sensory processes.
No one has found a “center” for pain in the brain. However, during the
1970s, experiments with rats showed that when the cells of the periaqueductal
gray matter are electrically stimulated, pain produced by a peripheral stimulus,
such as pinching the skin, is turned off. At about the same time, several
laboratories reported that morphine acts selectively on the sensors in the
aqueduct adjacent to the periaqueductal gray. Next, experimenters showed that a
hormone called endorphin, whose effects were almost identical to those of
morphine, is secreted by our pituitary gland and that the amount of secretion is
sensed and regulated by cells in the hypothalamic region and in the
periaqueductal gray. These endorphins account for protection against the pain
produced by pinching the skin, as in the experiments during electrical
stimulation of the periaqueductal gray.
Further research demonstrated that the stimulation and the endorphins
influenced the part of the spinal cord where sensory input from the pinching
becomes organized. The theory that predicted this result, since confirmed, is known as the "gating theory," popularized by Patrick Wall, professor at MIT and
later at the University of London, and by Ronald Melzack, professor at McGill
University in Montreal. The theory states that our ordinary sensations are
patterned and that, when excessive stimulation overrides the patterned sensory
input, pain is experienced.
The results of this research demonstrate that pain is controlled
homeostatically, exactly as thirst, hunger and temperature are controlled! (In
fact, excitations of the brain that result in the secretion of endorphins also
produce goose bumps and the sensations of chills and thrills.) When we are tired,
our endorphins are low and small injuries can seem to hurt a lot. When we get
our second wind during running, our endorphins kick in. During the 1960s and
70s, I told my students that it is stupid to buy drugs on the street when you can
generate your own homemade supply by exercising.
An incident highlights the novelty of these discoveries about endorphins
and the homeostatic regulation of pain. Science can provide answers to puzzles
that often stump philosophers. This may hardly seem to be news to most of us—
but it needs to be mentioned because a number of philosophers and even some
scientists have stated that the mind/matter relationship is one that only
philosophers can resolve—that no matter how many experiments we do, such
experiments, or the theories we base on them, will be irrelevant to the issue of
the relationship between mind and brain. On one occasion, after my colleague
Sir John Eccles had given a splendid talk, a noted philosopher got up and made
remarks to this effect. Eccles and I were both furious. It was lunchtime, and
Eccles took me by the hand as we marched out and said, “Karl, we simply must
do our own philosophy.” Which we then did.
Shortly after the discovery of endorphins, I attended a meeting on the topic
of philosophy in medicine at the medical school of the University of
Connecticut. One philosopher delivered a very long paper in which he pointed
out how unsolvable the topic of pain is: “If one defines pain behaviorally, in
terms of escape or avoidance, what about masochism?” For over an hour he
harped on that same theme over and over. I was the discussant of the paper and
based my critique on the recent discoveries of the endorphins: One of my
students had just completed a study involving sado-masochism, demonstrating
that, at least in her sample groups, there was never any real pain experienced.
Rather the experience was an almost-pain, more like an itch. The participants in
her groups were so well attuned to one another that they never went beyond that
threshold to a stimulation that produced actual pain. True, this is not the
stereotype one gets from some of the stories written by the Marquis de Sade.
However, other, more contemporary tales, as for instance Story of O, do
resemble the careful analysis that my student came up with.
When I start to scratch a mosquito bite, histamine has already been secreted
at my local nerve endings at the site of the bite. The histamine affects the local
small blood vessels to produce an itch, almost like a slight burning sensation. I
scratch until it hurts, then I stop scratching. I’m thirsty: I drink until I feel full. I
eat until I feel sated. The parallel between the appetitive-consummatory cycle in homeostatically controlled experiences and the cycle operating in our experience of pain is evident. Scientific experiment, leavened by philosophical reasoning,
provided an answer that philosophy alone could never have attained.
The world within is governed by homeostatic patterns. These patterns are
not static but change with circumstance; thus, Waddington coined the term
"homeorhetic," which has found favor. Whether homeostatic or homeorhetic,
the essence of the patterns is a regular recurrence of an appetitive-satiety, a “go”
and “stop,” cycle. Both pleasure and pain partake of this pattern.
Much excitement, which I shared, was generated during the 1950s as these
insights about homeostasis and homeorhesis were gained. As I described in
Chapter 5, the principles that were initially formed with regard to the world
within were shown to be equally valid for sensory processing. All senses were
shown to be controlled by feedback from the brain—with the exception of primate vision, for which my laboratory set out to demonstrate such feedback over the next decades. As
noted earlier, a result of our excitement was that George Miller, Eugene Galanter
and I wrote a book published in 1960, Plans and the Structure of Behavior,
which related our insights to experimental psychology and to how computer
programs are organized.
Go- and stop-homeorhetic cycles are regulated by higher-order modulations, anticipated in Plans; these modulations are the topic of the next chapter.

In Summary
1. The brain is lined with a layer of stem cells that is sensitive to the same stimuli as is the skin from which it is derived in the embryo: pressure
and temperature—and, in addition, to changes in the salinity of the
cerebrospinal fluid bathing them.
2. Surrounding the stem-cell layer are control systems of brain cells that, when
electrically excited, turn on and turn off behavior. These systems have been
interpreted to provide comfort (pleasure) and discomfort (pain) to the
organism so stimulated, and this interpretation has been confirmed by
verbal reports from humans stimulated in this fashion.
3. A stimulus is felt as painful when any pattern of stimulation is
overwhelming and disorganized.
4. Endogenous chemicals, endorphins, act as protectors against pain. When
activated, these chemicals produce thrills and chills.
5. A stimulus is felt as pleasurable when (in warm-blooded animals) it
activates the metabolic processes anchored in the regulation of temperature.
Finding temperature regulation to be the root of pleasure accounts for the
previously unexplained intimate intermingling of the pain and
temperature tracts in the spinal cord.
Chapter 10
The Frontolimbic Forebrain: Initial Forays

Wherein I describe my initial discoveries that defined the limbic systems of the
brain and the relation of the prefrontal cortex to those systems.

Some strange commotion
Is in his brain: he bites his lip and starts:
Stops on a sudden, looks upon the ground,
Then lays his finger on his temple; straight,
Springs out into fast gait; then, stops again,
Strikes his breast hard; and anon, he casts
His eye against the moon: in most strange postures
We have seen him set himself.
—Henry VIII, Act III, Scene 2.
Quoted by Charles Darwin, The Expression of the Emotions in Man and Animals,
1872
The Yale Years
1948 was a critical year for me. I took and passed the exams that certified
me as a member of the American Board of Neurological Surgery and was
appointed assistant professor in the Departments of Physiology and Psychology
at Yale University. I thus achieved my long-sought entrance into Academe
through the good offices of John Fulton, the head of the Department of Physiology.
My relationship with Fulton began during the late 1930s while I was a
student at the University of Chicago. The first edition of Fulton’s classic text on
the Physiology of the Nervous System was published and made available the
weekend before my final exam in a course in physiology given by Ralph Gerard.
I was mesmerized by Fulton’s text: it laid out what was then known, so clearly
and thoroughly, that I could not put the book down. On Monday, Gerard posed a
single question for the final: “Discuss the organization of the nervous system.” I
filled blue book after blue book, basing my answer on Fulton’s text, until the
time allotted was up. Gerard told me that it was the best answer he had ever
received on any examination. Fortunately, he had not yet seen the Fulton text.
My next encounter with Fulton was in 1943, while I was an intern-resident
in neurosurgery with Paul Bucy. We often discussed the scientific aspects of the
surgical operations we were engaged in, and Bucy told me that if I ever wanted
to do research, Fulton’s department at Yale would be the place to go.
Therefore, while in practice and working in Karl Lashley’s laboratory in
Jacksonville, Florida, I expressed my desire to get to Yale (which owned the
laboratory). Lashley wrote letters of recommendation to the dean—and received
no answer. (Lashley at the same time was trying to get Niko Tinbergen, who later
received the Nobel Prize, placed at Harvard, with similar negative results.) And I
was asking Bucy and Percival Bailey at the Illinois Institute of Neurology and
Psychiatry to write letters of recommendation. Bailey’s reply was succinct as
was his wont: “Karl, if you are thinking of obtaining a job like mine, your
chances are slim as I intend to hang onto the one I have.” (He changed his mind
a decade or so later when Warren McCulloch moved permanently to MIT, and I
was offered his research position at the Illinois Institute. But by that time my
laboratory was already well established at Stanford.)
In 1948, I took the opportunity to visit Fulton personally. Fred Mettler at
Columbia University had invited me to New York to discuss his frontal lobe
topectomy project. I had had a ward of patients assigned to me upon whom to perform frontal lobotomies. As a well-trained surgeon, I held to the edict: "Primum non nocere"—"First do no harm"—and refrained from operating. Instead, I was administering behavioral tests to the patients, and they were improving, given the personal attention involved.
(More on this in Chapter 16.) I was therefore interested in Mettler’s project, and
he was graciously inviting me to see what was going on.
I decided that while in New York I would visit Fulton in nearby New Haven
and had made an appointment to meet him in the Yale Medical School Library. I
drove up to Yale, approached the hallowed halls and felt like genuflecting as I
entered the rotunda leading to the library. Fulton greeted me warmly, and we
talked about his experiments with the chimpanzees Becky and Lucy upon whom
he had performed frontal lobectomies. I told him of my results at the Yerkes
Laboratory and that they did not corroborate his findings. He fetched two large
volumes of protocols, gave them to me to read, and said he’d meet me for lunch.
What I found in those protocols is described in detail in Chapter 16. It agreed with what I had found at the Yerkes Laboratory but totally disagreed with the interpretation that had been presented to the scientific community. I met Fulton for lunch. He asked if I had found what I
wanted—my answer was yes—but he asked nothing about the particulars of
what I had found. I remember little else of the meal after he next asked whether I would like to come work with him!
A few months later, I received a telegram stating that he had received the
funding from the Veterans Administration he had been waiting for, and could I
start in two weeks? I finished what I could and started at Yale the day before
Fulton left for London. The rest has become, as they say, history.

Electrical and Chemical Stimulation of the Limbic Forebrain
In the previous chapter, I detailed the form that brain processes take in
generating our feelings of pain and pleasure. Closely related are the brain
processes that deal with the autonomic nervous system that regulates our visceral
and endocrine functions. In this chapter, I describe my discoveries—while I was
at Yale University in John Fulton’s Department of Physiology—of the brain
systems that deal with those visceral body functions and their presumed
relationship to emotional and motivational behavior.
A persistent theme in describing how our body participates in the
generation of emotions and motivations was formulated by William James.
James suggested that when we perceive an emotionally exciting sensory event,
the brain sends a message to our body, which responds to the stimulus by
evoking different bodily (especially visceral) reactions. The emotion is due to
our sensing these body reactions. His formulation is known as the “James-Lange
theory.” Lange had proposed that the body’s blood vessels provide the signals
that we interpret as an emotion. James suggested that not only blood vessels but
also viscera, such as the gut and heart, generate these signals. Occasionally,
James also mentions the muscles as originating such signals, and more recently
Antonio Damasio, professor of neurology at the University of Southern
California, in a 1994 book, Descartes’ Error, called attention to the variety of
signals arising in the body that contribute to our feelings. He calls these signals
“somatic markers.”
One of the reasons I was attracted to Fulton’s laboratory at Yale was that the
laboratory had built an electrical stimulation device with which brain
stimulations could produce changes in blood pressure, heart and respiratory rate,
and other visceral changes. I had tried unsuccessfully to produce such changes
with an old alternating-current device called a “Harvard inductorium.” Penfield
told me that he had had some success in changing heart rate using the
inductorium to stimulate the cortex on the upper, medial surface of the temporal
lobe, but that he could not reproduce this result very often. The Yale apparatus
put out “square-wave” pulses, and the duration of the pulses could be controlled.
When that duration was adjusted to a millisecond, the visceral changes were invariably produced by cortical stimulation.
The studies accomplished at Yale using this square-wave stimulator had
shown that visceral responses could be obtained by stimulations of the cingulate
gyrus and the cortex at the base of the frontal lobe. I devised a method that exposed not only these areas of the cortex but also the "insular" cortex buried in the lateral fissure between the frontal and temporal lobes and the adjacent cortex of the front end of the temporal lobe surrounding the amygdala.
Using these techniques, assisted by a graduate student, Birger Kaada, and a
postdoctoral neurosurgical student, Jerome Epstein, I was able to map a
continuous region of cortex extending from the temporal lobe to the cingulate
cortex. Electrical stimulation of this cortex produced changes in blood pressure and in heart and respiratory rate. We recorded these changes by means of a long needle
hooked up to a rotating set of drums over which we stretched paper we had
“smoked” using a smoking candle or gas burner. After obtaining a record, we
shellacked the paper and waited until the next morning for it to dry. We called
the region from which we obtained our responses the “mediobasal motor cortex”
to distinguish it from the classical motor cortex on the lateral surface of our
brain. Today, a more memorable name would be “the limbic motor cortex.”
56. The Limbic Motor Cortex

I next turned to using chemical stimulation of the brain. During my residency in neurosurgery, I had assisted Percival Bailey, Warren McCulloch and
Gerhard von Bonin at the Neuropsychiatric Institute of the University of Illinois
at Chicago in their chemical stimulations of the monkey and chimpanzee brain,
stimulations they called “strychnine neuronography.” Neuronography outlined
brain systems that were intimately interconnected, clearly separating these from
the surrounding regions. But Bailey, McCulloch and von Bonin mapped only the
lateral surface of the brain. Using my newly devised techniques to provide
access to the medial aspects of the brain, I therefore went to work, assisted by
Paul MacLean, who had joined my project in Fulton’s department at Yale, to
map these hitherto inaccessible regions of the brain.
Neuronography demonstrated the division of the limbic forebrain into the
two systems: the amygdala system and the hippocampal system, each with
intimate connections to a part of the frontal cortex. We were able to clearly
demarcate limbic from non-limbic regions; but, as in my behavioral studies, the
“limbic” cortex included neocortical additions in the primate, additions that were
not present in the cat.

57. This figure, depicting the lateral and inferomedial aspects of a monkey's brain, shows diagrammatically the distribution of regions and segments referred to in the text. Dotted stippling denotes frontotemporal; rostroventral striations denote medial occipitotemporal; vertical striations denote medial parieto-occipital; horizontal striations denote medial frontoparietal; dorsorostral striations denote medial frontal.

The other important result of our neuronographic studies was that, though
there were major inputs to the hippocampus from the adjacent isocortex, there
were no such outputs from the hippocampus to adjacent non-limbic cortex. The
output from the hippocampus is to the amygdala and accumbens, and through
the subcortical connections of the Papez circuit. MacLean, always adept at
coining names for the results of our experiments, suggested that this one-
directional flow of signals from isocortex to the limbic cortex indicated a
“schizophysiology” in that the “limbic systems,” though open to influence from
the rest of the cortex, worked to some extent independently from the isocortex—
much as James Papez had suggested. Papez was most pleased with our results
when MacLean and I recounted them on a visit to his laboratory.

Emotions and the Limbic Brain


How then are the limbic parts of the brain involved in organizing our
emotions? My initial inquiries indicated that there was some considerable
substance to the views formulated by William James and James Papez, but at the
same time, I was at a loss when I tried to answer how these systems worked in
organizing our emotions. An answer had to come from carefully analyzing the
composition of what constitutes our emotions and motivations, and relating the
constituents to particular brain systems. When I wrote my book Brain and
Perception, I found that I needed to consider the entire brain, not just the
sensory input systems, to determine how our brain organizes our perceptions.
The same situation applies in studying emotions. For instance, we can
describe the changes in behavior following removal or electrical stimulation of
the amygdala, but when we attach names to the emotions that accompany those
changes in behavior we need to use the parts of the brain that are involved in
producing names, parts of the brain considerably removed from the limbic
systems. Thus, to attribute the taming of a monkey to a loss of “fear” may or
may not be correct, as we will see in the next chapter. A realistic view of the relationship of emotion and motivation to the organization of brain
systems, including those called limbic, can be attained only when all the facets
of the behavioral and brain systems are considered.

The Yale-Hartford Laboratories


In 1951, three years after Fulton asked me to come to Yale, he was asked to
head the medical school’s library, which he had founded and nourished, and to
give up the Department of Physiology. Though I could continue to teach at Yale,
most of the physiology department’s laboratory space was now assigned to
biochemistry. I therefore moved the bulk of my laboratory to a new research
building at a nearby mental hospital, the Hartford Retreat, renamed the Institute
of Living. I needed financial support to make the move.
David Rioch, psychiatrist and neuroscientist, as well as a dear friend, had
just become director of the Army’s Research Laboratories at the Walter Reed
Hospital in Washington, DC, and was, at that time, bringing together a group of
superb investigators to shed light on the complexities of the
brain/emotion/motivation relationship. When we started we knew practically
nothing about the limbic forebrain, certainly nothing about the relation of the
prefrontal cortex to the limbic systems and what these parts of the brain might
have to do with emotion and motivation. Rioch had worked on hypothalamic
systems that, at the time, were considered the “head ganglion of the autonomic
nervous system,” which by definition excluded cortical contributions to visceral
and emotional processing. It was my work with Bucy, supported by Fulton, that
had overturned that earlier conception.
The immediate question Rioch was addressing was set by troops in the Korean War who could hardly make it to their battle stations but who, when told in an experimental setting that orders had been changed and that they were to reverse course for R and R in Japan, would cheer and sing and march double time. What chemistry or brain processes accompanied such a change in emotion and motivation?
It was Rioch who funded my initial grant applications to pursue my
research. I asked Rioch if the Army might provide some $2,000 for automated
behavioral testing equipment. Rioch replied that this would be impossible. I
must ask for at least $20,000, since it would cost $2,000 just to do the paperwork in processing my grant application. I asked for, and received, the $20,000. For
almost a decade, this money, plus additional funding, supported a promising and
productive group of graduate students, not only from Yale but also from
Harvard, Stanford, the University of California at Berkeley, and McGill
University in Canada. The laboratory at the Institute became a Mecca for
students intensely interested in the relationship between brain and behavior.

Functions of the Olfactory Brain


In 1954, with the help of my then Yale graduate student Larry Kruger, I
published a paper titled “Functions of the Olfactory Brain.” We did not, at the
time, use the term “limbic systems” for this constellation of structures—that
formulation was to be achieved shortly. I received over 2,000 reprint requests for
that paper.
We were able to confirm, on the basis of the results of electrical stimulations, the division of the "olfactory brain" (beyond the olfactory sensory structures per se) that had been attained with chemical neuronography into two distinct segments: 1) those parts that receive a direct input from olfactory structures (which lie just behind the nose and receive an input from the receptors in the nose) and 2) a set of structures that receives input from the first segment. The first segment is composed of a system that centers on the amygdala, which has a cortical component; the second segment is composed of the circuit that includes the hippocampal and cingulate cortexes.
In our paper, Kruger and I noted that in primates, including humans, a band of new six-layered neocortex immediately surrounds the allocortex in the neighborhood of the amygdala and the cortical systems adjacent to it. In our classification we were following the lead of Filimonov, a Russian neuroanatomist whom I visited in Moscow to learn firsthand why he had included these newer components in what he called "juxtallocortex" (juxta-, "next to"). Juxtallocortex, more recently renamed "periallocortex," is new cortex, appearing for the first time in primates. As a result, the use of the name "neocortex" to distinguish its functions from those of the allocortex is inappropriate.
This detail is important because the popular idea that our brains are made
up of old cortex that organizes more primitive behaviors such as emotions and
motivations—as opposed to new cortex that serves cognitive functions— doesn’t
hold up. Thus, many of our social institutions that are based on this emotions-
versus-cognitions dichotomy badly need revamping. I’ll have much more to say
about this in Chapter 17.
My behavioral, electrical and chemical stimulation experiments undertaken
at Yale corroborated the anatomical considerations that had led to the inclusion
of juxtallocortical formations as intimate parts of the allocortex adjacent to it.
The second segment of the olfactory brain was the hippocampal-cingulate
system. This system had been singled out in the 1920s and 1930s by the eminent
Viennese neurologist Constantin von Economo and by the neuroanatomist James W. Papez in Columbus, Ohio, as generating our emotional feelings and their
expression. Their reasoning was based purely on anatomical connections: Fibers
from the hippocampus connect to the septal and hypothalamic region, which in
turn connects to the thalamus, whose fibers reach the cingulate cortex which
then connects back into the hippocampus. This circuit is now named after Papez.
The Papez circuit was thought to be involved in emotions because our emotions
often seem to go round and round like a self-sustaining circuit. The behavioral
idea is a good one: during the 1970s, I used to give lectures on “The Hang-up
Theory of Emotions”—that emotions are like getting stuck in a loop. But
providing evidence for the anatomy-based belief that the Papez circuit is
involved in generating emotions has proved to be considerably more elusive.
My research on monkeys, which I had started at the Yerkes Laboratory of
Primate Biology in Florida and was continuing at Yale, showed that the
amygdala, not the Papez circuit, is involved in taming and changes in sexual
behavior that follow a temporal lobe removal. Nor did hippocampal removals in
humans produce any effects on the experiencing or expressing of feelings.
Furthermore, removals of the hippocampus have devastating effects on a specific
kind of memory, effects that I’ll take up below and in Chapter 15.
However, in totally different sets of experiments the hippocampus has been
shown to be intimately involved in processing stress by regulating the pituitary-
adrenal hormonal system, a finding that needs to be incorporated in any attempt
to understand the role of the limbic systems in organizing emotions.

Anatomical Fundamentals
A brief review of the brain anatomy relevant to the processes organized by
the frontolimbic forebrain can be helpful at this point:
We can visualize the central nervous system as a long tube which is
fabricated during our embryonic development from the same layer of cells that
elsewhere makes up our skin. This tube is lined with stem cells and within the
tube flows the cerebrospinal fluid. The lower part of the tube makes up our
spinal cord. Where the tube enters our skull through a big hole—in Latin, foramen magnum—the tube becomes our brain stem. The hind part of the brain stem is called the "hindbrain"; the mid part of the brain stem is the "midbrain"; and the forward part is the "forebrain." The most forward part of the forebrain is made up of the "basal ganglia."
The basal ganglia can be divided into three components: An upper, a
middle, and a limbic (from the Latin, limbus, “border”). The upper component is
made up of the caudate (Latin, “tailed”) nucleus and the putamen (Latin,
"pillow"). The middle component is the globus pallidus ("pale globe"). The
limbic component is made up of the nucleus accumbens (Latin, “lying down”)
and the amygdala (Greek, “the almond”).
The brain cortex makes up two hemispheres that have developed in the
embryo by a migration of cells from the basal ganglia. Six layers of cells migrate
from the upper basal ganglia to make up most of the cortex technically called the "iso" (Greek for "same") cortex. A more common name for this expanse of cortex is "neocortex" (new cortex), but as I described earlier in this chapter, this more common name is misleading in some of its usages. The remainder of the cortex is called the "allo" (Greek for "other") cortex. The allocortex is formed
when cells from the amygdala and other limbic basal ganglia migrate. These
migrations sometimes do not occur at all; but for the most part, they make up a
three-layered cortex. Sometimes, however, there is a doubling of three layers as
in the cingulate (Latin, “belt”) cortex.

Mistaken Identities
The limbic basal ganglia and their allocortical extensions are called
“limbic” because they are located at the medial limbus (Latin, “border”) of the
hemispheres of the forebrain. When I started my research in the late 1940s, this
term was used mainly for the cingulate cortex lying on the upper part of the
inner surface of the brain’s hemispheres. But also, Paul Broca, the noted 19th-
century neurologist, had discerned that the allocortex formed a ring around the
inside border of the brain’s hemispheres. However, until the 1950s, the
allocortex was generally referred to by neuro-anatomists as the “olfactory” or
“smell” brain because it receives a direct input from the nose.
Today, what was then called the “olfactory brain” is called the “limbic
system” and “everyone knows” that our emotions are organized by our limbic
brain. But what constitutes what neuroscientists call the limbic system varies to
some considerable extent, and the relationship of the various parts of the limbic
system to emotion is, at times, considerably distorted. Thus the popular view has
built-in difficulties. The amygdala is only one structure among those loosely
thought of as the limbic forebrain. In fact, originally the amygdala was not
included at all in the limbic circuitry. Rather, two sources converged to be
baptized by Paul MacLean as the “limbic systems.” As noted above, one source
was Paul Broca who had discerned a rim of cortex around the internal edge of
the cerebral hemispheres to be different from the rest of the cerebral mantle. This
cortex appeared paler and was later shown to have only three layers rather than
six. He called this rim of cortex "le grand lobe limbique."
58. Upper and limbic basal ganglia

59. Cortical projection from the basal ganglia

The other source, described by J. W. Papez, consisted of the hippocampus, its downstream connections through the septal region to the mammillary bodies
of the hypothalamus, thence to the anterior nuclei of the thalamus which is the
origin of a projection to the cortex of the cingulate gyrus, which had been
labeled the "limbic" cortex. The cingulate cortex, in turn, feeds back into the
hippocampus. Papez reasoned that emotions tend to go round and round and this
anatomical circuit does just that.
MacLean’s limbic system encompassed Broca’s cortex and Papez circuitry,
and originally did not include the amygdala. It was not until MacLean joined my
project in John Fulton’s department at Yale that it became imperative on the
basis of my results to include the amygdala in any circuitry that presumably
organizes our emotions. This inclusion totally changed what had hitherto been
thought of as the limbic forebrain.
The Prefrontal Cortex
In addition to experimentally defining the limbic systems, my research
established the essential relationship of the anterior frontal cortex, usually called
the prefrontal cortex, to those limbic systems. The finding, noted earlier, that a
mediobasal limbic motor cortex surrounds the prefrontal cortex suggested that
such a relationship existed. However, the definitive study came from my analysis
of the thalamo-cortical input to the prefrontal cortex. The thalamus, the halfway
house of sensory input to the cortex, is a three-dimensional structure. The brain
cortex is essentially a two-dimensional sheet of cells and their connecting fibers.
Anatomical projections that connect thalamus and cortex must therefore “lose,”
enfold, one dimension. This means that one thalamic dimension disappears in its
projection to cortex. For most of the forebrain, the dimension “lost” is what in
the thalamus is a medial to lateral dimension; front-to-back and up-and-down
dimensions are maintained in the projection so that the thalamic organization and
the cortical terminus of the thalamic input remain essentially unchanged.
Not so for the thalamic projections to the limbic and prefrontal cortex. The
thalamic dimension “lost” in the projection is the front-to-back dimension. For
instance, a long file of cells reaching from the front to the back of the thalamus
projects to a point on the tip of the prefrontal cortex in primates.
This result defines an essential difference between the frontolimbic
forebrain and the posterior convexity of the brain. As the next chapters will
show, this difference is as important for processing the brain/behavior
relationship as that between the right and left hemispheres of the brain.

In Summary
1. The idea that the neocortex is responsible for organizing our cognitive
reasoning, while the older cortical formations are solely responsible for
generating our baser emotions and motivations, does not hold up. In
primates, including us, the neocortex adjacent to the older cortex is very
much involved in organizing the visceral and endocrine processes that
presumably underlie (or at least accompany) our emotions and motivations.
This finding calls into question the presumed primitive nature of our
emotions and motivations.
2. The olfactory systems of our brain, now called the “limbic systems,” are
composed of two distinct systems. One system is based on the amygdala, a
basal ganglion, the other on the hippocampus.
3. The basal ganglia of the brain can be divided into three constellations: an
upper, a mid and a limbic. This chapter reviewed evidence that the limbic
basal ganglia—the amygdala, accumbens and related structures—are
involved in organizing visceral and endocrine responses presumably critical
to emotional experience and behavior.
4. But direct evidence of the involvement of limbic basal ganglia in emotions
had, as yet, not been obtained. The next chapters describe evidence that
leads to the conclusion that the limbic basal ganglia are involved in
emotions and that the upper basal ganglia—the caudate nucleus and the
putamen—are involved in organizing motivational behavior.
5. The hippocampus, an intimate constituent of the Papez circuit, plays an
important role in the chemical regulation of the pituitary-adrenal hormonal
response to stress. Electrical and chemical stimulations showed that there is
a large input from other parts of our cortex to the hippocampus but that its
output goes mostly to other limbic system structures.
6. However, the effects of damage to the Papez hippocampal circuit are much
more drastic on certain aspects of memory than on emotion or motivation.
This poses the question of how the response to stress, and therefore the
Papez circuit, relates to memory and, in turn, how memory relates to
emotion and motivation.

By 1958, when I moved to Stanford University, I had achieved a good grasp of the neuroanatomy and electrophysiology of what came to be known as the limbic forebrain and had written several well-received papers reviewing my findings. However, our understanding of the relationships of the different limbic structures to behavior remained rudimentary. The research that I undertook to begin to understand these relationships is reviewed in the next chapter.
Chapter 11
The Four Fs

Wherein the experimental analysis of the brain systems involved in fighting, fleeing, feeding and sex, the "Four Fs," is recounted.

First I shall do some experiments before I proceed further, because my intention is to cite experience first and then with reasoning show why such experience is bound to operate in such a way. And this is the true rule by which those who speculate about the effects of nature must proceed.
—Leonardo da Vinci, c. 1513, quoted in Fritjof Capra, The Science of Leonardo
The brain processes that organize our feelings do not “feel like” those
feelings. In early stages of science, we expect that our brains and bodies, as well
as our physical universe, resemble what we experience every day. Thus we are
pleased when we are able to correlate a specific aspect of an experience with a
specific brain system. But when we examine the patterns that make up our
experience and those that make up that brain system, the match between patterns
is not evident. Transformations occur between the patterns that initiate our
experiences and those that process the experiences. The brain processes that
organize our feelings are not as we “picture” them.
The television process provides a good example of the difference between
what we experience and the processes that make the experiences possible. At the
beginning of the process is a scene and at the end is a picture of that scene. But
that is not all there is to television (tele is Greek for “far” or “far off”). We need
to consider the steps from the broadcasting studio, where a scene is transformed
into a code that can be transmitted, to the decoding transformation at the
receiver; and then the transformation that activates a screen, in order to
understand how the picture on the screen is generated so that it corresponds to
the scene in the studio. For the brain/behavior/experience relationship it is the
same: we need to specify the transformations that make the relationship possible.
In the mid-20th century, the zeitgeist in experimental psychology was
behaviorist: any terminology that referred to our subjective experiences—terms
such as "fear," "joy," "hunger"—was taboo. At the time I, too, heartily approved
of these restrictions in terminology and decided to employ only “operational”
descriptions to characterize my results of the experimental analysis of the taming
and the other effects in monkeys following the removal of the amygdala.
In the 1920s and 1930s, Walter Cannon had described the outcome of his
hypothalamic stimulations as resulting in “fight and flight” responses. For
symptoms that followed removal of the amygdala—the taming, the increase in
sexual mounting, and repetitiously putting things in their mouths—I added two
responses to Cannon’s “fight and flight” to make up “Four Fs”: Fighting,
Fleeing, Feeding and Sex.
This characterization immediately became popular and still is. In the mid-
1950s, when I gave a series of lectures on this topic at the University of
Michigan, my initial audience was a mere 100 students and faculty. By the end
of the week, we had to move the lectures to an auditorium seating 1,000, and all
the seats were filled.
For the 1960 Annual Review of Psychology, I wrote up the essence of what had been discovered in my laboratory during the 1950s at Yale University regarding these Four Fs. A few skirmishes with the editors ensued: the four-
letter-word revolution at Berkeley had not as yet occurred. (Of course, for the
uptight, the fourth F could stand for fornication.) Fortunately for me, the Annual
Reviews were published in Palo Alto, near my Stanford laboratory, and by
personally supervising the final draft just prior to the typesetting of the paper, I
was able to have the last word.
While at Yale, I had aimed to seriously quantify the behaviors that made up
the Four Fs. For instance, I wanted to know what caused those monkeys whose
amygdalas had been removed to put everything into their mouths. Could it be
that they had lost their sense of taste and therefore were sampling everything?
These monkeys would eat hot dogs, whereas normal monkeys, who are not fond
of meat, might take a bite and stop eating. In order to test the monkeys’ ability to
taste, Muriel Bagshaw, a Yale medical student, and I set up a series of drinking
flasks, each with a greater dilution of quinine (aka tonic) water and let the
monkeys choose which flasks they would drink from. Using this technique, we
showed something that at the time had not been discovered: the part of the brain
involved in discriminating taste was located in the cortex buried within the
lateral fissure of the anterior part of the temporal lobe near, but not in, the
amygdala.
In another set of experiments, we used rats. John Brobeck, a colleague of
mine at Yale who later became head of the Department of Physiology at Johns
Hopkins University, was an expert in making electrolytic lesions (a fancy term
for burning a hole) in the hypothalamic region—which is connected to the
amygdala by way of two tracts—lesions which resulted in the rats eating and
eating and eating. Brobeck was looking at changes in the rats' metabolism that
were related to the excessive eating behavior. He was the perfect collaborator for
me to study the possible metabolic "causes" of the changes in eating behavior that follow temporal lobe removals. We were joined by
a postdoctoral fellow from India, Bal Anand, who many years later would
become the head of the All India Institute of Medical Sciences.
At Yale, we three set up our experiment. The results were a disaster
mitigated by an unexpected silver lining. First, I tried to remove the rats’
amygdala, the part of the medial temporal lobe that I’d been removing in
monkeys. Rats have very small brains—no temporal lobes, and their amygdalas
are spread out and therefore difficult to remove without damaging other brain
structures. Surgery didn’t work, so Brobeck went to work with his electrolytic
lesions. Now the rats refused to eat at all!—just the opposite of what I’d found in
my earlier tests with monkeys.
For the behavioral part of our study, pre-operatively I set up drinking flasks
for the rats, just as Bagshaw and I had done for monkeys: The rats loved the
quinine (tonic) water and several of them drank themselves sick. I also provided
the rats with a choice between their daily ration and one that looked like (and
had the consistency of) their usual food but was made up of axle grease and
sawdust. The rats loved the new “food” and made themselves sick gorging on it.
Once they had become sick, whether from quinine or axle grease, they never
touched either again. Thus they would make a poor control group for this
experiment. Furthermore, the removal of the amygdala might make the rats
deathly ill because they would not stop eating or drinking, not because they had
lost their sense of taste. Just making rats sick was not our intention. These tests
were obviously not going to work for our experiment.
This set of preliminaries to undertaking our experiments had taken well
over a year, and Anand soon would have to return to India. There were no results
from all our work. I suggested that Anand do an anatomical study before
returning to India, to see how many of Brobeck’s electrolytic lesions had hit their
target: the amygdala. Brobeck was not enthusiastic: He’d been doing electrolytic
lesions in the hypothalamic region for years, first at Northwestern University and
then at Yale, and had almost always produced the expected results: rats that
overate. But why not section the brains of the rats anyway, to locate the sites of
the electrolytically produced lesions? That way Anand would learn the technique
and would have something to show for his time at Yale.

Surprises
The anatomy was done on both the overeating rats and the ones that had
stopped eating. Surprise #1: Hardly any of the lesions that had produced the
overeating were in the expected location (the ventromedial nucleus) in the
hypothalamic region. They were in the vicinity, but not “in,” the nucleus. This
confirmed, for me, a long-held view that a nucleus isn’t a “center,” a local place,
at all: the cells that make up the “nucleus” are spread over a fairly long distance
up and down the hypothalamic region. This was later ascertained by a stain that
specializes in picking out the cells of this nucleus.
Surprise #2: The electrodes that Brobeck had aimed at the amygdala had
gone far afield—the lesions made with these electrodes were located halfway
between those that had been aimed at the hypothalamic region and those that had
been aimed at the amygdala. Thus, Anand and Brobeck had obtained an
important result: they had discovered a new region, a region that influenced
eating behavior, which they dubbed the “far-lateral” hypothalamic region. The
results were published in the Journal of Neurophysiology with an introductory
note acknowledging my role in the experiment, and they were quoted for many
years to come.
But the story does not end here. Shortly after the publication of this paper,
Philip Teitelbaum, then a young doctoral student at Johns Hopkins, became
intrigued with the “stop eating” effect and got his rats to eat and drink enough to
tide them over the initial period after surgery. He did this by giving the rats sugar
water and chocolate candy. At a meeting, he reported the results as having been
produced by lesions in the far-lateral nucleus of the hypothalamus and I stood
up, noted his excellent work, and went on to point out that the locus of the
electrolytic lesions in the Anand and Brobeck experiments (for which I’d
supervised the anatomy) and presumably in Teitelbaum’s, were located in white
matter consisting of nerve fibers, not of cells, and that there really was no such
“far-lateral hypothalamic nucleus.” At that time we didn’t know where those
nerve fibers originated or ended. Teitelbaum has told me on several occasions
since that he was totally shaken by my comment.
He persisted nonetheless, and we know now, through his efforts and those
of others, that these fiber tracts connect places in the brain stem with the basal
ganglia; that they are an important part of the dopamine system which, when the
system becomes diseased, accounts for Parkinsonism.
Anand established the meaning of these results. He went back to India and
pursued the research he had started at Yale using electrical recordings made from
the regions he had studied there. In his India experiments, he found that as a rat
ate, the electrical activity of its medial region progressively increased while that
of its lateral region progressively decreased. During abstinence, the opposite
occurs: the rat’s lateral region becomes progressively more active while the
medial region becomes progressively more quiescent. His experiments
demonstrated that there is a seesaw, reciprocal relationship: The medial region
serves a satiety process; the lateral region serves an appetitive process. This
reciprocal relationship applies to drinking as well as to eating.

Hit or Run?
The experimental results of our studies of “fighting” and “fleeing” behavior
were equally rewarding. At Yale we set up several colonies of young male
monkeys and selected one of these colonies in which the monkeys clearly
formed a dominance hierarchy. We had developed quantitative measures of
dominance based on procedures used by Rains Wallace, a friend of mine who
was chief of the research institute representing the insurance companies in
Hartford, Connecticut, where I lived at the time. Wallace had found that, in order
to quantify social behavior that is almost always determined by a variety of
factors (in the insurance case, factors involved in injury or death)—one must
choose an “anchor.” The anchor chosen must be the most obvious or the most
easily measurable behavior. Then other behaviors that might influence the factor
we are studying (in our case, dominance) can be rated with respect to the
behavior we have chosen as an anchor. As an anchor in our experiment with the
dominance hierarchy of monkeys, I chose the number of peanuts each monkey
picked up when the peanuts were dropped into their cage one by one. We also
used the order in which each monkey grabbed the peanut, followed by other
“subsidiary” measures such as facial grimace, body posture of threat, and actual
displacement of one monkey by another. We filmed all of this so that we could
study the films frame by frame.
I removed the amygdalas (in both hemispheres of the brain) of the most dominant monkey in the colony. As expected, he gradually fell to the bottom of
the hierarchy, but it took about 48 hours for his fall to be completed. Other
monkeys were challenging him, and he seemed not to know how to meet these
challenges.
60. A: dominance hierarchy of a colony of eight preadolescent male rhesus monkeys before any surgical
intervention. B: same as A after bilateral amygdalectomy had been performed on Dave. Note his drop to the
bottom of the hierarchy. (From Pribram, 1962)

Next, I operated on the monkey who had replaced the first as the most
dominant. The result regarding the change in the order of dominance was
essentially the same. To be certain that our results would be generally applicable,
I then operated on the monkey that had originally been third in the hierarchy.
Surprise! He became more aggressive and maintained his dominance. Of course,
my colleagues accused me of sloppy surgery. We had to wait a few years (while
the monkeys were used in other experiments) before an autopsy showed that my
removal of the amygdala was as complete in the third monkey as it had been in
the first two monkeys I had operated upon.
When my Yale collaborators Enger Rosvold and Allan Mirsky and I were later
reviewing our films of the monkeys’ interactions after their surgery, we found
that in the case of the third monkey no challenges to his dominance had
occurred. The next monkey in line—monkey #4—was a peacenik, happy to get
his share of food and water without troubling the other monkeys.

Brain Surgery and Social Context


Over and over, my experience with removals of the frontal and limbic parts
of the brain had demonstrated, as in this instance, that such vulnerability to
social context occurs during the immediate postoperative period. Dramatic
changes in behavior can be produced immediately postoperatively by social
input. Lawrence Weiskrantz—at that time a Harvard graduate student in
psychology, doing his dissertation experiments in my laboratory—and I
informally tested this observation: After surgery, I approached the monkey in a
docile fashion, always keeping my head below his level, and withdrawing when
he approached. The monkey became his old belligerent rhesus self and even
more so—grimacing and attacking an approaching figure or hand. By contrast,
using another monkey who had had his amygdala removed in the same fashion,
Weiskrantz behaved firmly and assertively. His monkey became docile, and we
all had him eating out of our hand.
61. C: same as A and B, except that both Dave and Zeke have received bilateral amygdalectomies. D: final social hierarchy after Dave, Zeke, and Riva have all had bilateral amygdalectomies. Note that Riva fails to fall in the hierarchy. Minimal differences in extent or locus of the resections do not correlate with differences in the behavioral results. The disparity has been shown in subsequent experiments to be due to Herby's nonaggressive "personality" in the second position of the hierarchy. (From Pribram, 1962.)

Our Yale dominance experiment has been quoted over the years in many social psychology texts. The significance of the results was underscored when the Assistant District Attorney of the State of California was heard to suggest that removal of the amygdala might be a treatment of choice for
the hardened violent criminals populating our jails. I called him to alert him to
our results—that the criminals might become even more vicious, unless the
postoperative social situation was very carefully established, and that we didn’t
really know how to do this. Luckily, the matter was quickly dropped like the hot
potato it was. It was the mid-1960s and I was concerned that someone might
recommend brain surgery for the “revolutionary” students on our campuses.
Brain surgery does not remove “centers” for aggression, fear, sex, etc.
Ordinarily brain processes enable certain behaviors and, once engaged, help to
organize and reorganize those behaviors.
There is a difference, however, when our brain becomes injured and “scars”
form, for instance when scars produce temporal lobe epilepsy: some patients
occasionally have uncontrollable episodes during which they become violent,
even to the extent of committing rape or murder. Once the seizure is over, the
patient returns to being a docile and productive person. If the patient recalls the
period of his seizure—though most do not—or when evidence of how he
behaved is shown to him, he is terribly upset and remorseful, and will often
request surgical relief if the seizures have become repetitive.

The Fourth F
As we were studying at Yale the effects of amygdalectomy on dominance
and on eating and drinking, similar experiments were being performed at Johns
Hopkins University in the Department of Physiology and in the Army Research
Laboratories at Walter Reed Hospital in Washington, DC. In addition, both of
these research groups were studying the effects of amygdalectomy on sexual
behavior in cats. At Hopkins, there was no evidence of change in sexual
behavior after the surgical removal. By contrast, at the Army Laboratories, as in
Klüver’s original report of his and Bucy’s experiments on monkeys in Chicago,
“hypersexuality” followed the removal of the amygdala in cats.
I paid a visit to my friends in the Department of Physiology at Hopkins and
looked at the results of their surgical removals, which were identical in extent to
mine. I looked at their animals—cats in this case rather than monkeys. The cats
were housed in the dark basement of the laboratory in cages with solid metal
walls. Anyone planning to handle the cats was warned to wear thick gloves.
When the cats were looking out through the front bars of the cage, the
experimenters would blow into their faces to test for taming: the cats hissed.
Attempts to grab the cats resulted in much snarling, clawing and biting.
I then traveled 30 miles from Baltimore to Washington to visit my friends at
the Walter Reed Medical Research Center, where they were also removing the
amygdalas from the brains of cats. The brain surgery had been well carried out
and appeared, in every respect, identical to what I had seen at Hopkins. But the
situation in which these cats were housed was totally different. These cats were
kept in a large compound shared with other cats as well as with other animals—
including porcupines! The animals were playing, wrestling, eating and drinking.
My host, who was in charge of the experiments, approached one of the cats who
cuddled to him. To my consternation, he then tossed the cat into a corner of the
compound, where the cat cowered for a while before rejoining the other animals
in their activities. “See how tame these cats are? The experimenters at Hopkins
must have missed the amygdala.” By now, the cats had resumed their mounting
behavior, including a very patient and forgiving porcupine who kept its quills
thoroughly sheathed while a “hypersexed” tabby tried to climb on top of it.
I wrote up this story of the Baltimore and Washington cats for the Annual
Review of Psychology, again emphasizing that it is the social setting and not the
extent or exact placement of the brain removal that is responsible for the
differing outcomes we find after the surgery.
I was able to add another important facet to the story: experiments that were
done at the University of California at Los Angeles on a set of “Hollywood
cats.” This time, the experimenters had studied normal cats. They were placed in
a large cage that took up most of a room. There were objects in the cage,
including a fence at the rear of the cage. The behavior of the cats was monitored
all day and all night. The UCLA experimenters found what most of us are
already aware of: that cats go prowling at night, yowling, and (what we didn’t
know) mounting anything and everything around! After the brain surgery this
nocturnal behavior took place during the day as well. Testosterone levels were
up a bit. However, in our experiments with monkeys at Yale, we had shown that
while testosterone elevations amplify sexual behavior such as mounting once it
has been initiated, these elevated levels do not lead to the initiation and therefore
the frequency of occurrence of sexual behavior. Today we have the same result
in humans from Viagra.
The Hollywood cats thus showed that the removal of the amygdalas
influenced the territory, the boundaries of the terrain within which sexual
behavior was taking place. Although an increase in sexuality due to an increase
in hormone level might have played a role in amplifying sexual activity, the
increase in that behavior might also have resulted in the modest increase in
sexual hormone secretion. By far, the more obvious result was a change in the
situation—day vs. night—in which the cats were doing It: For ordinary cats,
night is the familiar time to practice the fourth F. For the cats who had been
operated upon, every situation was equally familiar and/or unfamiliar. It is the
territorial pattern, the boundary which forms the context within which the
behavior occurs, that has been altered by the surgery, not the chemistry that fuels
the behavior.

Basic Instincts
In light of all these results, how were we to characterize the Four Fs? On
one of my several visits with Konrad Lorenz, I asked him whether the old term
“instinct” might be an appropriate label for the Four Fs. The term had been
abandoned in favor of “species-specific behavior” because if behavior is species
specific, its genetic origins can be established. I had noted during my lectures at
Yale that if one uses the term instinct for species-specific behavior, human
language is an instinct. This usage of the word is totally different from its earlier
meaning. Nevertheless, Steven Pinker, professor of linguistics at MIT, has
recently published a most successful book entitled The Language Instinct.
My suggestion to Lorenz was that we might use the term “instinct” for
species-shared behaviors. What makes the studies of birds and bees by
ethologists like Lorenz so interesting is that we also see some of these basic
patterns of behavior in ourselves. Falling in love at first sight has some
similarities to imprinting in geese, the observation that Lorenz is famous for. Sir
Patrick Bateson of Cambridge University, who, earlier in his career, was one of
my postdoctoral students at Stanford, has shown how ordinary perception and
imprinting follow similar basic developmental sequences.
Lorenz agreed that my suggestion was a good one, but felt that it might be a
lost cause in the face of the behaviorist zeitgeist. His view was also expressed by
Frank Beach in a presidential address to the American Psychological Association
in a paper, “The Descent of Instinct.”
Thus, the Four Fs, though they have something to do with emotional (and
motivational) feeling, can be described behaviorally as basic instincts or
dispositions. These dispositions appear to be a form of hedonically based
(pleasure and pain) memory that, as we shall shortly see, directs attention in the process
of organizing emotions and motivations. As an alternative to “basic instincts,”
the concept “dispositions” captures the essence of our experimental findings
regarding the Four Fs. Webster’s dictionary notes that dispositions are
tendencies, aptitudes and propensities. Motivations and emotions are
dispositions.
The brain processes that form these hedonically based dispositions are the
focus of the next chapters.
Chapter 12
Freud’s Project

Wherein I take a detour into the last decade of the 19th century to find important
insights into emotional and motivational coping processes.

One evening last week while I was hard at work, tormented with just
that amount of pain that seems to be the best state to make my brain
function, the barriers were suddenly lifted, the veil was drawn aside,
and I had a clear vision from the details of the neuroses to the
conditions that make consciousness possible. Everything seemed to
connect up, the whole worked well together, and one had the
impression that the thing was now really a machine and would soon go
by itself.
—Sigmund Freud, letter to Wilhelm Fliess, October 20, 1895
As I was moving from Yale to Stanford in the late 1950s, I had just
disproved Köhler’s DC theory of the brain as responsible for perception, and I
had arrived at an impasse regarding the “how” of the Four Fs. I had put up a
successful behaviorist front: One of my Yale colleagues, Danny Friedman, asked
me why I’d moved to Stanford. He truly felt that I’d come down a step or two in
Academe by moving there. He remarked, “You are a legend in your own time.”
If that was true, it was probably because I gave up a lucrative career as a
neurosurgeon for the chance to do research and to teach. My initial salaries in
research were one tenth of what I was able to earn in brain surgery. But I was
accomplishing what I wanted through my research: I had been accepted in
academic psychology and had received a tenured academic position, held jointly
in the Department of Psychology and in the Medical School at Stanford.
At that time, “The Farm,” as Stanford is tenderly called, really was that.
What is now Silicon Valley, a place of “concrete” technology, was then Apricot
Valley (pronounced “aypricot”) with miles of trees in every direction. Stanford
Medical School’s reputation was nonexistent: that’s why a whole new faculty,
including me, was put in place when the school was moved southward from San
Francisco to the Stanford campus in Palo Alto. Our first concern at the Medical
School: How will we ever fill the beds at the hospital? A heliport was established
to bring in patients from afar and advertisements were sent out to alert central
and northern California residents that we were “open for business.” How
different it was then from when, 30 years later, I would become emeritus and
leave California to return to the East Coast.
I had succeeded Lashley as head of the Yerkes Laboratory of Primate
Biology in Florida and Robert Yerkes had become my friend during the
negotiations for my appointment. Still, I had encountered insurmountable
problems in trying to obtain university appointments for prospective research
personnel at the Laboratory because of its distance from any university. I had
been equally unsuccessful in moving the laboratory to a friendly university
campus where teaching appointments could be obtained.
I felt that research institutions, unless very large (such as NIH or the Max
Planck), become isolated from a broader scope of inquiry that a university can
provide, and thus, over time, become inbred and fail to continue to perform
creatively. I had tried to convince Harvard to accept a plan I had developed to
move the Laboratory to a warehouse in Cambridge, since Harvard had a stake in
the Laboratory; Lashley’s professorship was at Harvard. Harvard faculty had
nominated me for a tenure position three times in three different departments
(psychology, neurology and even social relations), but the faculty
recommendations were not approved by the administration.
All effort in these directions was suddenly halted when Yale sold the
Laboratory to Emory University (for a dollar) without consulting the board of
directors. This was in keeping with the administrative direction Yale was taking:
the Department of Physiology had already shifted its focus from the brain
sciences to biochemistry, and the president had mandated loosening Yale’s ties
(that included remanding tenured professorships) to the Laboratory and other
institutions that were mainly supported by outside funding.
The time was ripe to “Go West, young man” where other luminaries in
psychology—Ernest (Jack) Hilgard, dean of Graduate Studies, and Robert Sears,
the head of the Department of Psychology, at Stanford—had been preparing the
way.

Hello, Freud
Despite my so-called legendary fame and the administrative
disappointments at Yale and Harvard, the issue that bothered me most, as I was
moving to Stanford, was that I still really didn’t understand how the brain
worked. In discussions with Jerry Bruner, whose house my family and I rented
for that summer while I taught at Harvard, he recommended that I read Chapter
7 of Freud’s Interpretation of Dreams. I also read the recently published
biography of Freud by Ernest Jones, who mentioned that someone
knowledgeable about the brain should take a look at Freud’s hitherto neglected
Project for a Scientific Psychology, the English translation of which had been
published only a few years earlier in 1954. I read the Project and was surprised
by two important insights that Freud had proposed:

1. Motivation is the “prospective aspect of memory.” At that time,
psychologists were focused on internal “drives” as the main source of
motivation, a conception my laboratory experiments and those of others had
already found wanting; and
2. Brain circuitry involves action potentials, resistance, and local graded
activity. My theoretical work had emphasized electrical field potentials that
were characteristic of the brain’s fine- fibered, dendritic, deep structure—
and here was Freud, a half century earlier, already aware of the importance
of this aspect of brain processing.

I immediately set to work on a paper, calling Freud’s Project a “Rosetta
stone” for understanding psychoanalysis. My paper emphasized the memory-
motive structure and local graded activity, the field potentials in the brain,
which had been translated into English as “cathexis.” I found fascinating insights into how a
late-19th-century neurologist could view brain function. Freud had struggled
with the problem of form in terms of quantity vs. quality. He showed that an
adrenaline-like quantitative chemical buildup resulted in anxiety—“unpleasure”
in the translation. Anxiety, Freud indicated, was controlled by way of the
memory-motive circuitry in the basal (ganglia of the) forebrain. Qualities, Freud
wrote, composed consciousness enabled by processes operating at the brain’s
surface, its cortex. The cortical processes were not made up of circuits but of
“patterns of periodicity,” of energy transmitted from our sensory receptors.
I had been suggesting that an essential part of cortical processing was
holographic-like—that is, based on interference patterns formed by frequencies
of waves. Frequencies are the reciprocals of Freud’s patterns of periodicity. I was
stunned and elated. It is helpful to find that one’s scientific views are recurrently
confirmed under a variety of experimental conditions.
The publication of my paper, entitled “The Neuro-psychology of Sigmund
Freud” in the book Experimental Foundations of Clinical Psychology, would
lead to a ten-year study of the Project in a partnership with Merton Gill, the
eminent psychoanalytic scholar. Our book, Freud’s “Project” Re-assessed, was
the fruit of this effort. It was great fun: the last sentence in the book reads “—and
our views (about the virtue of the future place of psychoanalysis as a natural
science) are irreconcilable.”

The “Project” Reassessed


Gill was at Berkeley and I at Stanford. When he first read my paper, he
congratulated me on having finally become a scholar, not just a bench scientist.
He noted, however, that I’d made 27 errors in my not-so-very-long paper. I was
upset and went to work to correct the errors. After a month, I wrote a reply to
Merton, saying that I was grateful for his critique and that he was correct in 7 of
his criticisms—but the other 20 were his errors, not mine. After a few more
exchanges like this, I decided to telephone him. I suggested that since we were
just across San Francisco Bay from each other we might meet, which we did—
alternating between Berkeley and Stanford every Friday afternoon for the next
two years. We scrupulously studied both the English and German texts and
compared them. German was my native language, and Merton’s knowledge of
Freud’s later work was critical: we needed to see how many of the early ideas
contained in the Project had later been abandoned or changed. Surprisingly few.
After two years of work, Merton and I had a stack of notes taller than either
of us. What to do? I said, let’s write a book. I quickly dissuaded Merton from
writing voluminous footnotes, as was then the fashion in psychoanalysis. From
George Miller I had inherited the motto: “If it’s important, put it into the text. If
not, leave it out.” Merton and I each wrote chapters—and after a first draft we
started critiquing each other’s chapters and produced another set of notes. I used
them to write another draft. More critique from Gill. I wrote up a third draft.
Silence. After two weeks I called Gill: He hadn’t read the draft. He was sick
of the Project, and never wanted to hear of it again. I could do anything I wanted
with our work. The only stipulation was that I would never mention his name in
connection with it. He could not give me any reasons for his change of heart; all
he could say was that he was tired of the work. I was dismayed, but I did as he
asked in the two more papers that I wrote, saying only that I was deeply indebted
to M. G., who didn’t want me to mention him.

Resurrection
A decade passed, and I was asked to give the keynote address at the
meeting of the International Psychoanalytic Society in Hawaii. I felt deeply
honored and accepted the invitation. Next, I was asked who might introduce me.
Merton Gill seemed to be the perfect person. I called Gill, now in New York. He
told me that he had moved from Berkeley shortly after we had last talked and
had been depressed and hadn’t given a talk in ten years, but that he felt honored
that I was asking him to introduce me! Gill, the curmudgeon, feeling honored?
The world was turning topsy-turvy. He asked that I send him some of my recent
brain physiology studies so he’d be up to date. I replied that no, I wasn’t going to
present those. What I wanted to do was to present the findings in our book. Did
he still have a copy of the last draft? Search. “Here it is in my files,” he said.
“Have you read it?” I asked him. “No.” “Will you?” “Yes. Call me next week.” I
did, on Tuesday. Gill was ebullient: “Who wrote this?” I said I did. “Brilliant.
Wonderful.” I called back a few days later to be sure I’d actually been talking to
Merton Gill. I had. All was well.
We met in Hawaii. My talk was well received, and Merton and I decided to
resume our long-delayed joint effort. I felt that we needed to re-do the first
chapter, which was about the death instinct—a horrible way to begin a book.
Merton thought that if we could solve what Freud meant by primary and
secondary processes, we would make a major contribution to psychoanalysis and
to psychology. He polished the final chapter, and I went to work on the
“processes” and found, much to my surprise, that they were similar to the laws
of thermodynamics that were being formulated by the eminent physicist Ludwig
Boltzmann in Vienna at the same time that Freud was actively formulating his
ideas summarized in the Project.

Freud’s Ideas
During our work on the Project many more rewarding insights emerged
about Freud’s ideas. Gill was especially pleased to find that Freud had already
addressed what was to become the “superego” in his later works. Freud
ascribed the “origin of all moral motives” to a baby’s interaction with a
caretaking person who helped the infant relieve the tensions produced by his
internally generated drives. The brain representations of the actions of the
caretaking person are therefore as primitive as those initiated by the drives. It is
the function of the developing ego to bring the caretaking person into contact
with the infant’s drives. Gill had written a paper to that effect even before he had
read the Project, and was therefore pleased to see this idea stated so clearly. The
“superego” is therefore a process during which society, in the role of a
caretaking person, tries to meet and help satisfy drive-induced needs, not to
suppress them. This is contrary to the ordinary interpretation given today which
holds that the ‘superego’ acts to suppress the ‘id.’
Another gem was Freud’s description of reality testing: An input is sensed
by a person. The input becomes unconsciously—that is, automatically—
processed in our memory-motive structure. A comparison between the results of
the memory-motive processing and that input is then made consciously before
action is undertaken. This “double loop of attention,” as Freud called it, is
necessary because action undertaken without such double-checking is apt to be
faulty.
62. Stylized representation of the “machine” or “model” of psychological processes presented in the
Project. (From Pribram and Gill, Freud’s “Project” Re-assessed.)
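
For readers who grasp such a schematic more easily in procedural form, the double loop can be caricatured in a few lines of code. This is only a toy sketch of the logic just described; the stand-in memory, the feature sets and the function names are mine, not Freud's:

# Toy sketch of the "double loop of attention" (stand-ins and labels mine).
# Loop 1: the input is processed automatically against the memory-motive
# structure. Loop 2: the result is consciously compared with the input
# before any action is released.

MEMORY_MOTIVE = {
    # stand-in memory: for each remembered thing, the features we expect
    # of it and the action the wish proposes
    "bottle": ({"white", "warm"}, "suck"),
}

def process_unconsciously(name):
    """Automatic pass through the memory-motive structure."""
    return MEMORY_MOTIVE.get(name, (set(), None))

def reality_test(observed_features, expected_features):
    """Conscious comparison of the memory-driven expectation with the input."""
    return expected_features <= observed_features  # expectation fits the input

def respond(name, observed_features):
    expected, wish = process_unconsciously(name)             # loop 1: automatic
    if wish and reality_test(observed_features, expected):   # loop 2: checking
        return wish                                          # checked action
    return "attend further"  # without the check, action would be merely wishful

print(respond("bottle", {"white", "warm"}))  # -> suck
print(respond("bottle", {"white", "cold"}))  # -> attend further

The point of the sketch is only the ordering: the memory-driven result is produced automatically, but action waits on the second, checking loop.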

In the 1880s, Helmholtz had already described such a parallel process for
voluntarily moving the eyes. Freud greatly admired Helmholtz’s work and was
overtly trying, in the Project, to model a “scientific psychology” according to
principles laid down by Helmholtz and Mach. In the absence of voluntary reality
testing, action and thought (thought, for Freud, being implicit action) are merely
wishful. Freud defined neuroses as composed of overriding wishes. One can see,
given the rich flavor of the Project, why I became so excited by it.
For almost a half-century I’ve taught a course on “the brain in the context
of interpersonal relations,” and I always begin with Freud. The students are as
surprised and excited, as I have been, to realize that “cognitive neuroscience,” as
it is called today, has common threads going back to the beginning of the 19th
century—threads that formed coherent tapestries toward the end of that century
in the writings of Sigmund Freud and William James.
Within a year of our decision to complete our book, Freud’s “Project” Re-
assessed, Gill and I were ready for publication. The book immediately received a
few nice reviews from neurologists and psychiatrists, and quite a few really bad
ones from psychoanalysts. Basic Books disappointed us in its distribution, and
my presentations to various psychoanalytic institutes were greeted with little
enthusiasm except for those in the Washington-Baltimore Institute and the
William Alanson White Institute in New York, where discussions were lively and
informed.
Now, over thirty years after the 1976 publication of our book, and beginning
with the 100th anniversary of the Project in 1995, interest in both has been reactivated:
Psychoanalysis is returning to its neurobiological roots. There are now active
neuro-psychoanalytic groups in Ghent, Belgium, and Vienna, Austria, and at the
University of Michigan at Ann Arbor—and there is also a flourishing
Neuropsychoanalytic institute in New York.
Freud called his Project, which was about the brain, as well as his later
socio-cultural work, a “metapsychology.” The metapsychology was not to be
confused with his clinical theory, which was derived from his work with his
patients. Most psychoanalysts work within Freud’s clinical theory— but that is
another story, some of which is addressed in Chapter 19. The current endeavors
in neuro-psychoanalysis are bringing a new perspective to psychoanalysis and
psychiatry. As Gill and I stated in our introduction to Freud’s “Project” Re-
assessed, Freud already had formulated a cognitive clinical psychology in the
1890s, long before the current swing of academic and practicing psychologists to
cognitivism. But even more interesting to me is the fact that the Project is a
comprehensive, well-formulated essay into cognitive neuroscience a century
before the current explosion of this field of inquiry. Reading Freud and working
with Merton Gill helped me, a half century ago, to reach a new perspective in
my thinking about what the brain does and how it does it.
For a few years, in the mid-1960s, I would present lectures on the insights
and contents of Freud’s Project as my own—only to acknowledge to an
unbelieving audience, during the discussion of the presentation, that what I had
told them was vintage Freud, 1895.
Choice
Chapter 13
Novelty: The Capturing of Attention

Wherein I recount a set of surprising results of experiments performed in the
20th century that changed our view of how motivations and emotions are
formed.

When I have inserted knowledge into my mysteries or thrillers I have
found that my readers think that I’m showing off and that the
passage(s) interrupt the plot.
—Chair, ThrillerFest Conference, New York, June 2007
While the Freud “project” was going forward, most of my effort went into
designing and carrying out experiments that would answer the question: What
brain process, what disposition(s), did the diverse behaviors making up the Four
Fs have in common?
As I was about to move to Stanford, tackling the search for a common brain
process that underlies the Four Fs frontally seemed to have reached a dead end. I
therefore decided to do something that is unusual in the scientific community
(and certainly a course of action that would never be awarded a research grant): I
decided deliberately to outflank the problem by testing monkeys in some fashion
that could not be readily subsumed under the heading of the Four Fs. What
would be the effects, if any, of removal of the amygdala in such a situation?

Transfer of Training
I had recently heard a lecture on “transfer of training.” The procedure
consisted in first training an animal or child to choose the larger of two circles
and then presenting two rectangles to see whether the animal (or human) would
again choose the larger of the two without any further training. I used monkeys
that had been operated on some three years earlier—their hair had grown back,
and they were indistinguishable from the monkeys who had not been operated
upon. I then trained all the monkeys, those who had been operated on and those
who had not, to choose the paler of two gray panels set in a background of a
third shade of gray. Once the monkeys were thoroughly trained, I began to
substitute, approximately every fifth trial, two new panels with shades of gray
that were substantially different from the gray of the original panels. A number
of the monkeys chose the lighter shade of gray of the two new panels. Others
chose randomly between the lighter or darker of the two new panels, as if they
had not learned anything.
When I checked the surgical records of the monkeys I had tested, I was
delighted: All the monkeys who chose randomly had been operated on; their
amygdalas had been removed. And all the monkeys who had transferred their
training—what they had learned—to the new task were the control animals who
had never been operated upon. Here was the performance of a task that was
affected by amygdalectomy and that could by no stretch of imagination be
labeled as one of the Four Fs.
I had finished this experiment at Yale just before moving to Stanford. It was
time to reconcile this result with those obtained on the Four Fs. Muriel Bagshaw
who, as a student, had assisted me with the “taste” studies, had received her MD
degree at Yale after having a baby and had relocated to the Pediatric Department
at Stanford just at the time I moved my laboratory to the new Stanford Medical
Center. We resumed our collaboration with enthusiasm and immediately
replicated the transfer of training experiment, using circles, rectangles and other
shapes. Once again, the monkeys who had their amygdalas removed were
deficient on tasks that demanded transfer of training, while their memory for
tasks that did not involve transfer of training had remained intact. I surmised that
what might underlie the deficits obtained in the Four F experiments is that the
animals that had had their amygdalas removed failed to transfer their pre-
operative experience to the postoperative situation and failed to accumulate the
results of new experiences that accrued daily.

Growing Up
I was still not completely over my behaviorist period. Though I was
“growing up,” as Lashley had predicted that I someday would, and had done so
to a considerable extent while Miller, Galanter and I were writing Plans and the
Structure of Behavior, I still felt that we were buying into representations and
Plans being stored and reactivated “in the brain”—and I was at a loss as to how
such a process might work. I had disposed of Köhler’s isomorphism and had not
yet adopted Lashley’s interference patterns. But for my conjecture that the
disposition underlying the Four Fs has to do with transfer of training, the issue
was straightforward: Does the deficit in transfer of training indicate that
processes occuring in the limbic basal ganglia allow specific experiences to
became represented in the brain? Lashley (who firmly supported
“representations”) and I had once argued this point while we were strolling down
the boardwalk in Atlantic City. I can still “see” the herringbone pattern of the
boards under our feet every time I think of representations—suggesting that
perhaps he was right.
As I was worrying the issue of representations in the early 1960s, while I
was putting together my laboratory at Stanford, Alexander Luria and Eugene
Sokolov, his pupil, came from Moscow for a weeklong visit. We three had just
attended a conference in Princeton where Sokolov had presented his results,
definitively demonstrating that repeated sensory inputs produce a representation
in the brain—a neuronal model of the input, as he called it. Sokolov had exposed
his human subjects to regular repetitions of tones or lights. Each subject had
demonstrated an “orienting reaction” to the stimulation, an orienting reaction
which Sokolov had measured by changes in the subject’s skin conduction, heart
rate, and the EEG. As Sokolov repeated the stimulation, the subject’s orienting
reaction to that stimulus gradually waned—that is, the subject’s response
habituated. Sokolov then omitted one in the series of stimuli. Leaving out the
“expected” stimulus provoked an orienting reaction to the absence of a stimulus
that was as large as the reaction the initial stimulus had produced! The reaction
to the absence of a stimulus meant that the person now reacted to any change in
the whole pattern of stimulation, a pattern that had become a part of his
memory. Sokolov’s evidence that we construct a specific representation of the
pattern of our experience was incontrovertible. We orient to a change in the
familiar!!!
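
For readers who find such processes easier to follow in procedural form, the logic of Sokolov's neuronal model can be caricatured in a few lines of code. This is only a toy sketch; the numbers, the learning rule and the names are illustrative, not Sokolov's:

# Toy sketch of Sokolov's "neuronal model" of habituation (illustrative only).
# The model stores the expected stimulus; the orienting reaction is driven
# by the mismatch between that expectation and the actual input.

def orienting_reaction(expected, observed):
    """Mismatch between the stored model and the input (0 = fully habituated)."""
    return abs(expected - observed)

model = 0.0          # stored expectation: at first, no stimulus is expected
learning_rate = 0.5  # how quickly repetition builds the model

stimuli = [1.0] * 8 + [0.0]  # eight identical tones, then one omission

for trial, s in enumerate(stimuli, start=1):
    reaction = orienting_reaction(model, s)
    model += learning_rate * (s - model)  # update the neuronal model
    print(f"trial {trial}: stimulus={s:.0f}  orienting reaction={reaction:.2f}")

# Trials 1-8: the reaction wanes (habituates) as the model comes to expect
# the tone. Trial 9: the tone is omitted, and the mismatch -- the reaction
# to the ABSENCE of a stimulus -- is nearly as large as the very first
# reaction. We orient to a change in the familiar.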
But at the same time, my reservations regarding “representations” were
confirmed. At the moment of response to the omitted stimulus, the response was
not determined by the environmental stimulus but by the memory, the neuronal
model, of prior stimulation. As we shall see in the next few chapters, there are
whole brain systems devoted to storing the results of “nonstimulation.” Walter
Freeman, professor of neuroscience at the University of California, Berkeley, has
emphasized the fact that our perceptions are as much determined by our
memories as by the environmental situations we confront. Freeman had to come
to this conclusion because, in his experiments, his animals did not show the
same brain patterns when confronted by the same environmental patterns. As
William James had put it, our experience is like a flowing river, never quite the
same.
Freeman and I are good friends, and I have tremendous respect for his work
and have been influenced by it. But for decades we discussed his findings, and I
was always troubled by the changed brain patterns that appeared in response to the same
stimulus in his experiments. Finally, Walter articulated clearly that it is the
memory, the knowledge attained, that is as much if not more important to
forming a percept as is the stimulating event itself. Sokolov’s “leaving out” an
expected stimulus is a simple demonstration of this finding.
Our situation is like that of the Copernicans—we grope for ways to
articulate the results of our experiments, explanations that at one level we
already “know.” With respect to the results of the experiments on familiarization,
and their relevance to emotion, these insights into what constitutes a so-called
“representation” turned out to be even more significant than they were with
regard to perception.
At Princeton, Sokolov had read his paper in somewhat of a monotone but in
reasonably good English. By contrast, Luria had made his presentation—on the
scan paths that eye movements describe when looking at a picture—with a
flourish. Sokolov seemed unimpressed, twirling his eye glasses as Luria talked. I
assumed that Sokolov had heard Luria many times, had “habituated” and was
therefore bored. Or was he really just monitoring Luria for the KGB? We were,
after all, at the height of the Cold War.
Arriving at Stanford after several stops at universities on the way, Luria
appropriated my secretary and asked her to do a myriad of tasks for him.
Sokolov, in his black suit and with his balding head, seemed extremely impatient,
confirming my suspicion that he might be KGB. Suddenly, Luria said he was
tired and needed a nap. As we were showing him to his room, Sokolov
whispered to me, “I must see you. It is urgent.” Of course, I understood: he must
be KGB and needed to convey to me some restrictions or warnings about Luria’s
rather notorious behavior. (Unbridled in his everyday transactions, he had been
“rescued” several times by his wife, a chemist in good standing in the Soviet
Union.)
Once Luria was tucked away for his nap, Sokolov heaved a sigh of relief
and exclaimed, “That man is driving me mad. We have stopped exclusively at
camera stores between Princeton and here. I need a shirt. I saw a shopping mall
as we arrived. Could you please, please take me there?” So much for the KGB!
Our own State Department, which had vetted and cleared the Luria-Sokolov
trip to the U.S. was a bit more serious about having me monitor them while
under my custody. I was not to take Luria more than 50 miles beyond Stanford;
not to Berkeley and not even to San Francisco! “Et cetera, et cetera, et cetera,” as
the King of Siam said in The King and I.
Government monitoring aside, we managed to have a wonderful week.
Between planning experiments, we went on several (short) trips. I took Luria and
Sokolov to Monterey to see Cannery Row; both of them had read Steinbeck. We
went to a lovely little café with a garden in back, situated on a slight slope.
Luria, who was facing the garden, got up and in his usual brisk manner dashed
off to see it, failing to notice the outline he had left in the sliding glass door he’d
crashed through! His impact was so sudden that only where he hit was the glass
shattered and scattered: the rest of the door was intact. I settled with the café
owner—for $30. Luria was surprised that anything had happened: there was not
a mark on him.
Despite government “injunctions,” we did go to San Francisco: Sokolov
had read all the Damon Runyon and Dashiell Hammett stories and had to see all
the sites he remembered—that is, between Luria’s camera store stops. We had an
appointment for Luria and Sokolov to speak at Berkeley and were a bit late for
their talks—but all went smoothly. I never heard from either the State
Department or the KGB about our various visits.

Orienting and Familiarization


Muriel Bagshaw and I set out to use Sokolov’s indices of the orienting
reaction in monkeys. Another student and I had first confirmed Sokolov’s
conclusions on humans. In conducting these tests, however, we had found that
the placement of the heart and blood pressure monitor was critical—somewhat
different results were obtained with even slightly different placements of the
monitor. There is much art in experimentation—that is why I try to see for
myself an experimental procedure in someone else’s laboratory, or to repeat the
experiments in my own laboratory, if the results of an experiment are critical to
my thinking.
Over the next two decades, Bagshaw and I explored the effects of removal
of the amygdala in monkeys on the orienting reaction and its subsequent
habituation. In the process, we also found that classical Pavlovian conditioning
is disrupted by removal of the amygdala. Our results showed that in the
conditions of our experiments, both habituation and conditioning are dependent
on activating the viscera (hollow organs, such as the gut, heart and lungs) of the
body and the autonomic nervous system that controls those viscera. No gut-
involvement, no habituation; that is, no familiarization.
63. Afferent and Efferent Connections of the Amygdala

Ordinarily, a novel (or reinforcing) event produces a visceral reaction
largely by way of the autonomic nervous system: a galvanic skin response due to
a slight increase in sweating; a brief increase in heart rate; and a change in
respiratory rate. These responses “habituate,” indicating that the stimulus has
become familiar: with repetition of the stimulus, the responses attain a low
amplitude of waxing and waning. What surprised us is that after amygdalectomy
the responses to novel stimuli failed to habituate; that is, the novel failed to become familiar.
No representation, no “neuronal model” had been formed.
These new observations fitted well with those I had obtained at Yale in the
late 1940s, when I showed that electrical stimulation of the amygdala and the
related limbic motor cortex results in changes in our heart and breathing rate,
blood pressure and in gut contractions. Since then, James McGaugh of the
University of California at Irvine has shown that there is a direct influence of
amygdala stimulation on the secretion of adrenaline by the adrenal gland.
But just as in the case of the classical motor cortex, the behavioral function
of the limbic motor cortex is not to regulate the visceral/autonomic functions per
se, but rather to develop attractors; that is, to process targets. In the case of the
classical motor cortex, these targets serve as attractors that organize an
achievement in the world we navigate, an intention-in-action. In the case of the
limbic motor cortex, the targets serve to organize an appraisal, an assessment of
the relevance to the organism of a novel input.
Such assessments of relevance are usually explored under the term
“attention.” Actually, the experimental analysis of the effects of removals of the
amygdala (or their stimulation) resulted, for the most part, in changes in
attention, changes that had then to be related to the processing of emotion. The
next sections of this chapter summarize and reorganize, in terms of what we
finally achieved, what seemed to us to be a very long experimental route we had
to traverse between attention and emotion.

Something New
In 1975, Diane McGuinness, then a graduate student at the University of
London, and I published a review of decades of experiments, performed around
the world, that aimed to show differences in bodily responses to various
emotions. McGuinness and I noted, in our review, that all the physiological
measures that scientists had used to characterize differences in emotional
responses in a subject ended in discovering differences in the subject’s manner
of attending. Donald Lindsley and Horace (Tid) Magoun, working at the
University of California at Los Angeles (UCLA), had proposed an “activation
theory of emotion” based on differing amounts of “attention” being mobilized in
a situation. Thus, the results of the experiments McGuinness and I had reviewed,
including those that Bagshaw and I had performed, were in some respects
relevant to the processing of emotions. But exactly how was, before our review,
far from clear.
McGuinness and I recalled that William James had divided attention into
primary and secondary. This division was later characterized as assessing “what
is it?” (primary) vs. assessing “what to do?” (secondary). Because assessing
“what is it?” interrupts our ongoing behavior, we described it as a “stop”
process. Because assessing “what to do” initiates a course of action or inaction
we may take, we described it as a “go” process. The experiments in my
laboratory had demonstrated that the amygdala and related limbic brain systems
deal with “stop,” while the upper basal ganglia deal with “go” processes.
James’s division of attentional processes into primary and secondary
paralleled his characterization of emotions as stopping at the skin, while
motivations deal with getting into practical relations with the environment. One
popular idea derived from this distinction is that some people respond to
frustration more viscerally and others more muscularly: this difference is called
“anger-in” and “anger-out.” In our culture, women are more prone to respond
with anger-in while men are more prone to anger-out.
These divisions also parallel those made by Freud: primary processes fail to
include reality testing (which depends on a double loop of attentional
processing); reality testing defines secondary processes during which we seek to
get into practical relations with the world we navigate. It is important to note
that, for both James and Freud, assessing “what to do?” as well as assessing
“what is it?” deal with attention. Thus, assessing “what to do?”, motivation, is a
disposition, an attitude, not an action. Interestingly, both in French and German,
“behavior” is called comportement and Verhalten—how one holds oneself. In
this sense, attending to “what is it?” also becomes an attitude.
I received an award from the Society of Biological Psychiatry for showing
that the upper basal ganglia (caudate and putamen) organize and assess the
sensory input involved in “what to do?”—that these basal ganglia control more
than the skeletal muscle systems of the body, as is ordinarily believed. The
award was given for my demonstration that electrical stimulations of the basal
ganglia alter receptive field properties in the visual system in both the thalamus
and the cortex. In discussing my presentation of these results, Fred Mettler,
professor of anatomy and neurology at Columbia University, had this to say at a
conference there, during which he gave the introductory and final papers:

I don’t think we should be deluded by the rather casual way that
Karl presented this material. This is an elegant display of experimental
techniques in an area which is extremely difficult to handle. I
congratulate you, Karl. What he has shown you throws into relief the
importance of two systems concerning which we have so far had only a
few hints during the presentations of the two days. He has opened the
way to a new symposium, dealing with the interconnections of the
[basal ganglia] with the cortex on the one hand, and the [basal
ganglia] with the thalamus, on the other. . . . The difference he has
shown you . . . [is] what we may call the associational handling of
sensory input. Without the [basal ganglia] the animal is quite unable
to relate itself to its environment at a satisfactory level of self-
maintenance. Without its cortex it is unable to relate itself accurately
to its environment but it can still do it. The cat, maligned feline though
it may be, is able to get along reasonably well without much cortex but
if you add a sizable [basal ganglia] deficit to this, the animal looks at
you with vacuous eyes and, in uncomprehending manner, will walk out
of a third floor window with complete unconcern.

Though these experiments were performed in the 1960s and 1970s, they
have not as yet triggered an “orienting reaction” in the habitués of established
neuroscience.

Emotions as Hang-ups
Thus, the Four Fs are expressions of our basic dispositions to eat, to fight,
to flee, to engage in sex. The experiments I’ve just described have shown that
the behavioral expressions of these dispositions are rooted in the basal ganglia.
The upper basal ganglia are involved in starting and maintaining these
behaviors, while the limbic basal ganglia, especially the amygdala in its relation
to the hypothalamic region, regulate satiety: stopping eating and drinking, and
placing social and territorial constraints on fighting and fleeing, and on sex. This
raises a key question: What has “stopping” to do with emotion?
The answer to this question came from electrical stimulations of the medial
hypothalamic region and of the amygdala in rats, cats, monkeys and humans.
When the stimulation of these brain structures is mild, the animal or human
becomes briefly alert. In humans, drowsiness or boredom are briefly interrupted
—only to be quickly reinstated. Moderate stimulation results in greater arousal,
an “opening up,” with evidence of interest in the animal’s surroundings. (In
humans such stimulation results in a smile and even in flirting.) Still stronger
stimulation results in withdrawal and petulance. Strong stimulation
produces panic in humans and what has been called “sham rage” in animals: the
subject of the stimulation lashes out at whatever target is available. Alerting,
interest, approach, withdrawal, panic and emotional outburst are on a
continuum that depends on the intensity of the stimulation!
These results led me to talk about emotions as “hang-ups” because alerting,
interest, withdrawal, panic and outbursts do not immediately involve practical
relations with an animal’s or a person’s environment. The coping with pain and
pleasure is performed “within the skin.” As noted, William James defined
emotion as stopping at the skin. When expressed, emotions don’t get into
practical relations with anything or anyone unless there is another person to
“read” and to become influenced by the expression.
When we are no longer “hung up,” we again become disposed to getting
into practical relations with our environment; that is, we are motivated and our
upper basal ganglia become involved.

Conditioned Avoidance
Popular as well as scientific articles have proclaimed that the amygdala and
hippocampus are “the centers in the brain for fear.” Such pronouncements make
some of us who are teaching feel exasperated. Of course there is a descriptive
base for the popular assertions: Some scientists are interpreting the function of a
particular brain system from the results of one experimental test or a group of
related tests. When they find evidence, such as a freezing posture and increase of
defecation in rats, they assume that the rat is “afraid.” Working with primates
makes such assumptions more difficult to come by.
My monkeys, and even dogs, figured out what my experiments that aimed
to test for “fear” were all about. In such experiments, a signal such as turning on
a light indicated to the animal that it needs to jump out of its cage into an
adjacent one to avoid a mildly unpleasant electrical current that would be applied
to the bottom of the cage 4 seconds hence. Animals learn such a conditioned
avoidance quickly. Monkeys showed me that they had become conditioned by
jumping—and then, by repeatedly reaching over and touching the electrified cage
they had just jumped out of, they showed me that they knew what was going on. Knowing and
emotionally fearing are not the same.
The dogs came to me with a Harvard graduate student who was to study the
effects of removal of the amygdala in a “traumatic avoidance” experiment. In
this experiment the dogs were to learn to avoid a noxious degree of electric
current that was turned on some seconds after a gate dividing two compartments
was lifted. In an initial experiment I lifted the gate without turning on any
electric current—and the dogs jumped immediately to the other compartment.
Was the “grass greener on the other side of the fence”? The student and I then
substituted a flashing light for the “fence” as a signal that the dogs were to jump
over a low barrier to reach the other compartment. The dogs learned to do this
substitute for a “traumatic” conditioning procedure in fewer than ten trials. (Of
course, as was my policy for the laboratory, I never turned on any traumatic
electric current whatever, at any time.) Dogs like to please and they quickly
figured out what we wanted. In these experiments, both monkeys and dogs
appeared pleased to have a chance to play and seemed cognitively challenged
rather than afraid.
A Story
How to choose the course of the “what is it?” and “what to do?” is
beautifully illustrated by a story Robert Sternberg loves to tell, based on his
dedication to Yale University, where he teaches and does research.
Two students, one from Yale and the other from Harvard, are hiking alone
in the woods in Yellowstone Park when they come upon a bear. Both students
are interested in bears, but have also heard that bears can be dangerous. The
Harvard student panics. What to do? Climb a tree? Bears can climb trees. Run?
Could the bear outrun us? Hide? The bear can see us. Then he noticed that his
friend, the Yalie, had taken out his running shoes from his backpack, had sat
down on a fallen log, and was lacing them up. “Why are you doing that?” the
Harvard student asked. “Don’t you realize that you’ll never outrun that bear
whether you have track shoes on or not?” “I don’t have to outrun him,” replied
the ‘cortical’ Yalie. “I only have to outrun you.”
The bear was a novel experience for the students. Their visceral and
autonomic systems most likely responded to the novelty. The intensity of the
response depended on what they knew of bears in that habitat. The feelings of
interest initiated by the sight of the bear were due not only to the visceral
responses that were elicited; quickly the students’ “what to do?” processes
became involved. Further, at least one of the students entertains a “cortical”
exercise, an episode of morbid humor, an attitude—or so the story goes. I can see
them laughing, to the puzzlement of the bear, who wanders away from the scene.

What Comes First


William James had suggested that the emotional components of attention
follow our awareness of “what is it?” However, recent anatomical and
physiological evidence has demonstrated that some sensory input (visual, tactile
and auditory) actually reaches our amygdala before it reaches our brain’s cortex.
When this input is sensed, signals are sent, via the autonomic part of the brain’s
motor systems, to the viscera and adrenal gland. Thus visceral and endocrine
activity is inaugurated on a “fast track” before cortical processing (the
perception of the event) occurs.
Experiments have repeatedly shown that an event can have an effect when
we do not recognize the content of the experience. The experiments consist of
showing one hundred pictures to a person and then briefly showing five hundred
pictures of which one hundred are the ones previously shown. People have no
difficulty in identifying the one hundred as having been previously experienced
—although they can’t remember the content of these pictures. To experience a
specific emotion, such as fear, we must be afraid of something, but the
previously shown pictures evoked no recognition of their content and therefore only a feeling of
familiarity.
My colleagues and I had shown, in the experiments described above, that
removal of the amygdala did not abolish the visceral and endocrine responses
when the animals were exposed to a novel pattern. The amygdala were only a
part of the limbic motor cortex that could convey the novel input. But much to
our surprise, these responses persisted as the pattern was repeated: the responses
did not habituate.
Habituation, familiarization, was dependent on the presence of an intact
amygdala able to process the visceral and endocrine responses. The amygdala
were involved in processing the effects of body responses so that the novel
pattern became familiar, a part of the subject’s memory, a “neuronal model” of
the experience.
In the absence of habituation, animals are physiologically aroused by
anything and everything as if it were “novel”—the response activating visceral
and endocrine reactions. Novelty, the unfamiliar, triggers a visceral/endocrine
response. Triggering of visceral and endocrine responses by novelty brings
together the activation and visceral theories of emotion but in a most unexpected
manner. The rest of this chapter describes this unexpected relationship between
visceral activation and emotion.

What Is Novelty?
There is a great deal of confusion regarding the perception of novelty. In
scientific circles, as well as in art and literature, much of this confusion stems
from confounding novelty with something that scientists and engineers call
“information.” During a communication, we want to be sure of what is being
communicated. I often don’t quite get what another person is saying, especially
on a telephone, so I ask that the message be repeated, which it is, perhaps in a
slightly different form. My uncertainty is then reduced. In 1949, C. E. Shannon
and W. Weaver, working at the Bell Telephone Laboratories, were able to
measure the amount of reduction of “unsureness,” the amount by which uncertainty
is reduced during a communication. This measure of the reduction in uncertainty
became known as a measure of information. (In 1954, Gabor related this
measure to his wavelet measure, the quantum of information, which describes
the minimum uncertainty that can be attained during a communication.)
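
In the standard textbook notation (mine here, not a formula that appears anywhere in this story): if a source can emit alternatives with probabilities p_i, the uncertainty of the source is its entropy,

H = -\sum_i p_i \log_2 p_i  \quad (\text{measured in bits}),

and the information conveyed by a communication is the amount by which that uncertainty is reduced,

I = H_{\text{before}} - H_{\text{after}} \ge 0 .

A message that merely repeats what we already know leaves H unchanged and so, in Shannon's sense, carries no information.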
The experiencing of novelty is something rather different. During the early
1970s, G. Smets, at the University of Leuven, performed a definitive experiment
that demonstrated the difference between information (in Shannon’s sense) and
novelty. Smets presented human subjects with a panel upon which he flashed
displays of a variety of characters equated for difficulty in being told apart. The
displays differed either in the number of items or in the arrangement of the
items. The subjects had merely to note when a change in the display occurred.
Smets measured the visceral responses of the human subjects much as we had
done in our monkey studies. Changing the number of items in the display—
demanding a change in the amount of information to be processed, in Shannon’s
terms— produced hardly any change in visceral responses. By contrast,
changing the arrangement of the items into novel configurations (without
changing the number or the items themselves) evoked pronounced visceral
reactions. Novelty is sensed as a change in pattern, a change in configuration, a
change from the familiar.
In presenting the results of Smets’s experiments this past year, one of my
students at Georgetown complained that she understood that novelty is different
from information: But how? Her classmates agreed that they would like to know
specifically what characterizes “novelty.” I stewed on the problem for three
weeks and on the morning of class, as I was waking up, the answer became
obvious:
Novelty actually increases the amount of uncertainty. This increase is the
opposite of the reduction of uncertainty that characterizes Shannon information.
Novelty, not information, is the food of the novelist as indicated in the quotation
that introduces this chapter.
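
One way to state the contrast formally (my gloss on Smets's result, not a formula from his paper): relative to the stored model of the familiar, the surprisal of an event e is

-\log_2 p(e \mid \text{familiar model}) .

A repeated, familiar event has high probability under the model and near-zero surprisal; a novel configuration has low probability under the model, so its occurrence raises, rather than reduces, our uncertainty about what will come next, and the model itself must be revised.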
Normally, we quickly habituate to a novel occurrence when it is repeated.
We form a representation that we use as a basis for responding or not
responding, and how we respond. This representation of what has become
familiar constitutes an episode, which is punctuated by a “what is it?”, an
orienting reaction to an event that initiates another episode until another
orienting reaction stops that episode and begins a novel one. We can then weave
these episodes, together with other episodes, into a story, a narrative. Unless a
particular episode becomes a part of
our “personal narrative,” we fail to remember it.

Constraints
How then are episodes formed from the stream of our experience? The
basic finding—that our stream of consciousness, the amount of attention we can
“pay,” is constrained—comes from knowing that the “what is it?” stops ongoing
experience and behavior. Monkeys with their amygdala removed did not stop
eating or drinking or fleeing or fighting or mounting in circumstances where and
when the normal animals do. After I removed their amygdala, the boundaries,
the where and when, that define a sequence of experience or behavior before a
change occurs, were gone. In the normal animal, as well as in us, those
boundaries define an experienced episode.
I removed the amygdala in young puppies, and we got the same result I had
obtained in monkeys. The puppies who had their amygdala removed did not rush
to eat, as did their intact control littermates, but once they began eating they
went on and on, long after their littermates had become satiated and had left the
food tray. When we tested monkeys who had had their amygdalas removed to
see whether they ate more because they were “hungrier” than their unoperated
controls (or their preoperative selves), we found that the monkeys who had had
their amygdalas removed were actually less eager to eat at any moment (as
measured by the amount eaten over a given time period or by their immediate
reaction to the presentation of larger food portions), but once they began to eat,
they would persist for much longer before quitting the eatery.
When we are in the grip of an intense feeling, such as love, anger,
achievement or depression, it feels as if it will never end, and it pervades all that
we are experiencing. But after a while, feelings change; a stop process has set a
bound, a constraint, on the feeling. Satiety constrains appetites; territoriality
constrains sexuality; cognition constrains fleeing or fighting.
A constraint, however, does not necessarily mean simply a diminution of
intensity. Constraints (Latin: “with-tightening”) may, under specifiable
circumstances, augment and/or prolong the intensity of a feeling. It’s like
squeezing a tube of toothpaste: the greater the squeeze the more toothpaste
comes out. Obsessive-compulsive experiences and behaviors, for instance,
provide examples of constraints that increase intensity.
Constraints are provided for us by the operation of a hierarchical set of our neural systems, in which the amygdala and the other basal ganglia occupy an intermediary position. Beneath the basal ganglia lie the brain stem structures that
enable appetitive and consummatory processes and those that lead to self-
stimulation: the homeorhetic turning on and off of pleasures and pains. At the
top of the hierarchy of brain structures that enable feelings and expressions of
emotion and motivation is the cortex of the frontal lobes.
Labeling our feelings as fear, anger, hunger, thirst, love or lust depends on
our ability to identify and appraise this environmental “what” or “who,” and such
identification depends on secondarily involving some of our other brain systems,
which include the brain cortex.
The chapters hereafter deal with the importance of the cortex in enabling
the fine grain of emotional and motivational feelings and expressions that permit
us to weave such episodes into a complex, discerning narrative.

The Middle Basal Ganglion


I have said nothing about the middle basal ganglion, the globus pallidus,
because little is known. Clinically, malfunctions of the middle basal ganglion
have been held responsible for Tourette’s syndrome, the occurrence of tics that
are sudden outbursts of uncontrollable movements or speech acts, often
exclamations of obscenities. Fortunately, these embarrassing intrusions become
less frequent as the person grows older. On the basis of this clinical evidence,
one can venture the conjecture that the middle basal ganglion when functioning
normally mediates, at a primitive level, the relation between an emotional hang-
up and an attempt at a motivated “getting into practical relation with the
environment.”
Such mediation is provided at a higher level, that is, with more cortical
participation, by hippocampal processing. The experiments in my laboratory as
well as those of others, described in Chapter 15, show that the activity of the
hippocampus accomplishes efficient mediation between emotions and
motivations by processing the episode, the context, the familiar within which
attended behavior is occurring.

Epilepsy
Before the advent of EEGs, the diagnosis of temporal lobe epilepsy was
difficult to make. I had a patient who had repeatedly been violent—indeed, he
was wanted by the police. He turned himself in to the hospital and notified the
police of his whereabouts. I debated with myself as to whether he did indeed
have temporal lobe epilepsy or whether he was using the hospital as a way to
diminish the possible severity of his punishment. The answer came late that
night. The patient died in status epilepticus during a major seizure.
Fortunately, most patients with temporal lobe epilepsy do not become
violent. On another occasion, decades later, I was teaching at Napa State
Hospital in California every Friday afternoon. After my lecture, one of the
students accompanied me to my car, and I wished her a happy weekend. She was
sure to have one, she said: she was looking forward to a party the students and
staff were having that evening. The following Friday, after my class, this young
lady and several others were again accompanying me to my car. I asked her how
she had enjoyed the prior weekend’s party. She answered that unfortunately
she’d missed it—she had had a headache and must have fallen asleep. The other
girls immediately contradicted her: they said she’d been at the party all evening
—had seemed a bit out of it, possibly the result of too much to drink. Such
lapses in memory are characteristic of temporal lobe epilepsy. The young lady
was diagnosed and medicated for her problem, which eliminated the seizures.
An oft-quoted set of experiments performed in the early 1960s by S.
Schachter and J. E. Singer at Columbia University showed that experienced
feelings can be divided into two separate categories: the feelings themselves and
the labels we place on those feelings. These experiments were done with
students who were given an exam. In one situation, many of the students (who
had been planted by the experimenter) said how easy the questions were, turned
in their exams early and also pointed out that this was an experiment and would
not count on the students’ records.
Another group of students was placed in the same situation, but these
planted students moaned and groaned, one tore up the test and stomped out of
the room, another feigned tears. Monitors said that the exam was given because
the university had found out it had admitted too many students and needed to
weed some out.
When asked after the exam how each group felt about it, and what their
attitude was toward the experimenters, the expected answers were forthcoming:
the “happy” group thought the exercise was a breeze; the “unhappy” group felt it
was a travesty to be exposed to such a situation.
As part of the experiment, some of the students had been injected with
saline solution and other students with adrenaline (as noted, the adrenal gland is
controlled by the amygdala). Analysis of the results showed that the adrenaline-
injected students had markedly more intense reactions—both pleasant and
“painful”—than did the saline group.
The label we give to a feeling that has been engendered is a function of the
situation in which it is engendered; its intensity is a function of our body’s
physiology, in the case of this experiment, a function of chemistry, the injection
of adrenaline that influenced the amygdala/adrenal system.

Understanding What I Found


The results of the experiments described in this chapter show that the “what
is it?” response is just that. It is a capturing of attention, a stopping of an episode
of ongoing processes, an alerting. Stimuli that stop ongoing behavior produce an
increase in uncertainty, thus arousal, interest, approach, avoidance and panic.
The “what is it?” reaction occurs when an experienced event is novel. Repetition
of the event leads to familiarization. Familiarization establishes a representation
of the experienced event.
Familiarization fails to occur in the absence of visceral responses. Visceral
and endocrine responses are necessary to the formation of a representation of the
event, a memory. Emotional feelings result from the activation of a memory, not
directly from a visceral input, as suggested in earlier theories. We do not
experience an emotion when we have a stomach-ache unless it conjures up
memories of possible unpleasant visits to doctors or even surgery. Visceral and
endocrine activation follows a mismatch between what is familiar and the novel
input. If the mismatch to what is familiar to us is great, we will experience the
mismatch as disturbing, even as panic. These reactions occur along a single
dimension of intensity: electrical stimulation of the amygdala in humans and
animals produces alerting, interest, engagement, retreat and panic. Emotions
result from the processing of familiarity. Visceral reactions are essential in
gaining familiarity, a form of memory.
Visceral/endocrine inputs, ordinarily called “drive stimuli,” play a central
role in organizing memory, the retrospective aspects of emotions and
motivations. However, the impact of these “drive” stimuli on the formation of
the memory is shared by attention to the environmental context, Freud’s
caretaking person, within which the emotional and motivational feelings are
occurring.
Only after completing several drafts of this chapter did it occur to me that the
results of my experiments have shown that emotions serve as attractors;
emotions are the prospective aspect of a hedonic (pleasure/pain) memory
process just as motives are in Freud’s memory/motive theory. The action-based
memory/motive process involves the upper basal ganglia (the caudate and the
putamen). Freud did not know about the functions of the limbic basal ganglia—
in fact, neither did I, nor did my teachers when I was in medical school and
during my residencies. The familiarization/emotional process parallels the
readiness/motivational process and involves the limbic basal ganglia (the
amygdala and accumbens). Emotion is the attractor—the prospective,
attentional, intentional, or thoughtful aspect of processing familiarity. Typically,
as the saying goes when plucking the petals of a daisy: we attempt to assess
whether he loves me; he loves me not—or will she; won’t she. It is the conscious
or unconscious emotional possible prospect (maybe next time it’ll work) that
accounts for “hang-ups” and for repeatedly returning into less than satisfactory
relationships.

In Summary
1. Motivations get us into practical relations with our environment. They are
the proactive aspect of an action-based memory processes mediated by the
upper basal ganglia: the caudate and the putamen.
2. Emotions stop at the skin. Emotions are hang-ups. They stop ongoing
behavior. Emotions are based on attending to novelty which increases our
uncertainty. Thus, we alert to an unexpected occurrence. We may become
interested in such an occurrence and become “stuck” with this interest.
More often, we become bored, habituated. Habituation, familiarization
occurs as the result of a change in hedonic memory processes initiated in
the limbic basal ganglia. Familiarization depends on visceral activation.
Emotions include the prospective aspects of this change in memory.
3. Electrical stimulation of the amygdala of animals and people has produced,
in ascending order of the amount of stimulation (applied randomly),
reactions that range from a) momentary to prolonged interest; b) to
approaching conspecifics as in sexual mounting or flirting; c) to retreating
from the environment; and d) to outbursts of panic which may lead to
attack, labeled “sham rage.”
4. In short, although processing the content of an episode (the familiar) is
highly specific, the response engendered varies along an intensive
dimension reaching from interest to panic and rage.
Chapter 14
Values

Wherein I chart the brain processes that organize the manner in which we make
choices.

Motives like to push us from the past; values try to draw us into the
future.
—George A. Miller, Psychology: The Science of Mental Life, 1962

The empirical relations that determine the value of a piece of currency depend, in part, on the utility of that piece of currency for that
particular individual. . . . Two interrelated classes of variables have
been abstracted by economists to determine utility: demand and
expectation; two similar classes have been delineated from the
experiments reported here—each of the classes related to a distinct
neural mechanism. [In addition] a still different neural mechanism has
been delineated whereby preferences among values can be
discriminated.
—Karl H. Pribram, “On the Neurology of Values,” Proceedings of the 15th
International Congress of Psychology, 1957
Preferences and Utilities
My claim in the previous chapter was that the amygdala and related systems
are involved in familiarization, a form of memory, the prospective aspects of
which form the basis of the intensities of our feelings and their expression. This
claim raises the issue as to how familiarization is based on assessing and
evaluating a situation in the world we navigate. An unexpected experimental
result that we obtained in my laboratory addressed this issue. The experiment
was a simple one, aiming to answer the question: Would a monkey who put
everything in his mouth (and ate what was chewable) choose the edible foods
before choosing the inedible objects when both were presented together on a
tray? Before surgery, the monkeys had chosen mainly the edible objects; each
monkey had shown a stable order in what had become familiar, that is, in what
he preferred. Much to my surprise, after removal of the amygdala, this order of
familiar preferences remained undisturbed—but the monkeys would now go on
to take non-food objects and try to chew on them until all the objects were gone
from the tray. Their “stop” process had become impaired.

64. Object testing board

In 1957, I presented the results of these and related experiments at the 15th International Congress of Psychology in Brussels. My text was set in
terms of economic theory—especially as developed by the thesis of “expected
utility” that had been proposed in 1953 by the mathematician John von Neumann and the economist Oskar Morgenstern. I had become interested in how we measure
processes such as changes in temperature, and von Neumann and Morgenstern
had an excellent exposition of the issue. Also in their book they talked about
zero-sum games, a topic which became popular during the Cold War with the
Soviet Union. It was during these literary explorations that my experimental results on preferences came in, and von Neumann and Morgenstern’s depiction of the composition of utilities caught my attention. Wolfgang Köhler, friend and
noted psychologist, came up to me after my initial presentation of these results
and said, “Karl, you’ve inaugurated a totally new era in experimental
psychology!”
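For readers who have not met the theory, the core of von Neumann and Morgenstern’s composition of utilities can be stated in modern textbook notation (the formula is theirs in spirit; the mapping to brain systems is the conjecture this chapter develops):

\[ U(L) \;=\; \sum_i p_i\, u(x_i) \]

The expected utility U of a risky prospect L combines an ordering of preferences over outcomes, u(x_i), with estimates of their likelihoods, p_i. As the sections that follow propose, the brain appears to parcel out this composition: an ordering of preferences, an estimation of likelihood, and an intensive dimension of desirability, each handled by a different system.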
I subsequently presented my findings at business schools and departments
of economics from Berkeley to Geneva. During the 1960s and 1970s at Stanford,
I often had more students from the Business School in my classes than from the
psychology and biology departments. But despite this enthusiastic reception
from the economic community, the inauguration of a new era in neuroeconomics would have to wait a half-century for fulfillment.

Desirability and Duration


In my presentations to psychology, biology and economics students, and in
publications thereafter, I suggested that the amygdala deals with the
“desirability” of what is being chosen, while the frontal cortex is involved in
“computing” the likelihood that the desired choice is available. Desirability is an
intensive dimension that determines how much effort we are willing to expend in
order to satisfy the desire. Desirability thus determines demand. Both my
experiments with monkeys and my observations of humans revealed that the
intensity of the desire for food, as measured by the amount of effort the person
or animal would expend, is drastically lowered by amygdalectomy. The animals
and patients ate more not because they were momentarily “hungrier” but because
they failed to stop eating: their stop (satiety) mechanism had become impaired.
In the critical experiments demonstrating brain involvement in estimating
the likelihood of achieving a desired outcome, I performed a “fixed interval
experiment,” so-called because it uses a “fixed interval schedule” for
presentation of a peanut to a monkey. I scheduled a waiting period of two
minutes between delivering each peanut to a place within the monkey’s reach.
During such a waiting period, animals and humans usually start out with very
few responses (in this case, pressing a lever). The number of responses produced
over a particular interval of time— that is, the rate of responding—increases
gradually during the two minutes until just before the expected time of delivery
of the peanut, when the number of responses reaches a maximum.
65. Effect of removal of frontal cortex

66. Hunger deprivation has no effect.

In running this experiment, I noticed that the signal on my desk that indicated when the peanut was being delivered was coming slightly earlier as
time went on. When I investigated, I saw my monkey’s hand reach out of the
hole in his cage through which the peanuts were delivered, just seconds before
the delivery period was up. The hand cleverly flicked the delivery apparatus
(which was modeled after a refreshment bar delivery machine) into a premature
delivery. The monkey timed his flicking so that the early delivery was hardly
noticeable! I wondered how long this ruse had been going on and how many of
my monkeys had been playing this game. Fortunately, the problem was easy to
fix—a narrow tunnel was inserted between the hole in the cage and the peanut
delivery machine.
The amount of studying done during a semester follows the same pattern,
which is called a “scallop”: at the beginning of each semester, students are apt to
take their course work rather lightly, but as the semester goes forward, so does
the amount of studying. In terms of economic theory, I interpreted the scallop to
indicate that the estimation of the probability of being rewarded—the worry that
we’ll miss the boat by not studying, or miss the moment of the delivery of the
peanut—“grows” exponentially over the interval.
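The shape of the scallop can be illustrated with a minimal simulation (the exponential form is the interpretation offered above; the parameter values are arbitrary, chosen only for illustration):

```python
import math

# Illustrative fixed-interval "scallop": the response rate grows
# exponentially across the waiting interval, peaking just before the
# expected delivery. All parameter values are arbitrary illustrations.
INTERVAL = 120.0  # seconds between peanut deliveries (two minutes)
GROWTH = 0.05     # assumed growth constant, per second

def response_rate(t, base=0.1):
    """Lever presses per second at t seconds after the last delivery."""
    return base * math.exp(GROWTH * t)

for t in range(0, int(INTERVAL) + 1, 20):
    print(f"t = {t:3d} s   rate = {response_rate(t):6.2f} presses/s")
```

Raising or lowering the base rate shifts the whole curve up or down, which is what varying the available food did in the experiment described next; removal of the frontal cortex, by contrast, abolished the exponential shape itself while leaving the overall rate intact.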
Lowering or raising the amount of food available to, and eaten by, a
monkey during the course of the experiment changes the rate at which he
responds—that is the number of responses he makes during the two-minute
interval between the delivery of each peanut—but does not change the
exponential shape of the scallop: another one of those unexpected results. After
removal of the frontal cortex, the rate of responding does not change, but the
scallop disappears. Two of my four operated monkeys timed the expected
delivery of the peanut so accurately that they needed only one response (the
others made fewer than five) to obtain a peanut; the rest of the time they waited
patiently or fiddled with the cage. This result demonstrated that, contrary to what
most neuroscientists believe, clock timing is not disturbed by removals of the
frontal cortex. Rather, it is a sense of duration: how long, and how structured an
experienced interval seems.

67. Peanut dispenser for monkey experiment

In a classical (Pavlovian) conditioning situation, timing was so accurate in these monkeys that if we changed the interval between the conditional (signal)
and unconditional (rewarding) stimulus, the animal failed to adjust to the new
situation. The monkeys’ estimation of duration had been fixed so narrowly that
they missed opportunities that were just beyond the point of their fixation. As in
the case of responding in the fixed-interval experiment, their behavior had
become fixated, as is shown in human obsessive-compulsive disorders. The
Greeks distinguished clock time, kronos, from a sense of duration, kairos. Our
brains also make this distinction.

Preferences
But the most surprising finding in this series of experiments was that the
monkeys who had their amygdala removed showed no disruption of the order in
which they chose the items presented to them. Their preferences were not
disrupted. At the time when I was running my experiments, experimental
psychologists were discovering a surprising fact: preferences were shown to
depend on perceived differences arranged hierarchically.
In another set of experiments, my colleagues and I were showing that the
removal of the temporal lobe cortex biased (led) the monkeys toward taking
risks, whereas removal of the hippocampal system biased monkeys toward
caution. Preferences were shown to be arranged hierarchically according to
perceived risk: I prefer a Honda to a Cadillac; I prefer meat to vegetables; and
vegetables to potatoes. Thus, I was able to relate my results on maintaining
preferences after amygdalectomy to the monkeys’ intact ability to perceive
differences. The amygdala are not involved with preferring meat to potatoes, as
my experiments showed.
By contrast, the intensive character of “desirability” is not arranged
hierarchically, as was shown by Duncan Luce of the University of California at
Irvine in his experiments on expectations. For instance, in his discussions with
me, he noted that desirability, and therefore demand, changes dramatically, not
according to some hierarchy, with changes in the weather (packing for a trip to a
location that has a different climate is terribly difficult). Desirability changes
dramatically with familiarity (what is exciting initially has become boring) and
with recent happenings as, for instance, when air travel became momentarily less
desirable after the attack on the World Trade Center in New York—airline earnings fell precipitously. It is practically impossible to rank (hierarchically)
desirability and to try to do so can sometimes get one in trouble: Never, never
attempt to charm your lover by ranking the desirability of his or her
characteristics against someone else’s. We either desire something or we don’t,
and that desire can change abruptly. Damage to the amygdala dramatically alters
a person’s or monkey’s desire for food or any of the other Four Fs, not the
ranking of preferences.
“Consilience”—What It Means
Recently a comprehensive review paper published in Science has heralded a
new approach to neuroscience: neuroeconomics. “Economics, psychology and
neuroscience are converging today into a single, general theory of human
behavior. This is the emerging field of neuroeconomics in which consilience, the
accordance of two or more inductions drawn from different groups of
phenomena, seems to be operating.” (Glimcher and Rustichini, 2004)
The review presents evidence that when paradoxical choices are made
(those choices that are not predicted by the objective probabilities, the risks, of attaining success), certain parts of the brain become selectively activated (as recorded with fMRI). These studies showed that preferences and risks do indeed involve the posterior cortex—a welcome substantiation of my earlier conjecture.
But the review, though mentioning von Neumann and Morgenstern’s theorem,
fails to distinguish clearly between preferences and utilities, the important
distinction that resulted directly and unexpectedly from my brain experiments.
The recent studies reviewed in Science confirmed many earlier ones that
had shown that our frontal cortex becomes activated when the situation in which
our choices are made is ambiguous. This finding helps explain the results,
described earlier, that I had obtained in my fixed-interval and conditioning
experiments: The frontal cortex allows us a flexibility that may permit us to “pick up an unexpected bargain” which would have been missed if our shopping (responding) had been inordinately “fixed” to what and where we
had set our expectations.
Another recent confirmation of the way brain research can support von
Neumann and Morgenstern’s utility theory came from psychologist Peter
Shizgal’s experiments using brain stimulation as an effective reward. He
summarized his findings in a paper entitled “On the Neural Computation of
Utility” in the book Well-Being: The Foundations of Hedonic Psychology. Shizgal discerns three brain systems: “Perceptual processing determines the
identity, location and physical properties of the goal object, [its order in a
hierarchical system of preferences]; whereas . . . evaluative processing is used to
assess what the goal is worth. A third processor . . . is concerned with when or
how often the goal object is available.”
The results of Shizgal’s experiments are in accord with the earlier ones
obtained in my laboratory and are another welcome confirmation. Contrary to
practices in many other disciplines, the scientist working at the interface between
brain, behavior and experience is almost always relieved to find that his or her
results hold up in other laboratories using other and newer techniques. Each
experiment is so laborious to carry out and takes so many years to accomplish
that one can never be sure of finding a “truth” that will hold up during his
lifetime.
These recent explorations in neuroeconomics owe much to the work of
psychologist Danny Kahneman of Princeton University whose extensive
analyses of how choices are made received a Nobel Prize in economics in 2002.
Kahneman set his explorations of how we make choices within the context of
economic theory, explicitly contributing to our understanding of how our
evaluations enable us to make choices, especially in ambiguous situations.

Valuation
The important distinction I have made between preference and utility is
highlighted by experiments performed by psychologists Douglas Lawrence and
Leon Festinger in the early 1960s at Stanford. In these experiments, animals
were taught to perform a task and then, once it had been learned, to continue to
perform it. For example, a male rat must learn to choose between two alleys of a
V-shaped maze. At the end of one of these alleys, which had been painted with
horizontal stripes, is a female rat in estrus. The alleys with the female at the end
are switched more or less randomly so that the placement of the female changes,
but the striped alley always leads to her. The rat quickly catches on; he has
developed a preference for the stripes because they always lead him to the
female he desires.
Once the rat knows where to find the female, does he stop? No. He
increases his speed of running the maze since he needs no more “information”
(in scientific jargon, reducing his uncertainty) in order to make his choice of
paths to reach her. But we can set a numerical “value” on running speed, on how
fast the male runs down the alley. This value measures the level, the setpoint, of
the male’s desire for the female.
Rewards (and deterrents) thus display two properties: they can provide
information, that is, reduce uncertainty for the organism, or they can bias
choices, placing a value on preferences.
The experiments performed at Stanford by Festinger and Lawrence showed
that the “laws” that govern our learning and those that govern our performance
are different and are often mirror images of one another. The more often, and the
more consistently, a reward is provided, the more rapid the learning. During
performance, such consistency in reward risks catastrophe: when the reward
ceases, so does the behavior, sometimes to the point that all behavior ceases and
the animal starves. In human communities, such a situation arises during
excommunication: a dependable source of gratification is suddenly withdrawn,
sometimes leading to severe and incurable depression.
A slower and more inconsistent schedule of reward results in slower
learning, but it assures that our behavior will endure during the performance
phase of our experience. This is due to the buildup of the expectation that the
current lack of reward may be temporary.

The Reversal of Means and Ends


C. A. Mace of the University of London pointed out that the “mirror
image” effects shown between a learning situation and the one that exists during
performance are examples of a more general means-ends reversal in patterns of
experience. Reversals occur between 1) the acquisition of a pattern of behavior
as, for instance, during learning, where our behavior is experienced as means to
an end, and 2) the reversal of the pattern during performance, where our
behavior becomes experienced as an end in itself. His example suggests that
such means-ends reversals are ordinarily engendered by affluence:

What happens when a man, or for that matter an animal, has no need to work for a living? . . . the simplest case is that of the
domesticated cat—a paradigm of affluent living more extreme than
that of the horse or cow. All basic needs of a domesticated cat are
provided for almost before they are expressed. It is protected against
danger and inclement weather. Its food is there before it is hungry or
thirsty. What then does it do? How does it pass its time?
We might expect that having taken its food in a perfunctory way it
would curl up on its cushion and sleep until faint internal stimulation
gave some information of the need for another perfunctory meal. But
no, it does not just sleep. It prowls the garden and woods killing young
birds and mice. It enjoys life in its own way. The fact that life can be
enjoyed, and is most enjoyed in the state of affluence, draws attention
to the dramatic change that occurs in the working of the organic
machinery at a certain stage. This is the reversal of the means-ends
relation. In the state of nature the cat must kill to live. In the state of
affluence it lives to kill. This happens to men. When men have no need
to work for a living there are broadly only two things left to them to do.
They can play and they can cultivate the arts.
In play the activity is often directed to attaining a pointless
objective in a difficult way, as when a golfer, using curious
instruments, guides a small ball into a not much larger hole from
remote distances and in the face of obstructions deliberately designed
to make the operation as difficult as may be. This involves the reversal
of the means-ends relation. The ‘end’—getting the ball into the hole—
is set up as a means to a new end, the real end: the enjoyment of
difficult activity for its own sake.

Addiction: A Tyranny of the Future


For years, learning theory was at center stage in the field of experimental
psychology. Performance had long been a neglected stepchild. In my 1971 book
Languages of the Brain, I called for a theory of performance and named it,
tongue in cheek, “addictionance” theory because the process of addiction
follows the very same rules as those that are in effect during performance.
Gambling hustlers and drug dealers know how to get someone addicted: the
person is first readily provided with the substance which then, inconsistently,
becomes harder and harder to obtain, either because opportunities become
scarcer or because the price of the substance rises. At the addiction center in
Louisville, Kentucky, an attempt was made to reverse this consequence by
allowing a steady supply of substance to be readily available, and then to cut
down the amount in regular, controlled stages rather than inconsistently. The
technique was successful provided no other, more fundamental problems, such
as incipient mental illness, had to be faced. For a while in England, addictive
drugs could be had on prescription. While these laws were in effect, the criminal
aspects that are produced by current law enforcement programs in the United
States did not occur. Earlier, in the 1930s, we had a similar experience when
Franklin Roosevelt ended prohibition of alcohol. The negative result of
enforcement seems to be a hard lesson for governments to learn; the
consequences of substance abuse overshadow a reasoned approach to it.
There are, of course, other factors that lead to addiction—factors such as
genetic predisposition and cultural setting. But we do have a demonstration
today of a situation in which addiction is avoided: when physicians prescribe
morphine for long-lasting pain, the patients are readily weaned when the source
of the pain is removed.

In Summary
The surprising experimental results I obtained in exploring the effects of amygdalectomy on expectation led me to interpret them in terms of economic theory. I had framed the results of the amygdala experiments on changes in emotion and attention as indicating that emotions were based on familiarity, a
memory process, and that the prospective aspects of familiarity controlled
attention, and thus led to expectations. In turn, expectations were shown to be
composed of three processes that were formed by three separate brain systems:
preference, organized hierarchically, formed by the posterior systems of the
brain; estimations of the probability of reward by the prefrontal systems of the
brain; and desirability, an intensive dimension, formed by the amygdala (and
related systems). This intensive dimension is non-hierarchical and ordinarily is
bounded—a stop process—by the context within which it occurs. The
composition of that context is the theme of the next chapter.
Chapter 15
Attitude

Wherein the several brain processes that help organize our feelings are brought
into focus.

“A wise teacher once quoted Aristotle: ‘The law is reason free from
passion.’ Well, no offense to Aristotle, but in my three years at Harvard
I have come to find that passion is a key ingredient to the study and
practice of law and life.

“It is with passion, courage of conviction, and strong sense of self that
we take our next steps into the world, remembering that first
impressions are not always correct. You must always have faith in
people, and most importantly, you must always have faith in yourself.”
— Elle’s commencement address in the movie Legally Blonde
Efficiency: Remembering What Isn’t
When I first began to remove the amygdala, I did so on one side at a time.
Later, I was able to remove both amygdala in one operation. While the initial
single-sided removal took from four to six hours, my most recent removals of
both amygdala took forty minutes—twenty minutes per side. I have often stood
in wonder when a skilled artisan such as a watchmaker, pianist or sculptor
performs a task with an ease and rapidity that is almost unbelievable.
My experience shows that efficiency in performance develops by leaving
out unnecessary movements. This chapter reviews the evidence that the Papez
Circuit, the hippocampal-cingulate system, has a great deal to do with processing
our experience and behavior efficiently, and how efficiency is accomplished.
One of the Stanford graduate students in my class referred to the
hippocampus as “the black hole of the neurosciences.” It is certainly the case
that more research is being done on the hippocampus than on any other part of
the brain (except perhaps the visual cortex) without any consensus emerging as
to what the function of the hippocampus actually is. Part of the reason for the
plethora of research is that hippocampal tissue lends itself readily to growth in a
Petri dish: such neural tissue cultures are much easier to work with, chemically
and electrically, than is tissue in situ. These studies clarify the function of the
tissue that composes the hippocampus but do not discern its role in the brain’s
physiology or in the organism’s behavior.
Another reason for the difficulty in defining the function of the
hippocampus is that there is a marked difference between the hippocampus of a
rat, the animal with which much of the research has been accomplished, and the
hippocampus of a primate. In the primate, the upper part of the hippocampus has
shriveled to a cord of tissue. The part of the hippocampus that is studied in rats is
adjacent to, and receives fibers from, the part of the isocortex that receives an
input from muscles and skin. Thus, removal of the hippocampus in rats changes
their way of navigating the shape of their space.
What remains of the hippocampus in primates is mainly the lower part. This
part is adjacent to and receives most of its input from, the visual and auditory
isocortex. Studies of the relationship of the hippocampus to behavior have
therefore focused on changes in primates’ patterns of behavior beyond those used
in navigating the shape of their space.
Just as in the case of research on the amygdala, almost all research on
damage to the hippocampus in primates, including humans, has devolved on the
finding of a very specific type of memory loss: the memory of what is happening
to the patient personally fails to become familiar. Again it is memory, and
familiarity, not the processing of information, or for that matter of emotion, that
is involved.
Fortunately, I have found a way to work through the welter of data that are
pouring into the “black hole” of hippocampal research: I can begin with the
results of research that my colleagues and I performed in my laboratory. The
most important of these was based on the observation that when we navigate our
world, what surrounds the target of our navigating is as important as the target
itself. When I steer my way through an opening in a wall (a door), I am not
aware of the wall—unless there is an earthquake. I am familiar with the way
rooms are constructed and, under ordinary circumstances, I need not heed, need
not attend the familiar. The example of the earthquake shows, however, that
when circumstances rearrange the walls, the “representation” of walls, the
memory, the familiar has been there all along. The experiments we performed in
my laboratory showed that the hippocampus is critically involved in “storing”
that familiar representation.

68. Afferents to Hippocampus


69. Efferents from Hippocampus

However, the processing of what is familiar is different for the hippocampal system than for the amygdala system. A series of experiments in my laboratory
demonstrated that while the amygdala is processing what is novel during
habituation, the hippocampus is processing the context within which habituation
is happening: the hippocampus is processing what is already familiar. In a
learning task, when a monkey is choosing between a “plus” and a “zero” and
choosing the “plus” is always rewarded irrespective of where it is placed, the
amygdala is processing choosing the “plus,” the hippocampus is processing not-choosing the “zero,” the non-rewarded cue. This was a surprising result for our
group until we realized that in order to walk through a door we must process the
walls so as not to bump into them. Our experience with familiar walls has
become, for the moment, ir-relevant, no longer “elevated” in attention. We no
longer are interested in walls unless an earthquake occurs, in which instance our
experience of the walls promptly dis-habituates and re-elevates into attention.
The memory of walls and of the zero cue does not disappear during habituation
and learning.
In another experiment, we recorded the electrical activity occurring in the
hippocampus of monkeys trained to perform a “go no-go” task: on every other
trial they were rewarded with a peanut for opening a box. On the intervening
trials they had to refrain from opening the box. As expected, the electrical
activity of the hippocampus was very different when recorded during the two
responses.
We next trained the monkeys on a “go-right go-left” task in which each trial
was rewarded: the monkeys had to choose between two boxes, one marked with
a plus sign and the other with a square. The box with the square always had a
peanut in it. The monkeys quickly learned the task and got themselves a peanut
on every trial. Our surprise came when we analyzed the recordings of the
electrical activity from the hippocampus: the electrical activity was identical to
that obtained when the monkeys had had to refrain from opening the box in the
previous task! When choosing between a plus and a square, the choice could be based on either a positive or a negative cue: “The square always signifies a peanut” or “The plus always signifies no peanut.”
Actually, we showed that in ordinary circumstances the monkeys attend, over the course of the experiment, to both the “always a peanut” and the “always no peanut” boxes when performing these tasks. But the brains of our monkeys sorted out which box was being attended: the amygdala responded to choosing “always a peanut”; the hippocampus to choosing “always no peanut.”
In several other experiments we showed that the hippocampus is important
in attending these “always no peanut” aspects of a task. For instance, the number
of such “always no peanut” boxes on any given trial is important to monkeys
whose hippocampus is intact, but not to monkeys whose hippocampus has been
removed.
A slightly different but related effect of hippocampal removal was shown in
a task in which the “always a peanut” was shifted from a panel with a plus sign
to a panel with the square sign. The monkeys who had had their hippocampus
removed learned that the shift had occurred as quickly as did the monkeys with
their hippocampus intact—but once they had attained a 50% level of obtaining a
peanut, they just continued at that level of performance. It seemed to me that
these monkeys acted as if they were satisfied with being rewarded half the time
without worrying or thinking. They were no longer motivated to do better. After
a few weeks at this level of performance, all of a sudden the monkeys “snapped
out of it” and went on to do as well as did the monkeys with their hippocampus
intact. The slopes of achieving 50% and 90% were normal. But during the long
period when they were at the 50% level, they seemed not to “want to exert the
extra effort” to do better.
My interpretation was that at the 50% level of reward the new build-up of “not-choosing” had not yet proceeded sufficiently to counter the “not-choosing” context carried over from the pre-reversal stage of the experiment, and so could not kick the monkey into a more productive motivational phase. Support for this interpretation came from
the finding that over progressive reversals the mid-plateau of 50% performance
became progressively shorter.
Ordinarily, when we walk through doors, we do so efficiently; we don’t
need to expend any effort scanning the walls for an exit. Come an earthquake,
and we immediately make that effort by “paying” attention in order to find the
safest possible place to shelter.
Experiments performed with normal rats have shown that, given several
