
Tuesday, April 5, 2011

Science Brain and Behavior continued 22


Yet, interestingly, as Figure 8-40 illustrates, D. F. could draw reasonable facsimiles of
objects from memory. But, when doing so, she did not recognize what she was drawing.
D. F. clearly had a lesion that interfered with her ventral-stream “what” pathway.
Despite her inability to identify objects or to estimate their size and orientation,
remarkably, D. F. still retained the capacity to appropriately shape her hand
when reaching out to grasp something. (This capacity is illustrated in Figure
8-1.) Goodale, Milner, and their research colleagues (1991) have studied
D. F. extensively for the past few years, and they have devised a way to
demonstrate D. F.’s skill at reaching for objects.
The middle column in Figure 8-41 shows the grasp patterns of a control
subject (S. H.) when she picks up something irregularly shaped. S. H.
grasps the object along one of two different axes that make it easiest to
pick it up. When D. F. is presented with the same task, shown in the left-hand
column, she is as good as S. H. at placing her index finger and thumb on
appropriately opposed “grasp” points.
Clearly, D. F. remains able to use the structural features of objects to
control her visually guided grasping movements, even though she is unable
to “perceive” these same features. This result demonstrates that we
are consciously aware of only a small part of the sensory processing that
goes on in the brain. Furthermore, D. F.’s ability to use structural features
of objects for guiding movement but not for perceiving shapes again
shows us that the brain has separate systems for these two types of visual
operations.
D. F.’s lesion is located quite far posteriorly in the ventral visual pathway.
Lesions that are located more anteriorly produce other types of
deficits, depending on the exact location. For example, J. I., whose case
has been described by Oliver Sacks and Robert Wasserman (1987), was
an artist who became color-blind owing to a cortical lesion presumed to
be in region V4. His principal symptom was achromatopsia, mentioned
earlier as an inability to distinguish any colors whatsoever. Yet J. I.’s vision
appeared otherwise unaffected.
Similarly, L. M., a woman described by Josef Zihl and his colleagues
(1983), lost her ability to detect movement after suffering a lesion presumed
to be in region V5. In her case, objects either vanished when they
moved or appeared frozen despite their movement. L. M. had particular
difficulty pouring tea into a cup, because the fluid appeared to be frozen
in midair. Yet she could read, write, and recognize objects, and she appeared
to have normal form vision—until objects moved.
These varied cases demonstrate that cortical injuries in the ventral stream all somehow interfere with the determination of "what" things are or are like. In each case, the symptoms are somewhat different, however, which is thought to be indicative of damage to different subregions or substreams of the ventral visual pathway.

Figure 8-40 Injury to the Ventral Stream: Examples of the inability of D. F. to recognize and copy line drawings. She was not able to recognize either of the two drawings on the left. Nor, as the middle column shows, was she able to make recognizable copies of those drawings. She was, however, able to draw reasonable renditions from memory. But, when she was later shown her drawings, she had no idea what they were. [Column labels: Model, Copy, Memory.] Adapted from The Visual Brain in Action (p. 127), by A. D. Milner and M. A. Goodale, 1995, Oxford: Oxford University Press.

Figure 8-41 Grasp Patterns: Representative "grasping" axes of three different shapes for patient D. F., who has visual-form agnosia, control subject S. H., and patient R. V., who suffered bilateral occipital-parietal damage. Each line passes through the points where the index finger and thumb first made contact with the perimeter of the shape on individual trials in which the subjects were instructed to pick up the shape. Although D. F. does not recognize the object, she and S. H. (the control subject with no brain damage) form their hands similarly to grasp it; R. V. recognizes the object but cannot shape her hand appropriately to pick it up. Conclusion: the brain has different systems for visual object recognition and visual guidance of movement. Adapted from The Visual Brain in Action (p. 132), by A. D. Milner and M. A. Goodale, 1995, Oxford: Oxford University Press.
Injury to the “How” Pathway
In 1909, R. Balint described a rather peculiar set of visual symptoms associated with a
bilateral parietal lesion. The patient had full visual fields and could recognize, use, and
name objects, pictures, and colors normally. But he had a severe deficit in visually
guided reaching, even though he could still make accurate movements directed
toward his own body (presumably guided by tactile or proprioceptive feedback
from his joints). Balint called this syndrome optic ataxia.
Since Balint’s time, many descriptions of optic ataxia associated with parietal
injury have been recorded. Goodale has studied several such patients, one of
whom is a woman identified as R. V. In contrast with patient D. F.’s visual-form
agnosia, R. V. had normal perception of drawings and objects, but she could not
guide her hand to reach for objects.
The right-hand column in Figure 8-41 shows that, when she was asked to
pick up the same irregularly shaped objects that D. F. could grasp normally, R. V.
often failed to place her fingers on the appropriate grasp points, even though she
could distinguish the objects easily. In other words, although R. V.'s perception of
the features of an object was normal for the task of describing that object, her
perception was not normal for the task of visually guiding her hand to reach for
the object.
To summarize, people with damage to the parietal cortex in the dorsal visual
stream can “see” perfectly well, yet they cannot accurately guide their movements
on the basis of visual information. The guidance of movement is the
function of the dorsal stream. In contrast, people with damage to the ventral
stream cannot “see” objects, because the perception of objects is a ventral-stream
function. Yet these same people can guide their movements to objects on the basis of
visual information.
The first kind of patient, like R. V., has an intact ventral stream that analyzes the
visual characteristics of objects. The second kind of patient, like D. F., has an intact dorsal
stream that visually directs movements. By comparing the two types of cases, we
can infer the visual functions of the dorsal and ventral streams.
In Review

As Figure 8-42 shows, our visual experience is largely a result of visual processing in the ventral stream, but much of our visually guided behavior is a result of activity in the dorsal stream. An important lesson here is that we are conscious of only a small amount of what the brain actually does, even though we usually have the impression of being in control of all our thoughts and behaviors. Apparently, this impression of "free will" is partly an illusion.

Figure 8-42 Summary of the What and How Visual Streams: The dorsal stream, which takes part in visual action, begins in V1 and flows through V5 and V3A to the posterior parietal visual areas. Its role is to guide movements such as the hand postures for grasping a mug or pen. The ventral stream, which takes part in object recognition, begins in V1 and flows through V2 to V3 and V4 to the temporal visual areas. Its job is to identify objects in our visual world. The double-headed arrows show that information flows back and forth between the dorsal and ventral streams, between recognition and action. [Diagram labels: recognition (ventral stream): V1, V2, V3 (dynamic form) and V4 (color form), temporal visual areas; action (dorsal stream): V1, V2, V5 (motion) and V3A (form), parietal visual areas.]

Optic ataxia. Deficit in the visual control of reaching and other movements.

SUMMARY

How does the nervous system interpret sensory stimuli such as light waves to construct our perceptions—for example, vision? D. B.'s surprising ability to locate lights shining in
the blind side of his visual field makes clear that our sensory world is not unitary,
despite what our conscious experience suggests. To understand the nature of sensation,
and of vision in particular, we can dissect the visual system and examine how the parts
work together to produce vision. Vision is but one of a half dozen senses that allow us
to act in the world that we perceive.
How does the brain transform light energy into neural energy? Like all sensory systems,
vision begins with receptor cells. These photoreceptors transduce the physical energy,
such as light waves, into neural activity. The visual receptors (rods and cones) are located
in the retina at the back of the eye. Rods are sensitive to dim light, whereas cones are sensitive
to bright light and are responsible for color vision. The three types of cones, each
of which is maximally sensitive to a different wavelength of light, are often referred to as
blue, green, and red cones. The name refers not to the color of light to which the cone responds
but rather to the wavelength of light to which it is maximally sensitive.
How does visual information get from receptors in the retina to the brain? Retinal ganglion
cells receive input from photoreceptors through bipolar cells, and the ganglion cells
send their axons out of the eye to form the optic nerve. The two categories
of ganglion cells, P and M, each send a different kind of message to the brain.
P cells receive input mostly from cones and convey information about color and fine
detail. M cells receive input from rods and convey information about luminance and
movement but not color. The optic nerve forms two distinct routes into the brain: the
geniculostriate and tectopulvinar pathways. The geniculostriate pathway synapses first
in the lateral geniculate nucleus of the thalamus and then in the primary visual cortex.
The tectopulvinar pathway synapses first in the superior colliculus of the midbrain’s
tectum, then in the pulvinar of the thalamus, and finally in the visual cortex.
What are the pathways for visual information within the cortex? Among the visual regions
in the occipital cortex, regions V1 and V2 carry out multiple functions, whereas
the remaining regions (V3, V3A, V4, and V5) are more specialized. Visual information
flows from the thalamus to V1 and V2 and then divides to form two distinctly different
pathways, or streams. The dorsal stream is concerned with the visual guidance of
movements, whereas the ventral stream is concerned with the perception of objects.
How are neurons in the visual system organized? At each step in the visual pathways,
neurons produce distinctly different forms of activity. The sum of the neural activity
in all regions produces our visual experience. As in all cortical regions, each functional
column in the visual regions is a unit about 0.5 millimeter in diameter that extends to
the depth of the cortex. Columns in the visual system are specialized for
processes such as analyzing lines of a particular orientation or comparing similar
shapes, such as faces.
How does the visual system interpret shapes? Neurons in the ventral stream are selective
for different characteristics of shapes. For example, cells in the visual cortex are
maximally responsive to lines of different orientations, whereas cells in the inferior
temporal cortex are responsive to different shapes, which in some cases appear to be
abstract and in other cases have forms such as hands or faces.
How does the visual system interpret colors? Cones in the retina are maximally responsive
to different wavelengths of light, roughly corresponding to the perception of
green, blue, and red. Retinal ganglion cells are opponent-process cells and have a
center-surround organization such that cells are excited by one hue and inhibited by another
(e.g., red versus green; blue versus yellow). Color-sensitive cells in the primary
visual cortex, which are located in the blobs, also have opponent-process properties.
Cells in region V4 also respond to the colors that we perceive rather than to particular
wavelengths. Perceived color is influenced by the brightness of the world and by the
color of nearby objects.
What happens when the visual system is damaged? Injury to the eye or optic nerve results
in a complete or partial loss of vision in one eye. When the visual information enters
the brain, information from the left and right visual fields goes to the right and left
sides of the brain, respectively. As a result, damage to the visual areas on one side of the
brain results in visual disturbance in both eyes. Specific visual functions are localized
to different regions of the brain, and so localized damage to a particular region results
in the loss of a particular function. For example, damage to region V4 produces a loss
of color constancy, whereas damage to regions in the parietal cortex produces an inability
to shape the hand appropriately to grasp objects.
What is the difference between visual processing for “what” and visual processing for
"how"? Visual information is used for two distinctly different functions: identifying objects
(the what) and moving in relation to the objects (the how). Visual information
travels in the cortex from V1 to the temporal lobe, forming the ventral stream, and
from V1 to the parietal lobe, forming the dorsal stream. The ventral stream produces
our conscious awareness of visual information, including properties such as shape,
movement, and color. In contrast, we are largely unconscious of the visual information
processing in the dorsal stream, which is a type of “on-line” analysis that allows us to
make accurate movements related to objects.
KEY TERMS

blind spot, p. 274; blob, p. 282; color constancy, p. 296; cone, p. 276; cortical column, p. 282; extrastriate (secondary) cortex, p. 282; fovea, p. 276; geniculostriate system, p. 280; homonymous hemianopia, p. 298; luminance contrast, p. 291; magnocellular (M) cell, p. 278; ocular-dominance column, p. 292; opponent-process theory, p. 296; optic ataxia, p. 302; optic chiasm, p. 279; parvocellular (P) cell, p. 278; primary visual cortex, p. 282; quadrantanopia, p. 298; receptive field, p. 284; retina, p. 272; retinal ganglion cell, p. 278; rod, p. 276; scotoma, p. 298; striate cortex, p. 280; tectopulvinar system, p. 280; topographic map, p. 286; trichromatic theory, p. 295; visual field, p. 284; visual-form agnosia, p. 300

REVIEW QUESTIONS

1. Describe the pathways that visual information follows through the brain.
2. Describe how the M and P cells differ and how they give rise to distinctly different pathways that eventually form the dorsal and ventral visual streams.
3. How do cells in the different levels of the visual system code different types of information?
4. Summarize what you believe to be the major point of this chapter.

FOR FURTHER THOUGHT

How does the visual system create a visual world? What differences between people and members of other species would contribute to different impressions of visual reality?
NEUROSCIENCE INTERACTIVE

Many resources are available for expanding your learning on-line: www.worthpublishers.com/kolb/chapter8

Try some self-tests to reinforce your mastery of the material. Look at some of the news updates on current research on the brain. You'll also be able to link to other sites that will reinforce what you've learned.

On your CD-ROM, you'll be able to quiz yourself on your comprehension of the chapter. The module on the Visual System includes a three-dimensional model of the eye, illustrations of the substructures of the eye and the visual cortex, video clips of patients with visual disorders, and interactive activities to explore how the visual field and color vision are created.
RECOMMENDED READING
Hubel, D. H. (1988). Eye, brain, and vision. New York: Scientific American Library. This
book, written by a Nobel laureate for work on vision, is a general survey of how the
visual system is organized. Like the other books in the Scientific American Library
series, this one has beautiful illustrations that bring the visual system to life.
Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford: Oxford
University Press. Milner and Goodale have revolutionized our thinking about how the
sensory systems are organized. This little book is a beautiful survey of
neuropsychological and neurophysiological studies of the visual system.
Posner, M. I., & Raichle, M. E. (1997). Images of the mind. New York: Scientific American
Library. This award-winning book is an introduction to the study of cognitive
neuroscience, with an emphasis on visual cognitive neuroscience.
Weiskrantz, L. (1986). Blindsight: A case study and implications. Oxford: Oxford University
Press. Weiskrantz's book about a single patient describes one of the most important
case studies in neuropsychology. This book forced investigators to reconsider
preconceived notions not only about how sensory systems work, but also about ideas
such as consciousness.
Zeki, S. (1993). A vision of the brain. Oxford: Blackwell Scientific. Not only is Zeki’s book a
discussion of how he believes the visual system works, but it also has much broader
implications for cortical functioning in general. Zeki is not afraid to be controversial,
and the book does not disappoint.

CHAPTER 9
How Do We Hear, Speak, and Make Music?

Focus on New Research: The Evolution of Language and Music

Sound Waves: The Stimulus for Audition
Physical Properties of Sound Waves
Perception of Sound
Properties of Language and Music As Sounds

Anatomy of the Auditory System
Structure of the Ear
Auditory Receptors
Pathways to the Auditory Cortex
Auditory Cortex

Neural Activity and Hearing
Hearing Pitch
Detecting Loudness
Detecting Location
Detecting Patterns in Sound

Anatomy of Language and Music
Processing Language
Focus on Disorders: Left-Hemisphere Dysfunction
Focus on Disorders: Arteriovenous Malformations
Processing Music
Focus on Disorders: Cerebral Aneurysms

Auditory Communication in Nonhuman Species
Birdsong
Echolocation in Bats

Focus on New Research: The Evolution of Language and Music

The finding that early modern humans (Homo sapiens) made music implies that music has been important in our evolution. Thomas Geissmann (2001) noted that, among most of the 26 species of singing primates, males and females sing duets. All singing primates are monogamous, suggesting that singing may somehow relate to sexual behaviors. Music may also play a role in primates' parenting behaviors.

The modern human brain is specialized for analyzing certain aspects of music in the right temporal lobe, which is complemented by specialization for analyzing aspects of speech in the left temporal lobe. Did music and language evolve simultaneously in our species? Possibly.

Neanderthals (Homo neanderthalensis) have long fascinated researchers. Neanderthals originated about 230,000 years ago and disappeared some 200,000 years later. During that time, they coexisted in Europe with Homo sapiens, whom they resembled in many ways. In some locations, the two species may have even shared resources and tools. Researchers hypothesize that Neanderthal culture was significantly less developed than that of early modern humans. Neanderthals buried their dead with artifacts, which implies that they held spiritual beliefs, but no evidence reveals that they created visual art. In contrast, Homo sapiens began painting on cave walls some 30,000 years ago, near the end of the Neanderthal era. Anatomically, some skeletal analyses suggest that Neanderthals' physical language ability made them far less fluent speakers than their Homo sapiens contemporaries.

What about music? Behavioral scientists have shown that music plays as central a role in our social and emotional lives as language does. The evolutionary view of music and language as complementary behaviors has now cast serious doubt on the earlier view of Neanderthals as culturally "primitive."

Shown in Figure 9-1 is the bone flute found in 1995 by Ivan Turk, a paleontologist at the Slovenian Academy of Sciences in Ljubljana. Turk was excavating a cave in northern Slovenia used by Neanderthals long ago as a hunting camp. Buried in the cave among a cache of stone tools was the leg bone of a young bear that looked as if it had been fashioned into a flute. The bone had holes aligned along one of its sides that could not have been made by gnawing animals. Rather, the spacing resembles the hole positions on a modern flute. But the bone flute is at least 43,000 years old—perhaps as old as 82,000 years. All the evidence suggests that Neanderthals, not modern humans, made the instrument.

Bob Fink, a musicologist, analyzed the flute's musical qualities. He found that an eight-note scale similar to a do-re-mi scale could be played on the flute, but, compared with the scale most familiar in European music, one note was slightly off. That "blue note," a staple of jazz, is found in musical scales throughout Africa and India today.

The similarity between Neanderthal and contemporary musical scales encourages us to speculate about the brain that made this ancient flute. Like modern humans, Neanderthals probably had complementary hemispheric specialization for language and music. If so, their communication, social behaviors, and cultural systems may have been more advanced than scientists formerly reasoned.

Figure 9-1 Ancient Bone Flute: The alignment of the holes in this piece of bear femur found in a cave in northern Slovenia suggests that Neanderthals made a flute from it and made music with the flute. Courtesy of Ivan Turk/Institut za Arheologijo, ZRC-SAZU, Slovenia. Photograph by Marko Zaplatil.

Both language and music are universal among humans. The oral language of
every known culture follows similar basic structural rules, and people in all cultures
create and enjoy music. When we use language and music to communicate,
what are we communicating, and why?
Music and language allow us both to organize and to interact socially. Like music,
language probably improves parenting. Those who can communicate their intentions
to one another and to their children presumably will be better parents.
Human social interaction is one of the most complex behaviors studied by ethologists.
Consider groupings of teenage girls. Their social interactions are complex not
only by virtue of the numbers of girls in groups but also by the rich set of rules—with
rules about language and music high on the list—that each group invents to bond its
members.
Humans' capacities for language and music are linked conceptually, because both
are based on sound. Understanding how and why we engage in speech and music are
the goals of this chapter. We first examine the physical nature of the energy that we perceive
as sound and then how the human ear and nervous system function to detect and
interpret it. We next examine the complementary neuroanatomy of human language
and music processing. Finally, we investigate how two other species, birds and bats, interpret
and utilize auditory stimuli.
SOUND WAVES: THE STIMULUS FOR AUDITION
When you strike a tuning fork, the energy of its vibrating prongs displaces adjacent air
molecules. Figure 9-2 shows that, as one prong moves to the left, the air molecules to
the left compress (grow more dense) and the air molecules to the right become more
rarefied (less dense). The opposite happens when the prong moves to the right. The
undulating energy generated by this displacement of molecules causes waves of changing
air pressure—sound waves—to emanate from the fork. Sound waves may move
through water as well, and even through the ground.
What we experience as sound, like what we see, is a creation of the brain, as you
learned in Chapter 8. When a tree falls in the forest, it makes no sound unless someone
hears it. Without a brain, sound does not exist. A falling tree merely makes molecules of
air vibrate, compressing them and rarefying them into waves of changing air pressure,
just as a tuning fork does.
Sound wave. Undulating displacement of molecules caused by changing pressure.

Frequency. Number of cycles that a wave completes in a given amount of time.

Hertz (Hz). Measure of frequency (repetition rate) of a sound wave; one hertz is equal to one cycle per second.

Figure 9-2 How a Tuning Fork Produces Sound Waves: (A) When the fork is still, air molecules are distributed randomly around it. (B) When struck, the right arm of the fork moves to the left, causing air to be compressed on the leading edge and rarefied on the trailing edge. (C) The arm moves to the right, compressing the air to the right and rarefying the air to the left. Waves of pressure changes in air molecules are sound waves.

We can represent waves of changing air pressure emanating from a falling tree or tuning fork by plotting air-molecule density against time at a single point, as shown in the top graph in Figure 9-3. The bottom graph shows how the energy from the right-hand prong of the fork moves to create the air-pressure changes associated with a single
cycle. A cycle is one complete peak and valley on the graph—that is, the change from
one maximum or minimum air-pressure level of the sound wave to the next maximum
or minimum level, respectively.
Physical Properties of Sound Waves
Light is electromagnetic energy that we see; sound is mechanical energy that we hear.
Sound-wave energy, produced by the displacement of air molecules, has three physical
attributes: frequency, amplitude, and complexity, summarized in Figure 9-4. The
auditory system analyzes each property separately, just as the visual system analyzes
color and form separately.
Sound-Wave Frequency
Although sound waves travel at a fixed speed of 1100 feet per second, sound energy varies
in wavelength (frequency). More precisely, frequency is the number of cycles that a wave
completes in a given amount of time. Sound-wave frequencies are measured in cycles per
second, or hertz (Hz), named after the German physicist Heinrich Rudolph Hertz.
One hertz is 1 cycle per second; 50 Hz is 50 cycles per second; 6000 Hz is 6000 cycles
per second; 20,000 Hz is 20,000 cycles per second; and so on. The top panel of Figure 9-4
shows that sounds that we perceive as being low in pitch have slower wave frequencies
(fewer cycles per second), whereas sounds that we perceive as being high pitched have
faster wave frequencies (many cycles per second).
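To make "cycles per second" concrete, here is a minimal sketch (in Python with NumPy; the language, library, and half-second duration are our choices, not the textbook's) that samples a pure tone as a sine wave and counts its cycles:

    import numpy as np

    def pure_tone(frequency_hz, duration_s=0.5, sample_rate=44100):
        """Return sampled air-pressure values for a sine-wave pure tone."""
        t = np.arange(0, duration_s, 1.0 / sample_rate)  # sample times in seconds
        return np.sin(2 * np.pi * frequency_hz * t)      # one cycle every 1/f seconds

    # A 264-Hz tone (middle C) completes 264 cycles each second, so half a
    # second of it contains 132 complete cycles.
    tone = pure_tone(264)
    upward_crossings = np.sum(np.diff(np.sign(tone)) > 0)  # one per cycle
    print(upward_crossings)  # prints 132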
Just as we can perceive light only at visible wavelengths, we can perceive sound
waves in only a limited range of frequencies. These frequencies are plotted in Figure 9-5.
Humans' hearing range is from about 20 to 20,000 Hz. Many animals communicate with
sound, which means that their auditory systems are designed to interpret their species-typical
sounds. After all, there is no point in making complicated songs or calls if other
members of your species cannot hear them.
The range of sound-wave frequencies heard by different species varies extensively.
Figure 9-5 shows that some species (such as frogs and birds) have rather narrow hearing
ranges, whereas others (such as dogs, whales, and humans) have broad ranges.
Some species use extremely high frequencies (bats are off the scale), whereas others use
the low range (as do fish).
Figure 9-3 Visualizing a Sound Wave: Air-molecule density plotted against time at a particular point relative to the right prong of the tuning fork, showing one cycle and the normal air-pressure baseline as the fork moves right and left. (Physicists call the resulting cyclical waves sine waves.)
Figure 9-4 Physical Dimensions of Sound Waves: The frequency, amplitude, and complexity of sound waves correspond to the perceptual dimensions of pitch, loudness, and timbre.

The Properties of Sound

Frequency (pitch): The rate at which waves vibrate, measured as cycles per second, or hertz (Hz). Frequency roughly corresponds to our perception of pitch. [Panels: low frequency (low-pitched sound); high frequency (high-pitched sound).]

Amplitude (loudness): The intensity of sound, usually measured in decibels (dB). Amplitude roughly corresponds to our perception of loudness. [Panels: high amplitude (loud sound); low amplitude (soft sound).]

Complexity (timbre): Most sounds are a mixture of frequencies. The particular mixture determines the sound's timbre, or perceived uniqueness. Timbre provides information about the nature of a sound. For example, timbre allows us to distinguish the sound of a trombone from that of a violin playing the same note. [Panels: simple; complex.]
It is quite an achievement that the auditory systems of whales and dolphins are
responsive to sound waves of such range. The characteristics at the extremes of these frequencies
allow marine mammals to use them in different ways. Very-low-frequency
sound waves travel long distances in water; whales produce them as a form of underwater
communication over miles of distance. High-frequency sound waves create echoes
and form the basis of sonar. Dolphins produce them in bursts, listening for the echoes
that bounce back from objects and help the dolphins to navigate and locate prey.
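The ranging arithmetic behind sonar is simple: distance is the echo's round-trip delay times the speed of sound, divided by two. A sketch of this idea (our illustration, not the textbook's; the 1500 m/s figure is the approximate speed of sound in seawater):

    def echo_distance_m(delay_s, speed_m_per_s=1500.0):
        """Distance to a target from an echo's round-trip delay."""
        return speed_m_per_s * delay_s / 2  # sound travels out and back

    print(echo_distance_m(0.1))  # a 100-ms echo puts the target ~75 m away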
As stated earlier, differences in the frequency of sound waves become differences
in pitch when heard. Consequently, each note in a musical scale must have a different
frequency because each has a different pitch. Middle C on the piano, for instance, has
a frequency of 264 Hz.
Most people can discriminate between one musical note and another, but some
can actually name any note that they hear (A, B flat, C sharp, and so forth). This perfect
(or absolute) pitch runs in families, suggesting a genetic influence. The difference
in the ability of a person’s auditory system to distinguish pitch may be analogous to
differences in the ability to perceive the color red, discussed in Chapter 8. On the side
of experience, most people who develop perfect pitch also receive musical training in
matching pitch to note from an early age.
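What a listener with perfect pitch does instantly can be approximated with arithmetic. On the equal-tempered scale (an assumption here; the book's 264-Hz middle C is slightly above the equal-tempered 261.6 Hz), each semitone multiplies frequency by the twelfth root of 2, so a base-2 logarithm maps a frequency to the nearest note:

    import math

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def name_note(frequency_hz, a4_hz=440.0):
        """Name the equal-tempered note nearest to a frequency."""
        semitones_from_a4 = round(12 * math.log2(frequency_hz / a4_hz))
        midi = 69 + semitones_from_a4   # MIDI convention: A4 = note number 69
        octave = midi // 12 - 1         # MIDI octave numbering
        return f"{NOTE_NAMES[midi % 12]}{octave}"

    print(name_note(264))  # -> C4 (middle C)
    print(name_note(466))  # -> A#4 (B flat)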
[Figure 9-5 plots hearing ranges along a frequency axis from 20 Hz to 100,000 Hz for fish, frogs, birds, rodents, dogs, seals and sea lions, humans, whales and dolphins, and bats.]

Figure 9-5 Hearing Ranges among Animals: Frogs and birds hear a relatively narrow range of frequencies; whales and dolphins have an extensive range, as do dogs. Although the human hearing range is fairly broad, we do not perceive many sound frequencies that other animals can both make and hear.

Sound-Wave Amplitude

Sound waves vary not only in frequency, causing differences in perceived pitch, but also in strength, or amplitude, causing differences in perceived intensity, or loudness. An example will help you understand the difference between the amplitude and the frequency of a sound wave.

If you hit a tuning fork lightly, it produces a tone with a frequency of, say, 264 Hz (middle C). If you hit it harder, you still produce a frequency of 264 Hz but you also transfer more energy into the vibrating prong. It now moves farther left and right but
at the same frequency. This greater energy is due to an increased quantity of air molecules
compressed in each wave, even though the same middle C frequency (number of
waves) is created every second.
This new dimension of energy in the sound wave is amplitude, the magnitude of
change in air-molecule density. Increased compression of air molecules intensifies the
energy in a sound wave, which “amps” the sound—makes it louder. Differences in amplitude
are graphed by increasing the height of a sound wave, as shown in the middle
panel of Figure 9-4.
Sound-wave amplitude is usually measured in decibels (dB), a measure
of the strength of a sound relative to the threshold of human hearing as a
standard, pegged at 0 dB. Normal speech sounds, for example, measure
about 40 dB. Sounds that register more than about 70 dB we perceive as
loud; those less than about 20 dB we perceive as quiet.
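Because decibels are logarithmic, each 20-dB step means a tenfold increase in sound pressure. A minimal conversion sketch (the 20-micropascal reference is the standard threshold value in air, supplied by us, not by the text):

    import math

    P_REF = 20e-6  # pascals; approximate threshold of human hearing, defined as 0 dB

    def pressure_to_db(pressure_pa):
        """Sound-pressure level in decibels relative to the hearing threshold."""
        return 20 * math.log10(pressure_pa / P_REF)

    print(pressure_to_db(20e-6))  # 0.0 dB, the threshold itself
    print(pressure_to_db(0.02))   # 60.0 dB, in the range of speech
    print(pressure_to_db(2.0))    # 100.0 dB, loud enough to risk damage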
Because the human nervous system evolved to be sensitive to weak sounds, it is literally "blown away" by extremely strong ones. People regularly damage their hearing by exposure to very loud sounds (such as rifle fire at close range) or even by prolonged exposure to sounds that are only relatively loud (such as at a live concert). As a rule of thumb, sounds louder than 100 dB are likely to damage our hearing, especially if our exposure to them is prolonged.
Heavy-metal bands, among others, routinely play music that registers
higher than 120 dB and sometimes as high as 135 dB. One researcher
(Drake-Lee, 1992) found that rock musicians had a significant loss of sensitivity
to sound waves, especially at about 6000 Hz. After a typical 90-min
concert, this loss was temporarily far worse—as
much as a 40-fold increase in sound pressure was
needed to reach a musician’s hearing threshold.
Amplitude. Intensity of a stimulus; in audition, roughly equivalent to loudness, graphed by increasing the height of a sound wave.

[A loudness scale in decibels runs from 0 to 200 dB, with landmarks at the threshold of hearing (0 dB), normal speech, a chainsaw, a rock band, and a rocket.]

Sound-Wave Complexity

Sounds with a single frequency wave are pure tones, much like what you would get from a tuning fork or pitch pipe, but most sounds mix wave frequencies together in combinations and so are called complex tones. To better understand the blended nature of a complex tone, picture a clarinetist, such as Don Byron in Figure 9-6, playing a steady note. The upper graph in Figure 9-6 represents the sound wave produced by the clarinet.

Figure 9-6 Breaking Down a Complex Tone: The wave shape of a single note from Don Byron's clarinet (top) and the component frequencies—the fundamental frequency (middle) and the overtones, the simple waves numbered 1 through 20 that make up the sound of the clarinet (bottom)—that make up the complex tone. From Stereo Review, copyright 1977 by Diamandis Communications Inc. Photograph: Christian Ducasse/Gamma-Liaison.

Notice that the clarinet waveform has a more complex pattern than those of the simple waves described earlier in this chapter. Even when a musician plays a single note, the instrument is making
a complex tone, not a pure tone. Using a mathematical technique known as Fourier
analysis, we can break this complex tone into its many component pure tones, the
numbered waves at the bottom of Figure 9-6.
The fundamental frequency (wave 1) is the rate at which the complex waveform
pattern repeats. Waves 2 through 20 are overtones, a set of higher-frequency sound
waves that vibrate at whole-number (integer) multiples of the fundamental frequency.
Different musical instruments sound unique because they produce overtones of different
amplitudes. Among the clarinet overtones, represented by the heights of the blue
waves in Figure 9-6, wave 5 is low amplitude, whereas wave 2 is high amplitude.
Like primary colors, pure tones can be blended into complex tones in an almost
infinite variety. In addition to emanating from musical instruments, complex tones
emanate from the human voice, from birdsong, and from machines or repetitive mechanisms
that give rise to rhythmic buzzing or humming sounds.
A key feature of complex tones, besides being made up of two or more pure tones,
is some sort of periodicity. The fundamental frequency repeats at regular intervals.
Sounds that are aperiodic, or random, we call noise.
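The following sketch shows Fourier analysis in miniature (Python with NumPy, our choice of tools; the amplitudes are arbitrary). It builds a synthetic complex tone from a 264-Hz fundamental plus two overtones at integer multiples, then recovers those component pure tones from the spectrum:

    import numpy as np

    sample_rate = 44100
    t = np.arange(0, 1.0, 1.0 / sample_rate)  # one second of samples

    # A complex tone: fundamental (wave 1) plus overtones at integer multiples.
    f0 = 264.0
    tone = (1.00 * np.sin(2 * np.pi * f0 * t)      # wave 1: fundamental
          + 0.50 * np.sin(2 * np.pi * 2 * f0 * t)  # wave 2: first overtone
          + 0.25 * np.sin(2 * np.pi * 3 * f0 * t)) # wave 3: second overtone

    # Fourier analysis: the spectrum peaks at the component pure tones.
    spectrum = np.abs(np.fft.rfft(tone))
    freqs = np.fft.rfftfreq(len(tone), 1.0 / sample_rate)
    strongest = sorted(freqs[np.argsort(spectrum)[-3:]])
    print(strongest)  # [264.0, 528.0, 792.0]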
Perception of Sound
The auditory system’s task is to convert the physical properties of sound-wave energy
into electrochemical neural activity that travels to the brain, which we then perceive as
sound. Remember that the waves themselves make no sounds. The sounds that we hear,
like the images that we see, are but a product of the brain.
To better understand the relation between the energy of sound-wave sensations
and sound perceptions, think about what happens when you toss a pebble into a pond.
Waves of water emanate from the point where the pebble enters the water. These waves
produce no audible sound. But, if your skin were able to convert the energy of the water
waves into neural activity that stimulated your auditory system, you would “hear” the
waves when you placed your hand into the rippling water. When you removed your
hand, the “sound” would stop.
The pebble hitting the water is much like a falling tree, and the waves that emanate
from the pebble’s point of entry are like the air-pressure waves that emanate from the
place where the tree strikes the ground. The frequency of the waves determines the
pitch of the sound heard by the brain, whereas the height (amplitude) of the waves determines
the sound’s loudness.
Our sensitivity to sound waves is extraordinary. At the threshold of human hearing,
we can detect the displacement of air molecules of about 10 picometers (10⁻¹¹
meter). We are rarely in an environment where we can detect such a small air-pressure
change, because there is usually too much background noise. A very quiet rural setting
is probably as close as we ever get to an environment suitable for testing the acuteness
of our hearing. So, the next time you visit the countryside, take note of the sounds that
you can hear. If there is no sound competition, you can often hear a single car engine
miles away.
In addition to detecting very small changes in air pressure, the auditory system is
also adept at simultaneously perceiving different sounds. As you sit reading this chapter,
you are able to differentiate all sorts of sounds around you—traffic on the street,
people talking next door, your computer’s fan humming, footsteps in the hall. If you
are listening to music, you detect the sounds of different instruments and voices.
You can perceive different sounds simultaneously because the different frequencies
of air-pressure change associated with each sound wave stimulate different neurons in
your auditory system. The perception of sounds is only the beginning of your auditory
experience. Your brain interprets sounds to obtain information about events in your
environment, and it analyzes a sound’s meaning. These processes are clearly illustrated
in your use of sound to communicate with other people through both language and
music.
Properties of Language and Music As Sounds
Language and music differ from other auditory sensations in fundamental ways. Both
convey meaning and evoke emotion. The analysis of meaning in sound is a considerably
more complex behavior than simply detecting a sound and identifying it. The brain has
evolved systems that analyze speech and musical sounds for meaning, in the left and
right temporal lobes, respectively, as you learned at the beginning of this chapter.
Infants are receptive to speech and musical cues before they have any obvious utility,
suggesting the innate presence of these skills. Humans have an amazing capacity for
learning and remembering linguistic and musical information. We are capable of
learning a vocabulary of tens of thousands of words, often in many languages, and we
have a capacity for recognizing thousands of songs.
Language facilitates communication. We can organize our complex perceptual
worlds by categorizing information with words. We can tell others what we think and
know and imagine. Imagine the efficiency that gestures and spoken language added to
the cooperative food hunting and gathering behaviors of early humans.
All these benefits of oral language seem obvious, but the benefits of music may
seem less straightforward. In fact, music helps us to regulate our emotions and to affect
the emotions of others. After all, when do people most commonly make music? We
sing and play music to communicate with infants and put children to sleep. We play
music to enhance social interactions and gatherings and when we feel romantic. We use
music to bolster group identification—school songs or national anthems are examples.
Another characteristic that distinguishes speech and musical sounds from other
auditory inputs is their delivery speed. Nonspeech and nonmusical noise produced at
a rate of about 5 segments per second is perceived as a buzz. (A sound segment is a distinct
unit of sound.)
Normal speed for speech is on the order of 8 to 10 segments per second, and we
are capable of understanding speech at nearly 30 segments per second. Speech perception
at these higher rates is truly amazing, because the speed of input far exceeds the
auditory system’s ability to transmit all the speech as separate pieces of information.
Properties of Language
Experience in listening to a particular language helps the brain to analyze rapid speech,
which is one reason why people who are speaking languages with which you are unfamiliar
often seem to be talking incredibly fast. Your brain does not know where the foreign
words end and begin, making them seem to run together in a rapid-fire stream.
A unique characteristic of our perception of speech sounds is our tendency to hear
variations of a sound as if they were identical, even though the sound varies considerably
from one context to another. For instance, the English letter “d” is pronounced
differently in the words “deep,” “deck,” and “duke,” yet a listener perceives the pronunciations
to be the same “d” sound. This auditory constancy is reminiscent of the visual
system’s capacity for object constancy described in Chapter 8.
The auditory system must therefore have a mechanism for categorizing sounds as
being the same despite small differences in pronunciation. This mechanism, moreover,
must be affected by experience, because different languages categorize speech sounds
differently. A major obstacle to mastering a foreign language after the age of 10 is the
difficulty of learning the categories of sound that are treated as equivalent.
Properties of Music
Like other sounds, the sounds of music differ from one another in the subjective properties
that people perceive them to have. One property is loudness, the magnitude of
the sound as judged by a person. Loudness, as you know, is related to the amplitude of
a sound wave and is measured in decibels, but loudness is also subjective. What is "very
loud" music for one person may be only "moderately loud" for another, whereas music
that seems “soft” to one listener may not seem soft at all to someone else.
Another subjective property of musical sounds is pitch, the position of each tone
on a musical scale as judged by the listener. Although pitch is clearly related to sound-wave
frequency, there is more to it than that. Consider the note middle C as played on
a piano. This note can be described as a pattern of sound frequencies, as is the clarinet
note in Figure 9-6.
Like the note played on the clarinet, any musical note is defined by its fundamental
frequency, which is the lowest frequency of the sound-wave pattern, or the rate at
which the overall pattern is repeated. For middle C, the fundamental frequency is 264
Hz, as mentioned earlier. The sound waves, as measured by a spectrograph, are shown
in Figure 9-7. Notice that, by convention, sound-wave spectrographs are measured in
kilohertz (kHz), units of thousands of hertz. Thus, if we look at the fundamental frequency
for middle C, it is the first large wave, which is at 0.264 kHz. The fundamental
frequencies for G and E are 196 and 330 Hz, respectively.
Figure 9-7 Fundamental Frequencies of Piano Notes: The shapes of the sound waves of C, E, and G as played on a piano and recorded on a spectrograph. The first wave in each of these graphs is the fundamental frequency, and the secondary waves are the overtones. Each panel plots amplitude against frequency from 0 to 2.0 kHz. Courtesy of D. Rendall.

Prosody. Melodical tone of the spoken voice.

An important feature of the human brain's analysis of music is that middle C is perceived as being the same note regardless of whether it is played on a piano or on a guitar, even though the sounds made by these instruments are very different. The right temporal lobe has a special function in extracting pitch from sound, whether the sound is speech or music. In speech, pitch contributes to the perceived melodical tone of a voice, or prosody.

A final property of musical sound is quality, the characteristics that distinguish a particular sound from all others of similar pitch and loudness. We can easily distinguish the sound of a violin from that of a trombone even though the same note is being played on both instruments at the same loudness. The quality of their sounds differs. The French word timbre is normally used to describe perceived sound quality.

In Review

Sound energy, the physical stimulus for the auditory system, is produced by changes in pressure waves that are converted into neural activity in the ear. Sound waves have three key qualities: frequency, amplitude, and complexity. Frequency is the rate at which the waves vibrate and roughly corresponds to the high or low pitch of the sound that we perceive. Amplitude, or wave height, is the magnitude of change in air-molecule pressure that the wave undergoes and roughly corresponds to perceived loudness. Complexity refers to the particular mixture of frequencies that create a sound's perceived uniqueness, or timbre. Combinations of these qualities allow the human auditory system to comprehend sounds as complex as language and music.

ANATOMY OF THE AUDITORY SYSTEM

Our next task is to understand how the nervous system analyzes sound waves. We begin by tracing the pathway taken by sound energy to and through the brain. The ear collects sound waves from the surrounding air and converts them into electrochemical neural energy, which then begins a long route through the brainstem to the auditory cortex.
Before we can trace the journey from the ear to the cortex, we need to ask what the auditory
system is designed to do. Because sound waves have the properties of frequency, amplitude,
and complexity, we can predict that the auditory system is structured to code these
properties. In addition, most animals can tell where a sound comes from, and so there
must be some mechanism for locating sound waves in space. Finally, many animals, including
humans, not only analyze sounds for meaning but also make sounds themselves.
Because the sounds that they produce are often the same as the ones that they hear, we can
infer that the neural systems for sound production and analysis must be closely related.
In humans, the evolution of sound-processing systems for both language and
music led to the enhancement of specialized cortical regions, especially in the temporal
lobes. In fact, a major difference between the human and the monkey cortex is a
marked expansion of auditory areas in humans.
Structure of the Ear
[An adjacent illustration compares the monkey and human brains, marking the cerebrum, temporal lobe, and brainstem; the auditory areas are much expanded in humans.]

Figure 9-8 Anatomy of the Human Ear: Sound waves gathered into the outer ear are transduced from air pressure into mechanical energy in the middle-ear ossicles and into electrochemical activity in the inner-ear cochlea. Hair cells embedded in the basilar membrane (the organ of Corti) are tipped by cilia. The cilia are displaced by movements of the basilar and tectorial membranes, leading to changes in the inner hair cells' membrane potentials and resultant activity of auditory (bipolar) neurons. The figure labels the outer ear (pinna, external ear canal, eardrum), the middle ear (the ossicles: hammer, anvil, and stirrup), and the inner ear (oval window, semicircular canals, cochlea, and auditory nerve), with a cross section through the cochlea showing the organ of Corti: the tectorial and basilar membranes, the outer and inner hair cells, their cilia, and the axons of the auditory nerve. Numbered steps trace a sound wave's path: (1) the pinna catches sound waves and deflects them into the external ear canal; (2) waves are amplified and directed to the eardrum, causing it to vibrate, (3) which in turn vibrates the ossicles; (4) the ossicles amplify and convey the vibrations to the oval window; (5) vibration of the oval window sends waves through the cochlear fluid, (6) causing the basilar and tectorial membranes to bend, (7) which in turn causes the cilia of the outer hair cells, embedded in the tectorial membrane, to bend; this bending generates neural activity in the hair cells.

The ear is a biological masterpiece in three acts: the outer, middle, and inner ear, all illustrated in Figure 9-8. Both the pinna, the funnel-like external structure of the outer ear, and the external ear canal, which extends a short distance from the pinna inside
the head, are made of cartilage and flesh. The pinna is designed to catch sound waves
in the surrounding environment and deflect them into the external ear canal.
The external canal, because it narrows from the pinna, amplifies sound waves
somewhat and directs them to the eardrum at its inner end. When sound waves strike
the eardrum, it vibrates, the rate of vibration varying with the frequency of the waves.
On the inner side of the eardrum, as depicted in Figure 9-8, is the middle ear, an air-filled
chamber that contains the three smallest bones in the human body, connected to
one another in a series.
These three ossicles are called the hammer, the anvil, and the stirrup because of
their distinctive shapes. The ossicles attach the eardrum to the oval window, an opening
in the bony casing of the cochlea, the inner-ear structure that contains the auditory
receptor cells. These receptor cells and the cells that support them are collectively
called the organ of Corti, shown in detail in Figure 9-8.
When sound waves vibrate the eardrum, those vibrations are transmitted to the ossicles.
The ossicles’ leverlike action conveys and amplifies the vibrations onto the membrane
that covers the cochlea's oval window. As Figure 9-8 shows, the cochlea coils around itself
and looks a bit like a snail shell. (The name cochlea derives from the Latin word for “snail.”)
Inside its bony exterior, the cochlea is hollow, as the cross-sectional drawing reveals.
The hollow cochlear compartments are filled with a lymphatic fluid, and floating in
its midst is the thin basilar membrane. Embedded in a part of the basilar membrane
are outer and inner hair cells. At the tip of each hair cell are several filaments called cilia,
and the cilia of the outer hair cells are embedded in an overlying membrane, called the
tectorial membrane. The inner hair cells loosely contact the tectorial membrane.
Pressure from the stirrup on the oval window makes the cochlear fluid move because
a second membranous window in the cochlea (the round window) bulges outward
as the stirrup presses inward on the oval window. In a chain reaction, the waves
that travel through the cochlear fluid cause the basilar and tectorial membranes to
bend, and the bending membranes stimulate the cilia at the tips of the outer hair cells.
This stimulation generates receptor, or graded, potentials in the inner hair cells, which
act as the auditory receptor cells. The change in the membrane potential of the inner
hair cells varies the amount of neurotransmitter that they release onto auditory neurons
that go to the brain.
The key question is how the conversion of sound waves into neural activity codes the
various properties of sound that we perceive. In the late 1800s, German physiologist Hermann
von Helmholtz proposed that sound waves of different frequencies cause different
parts of the basilar membrane to resonate. Von Helmholtz was not precisely correct. Actually,
all parts of the basilar membrane bend in response to incoming waves of any frequency.
The key is where on the basilar membrane the peak displacement takes place.
This solution to the coding puzzle was not determined until 1960,
when George von Békésy was able to observe the basilar membrane directly.
He saw that a traveling wave moves along the membrane all the
way from the oval window to the membrane’s apex. The structure and
function of the basilar membrane are easier to visualize if the cochlea
is uncoiled and laid flat, as in Figure 9-9.
The uncoiling structure in Figure 9-9A maps the frequencies to
which each part of the basilar membrane is most responsive. When the
oval window vibrates in response to the vibrations of the ossicles, shown
in Figure 9-9B, it generates waves that travel through the cochlear fluid. Békésy placed little
grains of silver along the basilar membrane and watched them jump in different places
with different frequencies of incoming waves. Faster wave frequencies caused maximum
peaks of displacement near the base of the basilar membrane, whereas slower wave frequencies
caused maximum displacement peaks near the membrane’s apex.
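This place code can be written down quantitatively. A commonly used approximation, Greenwood's place-frequency function (not from the textbook; the constants below are the standard published values for the human cochlea), maps position along the basilar membrane to its best frequency:

    def best_frequency_hz(fraction_from_apex):
        """Greenwood place-frequency approximation for the human cochlea.

        fraction_from_apex: 0.0 at the apex (wide, thin end) to 1.0 at the
        base (narrow, thick end, nearest the oval window).
        """
        return 165.4 * (10 ** (2.1 * fraction_from_apex) - 0.88)

    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"{x:.2f} of the way to the base: ~{best_frequency_hz(x):,.0f} Hz")
    # runs from ~20 Hz at the apex to ~20,700 Hz at the base, matching Figure 9-9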
Ossicles. Bones of the middle ear: malleus (hammer), incus (anvil), and stapes (stirrup).

Cochlea. Inner-ear structure that contains the auditory receptor cells.

Basilar membrane. Receptor surface in the cochlea that transduces sound waves into neural activity.

Hair cell. Sensory neurons in the cochlea tipped by cilia; when stimulated by waves in the cochlear fluid, outer hair cells generate graded potentials in inner hair cells, which act as the auditory receptor cells.

[A photograph shows George von Békésy (1899–1972).]
As a rough analogy, consider what happens when you shake a rope. If you shake it
very quickly, the rope waves are very small and short and remain close to the hand in
which you are holding the rope. But, if you shake the rope slowly with a broader movement,
the longer waves reach their peak farther along the rope. The key point is that,
although both rapid and slow shakes of the rope produce movement along the rope’s
entire length, the maximum displacement of the rope is found at one end or the other,
depending on whether the wave movements are rapid or slow.
The same response pattern holds for the basilar membrane's response to sound-wave frequency.
All sound waves cause some displacement along the entire length of the basilar
membrane, but the amount of displacement at any point varies with the frequency
of the sound wave. In the human cochlea, shown uncoiling in Figure 9-9A, the basilar
membrane near the oval window is maximally affected by frequencies as high as about
20,000 Hz, whereas the most effective frequencies at the membrane’s apex register less
than 100 Hz.
Intermediate frequencies maximally displace points on the basilar membrane between
its two ends, as shown in Figure 9-9B. When a wave of a certain frequency travels
down the basilar membrane, hair cells at the point of peak displacement are stimulated,
resulting in a maximal neural response in those cells. An incoming signal composed of
many frequencies causes several different points along the basilar membrane to vibrate
and excites hair cells at all these points.
Not surprisingly, the basilar membrane is much
more sensitive to changes in frequency than the rope in
our analogy is. This degree of sensitivity is achieved because
the basilar membrane varies in thickness along
its entire length. It is narrow and thick at the base, near
the oval window, and wider and thinner at its tightly
coiled apex. The combination of varying width and
thickness enhances the effect of small differences in
frequency on the basilar membrane. As a result, the
cochlear receptors can code small differences in sound-wave frequency as neural impulses.
Auditory Receptors
Hair cells ultimately transform sound waves into neural
activity. Figure 9-8 shows the anatomy of the hair
cells; Figure 9-10 illustrates how they are stimulated by
sound waves. The human cochlea has two sets of hair
cells: 3500 inner hair cells and 12,000 outer hair cells.
Only the inner hair cells are the auditory receptors.
Figure 9-9
Anatomy of the Cochlea (A) The frequencies to which the basilar
membrane is maximally responsive are mapped as the cochlea uncoils.
(B) Sound waves of different frequencies produce maximal displacement
of the basilar membrane (shown uncoiled) at different locations: the
narrow, thick base is tuned for high frequencies (up to about 20,000 Hz),
the wide, thin apex is tuned for low frequencies (down to about 100 Hz),
and sound waves at medium frequencies cause peak bending at points in
between.
Figure 9-10
Transducing Waves into Neural Activity Movement of the basilar
membrane in response to sound waves creates a shearing force in the
cochlear fluid that bends the cilia in contact with and near the overlying
tectorial membrane, leading to the opening or closing of calcium channels
in the outer hair cells. The influx of calcium ions leads to the release of
transmitter by the inner hair cell, which stimulates an increase in action
potentials in auditory neurons whose axons form the cochlear nerve.
This total number of receptor cells is small, considering the number of different sounds
that we can hear.
As diagrammed in Figure 9-10, the hair cells are anchored in the basilar membrane.
The tips of the cilia of outer hair cells are attached to the overlying tectorial
membrane, but the cilia of the inner hair cells do not touch that membrane. Nevertheless,
the movement of the basilar and tectorial membranes causes the cochlear fluid
to flow past the cilia of the inner hair cells, bending them back and forth. Animals with
intact outer hair cells but no inner hair cells are effectively deaf. The outer hair cells
function simply to sharpen the resolving power of the cochlea by contracting or relaxing
and thereby changing the stiffness of the tectorial membrane.
How this function of the outer hair cells is controlled is puzzling. What stimulates
these cells to contract or relax? The answer seems to be that the outer hair cells, through
connections with axons in the auditory nerve, send a message to the brainstem auditory
areas and receive a message back that causes the outer hair cells to alter tension on the tectorial
membrane. In this way, the brain helps the hair cells to create an auditory world.
A final question remains: How does movement of the cilia alter neural activity?
The neurons of the auditory nerve have a tonic rate of firing action potentials, and this
rate is changed by how much neurotransmitter is released from the hair cells. It turns
out that movement of the hair-cell cilia causes a change in polarization of the hair cell
and a change in neurotransmitter release. Look at Figure 9-8 again and you’ll notice
that the cilia of a hair cell differ in height.
Movement of the cilia in the direction of the tallest cilium results in depolarization, opening
calcium channels and leading to the release of neurotransmitter onto the dendrites
of the cells that form the auditory nerve, generating more nerve impulses. Movement
in the direction of the shortest cilium hyperpolarizes the cell membrane, and transmitter
release decreases, thus decreasing activity in auditory neurons.
Hair cells are amazingly sensitive to the movement of their cilia. A movement sufficient
to allow sound-wave detection is only about 0.3 nm, about the diameter of a
large atom. We now can understand why our hearing is so incredibly sensitive.
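
This push-pull arrangement can be summarized in a toy model: deflection toward the
tallest cilium raises an auditory-nerve fiber's firing rate above its tonic level, and
deflection toward the shortest lowers it. The Python sketch below is purely illustrative;
the tonic rate, gain, and ceiling are invented numbers, not measurements.

def afferent_rate(deflection_nm, tonic_rate=50.0, gain=100.0, max_rate=300.0):
    """Toy firing-rate model for an auditory-nerve fiber (spikes/s).
    Positive deflection = cilia bent toward the tallest cilium
    (depolarization, more transmitter, rate rises); negative =
    toward the shortest (hyperpolarization, rate falls).
    All parameter values are invented for illustration."""
    rate = tonic_rate + gain * deflection_nm
    return max(0.0, min(max_rate, rate))

print(afferent_rate(0.0))     # silence: the tonic rate, 50 spikes/s
print(afferent_rate(0.3))     # ~0.3-nm deflection, near threshold: 80 spikes/s
print(afferent_rate(-0.3))    # same deflection the other way: 20 spikes/s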
Pathways to the Auditory Cortex
The inner hair cells in the organ of Corti synapse with neighboring
bipolar cells, the axons of which form the auditory (cochlear)
nerve. The auditory nerve in turn forms part of the eighth cranial
nerve, which governs hearing and balance (review Figure 2-26).
Whereas ganglion cells in the eye receive inputs from many receptor
cells, bipolar cells in the ear receive input from only a single
hair-cell receptor.
The cochlear-nerve axons enter the brainstem at the level of
the medulla and synapse in the cochlear nucleus, which has ventral
and dorsal subdivisions. Two other nearby structures in the brainstem,
the superior olive and the trapezoid body, each receive connections
from the cochlear nucleus, as shown in Figure 9-11. The
projections from the cochlear nucleus connect with cells on the
same side of the brain as well as with cells on the opposite side.
This arrangement mixes the inputs from the two ears to form the
perception of a single sound.
Both the cochlear nucleus and the superior olive send projections
to the inferior colliculus in the dorsal midbrain. Two distinct
pathways emerge from the inferior colliculus, coursing to the
medial geniculate nucleus, which lies in the thalamus.
Figure 9-11
Auditory Pathway Flowchart follows the primary route that auditory
stimuli take from the cochlea of each ear through the auditory (cochlear)
nerve to the ventral and dorsal cochlear nuclei, olivary complex, and
trapezoid body in the hindbrain; the inferior colliculus in the midbrain;
the medial geniculate nucleus in the thalamus; and the auditory cortex.
Auditory inputs cross to the hemisphere opposite the ear in the hindbrain
and midbrain, then recross in the thalamus so that information from each
ear reaches both hemispheres. Multiple nuclei process inputs en route to
the auditory cortex.
The ventral region of the medial geniculate nucleus projects to the primary auditory cortex
(area A1), whereas the dorsal region projects to the auditory cortical regions adjacent to
area A1.
In Chapter 8, you learned about two distinct visual pathways through the cortex:
the ventral stream for object recognition and the dorsal stream for the visual control
of movement. A similar what–how distinction exists in the auditory cortex (Romanski
et al., 1999). Just as we can identify objects by their sound characteristics, we can direct
our movements by the sound that we hear.
The role of sound in guiding movement is less familiar to sight-dominated people
than it is to the blind. Nevertheless, the ability exists in us all. Imagine waking up in the
dark and reaching to pick up a ringing telephone or to turn off an alarm clock. Your
hand will automatically form the appropriate shape needed to carry out these movements
just on the basis of the sound that you have heard.
That sound is guiding your movements much as a visual image guides them. Although
relatively little is known about the auditory pathways in the cortex, one pathway
appears to continue through the temporal lobe, much like the ventral visual
pathway, and plays a role in identifying auditory stimuli. A second auditory pathway
apparently goes to the posterior parietal region, where it forms a type of dorsal pathway
for the auditory control of movement.
Auditory Cortex
In the human cortex, the primary auditory cortex lies within Heschl’s gyrus and is
surrounded by secondary cortical areas, as shown in Figure 9-12A. The secondary cortex
lying behind Heschl’s gyrus is called the planum temporale (meaning “temporal plane”).
In right-handed people, the planum temporale is
larger on the left side of the brain than it is on the right,
whereas Heschl’s gyrus is larger on the right side than on
the left. The cortex of the left planum forms a speech zone,
known as Wernicke’s area (the posterior speech zone),
whereas the cortex of the larger, right-hemisphere Heschl’s
gyrus has a special role in the analysis of music.
These hemispheric differences mean that the auditory
cortex is anatomically and functionally asymmetrical, a
fundamental property of the nervous system that we encountered
in Chapter 2. Although cerebral asymmetry is
not unique to the auditory system, it is most obvious here
because the auditory analysis of speech takes place only
in the left hemisphere of right-handed people.
Medial geniculate nucleus. Major
thalamic region concerned with audition.
Primary auditory cortex (area A1).
Asymmetrical structures, found within
Heschl’s gyrus in the temporal lobes, that
receive input from the ventral region of
the medial geniculate nucleus.
Wernicke’s area. Secondary auditory
cortex (planum temporale) lying behind
Heschl’s gyrus at the rear of the left
temporal lobe that regulates language
comprehension; also called posterior
speech zone.
Figure 9-12
Human Auditory Cortex (A) Diagram of the brain's left hemisphere
shows the primary auditory cortex buried within Heschl's gyrus and
adjacent secondary regions; a retractor opens the lateral fissure to
reveal the auditory cortex. In cross section, the posterior speech zone
(Wernicke's area, in the planum temporale) is larger on the left, whereas
Heschl's gyrus is larger in the right hemisphere. (B) Frontal section
showing the extent of the multifunctional insular cortex buried in the
lateral fissure, near the gustatory and auditory cortex, amygdala, and
hippocampus.
About 70 percent of left-handed people have the same anatomical asymmetries as
right-handers, an indication that speech organization is not related to
hand preference. Language, which includes speech and other functions such as reading
and writing, also is asymmetrical, although there are some right-hemisphere contributions
to these broader functions.
The remaining 30 percent of left-handers fall into two distinct groups. The organization
is opposite that of right-handers in about half of these people. The other half
have some type of idiosyncratic bilateral representation of speech. That is, about 15
percent of all left-handed people have some speech functions in one hemisphere and
some in the other hemisphere.
The localization of language to one side of the brain is often referred to as lateralization.
We return to lateralization later in this chapter and again in Chapter 14 in connection
with thinking. Note here simply that, as a rule of thumb in neuroanatomy, if
one hemisphere is specialized for one type of analysis, such as language in the left, the
other hemisphere has a complementary function. In regard to audition, the right hemisphere
appears to be lateralized for music.
Finally, as you can see in Figure 9-12B, the sulci of the temporal lobe enfold a large
volume of cortical tissue. In particular, the cortical tissue buried in the lateral fissure,
called the insula, is more extensive than the auditory cortex alone. The insular cortex
not only has lateralized regions related to language but also contains areas controlling
the perception of taste (the gustatory cortex) and areas linked to the neural structures
underlying social cognition. We consider both of these topics in Chapter 11. As you
might expect, injury to the insula can produce diverse deficits, such as disturbance of
both language and taste.
In Review
Incoming sound-wave energy vibrates the eardrum, which in turn vibrates the tiny bones
of the middle ear. The innermost ossicle presses on the inner ear’s oval window and sets
in motion the cochlear fluid. The motion of this fluid vibrates cilia on the outer hair cells
in the cochlea by displacing the basilar membrane. This bending generates membrane potential
changes in the inner hair cells that alter their neurotransmitter release and the subsequent
activity of auditory neurons, thus converting sound waves into changes in neural
activity. The frequencies of incoming sound waves are largely coded by the surface areas
on the basilar membrane that are most displaced. The axons of bipolar cells of the cochlea
form the auditory (cochlear) nerve, which enters the brain at the medulla as part of cranial
nerve 8 and synapses on cells in the cochlear nucleus. The neurons of each cochlear
nucleus and associated regions in the medulla then begin a pathway that courses to the
opposite-side midbrain (inferior colliculus), then recrosses in the thalamus (medial geniculate
nucleus), and ends in the left and right auditory cortex. As in the visual system, two
different cortical auditory pathways exist, one for sound recognition (like the ventral visual
stream) and one for sound localization (like the dorsal visual stream). The auditory
cortices on the left and right are asymmetrical, with the planum temporale being larger on
the left and Heschl's gyrus being larger on the right in the brains of right-handed people.
This anatomical asymmetry is correlated with a functional asymmetry: the left temporal cortex
analyzes language-related sounds, whereas the right temporal cortex analyzes music-related
ones. Most left-handed people have a similar lateralization, although about 30
percent of left-handers have different patterns. The auditory cortex is part of the multifunctional
cortex called the insula, which lies within the lateral fissure.
In the overview of the brain area in
the Central Nervous System module of
your CD, investigate cortical anatomy and
the four lobes.
Lateralization. Process whereby
functions become localized primarily on
one side of the brain.
Insula. Located within the lateral fissure,
multifunctional cortical tissue that
contains regions related to language, to
the perception of taste, and to the neural
structures underlying social cognition.
NEURAL ACTIVITY AND HEARING
We now turn to the ways in which the activities of neurons in the auditory system create
our perception of sounds. Neurons at different levels in this system serve different
functions. To get an idea of what individual hair cells and cortical neurons do, we consider
how the auditory system codes sound-wave energy so that we perceive pitch,
loudness, location, and pattern.
Hearing Pitch
Recall that our perception of pitch corresponds to
the frequency (repetition rate) of sound waves,
which is measured in hertz (cycles per second).
Hair cells in the cochlea code frequency as a function
of their location on the basilar membrane.
The cilia of hair cells at the base of the cochlea
are maximally displaced by high-frequency
waves that we hear as high-pitched sounds, and
those at the apex are displaced the most by low-frequency waves that we hear as
low-pitched sounds.
This arrangement is a tonotopic representation
(literally meaning “tone place”). Because
the axons of the bipolar cells that form the cochlear nerve are each connected to only
one hair cell, they contain information about the spot on the basilar membrane, from
apex to base, that is being stimulated.
If we record from single fibers in the cochlear nerve, we find that, although each
axon transmits information about only a small part of the auditory spectrum, the cells
do respond to a range of sound-wave frequencies. In other words, each hair cell is maximally
responsive to a particular frequency but also responds to nearby frequencies,
even though the sound wave’s amplitude must be greater (louder) for these nearby frequencies
to generate a change in membrane potential.
This range of hair-cell responses to different frequencies at different amplitudes
can be plotted to form a tuning curve, as in Figure 9-13. Such a sound-wave curve is
reminiscent of the light-wave curves in Figure 8-7, which show the responsiveness of
cones in the retina to different wavelengths of light. Each type of receptor cell is maximally
sensitive to a particular wavelength, but it still responds somewhat to nearby
wavelengths.
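
The V shape of a tuning curve follows from a simple rule: the threshold is lowest at
the fiber's characteristic frequency and rises as the test tone moves away in either
direction on a logarithmic frequency axis. The Python sketch below encodes that rule;
the threshold and slope values are illustrative, not data from real fibers.

import math

def threshold_db(freq_hz, cf_hz, best_threshold_db=10.0, slope_db_per_octave=40.0):
    """Idealized V-shaped tuning curve: the sound level (dB) needed
    to drive a fiber rises with distance, in octaves, from the
    fiber's characteristic frequency (CF). Values are illustrative."""
    octaves_away = abs(math.log2(freq_hz / cf_hz))
    return best_threshold_db + slope_db_per_octave * octaves_away

# A fiber tuned to 1000 Hz responds to a soft tone at its CF but
# needs a much louder tone one octave away on either side.
for f in (500, 1000, 2000):
    print(f, "Hz ->", threshold_db(f, cf_hz=1000), "dB")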
The axons of the bipolar cells in the cochlea project to the cochlear nucleus in
an orderly manner. Those entering from the base of the cochlea connect with one
location, those entering from the middle connect to another location, and those
entering from the apex connect to yet another.
As a result, the tonotopic representation of the
basilar membrane is reproduced in the cochlear
nucleus.
This systematic representation is maintained
throughout the auditory pathways and can be
found in cortical area A1, the primary auditory
cortex. Figure 9-14 shows the distribution of
projections from the base and apex of the cochlea
across area A1. Similar tonotopic maps can be
constructed for each level of the auditory system.
Tonotopic representation. Property
of audition in which sound waves are
processed in a systematic fashion from
lower to higher frequencies.
Figure 9-13
Tuning Curves The graphs represent two different axons in the cochlear
nerve, each plotted as the sound amplitude (dB) against the frequency
(100 to 10,000 Hz) of the sound-wave energy required to increase the
firing rate of the neuron. The lowest point on each curve is the frequency
to which that hair cell is most sensitive. The upper tuning curve is
centered on a midrange frequency of 1000 Hz, whereas the lower tuning
curve is centered on a frequency of 10,000 Hz, in the high range of
human hearing.
Figure 9-14
Tonotopic Representation of Area A1 The anterior end of the primary
auditory cortex corresponds to the apex of the cochlea and hence low
frequencies (500 Hz at the anterior end), whereas the posterior end
corresponds to the base of the cochlea and hence high frequencies (up to
16,000 Hz). Heschl's gyrus is buried on the ventral side of the lateral
fissure, so a retractor must be used to open the fissure to reveal the
underlying auditory cortex.
The systematic organization of tonotopic maps has enabled the development of
cochlear implants—electronic devices surgically inserted in the inner ear to allow deaf
people to hear (see Loeb, 1990). A miniature microphone-like processor detects the
component frequencies of incoming sound waves and sends them to the appropriate
place on the basilar membrane through tiny wires. The nervous system does not distinguish
between stimulation coming from this artificial device and stimulation coming
through the middle ear.
As long as appropriate signals go to the correct locations on the basilar membrane,
the brain will “hear.” Cochlear implants work very well, even allowing the deaf to detect
the fluctuating pitches of speech. Their success corroborates the tonotopic representation
of pitch in the basilar membrane.
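
The signal-processing core of that idea can be sketched as a filterbank: split the
incoming sound into frequency bands and let each band's energy drive the electrode at the
tonotopically matching location, high bands toward the base and low bands toward the
apex. The Python sketch below is a minimal illustration of this principle, not any
manufacturer's algorithm; the sampling rate, test tone, and band edges are arbitrary.

import numpy as np

def band_energies(signal, sample_rate, edges_hz):
    """Split a sound into the energy falling within each frequency band.
    In an implant, each band would drive one electrode, ordered to
    match the basilar membrane's tonotopic map."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(edges_hz[:-1], edges_hz[1:])]

rate = 16000                                  # samples per second
t = np.arange(0, 0.1, 1.0 / rate)             # 0.1 s of signal
tone = np.sin(2 * np.pi * 1000 * t)           # a 1000-Hz test tone
edges = [100, 500, 1000, 2000, 4000, 8000]    # five illustrative bands
print([round(float(e)) for e in band_energies(tone, rate, edges)])
# Nearly all the energy lands in the 1000-2000 Hz band's electrode.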
One minor difficulty with the tonotopic theory of frequency detection is that the
cochlea does not use this mechanism at the very apex of the basilar membrane, where
hair cells, as well as the bipolar cells to which they are connected, respond to frequencies
below about 200 Hz. At this location, all the cells respond to movement of the basilar
membrane, but they do so in proportion to the frequency of the incoming wave.
Higher rates of bipolar cell firing signal a relatively higher frequency, whereas lower
rates of firing signal a lower frequency.
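
In other words, below the cutoff the firing rate itself carries the frequency, roughly
one burst per cycle of the wave, rather than the place of peak displacement. A
deliberately minimal Python sketch (the 200-Hz cutoff comes from the text; the
one-spike-per-cycle assumption is a simplification of ours):

def apex_rate_code(freq_hz, cutoff_hz=200.0):
    """Toy model of coding at the cochlear apex: below roughly
    200 Hz, firing tracks the stimulus cycle for cycle, so the
    rate (spikes/s) itself signals the frequency."""
    if freq_hz >= cutoff_hz:
        raise ValueError("above the cutoff, place coding takes over")
    return freq_hz

print(apex_rate_code(50), apex_rate_code(120))   # 50 120 (spikes/s)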
Why the cochlea uses a different system to differentiate pitches within this range
of very-low-frequency sound waves is not clear. The reason probably has to do with the
physical limitations of the basilar membrane. Although discriminating among
low-frequency sound waves is not important to humans, animals such as elephants and
whales depend on these frequencies to communicate. Most likely they have more neurons
at this end of the basilar membrane than we humans do.
Detecting Loudness
The simplest way for cochlear (bipolar) cells to indicate sound-wave intensity is to fire
at a higher rate when amplitude is greater, which is exactly what happens. More intense
air-pressure changes produce more intense vibrations of the basilar membrane and
therefore greater shearing of the cilia. This increased shearing leads to a greater amount
of transmitter released onto bipolar cells. As a result, the bipolar axons fire more frequently,
telling the auditory system that the sound is getting louder.
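
A toy rate-level function captures this relation: firing sits at a spontaneous rate in
silence, climbs once the sound level rises above threshold, and saturates at high levels.
As with the earlier sketches, every number here is invented for illustration.

def spikes_per_second(level_db, threshold_db=10.0, spontaneous=20.0,
                      slope=4.0, max_rate=250.0):
    """Toy rate-level function for a cochlear afferent: the firing
    rate rises above the spontaneous rate once the sound exceeds
    threshold and saturates at high levels. Values are illustrative."""
    driven = max(0.0, level_db - threshold_db) * slope
    return min(max_rate, spontaneous + driven)

for db in (0, 20, 40, 60, 80):
    print(db, "dB ->", spikes_per_second(db), "spikes/s")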
Detecting Location
The fact that each cochlear nerve synapses on both sides of the brain provides mechanisms
for locating the source of a sound. In one mechanism, neurons in the brainstem
compute the difference in a sound wave’s arrival time at each ear. Such differences in
arrival time need not be large to be detected. If two sounds presented through earphones
are separated in time by as little as 10 microseconds, the listener will perceive
that a single sound came from the leading ear.
This computation of left-ear–right-ear arrival times is carried out in the medial
part of the superior olivary complex (see Figure 9-11). Because these hindbrain cells
receive inputs from each ear, they are able to compare exactly when the signal from
each ear reaches them.
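
A back-of-the-envelope calculation shows how much timing information these cells have
to work with. The Python sketch below uses the simple path-difference approximation
(extra path = ear separation times the sine of the azimuth) with round numbers for head
width and the speed of sound:

import math

SPEED_OF_SOUND = 343.0    # m/s in air
EAR_SEPARATION = 0.18     # m, a round number for the human head

def itd_microseconds(azimuth_deg):
    """Interaural time difference for a distant source at the given
    azimuth (0 = straight ahead, 90 = directly to one side), using
    the simple path-difference approximation."""
    extra_path = EAR_SEPARATION * math.sin(math.radians(azimuth_deg))
    return extra_path / SPEED_OF_SOUND * 1e6

for az in (0, 5, 45, 90):
    print(f"{az:>2} degrees -> {itd_microseconds(az):.0f} microseconds")

A source directly ahead (0 degrees) produces no difference, which is the front-back
ambiguity described next, whereas even 5 degrees off the midline yields roughly 46
microseconds, comfortably above the 10-microsecond difference a listener can detect.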
Figure 9-15 shows how sound waves originating on the left reach the left ear
slightly before they reach the right ear. As the sound source moves from the side of
the head toward the middle, a person has greater and greater difficulty locating it.
The reason is that the difference in arrival time becomes smaller and smaller until
there is no difference at all.
Cochlear implant. Electronic device
implanted surgically into the inner ear to
transduce sound waves into neural
activity and allow deaf people to hear.
Learn more about the development of
the cochlear implant on the Web site at
www.worthpublishers.com/kolb/
chapter9
Cochlear implants bypass the normal route
for sound-wave energy by electronically
processing incoming stimulation directly to
the correct locations on the basilar
membrane.
When we detect no difference, we infer that the sound is either directly in front of us
or directly behind us.
To identify the location, we move our heads,
making the sound waves strike one ear sooner.
We have a similar problem distinguishing between
sounds directly above and below us. Again, we
solve the problem by tilting our heads, thus causing
the sound waves to strike one ear before the other.
Another mechanism used by the auditory system
to detect the source of a sound has to do not
with the difference in arrival times of sound waves
at the two ears, but instead with the sound’s relative
loudness on the left or the right. The basis of
this mechanism is that higher-frequency sound
waves do not easily bend around the head; so
the head acts as an obstacle. As a result, higherfrequency
sound waves on one side of the head are
louder than on the other.
This difference is detected in the lateral part of the superior olive and the trapezoid
body in the hindbrain. For sound waves coming from directly in front or behind
or from directly above or below, the same problem of differentiation exists, requiring
the same solution of tilting or turning the head.
Head tilting and turning take time. Although not usually important for humans,
time is important for other animals, such as owls, that hunt by using sound. Owls need
to know the location of a sound simultaneously in at least two directions (e.g., left and
below or right and above).
Owls, like humans, can orient in the horizontal plane to sound waves (called azimuth
detection) by using the different times at which sound waves reach the two ears.
Additionally, the owl’s ears have evolved to be slightly
displaced in the vertical direction so that they can detect
the sound waves’ relative loudness in the vertical plane
(called elevation detection). This solution allows owls to
hunt entirely by sound in the dark (Figure 9-16). Bad
news for mice.
Figure 9-15
Locating a Sound Sound waves that originate on the left side of the
body must travel an extra distance to reach the right ear; they therefore
reach the left ear slightly sooner, allowing us to locate the sound source.
But the difference in arrival time is subtle, and the auditory system fuses
the dual auditory stimuli so that we perceive a single, clear sound coming
from the left.
Figure 9-16
Hunting by Ear (Left) For an owl, differences in perceived loudness yield
clues to the elevation of a sound source, whereas inter-ear time differences
are used to detect the source's horizontal direction. This barn owl has aligned
its talons with the body axis of the mouse that it is about to catch in the dark.
(Right) The facial structure and external ears of the barn owl are formed by
rows of tightly packed feathers, the facial ruff that extends from the relatively
narrow skull down the length of the face to join below the beak. The ruff
collects and funnels sound waves into the ear-canal openings through feathered
troughs formed by the ruff above and below the eyes. The owl's left ear is
more sensitive to sound waves from the left and below, because the ear canal
is higher on the left side and the trough is tilted down. The ear canal on the
right is lower and the trough is tilted up, making the right ear more sensitive
to sound waves from the right and above. Drawing adapted from "The Hearing of
the Barn Owl," by E. I. Knudsen, 1981, Scientific American, 245(6), p. 115.
Detecting Patterns in Sound
Music and language are perhaps the primary sound-wave patterns that humans recognize.
Perceiving sound-wave patterns as meaningful units is thus fundamental to auditory
analysis. Because music and language are lateralized in the right and left temporal
lobes, respectively, we can guess that neurons in the right and left temporal cortex take
part in pattern recognition and analysis of these two auditory experiences. Studying
the activities of auditory neurons in humans is not easy, however.
Most of the knowledge that neuroscientists have comes from studies of how individual
neurons respond in nonhuman primates. For instance, Peter Winter and Hans
Funkenstein (1971) found that neurons in the auditory cortex of the squirrel monkey
are specifically responsive to squirrel monkey vocalizations. More recently, Joseph
Rauschecker and his colleagues (1995) discovered that neurons in the secondary auditory
areas of rhesus monkeys are more responsive to mixtures of sound waves than to
pure tones.
Other researchers also have shown that the removal of the temporal auditory
cortex abolishes the ability to discriminate the vocalizations made by other members
of the species (Heffner and Heffner, 1990). Interestingly, discrimination of
species-typical vocalizations in monkeys seems more severely disrupted by injury to the
left temporal cortex than to the right. This finding implies a possible functional
asymmetry for the analysis of complex auditory material in nonhuman primates,
as do the singing primates studied by Geissmann, cited at the beginning of this
chapter.
ANATOMY OF LANGUAGE AND MUSIC
This chapter began with the discovery of the Neanderthal flute and its evolutionary implications.
The fact that Neanderthals made flutes implies not only that they processed
musical sound-wave patterns but also that they made music. In the modern human
brain, musical ability is generally a right-hemisphere specialization complementary to
language ability, largely localized in the left hemisphere.
No one knows whether these complementary systems evolved together in the hominid
brain, but it is certainly very possible that they did. Both language and music
abilities are highly developed in the modern human brain. Although little is known
about how language and music are processed at the cellular level, electrical stimulation
and recording and blood-flow imaging studies have been sources of important insights
into the regions of the cortex that process them. We investigate such studies next,
focusing first on how the brain processes language.
In Review
Neurons in the cochlea form tonotopic maps that code sound-wave frequencies and are
maintained throughout the levels of the auditory system. The same cells in the cochlea
vary their firing rate, depending on sound-wave amplitude. Detecting the location of a
sound is a function of neurons in the superior olive and trapezoid body of the brainstem.
These neurons compute differences in sound-wave arrival time and loudness in the two
ears. Understanding the sound-wave patterns of music and language requires pattern
recognition, which is performed by cortical auditory neurons.
Processing Language
More than 4000 human languages are spoken in the world today, and probably many
others have gone extinct in past millennia. Researchers have wondered whether the
brain has a single system for understanding and producing any language, regardless of
its structure, or whether very different languages, such as English and Japanese, are
processed in different ways. To answer this question, it helps to analyze languages to
determine just how fundamentally similar they are, despite their obvious differences.
UNIFORMITY OF LANGUAGE STRUCTURE
Foreign languages often seem impossibly complex to nonspeakers. Their sounds alone
may seem odd and difficult to make. If you are a native speaker of English, for instance,
Asian languages, such as Japanese, probably sound peculiarly melodic and almost without
obvious consonants to you, whereas European languages, such as German or
Dutch, may sound heavily guttural.
Even within related languages, such as Spanish, Italian, and French, marked differences
can make learning one of them challenging, even if the student already knows
another. Yet, as real as all these linguistic differences may be, it turns out that they are
superficial. The similarities among human languages, although not immediately apparent,
are actually far more fundamental than their differences.
Noam Chomsky is usually credited with being the first linguist to stress similarities
over differences in human language structure. In a series of books and papers
written in the past 40 years, Chomsky made a very sweeping claim, as have researchers
such as Steven Pinker (1997) more recently. They argue that all languages have common
structural characteristics because of a genetically determined constraint on the
nature of human language. Humans, apparently, have a built-in capacity for creating
and using language.
When Chomsky first proposed this idea in the 1960s, it was greeted with skepticism,
but it has since become clear that human language does indeed have a genetic
basis. An obvious piece of evidence: language is universal in human populations. All
people everywhere use language.
The complexity of language is not related to the technological complexity of a culture.
The languages of technologically primitive peoples are every bit as complex and
elegant as the languages of postindustrial cultures. Nor is the “Olde” English of Shakespeare
inferior or superior to modern English; it is just different.
Another piece of evidence that Chomsky adherents cite in favor of a genetic basis
of human language is that humans learn language early in life and seemingly without
effort. By about 12 months of age, children everywhere have started to speak words. By
18 months, they are combining words, and, by age 3 years, they have a rich language
capability.
Perhaps the most amazing thing about language development is that children are
not formally taught the structure of their language. As toddlers, they are not painstakingly
instructed in the rules of grammar. In fact, their early errors—sentences such as
“I goed to the zoo”—are seldom even corrected by adults. Yet children master language
rapidly.
They also acquire language through a series of stages that are remarkably similar
across cultures. Indeed, the process of language acquisition plays an important role in
Chomsky’s theory of the innateness of language, which is not to say that language development
is not influenced by experience.
At the most basic level, for example, children learn the language that they hear
spoken. In an English household, they learn English; in a Japanese home, they learn
Japanese. They also pick up the vocabulary and grammar (structure) of the people
around them, which can vary from one speaker to another. Children go through a sensitive
period for language acquisition, probably from about 1 to 6 years of age. If they
are not exposed to language throughout this critical period, their language skills are severely
compromised (see Chapter 6).
A third piece of evidence in favor of a genetic basis of language is the many basic
structural elements that all languages have in common. Granted, every language has
its own particular rules that specify exactly how the various parts of speech are positioned
in a sentence (syntax), how words are inflected to convey different meanings,
and so forth. But there are also overarching rules that apply to all human
languages.
For instance, all languages employ grammar, the parts of speech that we call subjects,
verbs, and direct objects. Consider the sentence “Jane ate the apple.” “Jane” is the
subject, “ate” is the verb, and “apple” is the direct object. Syntax is not specified by any
universal rule but rather is a characteristic of the particular language. In English, the
order is subject, verb, object; in Japanese, the order is subject, object, verb; in Gaelic,
the order is verb, subject, object. Nonetheless, all languages have both syntax and
grammar.
The existence of these structural pillars in all human languages can be seen in the
phenomenon of creolization—the development of a new language from what was formerly
a very rudimentary language, or pidgin. Creolization took place in the
seventeenth-century Americas, when slave traders and the owners of colonial plantations
brought together people from various African villages who lacked a common language.
Because the new slaves needed to communicate, they quickly created a pidgin
based on whatever language the plantation owners spoke—English, French, Spanish,
or Portuguese.
The pidgin had a crude syntax (word order) but lacked a real grammatical structure.
The children of the slaves who invented this pidgin were brought up by caretakers
who spoke only pidgin to them. Yet, within a generation, these children had created
their own creole, a language complete with a genuine syntax and grammar.
Clearly, the pidgin invented of necessity by adults was not a learnable language for
children. Their innate biology shaped a new language similar in basic structure to all
other human languages. All creolized languages seem to evolve in a similar way, even
though the base languages are unrelated. This phenomenon could happen only if there
were an innate, biological component to language development.
LOCALIZATION OF LANGUAGE APPARATUS IN THE BRAIN
Finding a universal basic language structure set researchers on a search for an innate
brain system that underlies language use. By the late 1800s, it had become clear that
language functions were at least partly localized—not just within the left hemisphere
but to specific areas there. Clues that led to this conclusion began to emerge in
the early part of the nineteenth century, when neurologists observed patients with
frontal-lobe injuries who suffered language difficulties.
Not until 1861, however, was the localization of certain language
functions in the left hemisphere confirmed. Paul Broca examined a
patient who had entirely lost his ability to speak except to say “tan”
and to utter an oath. The man died shortly thereafter, and Broca examined
his brain, finding a fresh injury to the left frontal lobe. On
the basis of this case and several subsequent cases, Broca concluded
that language functions are localized in a region of the left frontal lobe just anterior
to the central fissure. A person with damage in this area is unable to speak despite
both an intact vocal apparatus and normal language comprehension.
Paul Broca
(1824–1880)
The discovery of Broca's area was significant because
it initiated the idea that the left and right hemispheres might have different
functions.
Other neurologists at the time believed that Broca’s area might be only one of
several left-hemisphere regions that control language. In particular, neurologists suspected
a relation between hearing and speech. Proving this suspicion correct, Karl
Wernicke later described patients who had difficulty comprehending language after
injury to the posterior region of the left temporal lobe identified as Wernicke’s area
in Figure 9-17.
We referred to Wernicke’s area earlier as a speech zone (see Figure 9-12A).
Damage to any speech area produces some form of aphasia, the general term for
an inability to comprehend or produce language despite intact vocal mechanisms
and otherwise normal cognition. At one extreme, people who suffer
Wernicke’s aphasia can speak fluently, but their language is confused and makes little
sense, as if they have no idea what they are saying. At the other extreme, a person
with Broca’s aphasia cannot speak despite normal comprehension and intact
physiology.
Wernicke went on to propose a model for how the two language areas of the left
hemisphere interact to produce speech, diagrammed in Figure 9-17A. He theorized
that images of words are encoded by their sounds and stored in the left posterior temporal
cortex.When we hear a word that matches one of those sound images, we recognize
it, which is how Wernicke’s area contributes to speech comprehension.
To speak words, Broca’s area in the left frontal lobe must come into play, because
the motor program to produce each word is stored in this area. Messages are sent to
Broca’s area from Wernicke’s area through a pathway, the arcuate fasciculus, that connects
the two regions. Broca’s area in turn controls the articulation of words by the
vocal apparatus, as diagrammed in Figure 9-17B.
Wernicke’s model provided a simple explanation both for the existence of two
major language areas in the brain and for the contribution of each area to the control
of language. But the model was based on postmortem examinations of patients with
brain lesions that were often extensive. Not until the pioneering studies of neurosurgeon
Wilder Penfield, begun in the 1930s, were the language areas of the left hemisphere
clearly and accurately mapped.
On your CD, look at the language areas on the three-dimensional model of the
brain in the brain overview section in the module on the Central Nervous System.
Figure 9-17
Neurology of Language In Wernicke's model of speech recognition,
summarized in part A, stored sound images are matched to spoken words in
the left posterior temporal cortex: Wernicke's area contains the sound
images of words, and the arcuate fasciculus connects it to Broca's area,
where the motor programs for speaking words are stored. Speech is
produced through this connection, as summarized in part B: a spoken word
registers in area A1 and is comprehended in Wernicke's area, whereas a
thought to be spoken engages Broca's area, which drives the facial area of
the motor cortex and, through the cranial nerves, the vocal apparatus.
Broca’s area. Anterior speech area in
the left hemisphere that functions with
the motor cortex to produce the
movements needed for speaking.
Aphasia. Inability to speak or
comprehend language despite intact
vocal mechanisms and otherwise
normal cognition; Broca's aphasia
is the inability to speak fluently despite
the presence of normal comprehension
and intact vocal mechanisms; Wernicke’s
aphasia is the inability to understand or
to produce meaningful language even
though the production of words is still
intact.
AUDITORY AND SPEECH ZONES MAPPED BY
BRAIN STIMULATION
Among Penfield's discoveries, it turns out, was that instead of Broca's area being solely
the site of speech production and Wernicke's area solely the site of language comprehension,
electrical stimulation of either region disrupts both processes. Penfield took advantage of
the chance to map auditory and language areas of the brain when operating on patients
undergoing elective surgery to treat intractable epilepsy. If you've read "Epilepsy" on
page 000, you know that the goal of this surgery is to remove tissues where the abnormal
discharges are localized.
A major challenge is to prevent injury to critical regions that serve important functions.
To determine the location of these critical regions, Penfield used a tiny electrical
current to stimulate the surface of the brain. By monitoring the response of the patient to
stimulation in different locations, Penfield could map brain functions along the cortex.
Typically, two neurosurgeons perform the operation (Figure 9-18A), and a
neurologist analyzes the electroencephalogram in an adjacent room. Because
patients are awake, they can contribute during the procedure, and the effects of
brain stimulation in specific regions can be determined in detail and mapped.
Penfield placed little numbered tickets on different parts of the brain’s surface
where the patient noted that stimulation had produced some noticeable sensation
or effect, producing the cortical map shown in Figure 9-18B.
When Penfield stimulated the auditory cortex, patients often reported
hearing various sounds, among them a ringing sound like that of a doorbell, a buzzing
noise, and a sound like that of birds chirping. This result is consistent
with those of later studies of single-cell recordings from the auditory
cortex in nonhuman primates. Findings in these later studies showed that the
auditory cortex has a role in pattern recognition.
Penfield also found that stimulation in area A1 seemed to produce simple
tones, ringing sounds, and so forth, whereas stimulation in the adjacent auditory
cortex (Wernicke’s area) was more apt to cause some interpretation of a
sound, ascribing it to a familiar source such as a cricket, for instance. There was no difference
in the effects of stimulation of the left or right auditory cortex, and the patients
heard no words when the brain was stimulated.
Sometimes, however, stimulation of the auditory cortex produced effects other
than the perception of sounds. Stimulation of one area, for example, might cause a patient
to experience a sense of deafness, whereas stimulation of another area might produce
a distortion of sounds actually being heard.
Figure 9-18
Mapping Cortical Functions (A) The
patient is fully conscious during
neurosurgery, lying on his right side and
kept comfortable with local anesthesia.
The left hemisphere of his brain is
exposed as Wilder Penfield stimulates
discrete cortical areas. In the background,
the neurologist monitors the
electroencephalogram recorded from
each stimulated area, which will help in
identifying the epileptogenic focus. The
anesthetist (seated) observes the patient’s
response to the cortical stimulation.
(B) A drawing of the entire skull overlies
a photograph of the patient’s exposed
brain during surgery. The numbered
tickets identify the points that Penfield
stimulated to map the cortex. At points
26, 27, and 28, a stimulating electrode
produced interference with speech. Point
26 is presumably in Broca’s area, 27 is the
facial-control area in the motor cortex,
and 28 is in Wernicke’s area in this
patient’s brain.
Figure 9-19
Cortical Regions That Control Language This map, based on
Penfield's extensive study of patients who had surgery for the relief of
intractable epilepsy, summarizes areas in the left hemisphere where direct
stimulation may interfere with speech or produce vocalization: area A1
(sounds heard), Broca's area and Wernicke's area (aphasia), the
supplementary speech area (vocalization or speech arrest), and areas
controlling facial movement or sensation (vocalization or speech arrest).
Adapted from Speech and Brain Mechanisms (p. 201), by W. Penfield
and L. Roberts, 1956, London: Oxford University Press.
Click on the Web site at www.
worthpublishers.com/kolb/chapter9
for current research on aphasia.
Supplementary speech area. Speech-production region on the dorsal surface of
the left frontal lobe.
Positron emission tomography
(PET). Imaging technique that detects
changes in blood flow by measuring
changes in the uptake of compounds
such as oxygen or glucose.
On the CD, find the area on PET in
the Research Methods module for a three-dimensional model of a PET camera and
samples of PET scans.
As one patient exclaimed after a certain region had been stimulated, "All that you
said was mixed up!"
Penfield was most interested in the effects of
brain stimulation not on simple sound-wave processing
but on language. He and later researchers
used electrical stimulation to identify four important
cortical regions that control language. The two
classic regions—Broca’s area and Wernicke’s area—
are left-hemisphere regions. Located on both sides
of the brain are the other two major regions of
language use: the dorsal area of the frontal lobes
and the areas of the motor and somatosensory
cortex that control facial, tongue, and throat muscles
and sensations. Although the effects on speech
vary, depending on the region, stimulating any of them disrupts speech in some way.
Clearly, much of the left hemisphere takes part in audition, and Figure 9-19 shows
the areas of the left hemisphere that Penfield found engaged in some way in processing
language. In fact, Penfield mapped cortical language areas in two ways, first by disrupting
speech and then by eliciting speech. Not surprisingly, damage to any speech
area produces some form of aphasia.
Disrupting Speech Penfield expected that electrical current might disrupt ongoing
speech by effectively "short-circuiting" the brain. He stimulated different regions of the
cortex while the patient was in the process of speaking. In fact, the speech disruptions
took several forms, including slurred speech, confusion of words, or difficulty in finding
the right word. Such aphasias are detailed in “Left-Hemisphere Dysfunction.”
Electrical stimulation of the supplementary speech area on the dorsal surface of the
frontal lobes, shown in Figure 9-19, can even stop ongoing speech completely, a reaction
that Penfield called speech arrest. Stimulation of other cortical regions well removed from
the temporal and frontal speech areas has no effect on ongoing speech, with the exception
of regions of the motor cortex that control movements of the face. This exception makes
sense because talking requires movement of facial, tongue, and throat muscles.
Eliciting Speech The second way that Penfield mapped language areas was to stimulate
the cortex when a patient was not speaking to see if he could cause the person to
utter a speech sound. Penfield did not expect to trigger coherent speech, because cortical
stimulation is not physiologically normal and so probably would not produce actual
words or word combinations. His expectation was borne out.
Stimulation of regions on both sides of the brain—for example, the supplementary
speech areas—produces a sustained vowel cry, such as “Oh” or “Eee.” Stimulation
of the facial areas in the motor cortex and the somatosensory cortex produces some
vocalization related to movements of the mouth and tongue. Stimulation outside these
speech-related zones produces no such effects.
AUDITORY CORTEX MAPPED BY POSITRON
EMISSION TOMOGRAPHY
Today, researchers use positron emission tomography (PET) to study the metabolic
activity of brain cells engaged in processing language. PET imaging is based on a surprisingly
old idea. In the late 1800s, Angelo Mosso was fascinated by the observation
that pulsations in the living brain keep pace with the heartbeat. Mosso believed that
the pulsations were related to changes in blood flow in the brain.
He later noticed that the pulsations appeared to be linked to mental activity. For
example, when a subject was asked to perform a simple calculation, the increase in
brain pulsations and, presumably, in blood flow, was immediate. But to demonstrate a
relation between mental activity and blood flow within the brain requires a more quantifiable
measure than just visual observation. Various procedures for measuring blood
flow in the brain were devised in the twentieth century, one of which is described in
“Arteriovenous Malformations.” But not until the development of PET in the 1970s
could blood flow in the brain of a human subject be measured safely and precisely
(Posner and Raichle, 1997), confirming Mosso’s observations.
A PET camera, like the one shown in Figure 9-20, is a doughnut-shaped array of
radiation detectors that encircles a subject’s head. A small amount of water, labeled
with radioactive molecules, is injected into the bloodstream. The person injected with
these molecules is in no danger, because the molecules, such as the radioactive isotope
oxygen-15 (15O), are very unstable. They break down in just a few minutes and are
eliminated from the body quickly.
The radioactive 15O molecules release tiny, positively charged, subatomic particles
known as positrons (electrons with a positive charge). Positrons are emitted from an
atom that is unstable because it is deficient in neutrons. The positrons are attracted to
the negative charge of electrons in the brain, and the subsequent collision of these two
particles leads to both of them being annihilated, thus creating energy.
This energy, in the form of two photons, leaves the head at the speed of light and is
detected by the PET camera. The photons exit the head in exactly opposite directions
from the site of positron–electron annihilation at the same speed, and so their source
can be identified. A computer identifies the coincident photons and locates the
annihilation source.
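
The geometry behind that last step is simple enough to sketch: each coincident pair of
hits on the detector ring defines a line of response through the annihilation site, and
lines from different events cross at the source. The Python sketch below is a minimal
two-dimensional illustration with a made-up source position, not a model of a real
scanner's reconstruction software.

import numpy as np

RING_RADIUS = 1.0    # arbitrary radius for the detector ring

def detector_hits(source, direction_deg):
    """An annihilation at `source` emits two photons in exactly
    opposite directions; return the two points where they strike
    the circular detector ring."""
    d = np.array([np.cos(np.radians(direction_deg)),
                  np.sin(np.radians(direction_deg))])
    hits = []
    for sign in (1.0, -1.0):
        b = np.dot(source, sign * d)   # solve |source + t*sign*d| = R for t > 0
        t = -b + np.sqrt(b * b + RING_RADIUS ** 2 - np.dot(source, source))
        hits.append(source + t * sign * d)
    return hits

def cross2(u, v):
    """z-component of the 2-D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def intersect_lors(lor1, lor2):
    """Two lines of response cross at the annihilation site; with
    many events, crossings pile up over the active tissue."""
    (p1, p2), (p3, p4) = lor1, lor2
    d1, d2 = p2 - p1, p4 - p3
    t = cross2(p3 - p1, d2) / cross2(d1, d2)
    return p1 + t * d1

source = np.array([0.3, 0.1])                 # hypothetical active spot
lors = [detector_hits(source, ang) for ang in (20.0, 75.0)]
print(intersect_lors(*lors))                  # recovers ~[0.3, 0.1]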
The PET system enables the measurement of blood flow in the brain because the unstable
radioactive molecules accumulate in the brain in direct proportion to the rate of
local blood flow. Local blood flow, in turn, is related to neural activity.
Left-Hemisphere Dysfunction
Focus on Disorders
Susan S., a 25-year-old college graduate and mother of two,
suffered from epilepsy. When she had a seizure, which was
almost every day, she would lose consciousness for a short
period, during which she would often engage in repetitive
behaviors, such as rocking back and forth. Such psychomotor
seizures can usually be controlled by medication, but the
drugs were ineffective for Susan. The attacks were very disruptive
to her life because they prevented her from driving a
car and restricted the types of jobs that she could hold.
So Susan decided to undergo neurosurgery to remove
the region of abnormal brain tissue that was causing the
seizures. This kind of surgery has a very high success rate. In
her case, it entailed the removal of a part of the left temporal
lobe, including most of the cortex in front of the auditory
areas. Although it may seem to be a substantial amount of
the brain to cut away, the excised tissue is usually abnormal;
so any negative consequences are typically minor.
After her surgery, Susan did well for a few days, but then
she suffered unexpected and unusual complications, which
led to the death of the remainder of her left temporal lobe, including
the auditory cortex and Wernicke’s area. As a result,
she was no longer able to understand language, except to respond
to the sound of her name and to speak just one phrase:
“I love you.” Susan was also unable to read, showing no sign
that she could even recognize her own name in writing.
To find ways to communicate with Susan, we tried humming
nursery rhymes to her. She immediately recognized
them and could say the words. We also discovered that her
singing skill was well within the normal range and she had
a considerable repertoire of songs.
Susan did not seem able to learn new songs, however,
and she did not understand us if we “sang messages” to her.
Apparently, Susan’s musical repertoire was stored and controlled
independently of her language system.
Potassium ions released from stimulated neurons dilate adjacent blood vessels. The greater
the blood flow, the higher the radiation counts recorded by the PET camera.
With the use of sophisticated computer imaging, blood flow in the brain when a
person is at rest with closed eyes can be mapped (Figure 9-21). The map shows where
the blood flow is highest in a series of frames. Even though the distribution of blood
is not uniform, it is still difficult to conclude very much from such a map.
Figure 9-20
PET Scanner and Image A small amount of radioactively labeled water
is injected into a subject; active areas of the brain use more blood and
thus carry more of the radioactive label. Positrons released by the
radioactivity collide with electrons in the brain, producing annihilation
photons (a form of energy) that exit the head and are registered by the
ring of annihilation-photon detectors. In the image produced by a PET
scan, brightly colored yellow and red areas are regions of highest blood
flow.
Figure 9-21
Resting State PET images of blood flow obtained while a single subject
rested quietly with eyes closed. Each scan represents a horizontal section,
from the dorsal surface (1) to the ventral surface (31) of the brain.
So PET researchers who are studying the link between blood flow and mental activity
resort to a statistical trick. They subtract the blood-flow pattern when the brain
is in a carefully selected control state, such as that depicted in the images in Figure 9-21,
from the pattern of blood flow imaged when the subject is engaged in the experimental
task under study. As illustrated in Figure 9-22, this subtraction process provides an
image of the change in blood flow in the two states. The change can be averaged across
subjects to yield a representative, average image difference that reveals which areas of
the brain are selectively active during the task.
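
The arithmetic of this subtraction method is easy to illustrate with made-up numbers.
In the Python sketch below, synthetic "flow maps" for eight subjects contain one region
that is reliably more active during the task; subtracting each subject's control scan
from the task scan and averaging the differences makes that region stand out while
unrelated variation averages toward zero. All values are fabricated for the example.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic blood-flow maps for 8 subjects on a coarse 4 x 4 grid
# of brain regions: one resting control scan and one task scan each.
n_subjects, shape = 8, (4, 4)
rest = rng.normal(50, 5, size=(n_subjects, *shape))         # baseline flow
task = rest + rng.normal(0, 2, size=(n_subjects, *shape))   # task = rest + noise...
task[:, 1, 2] += 10          # ...plus one region reliably more active

# Subtract the control scan from the task scan within each subject,
# then average the difference images across subjects.
mean_difference = (task - rest).mean(axis=0)
print(np.round(mean_difference, 1))
# Noise averages toward zero; the task-related region stands out.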
What happens when PET is used while subjects listen to sounds? Although there
are many PET studies of auditory stimulation, a series conducted by Robert Zatorre
and his colleagues (1992, 1995) serves as a good example. These researchers hypothesized
that simple auditory stimulation, such as bursts of noise, would be analyzed by
Arteriovenous Malformations
Focus on Disorders
An arteriovenous malformation (also called an AV malformation
or angioma) is a mass of enlarged and tortuous cortical
blood vessels that form congenitally. AV malformations
are quite common, accounting for as many as 5 percent of
all cases of cerebrovascular disease.
Although angiomas may be benign, they often interfere
with the functioning of the underlying brain and can produce
epileptic seizures. The only treatment is to remove the
malformation. This procedure carries significant risk, however,
because the brain may be injured in the process.
Walter K. was diagnosed with an AV malformation when
he was 26 years old. He had consulted a physician because
of increasingly severe headaches, and a neurological examination
revealed an angioma over his occipital lobe. A surgeon
attempted to remove the malformation, but the surgery
did not go well; Walter was left with a defect in the bone
overlying his visual cortex. This bone defect made it possible
to listen to the blood flow through the malformation.
Dr. John Fulton noticed that when Walter suddenly
began to use his eyes after being in the dark, there was a
prompt increase in the noise (known as a bruit) associated
with blood flow. Fulton documented his observations by
recording the sound waves of the bruit while Walter performed
visual experiments.
For example, if Walter had his eyes closed and then
opened them to read a newspaper, there was a noticeable increase
in blood flow through the occipital lobe. If the lights
went out, the noise of the blood flow subsided. Merely shining
light into Walter’s eyes had no effect; nor was there an
effect when he inhaled vanilla or strained to listen to faint
sounds.
Apparently, the bruit and its associated blood flow were
triggered by mental effort related to vision. To reach this conclusion
was remarkable, given that Fulton used only a stethoscope
and a simple recording device for his study. Modern
instrumentation, such as that of positron emission tomography,
has shown that Fulton’s conclusion was correct.
MRI angiogram looking down on the surface of the brain of an 18-year-old girl with an angioma. The abnormal cerebral blood vessels (in white) formed a balloonlike structure (the blue area at lower right) that caused the death of brain tissue around it in the right occipital cortex.
These researchers hypothesized that simple auditory stimulation, such as bursts of noise, would be analyzed by area A1, whereas more complex auditory stimulation, such as speech syllables, would be analyzed in adjacent secondary auditory areas.
The researchers also hypothesized that the performance of a speech-sound-discrimination task would selectively activate left-hemisphere regions. This selective
activation is exactly what they found. Figure 9-23A shows increased activity in the primary
auditory cortex in response to bursts of noise, whereas secondary auditory areas
are activated by speech syllables (Figure 9-23B and C).
Both types of stimuli produced responses in both hemispheres, but there was
greater activation in the left hemisphere for the speech syllables. These results imply that
auditory area A1 analyzes all incoming auditory signals, speech and nonspeech, whereas
the secondary auditory areas are responsible for some higher-order signal processing required
for the analysis of language sound patterns.
As Figure 9-23C shows, the speech-sound-discrimination task yielded an intriguing
additional result: during this task, Broca’s area in the left hemisphere was activated
as well. The involvement of this frontal-lobe region during auditory analysis may seem
surprising. In Wernicke’s model, Broca’s area is considered the place where the motor
programs needed to produce words are stored. It is not normally a region thought of as
the site of speech-sound discrimination.
A possible explanation is that, to determine that the “g” in “bag” and “pig” is the
same speech sound, the auditory stimulus must be related to how that sound is actually
articulated. That is, the speech-sound perception requires a match with the motor behaviors
associated with making that sound.
This role for Broca’s area in speech analysis is confirmed further when investigators
ask people to determine if a stimulus is a word or a nonword (e.g.,“tid” versus “tin”
or “gan” versus “tan”). In this type of study, information about how the words are articulated
is irrelevant, and Broca’s area would not need to be recruited. Imaging reveals
that it is not.
Processing Music
Although Penfield did not study the effect of brain stimulation on musical analysis,
many researchers study musical processing in brain-damaged patients. “Cerebral
Aneurysms” describes one such case. Collectively, the results of these studies confirm
Figure 9-22: The Procedure of Subtraction. In the upper row of scans, the control condition of resting while looking at a static fixation point is subtracted from the experimental condition of looking at a flickering checkerboard. The subtraction produces a difference scan for each of the five experimental subjects shown in the middle row, but all show increased blood flow in the occipital region. The difference scans are averaged to produce the representative image at the bottom.
Figure 9-23: Selective Cortical Areas Activated in Different Language-Related Tasks. (A) Passively listening to noise bursts activates the primary auditory cortex. (B) Listening to words activates the posterior speech area, including Wernicke's area. (C) Making a phonetic discrimination activates the frontal region, including Broca's area.
that musical processing is in fact largely a right-hemisphere specialization, just as language
processing is largely a left-hemisphere one.
An excellent example of right-hemisphere predominance for the
processing of music is seen in a famous patient—French composer
Maurice Ravel (1875–1937).“Bolero” is perhaps his best-known work.
At the peak of his career, Ravel suffered a left-hemisphere stroke and
developed aphasia. Yet many of Ravel’s musical skills remained intact
after the stroke because they were localized to the right hemisphere.
He could still recognize melodies, pick up tiny mistakes in music that
he heard being played, and even judge the tuning of pianos. His music
perception was largely intact.
Interestingly, however, skills that had to do with music production were among
those destroyed. Ravel could no longer recognize written music, play the piano, or
compose. This dissociation of music perception and music production is curious. Apparently,
the left hemisphere plays at least some role in certain aspects of music processing,
especially those that have to do with making music.
To find out more about how the brain carries out the perceptual side of music processing,
Zatorre and his colleagues (1994) conducted PET studies. When subjects listened
simply to bursts of noise, Heschl’s gyrus became activated (Figure 9-24A), but this was
Focus on Disorders: Cerebral Aneurysms
C. N. was a 35-year-old nurse described by Isabelle Peretz
and her colleagues (1994). In December 1986, C. N. suddenly
developed severe neck pain and headache. A neurological
examination revealed an aneurysm in the middle
cerebral artery on the right side of her brain.
An aneurysm is a bulge in a blood-vessel wall caused by
a weakening of the tissue, much like the bulge that appears
in a bicycle tire at a weakened spot. Aneurysms in a cerebral
artery are dangerous because, if they burst, severe bleeding
and subsequent brain damage result.
In February 1987, C. N.’s aneurysm was surgically repaired,
and she appeared to suffer few adverse effects. However,
postoperative brain imaging revealed that a new
aneurysm had formed in the same location but on the opposite
side of the brain. This second aneurysm was repaired
2 weeks later.
After her surgery, C. N. had temporary difficulty finding
the right word when she spoke, but, more important, her perception
of music was deranged. She could no longer sing,
nor could she recognize familiar tunes. In fact, singers
sounded to her as if they were talking instead of singing. But
C. N. could still dance to music.
Because her music-related symptoms did not go away, she
was given a brain scan. It revealed damage along the lateral fissure
in both temporal lobes. The damage did not include the
primary auditory cortex, nor did it include any part of the posterior
speech zone. For these reasons, C. N. could still recognize
nonmusical sound patterns and showed no evidence
of language disturbance. This finding reinforces the hypothesis
that nonmusical sounds and speech sounds are analyzed
in parts of the brain separate from those that process music.
Illustration: a bulge in a bicycle tire compared with an aneurysm in a cerebral artery.

Photograph: Maurice Ravel (1875–1937).
not the case when the subjects listened to melodies. As shown in Figure 9-24B, the perception
of melody triggers major activation in the right-hemisphere auditory cortex
lying in front of Heschl’s gyrus, as well as minor activation in the same region of the left
hemisphere (not shown).
In another test, subjects listened to the same melodies but this time were asked to
indicate whether the pitch of the second note was higher or lower than that of the first
note. During this task, which requires short-term memory of what has just been heard,
blood flow in the right frontal lobe increased (Figure 9-24C). As with language, then,
the frontal lobe plays a role in auditory analysis when short-term memory is required.
As noted earlier, the capacity for language appears to be innate. Sandra Trehub and
her colleagues (1999) showed that the capacity for music may be innate as well, as we hypothesized at
the beginning of the chapter. Trehub found that infants show learning preferences for
musical scales versus random notes. Like adults, children are very sensitive to musical
errors, presumably because they are biased for perceiving regularity in rhythms. Thus,
it appears that, at birth, the brain is prepared for hearing both music and language and,
presumably, selectively attends to these auditory signals.
AUDITORY COMMUNICATION IN NONHUMAN SPECIES
Sound has survival value, as you know if you’ve ever crossed a busy intersection on foot.
Audition is as important a sense to many animals as vision is to humans. Many animals
also communicate with other members of their species by using sound, as humans do.
In Review

The auditory system has complementary specialization in the cortex: left for language-related
analyses and right for music-related ones. This asymmetry, however, appears to be
relative, because there is good evidence that the left hemisphere plays a role in some aspects
of music-related behaviors and that the right hemisphere has some language capabilities.
The results of both electrical-stimulation and PET studies show that the left hemisphere contains
several language-related areas. For instance, Wernicke’s area identifies speech syllables
and words, representations of which are stored in that location. Broca’s area matches
speech sounds to the motor programs necessary to articulate them, and, in this way, it plays
a role in discriminating closely related speech-sound patterns. Additional regions of the
frontal lobe play a role in the initiation of speech (supplementary speech area) and in the
motor control of facial, tongue, and throat muscles (motor cortex). Damage to different regions
of the left hemisphere produces different types of language disruptions (aphasias),
whereas damage to the right hemisphere interferes with musical perception.
Figure 9-24: Selective Cortical Areas Activated in Different Music-Related Tasks. (A) Passively listening to noise bursts activates Heschl's gyrus. (B) Listening to melodies activates the secondary auditory cortex. (C) Making relative pitch judgments about two notes of each melody activates a right-frontal-lobe area.
Here we consider just two types of auditory communication in nonhumans: birdsong
and echolocation. Each strategy provides a model for understanding different aspects
of brain–behavior relations in which the auditory system plays a role.
Birdsong
Of about 8500 living species of birds, about half are considered songbirds. Birdsong has
many functions, including attracting mates (usually employed by males), demarcating
territories, and announcing location or even mere presence. Although all birds of the
same species have a similar song, the details of that song vary markedly from region to
region, much as dialects of the same human language vary.
Figure 9-25 includes sound-wave spectrograms for the songs of male white-crowned sparrows that live in three different localities near San Francisco. Notice how the songs of
birds are quite different from region to region. These regional differences are due to the
fact that song development in young birds is influenced not just by genes but also by early
experience and learning. In fact, young birds can acquire more elaborate songs than can
other members of their species if the young birds have a good tutor (Marler, 1991).
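For readers curious how such sound-wave spectrograms are produced, the sketch below computes one in Python with SciPy. The synthetic rising sweep stands in for a recorded song; the sampling rate and window length are illustrative assumptions.

```python
# A sketch of computing a sound-wave spectrogram; the signal and
# parameters are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 22050                                        # samples per second
t = np.linspace(0, 1.0, fs, endpoint=False)
song = np.sin(2 * np.pi * (3000 + 2000 * t) * t)  # frequency rises over time

# power[f, t] gives sound energy at each frequency and moment -- the same
# representation used to compare dialects across regions.
freqs, times, power = spectrogram(song, fs=fs, nperseg=512)
```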
Birdsong and human language have other broad similarities. Both appear to be innate
yet are sculpted by experience. Both are diverse and can vary in complexity. Humans
seem to have a basic template for language that is programmed into the brain,
and a variety of specific structural forms are added to this template by experience. If a
young bird is not exposed to song until it is a juvenile and then listens to recordings of
birdsongs of different species, the young bird shows a general preference for the song
of its own species. This preference must mean that a species-specific song template exists
in the brain of each bird species. As for language, the details of this birdsong template
are modified by experience.
Another broad similarity between birdsong and human language mentioned earlier
is their great diversity. Among birds, this diversity can also be seen in the sheer
number of songs that a species possesses. Species such as the white-crowned sparrow
have but a single song; the marsh wren has as many as 150.
The number of syllables in birdsong also varies greatly, ranging from 30 for the canary
to about 2000 for the brown thrasher. In a similar way, even though all modern
human languages are equally complex, they vary significantly in the type and number
of elements that they employ. For instance, the number of meaningful speech-sound
patterns in human languages ranges from about 15 (for some Polynesian languages) to
about 100 (for some dialects spoken in the Caucasus Mountains).
Figure 9-25: Birdsong Dialects. Sound-wave spectrograms of male white-crowned sparrows found in three locales around San Francisco Bay (Point Reyes, Berkeley, and Sunset Beach) are very similar, but the dialects differ from one another. Thus, like humans, birds raised in different regions have different dialects. Adapted from "The Instinct to Learn," by P. Marler, 1991, in S. Carey and R. Gelman (Eds.), The Epigenesis of Mind: Essays on Biology and Cognition (p. 39), Hillsdale, NJ: Lawrence Erlbaum.

A final broad similarity between birdsong and human language lies in how they develop. In many bird species, song development is heavily influenced by experience
during a so-called sensitive period, just as it is in humans, as you learned
in Chapter 6. Birds also go through stages in song development, just as
humans go through stages in language development. Hatchlings make
noises that attract the attention of their parents, usually for feeding, and
human babies, too, emit cries to signal hunger, among other things.
The fledgling begins to make noises that Charles Darwin compared
to the prespeech babbling of human infants. These noises, called subsong,
are variable in structure and low in volume, and they are often produced
as the bird appears to doze. Presumably, subsong, like human babbling,
is a type of practice for the later development of adult communication
after the bird has left the nest.
As a young bird matures, it starts to produce sound-wave patterns
that contain recognizable bits of the adult song. Finally, the adult song
emerges. In most species, the adult song remains remarkably stable, although
a few species, such as canaries, can develop a new song every year.
The neurobiology of birdsong has been a topic of intense research,
partly because it provides an excellent model of changes in the brain that accompany
learning and partly because it can be a source of insight into how sex hormones influence
behavior. Fernando Nottebohm and his colleagues first identified the major structures
controlling birdsong in the late 1970s (Nottebohm & Arnold, 1976). These
structures are illustrated in Figure 9-26.
The largest are the higher vocal control center (HVC) and the nucleus robustus
archistriatalis (RA). The axons of the HVC connect to the RA, which in turn sends
axons to the 12th cranial nerve. This nerve controls the muscles of the syrinx, the structure
that actually produces the song.
The HVC and RA have several important, and some familiar, characteristics:

- They are asymmetrical in some species, with the structures in the left hemisphere being larger than those in the right hemisphere. In many cases, this asymmetry is similar to the lateralized control of language in humans: if the left-hemisphere pathways are damaged, the birds stop singing, but similar injury in the right hemisphere has no effect on song.
- Birdsong structures are sexually dimorphic (see "Hormones and the Range of a Behavior," on page 000). That is, they are much larger in males than in females. In canaries, they are five times as large in the male bird. This sexual difference is due to the hormone testosterone in males. Injection of testosterone into female birds causes the song-controlling nuclei to increase in size.
- The size of the birdsong nuclei is related to singing skill. For instance, unusually talented singers among male canaries tend to have larger HVCs and RAs than do less-gifted singers.
- The HVC and RA contain not only cells that produce birdsong but also cells responsive to hearing song, especially the song of a bird's own species.

The same structures therefore play a role in both song production and song perception. This avian neural anatomy is comparable to the overlapping roles of Broca's and Wernicke's areas in language perception and production in humans.
Figure 9-26: Avian Neuroanatomy. Lateral view of the canary brain shows several nuclei that control song learning, the two critical ones being the higher vocal control center (HVC) and the nucleus robustus archistriatalis (RA). These areas are necessary both for adult singing and for learning the song. Other regions necessary for learning the song during development but not required for the adult song include the dorsal archistriatum (Ad), the lateral magnocellular nucleus of the anterior neostriatum (lMAN), area X of the avian striatum, and the medial dorsolateral nucleus of the thalamus (DLM). The 12th cranial nerve runs from the RA to the muscle of the syrinx, the vocal organ on the trachea.

Echolocation in Bats

Next to rodents, bats are the most numerous order of mammals. The two general groups, or suborders, of bats consist of the smaller echolocating bats (Microchiroptera) and the larger fruit-eating and flower-visiting bats (Megachiroptera), sometimes called flying foxes. The echolocating bats interest us here because they use sound waves to navigate, to
hunt, and to communicate. (Bats are not unique in using sound waves for these purposes.
Dolphins are among the marine mammals that use a similar system in water.)
Most of the 680 species of echolocating bats feed on insects. Some others live on
blood (vampire bats), and some catch frogs, lizards, fishes, birds, and small mammals.
The auditory system of bats is specialized to use echolocation not only to locate targets
in the dark but also to analyze the features of targets, as well as features of the environment
in general. Through echolocation, a bat identifies prey, navigates through the
leaves of trees, and locates surfaces suitable to land on. Perhaps a term analogous to visualization, such as “audification,” would best describe this ability.
Echolocation works rather like sonar. The larynx of a bat emits bursts of sound
waves at ultrasonic frequencies that bounce off objects and return to the bat’s ears, allowing
the animal to identify what is in the surrounding environment. The bat, in other
words, navigates by the echoes that it hears, differentiating among the different characteristics
of those echoes.
Objects that are moving, such as insects, have a moving echo. Smooth objects give
a different echo from rough objects, and so on. A key component of this echolocation
system is the analysis of differences in the return times of echoes. Close objects return
echoes sooner than more distant objects do, and the textures of various objects’ surfaces
impose minute differences in return times.
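The range computation implicit here is simple to state: a target's distance is the echo's round-trip delay multiplied by the speed of sound, halved. A minimal sketch, in which the speed of sound in air and the example delay are assumed values:

```python
# A sketch of the distance computation implicit in echolocation;
# the constants below are illustrative assumptions.
SPEED_OF_SOUND = 343.0  # meters per second in air at room temperature

def target_distance(echo_delay_s):
    """Distance (m) to a target from the round-trip echo delay (s)."""
    return SPEED_OF_SOUND * echo_delay_s / 2  # halve: sound travels out and back

print(target_distance(0.006))  # a 6-ms echo delay -> roughly 1 meter
```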
A bat’s cries are of short duration (ranging from 0.3 to 200 ms) and high frequency
(from 12,000 to 200,000 Hz; see Figure 9-5). Most of this range lies at too high a frequency
for the human ear to detect. Different bat species produce sound waves of different
frequency that depend on the animal’s ecology. Bats that catch prey in the open
use different frequencies from those used by bats that catch insects in foliage and from
those used by bats that hunt for prey on the ground.
The echolocation abilities of bats are impressive. The results of laboratory studies
show that a bat with a 40-cm wingspan can fly in total darkness through 14-by-14-cm openings in a grid of nylon threads only 80 μm thick, as shown in Figure 9-27. Bats in
the wild can be trained to catch small food particles thrown up into the air in the dark.
These echolocating skills make the bat a very efficient hunter. The little brown bat, for
instance, can capture very small flying insects, such as mosquitoes, at the remarkable
rate of two per second.
Researchers have considerable interest in the neural mechanisms of bat echolocation.
Each bat species emits sound waves in a relatively narrow range of frequencies, and
a bat’s auditory pathway has cells specifically tuned to echoes in the frequency range of its
species. For example, the mustached bat sends out sound waves ranging from 60,000 to
62,000 Hz, and its auditory system has a cochlear fovea (a maximally sensitive area in the
organ of Corti) that corresponds to that frequency range. In this way, more neurons are
dedicated to the frequency range used for echolocation
than to any other range of frequencies.
Analogously, recall that our visual system
dedicates more neurons to the retina’s fovea, the
area responsible for our most detailed vision. In
the cortex of the bat’s brain, several distinct areas
process complex echo-related inputs. For instance,
one area computes the distance of given targets
from the animal, whereas another area computes
the velocity of a moving target. This neural system
makes the bat exquisitely adapted for nighttime
navigation.
Echolocation. Ability to identify and locate an object by bouncing sound waves off the object.

Photographs: Microchiroptera (an echolocating bat) and Megachiroptera (a fruit-eating bat).

Figure 9-27: Born with Sonar. A bat with a 40-cm wingspan can navigate through openings in a 14-by-14-cm mesh made of 80-μm nylon thread while flying in total darkness. The echolocating bat creates an accurate map of the world based entirely on auditory information.
SUMMARY
What do language and music contribute to our auditory world? Although we take
language and music for granted, they play central roles in our mental lives as well as in
our social lives. Both language and music provide a way to communicate with other
people—and with ourselves. They also facilitate social identification, parenting, and
cultural transmission.
What is the nature of the stimulus that the brain perceives as sound? The stimulus for
the auditory system is the energy of sound waves that results from changes in air pressure.
The ear transduces three fundamental physical qualities of sound-wave energy:
frequency (repetition rate), amplitude (size), and complexity. Perceptually, neural networks
in the brain then translate these energies into the pitch, loudness, and timbre of
the sounds that we hear.
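A short sketch can make the three physical qualities concrete. In the synthesized waveform below, frequency sets what we hear as pitch, amplitude sets loudness, and complexity (here, the mix of added harmonics) shapes timbre. All numeric values are illustrative assumptions.

```python
# A sketch tying frequency, amplitude, and complexity to a waveform;
# the values are illustrative assumptions.
import numpy as np

fs = 44100                                  # samples per second
t = np.linspace(0, 1.0, fs, endpoint=False)

fundamental = 440.0              # frequency (repetition rate) -> perceived pitch
amplitude = 0.5                  # size of the pressure change -> loudness
harmonics = [1.0, 0.5, 0.25]     # mix of added harmonics (complexity) -> timbre

wave = amplitude * sum(
    strength * np.sin(2 * np.pi * fundamental * (k + 1) * t)
    for k, strength in enumerate(harmonics)
)
```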
How does the nervous system transduce changes in air pressure into our impression of
sounds? Beginning in the ear, a combination of mechanical and electrochemical systems
transform sound waves into auditory perceptions—what we hear. Changes in air
pressure are conveyed in a mechanical chain reaction from the eardrum to the bones
of the middle ear to the oval window of the cochlea and the cochlear fluid that lies behind
it in the inner ear. Movements of the cochlear fluid produce movements in specific
regions of the basilar membrane, leading to changes in the electrochemical activity
of the auditory receptors, the inner hair cells found on the basilar membrane that send
neural impulses through the auditory nerve into the brain.
How does the auditory system analyze sound waves? The basilar membrane has a
tonotopic organization. High-frequency sound waves maximally stimulate hair cells at its base, whereas low-frequency sound waves maximally stimulate hair cells at the
apex, enabling cochlear neurons to code various sound frequencies. Tonotopic organization
of sound-wave analysis is found at all levels of the auditory system, and the system
also detects both amplitude and location. Sound amplitude is coded by the firing
rate of cochlear neurons, with louder sounds producing higher firing rates than softer
sounds do. Location is detected by structures in the brainstem that compute differences
in the arrival times and the loudness of a sound in the two ears.
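The arrival-time cue can be approximated with a standard spherical-head formula (Woodworth's approximation). The sketch below is purely illustrative, not the brainstem's actual computation; the head radius and the formula itself are assumptions for the example.

```python
# A sketch of the interaural-time-difference cue, using Woodworth's
# spherical-head approximation; constants are illustrative assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0   # meters per second
HEAD_RADIUS = 0.0875     # meters, roughly an adult head

def interaural_time_difference(azimuth_deg):
    """Arrival-time difference (s) between the two ears for a source
    at the given azimuth (0 degrees = straight ahead)."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

print(interaural_time_difference(90))  # ~0.00066 s for a source off one ear
```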
In Review
The analysis of birdsong has identified several important principles of auditory functioning
similar to those observed in human language, reinforcing the idea that many characteristics
of human language may be innate. One principle underlying birdsong is that specialized
structures in the avian brain produce and perceive vocal stimuli. Another is that
these structures are influenced by early experience. Third, an innate template imposes an
important constraint on the nature of the songs that a bird produces and perceives. Insect-eating bats employ high-frequency sound waves as biological sonar that allows them to
navigate in the dark and to catch insects on the fly as well as to communicate with others
of their species. An echolocating bat’s auditory world is easily as rich as our visual world
because it contains information about the shape and velocity of objects—information that
our visual system provides. Humans do not hear nearly as well as bats, but we see much
better.
By what pathway does information travel from the auditory receptors to the neocortex?
The hair cells of the cochlea synapse with bipolar neurons that form the cochlear
nerve, which in turn forms part of the eighth cranial nerve. The cochlear nerve takes
auditory information to three structures in the hindbrain: the cochlear nucleus, the superior
olive, and the trapezoid body. Cells in these areas are sensitive to differences in
both sound-wave intensity and arrival times in the two ears. In this way, they enable
the brain to locate a sound. The auditory pathway continues from the hindbrain areas
to the inferior colliculus of the midbrain, then to the medial geniculate nucleus in the
thalamus, and finally to the auditory cortex. As in the visual system, dual pathways exist in the auditory
cortex, one for pattern recognition and the other for controlling movements in
auditory space. Cells in the cortex are responsive to specific categories of sounds, such
as species-specific communication.
How does the brain understand language and music? Despite differences in speech-sound
patterns and structures, all human languages have the same basic foundation of
a syntax and a grammar. This fundamental similarity implies an innate template for
creating language. The auditory areas of the cortex in the left hemisphere play a special
role in analyzing language-related information, whereas those in the right hemisphere
play a special role in analyzing music-related information. Among several
language-processing areas in the left hemisphere, Wernicke's area identifies speech syllables
and words and so is critically engaged in speech comprehension. Broca’s area
matches speech-sound patterns to the motor behaviors necessary to make them and so
plays a major role in speech production. Broca’s area also discriminates between closely
related speech sounds. The primary auditory cortex of the right hemisphere plays a
critical role in comprehending music. The right temporal lobe also analyzes prosody,
the melodic qualities of speech.
How does brain organization relate to the unique auditory worlds of other species? Nonhuman
species have evolved specialized auditory structures and behaviors. One example
is birdsong. Regions of songbirds’ brains are specialized for producing and
comprehending song. In many species, these regions are lateralized to the left hemisphere,
analogous in a way to how language areas are lateralized to the left hemisphere
in most humans. The similarities between the development of song in birds and the
development of language in humans, as well as similarities in the neural mechanisms
underlying both the production and the perception of song and language, are striking.
Both owls and bats can fly and catch prey at night by using only auditory information
to guide their movement. Echolocating bats evolved a biological sonar that allows them
to map the objects in their world, just as humans map their visual worlds. This
mainly auditory reality we humans can only try to imagine.
KEY TERMS
amplitude, p. 311
aphasia, p. 327
basilar membrane, p. 316
Broca's area, p. 327
cochlea, p. 316
cochlear implant, p. 322
echolocation, p. 338
frequency, p. 308
hair cell, p. 316
hertz (Hz), p. 308
insula, p. 320
lateralization, p. 320
medial geniculate nucleus, p. 319
ossicles, p. 316
positron emission tomography (PET), p. 329
primary auditory cortex (area A1), p. 319
prosody, p. 314
sound wave, p. 308
supplementary speech area, p. 329
tonotopic representation, p. 321
Wernicke's area, p. 319
neuroscience interactive

Many resources are available for expanding your learning online:

www.worthpublishers.com/kolb/chapter9
Try some self-tests to reinforce your
mastery of the material. Look at some
of the updates on current research on
the brain. You’ll also be able to link to
other Web sites that will reinforce what
you’ve learned.
www.nad.org
Link to the National Association of the
Deaf and learn about living with
deafness.
www.aphasia.org
Do more research about aphasia at the
site for the National Aphasia
Association.
On your CD-ROM, you’ll be able to
quiz yourself on your comprehension
of the chapter. The module on the
Central Nervous System also provides
important review of the auditory
pathways and cortical anatomy
important for understanding this
chapter.
REVIEW QUESTIONS
1. What are the three physical properties of sound waves, and how does the
auditory system code each one?
2. How does the auditory system code the location of a sound?
3. Why do all human languages have the same basic structure?
4. How is language perception organized in the brain?
5. How is blood flow measured in the brain, and what does it tell us about brain
function?
6. Give a simple neurobiological explanation of how we understand and produce
language.
7. What can we learn from birdsong that is relevant to human auditory function?
FOR FURTHER THOUGHT
1. Different species have different ranges of hearing. Why would such differences be
adaptive?
2. What is special about language and music?
RECOMMENDED READING
Drake-Lee, A. B. (1992). Beyond music: Auditory temporary threshold shift in rock
musicians after a heavy metal concert. Journal of the Royal Society of Medicine, 85,
617–619. Have you ever wondered what listening to loud music might be doing to your
hearing? This paper looks at the effects of hearing loud music at a rock concert on
hearing thresholds in the musicians. What is important to remember is that the
musicians are standing beside the speakers, and so those in the front rows are likely to
hear even louder music.
Gazzaniga, M. S. (1992). Nature’s mind. New York: Basic Books. Michael Gazzaniga is an
eminent cognitive neuroscientist who has an easy writing style. He has written several
popular books, such as Nature’s Mind, each of which is chock full of interesting ideas
about how the brain works. This book is a pleasure to read and introduces the reader to
Gazzaniga’s ideas about why the brain is asymmetrically organized and what the
fundamental differences between the hemispheres might be.
Luria, A. R. (1972). The man with a shattered world. Chicago: Regnery. Alexander Luria
wrote many neuropsychology books in his long career, but this one is perhaps the most
interesting and accessible to the nonspecialist. This book describes the effect of a bullet
wound to the head of a university student who was recruited to defend Leningrad in
World War II. The book has many anecdotes, often humorous, that show how the
young man’s mental world was severely altered by this traumatic experience. Reading
this book can be a source of insight into what it is like to cope with brain damage.
Peretz, I., & Zatorre, R. J. (Eds.). (2001). The biological foundations of music. New York: New York Academy of Sciences. The origins, organization, and neural control of music form a fascinating topic. This book is a wonderful summary of what is known.
Pinker, S. (1997). How the mind works. New York: Norton. Steven Pinker gives us a
provocative look at theories of how brain activity produces mental events. For those
interested in cognitive neuroscience, this book is a good introduction to questions that
we might ask in everyday life. For example, why does a face look more attractive with
makeup? Or, why is the thought of eating worms disgusting?
CHAPTER 10

How Does the Nervous System Respond to Stimulation and Produce Movement?

Focus on Comparative Biology: Portrait of an Artist
Hierarchical Control of Movement
The Forebrain and Initiating Movement
The Brainstem and Species-Typical Movement
Focus on Disorders: Autism
The Spinal Cord and Executing Movement
Focus on Disorders: Spinal-Cord Injury
Organization of the Motor System
The Motor Cortex
Corticospinal Tracts
Motor Neurons
Control of Muscles
The Motor Cortex and Skilled
Movements
Investigating Neural Control of Skilled Movements
Control of Skilled Movements in Nonhuman Species
How Motor-Cortex Damage Affects Skilled Movements
The Basal Ganglia and the Cerebellum
The Basal Ganglia and Movement Force
Focus on Disorders: Tourette’s Syndrome
The Cerebellum and Movement Skill
Organization of the Somatosensory
System
Somatosensory Receptors and Perception
Dorsal-Root Ganglion Neurons
Somatosensory Pathways to the Brain
Spinal Reflexes
Feeling and Treating Pain
The Vestibular System and Balance
Exploring the Somatosensory Cortex
The Somatosensory Homunculus
Focus on New Research: Tickling
Effects of Damage to the Somatosensory Cortex
The Somatosensory Cortex and Complex Movement

Focus on Comparative Biology: Portrait of an Artist

Kamala (the name means "lotus flower") is a female Indian elephant that lives at the zoo in Calgary, Canada. Her trunk, which is really just a greatly extended upper lip and nose, consists of about 2000 fused muscles. A pair of nostrils runs its length, and fingerlike projections are located at its tip. The skin of the trunk is soft and supple and is covered sparsely with sensory hairs.

Like all elephants, Kamala uses her trunk for many purposes—to gather food, scratch an ear, rub an itchy eye, or caress a baby. It can also be used to explore. Kamala raises her trunk to sniff the wind, lowers it to examine the ground for scents, and sometimes even pokes it into another elephant's mouth to investigate the food there.

Like other elephants, Kamala can inhale as much as 4 liters of water into her trunk, which she can then place in her mouth to drink or squirt over her body to bathe. She can also inhale dust or mud for bathing.

Kamala's trunk is a potential weapon. She can flick it as a threat, lash out with it in aggression, and throw things with it. Her trunk is both immensely strong and very agile. With it, Kamala can lift objects as large as an elephant calf or uproot entire trees. Yet this same trunk can grasp a single peanut from the palm of a proffered hand.

Kamala also uses her versatile trunk in one very unusual way (Onodera & Hicks, 1999). She is one of only a few elephants in the world that paints with its trunk. Like many artists, Kamala, pictured here at her easel, paints when it suits her. She has commemorated many important zoo events on canvas.

The idea of an elephant artist is not as far-fetched as it may at first seem. Other elephants, both in the wild and in captivity, pick up small stones and sticks and draw in the dust with them. But Kamala has gone well beyond this simple doodling. When given paints and a brush, she began to produce works of art, many of which have been sold to collectors. As have some other domesticated elephants, Kamala has achieved an international reputation as an artist (Komar & Melamid, 2004).

Photograph: Born in 1975 in Sri Lanka's Yala National Park and orphaned as an infant, Kamala was adopted by the Calgary, Alberta, Zoological Society. Kamala began painting as part of an environmental enrichment program, and her paintings are widely sold to collectors.
A defining feature of animals is their ability to move. We humans display the most skilled motor control of all animals, but members of many species display
highly dexterous movements, as Kamala illustrates. This chapter explores
how the nervous system produces movement.
We begin by considering how the control of movement is organized in the nervous
system. Then we examine the various contributions of the neocortex, the brainstem,
and the spinal cord to movements both gross and fine. Next we investigate how
the basal ganglia and the cerebellum help to fine-tune our control of movement.
Finally, we turn to the role of the somatosensory system—the body senses of touch
and balance. Although other senses, such as vision and hearing, play a part in enabling
movement, body senses play a special role, as you will soon discover.
HIERARCHICAL CONTROL OF MOVEMENT
When Kamala paints a picture, her behaviors are sequentially organized. First, she
looks at her canvas and her selection of paints, then, she considers what she wants to
paint, and, finally, she executes her painting. These sequentially organized behaviors
are dictated by the hierarchical organization of Kamala’s nervous system. The major
components of this nervous system hierarchy are the neocortex, the brainstem, and
the spinal cord. All contribute to controlling the behaviors required to produce her
artwork.
In the same way, your hierarchically organized nervous system controls every
movement that you make. Figure 10-1 shows the sequence of steps executed by the
human nervous system in directing a hand to pick up a mug. The visual system must
first inspect the cup to determine what part of it should be grasped. This information
is then relayed from the visual cortex to cortical motor regions, which plan and
initiate the movement, sending instructions to the part of the spinal cord that controls
the muscles of the arm and hand.
As you grasp the handle of the cup, information
from sensory receptors in the fingers travels to the
spinal cord, and from there messages are sent to sensory
regions of the cortex that control touch. The
sensory cortex, in turn, informs the motor cortex
that the cup is now being held. Other regions of the
brain also participate in controlling the movement,
as you learned in Chapter 2. For example, the basal
ganglia help to produce the appropriate amount of
force, and the cerebellum helps to regulate timing
and corrects any movement errors.
Although at this point you probably will not remember
all these various steps in controlling an
everyday movement, refer to Figure 10-1 when you
reach the end of this chapter as a way of reviewing what you have learned.
The important concept to remember right now is simply the hierarchical
organization of the entire system.
Recall, again from Chapter 2, that the idea that the nervous system is
hierarchically organized originated with English neurologist John Hughlings-
Jackson a century ago. He thought of the nervous system as being
organized into a number of layers. Successively higher levels control
more-complex aspects of behavior by acting through the lower levels. The
Figure 10-1: Sequentially Organized Movement. Movements such as reaching for a cup require the participation of broad areas of the nervous system. The brain tells the hand to reach, and the hand tells the brain that it has succeeded: (1) visual information is required to locate the target; (2) frontal-lobe motor areas plan the reach and command the movement; (3) the spinal cord carries information to the hand; (4) motor neurons carry the message to muscles of the hand and forearm; (5) sensory receptors on the fingers send a message to the sensory cortex saying that the cup has been grasped; (6) the spinal cord carries sensory information to the brain; (7) the basal ganglia judge grasp forces, and the cerebellum corrects movement errors; (8) the sensory cortex receives the message that the cup has been grasped.
three major levels in Hughlings-Jackson’s model are the same as those just mentioned
for Kamala: the forebrain, the brainstem, and the spinal cord. Hughlings-Jackson also
proposed that, within these divisions, further levels of organization could be found.
Hughlings-Jackson adopted the concept of hierarchical organization from evolutionary
theory. He knew that the chordate nervous system had evolved gradually: the
spinal cord had developed in worms; the brainstem in fish, amphibians, and reptiles;
and the forebrain in birds and mammals.
Because each level of the nervous system had developed at different times, Hughlings-
Jackson reasoned that each must have some functional independence. Consequently,
if higher levels of the nervous system were damaged, the result would be
regression to the simpler behaviors of “lower” animals, a phenomenon that Hughlings-
Jackson called dissolution. The brain-damaged person would still possess a repertoire
of behaviors, but they would be more typical of animals that had not yet evolved the
destroyed brain structure.
A hierarchically organized structure such as the mammalian nervous system, however,
does not operate piece by piece. It functions as a whole, with the higher regions
working through and influencing the actions of the lower ones. In the control of movement,
many parts of the nervous system participate, with some regions engaged in sensory
control, others in planning and commanding the movement, and still others in
actually carrying the action out. To understand how all these various regions work
together to produce even a simple movement, such as picking up a mug, we will consider
the major components of the hierarchy one by one, starting at the top with the
forebrain.
The Forebrain and Initiating Movement
Complex movements consist of many components. Take painting a work of art. Your
perceptions of what is appearing on the canvas must be closely coordinated with the
brush strokes that your hand makes to achieve the desired effect.
The same high degree of control is necessary for many other complex behaviors.
Consider playing basketball. At every moment, decisions must be made and actions
must be performed. Dribble, pass, and shoot are different categories of movement, and
each can be carried out in numerous ways. Skilled players choose among the categories
effortlessly and execute the movements seemingly without thought.
One explanation of how we control movements that was popular in the 1930s centers
on the concept of feedback. It holds that, after we perform an action, we wait for
feedback about how well that action has succeeded, and then we make the next movement
accordingly. But Karl Lashley (1951), in an article titled “The Problem of Serial
Order in Behavior,” found fault with this explanation.
Lashley argued that movements, such as those required for playing the piano,
were performed too quickly to rely on feedback about one movement shaping the
next movement. The time required to receive feedback about the first movement
combined with the time needed to develop a plan for the subsequent movement and
send a corresponding message to muscles was simply too long to permit piano playing.
Lashley suggested that movements must be performed as motor sequences,
with one movement module held in readiness while an ongoing sequence is being
completed.
According to this view, all complex behaviors, including playing the piano, painting
pictures, and playing basketball, require the selection and execution of multiple
movement sequences. As one sequence is being executed, the next sequence is being
Dissolution. Hypothetical condition
whereby disease or damage in the highest
levels of the nervous system would
produce not just loss of function but a
repertory of simpler behaviors as seen in
animals that have not evolved that
particular brain structure.
Motor sequence. Movement modules
preprogrammed by the brain and
produced as a unit.
prepared so that the second can follow the first smoothly. Interestingly, the act of
speaking seems to bear out Lashley’s view. When people use complex rather than
simple sequences of words, they are more likely to pause and make “umm” and “ahh”
sounds, suggesting that it is taking them more time than usual to organize their word
sequences.
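Lashley's proposal is essentially a pipelining idea: one movement module runs while the next is held ready, with no feedback wait in between. The toy sketch below illustrates that scheduling logic only; the action names and structure are invented for the example and imply nothing about neural implementation.

```python
# A toy sketch of Lashley's pipelining idea: hold the next movement
# module in readiness while the current one executes.
from collections import deque

def perform(planned_actions):
    ready = deque([planned_actions[0]])   # first module preloaded
    for upcoming in planned_actions[1:]:
        current = ready.popleft()
        ready.append(upcoming)            # prepare the next module early
        print(f"executing {current} (next held in readiness: {upcoming})")
    print(f"executing {ready.popleft()} (sequence complete)")

perform(["reach", "grasp", "lift", "sip"])
```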
The frontal lobe of each hemisphere is responsible for planning and initiating
motor sequences. The frontal lobe is divided into a number of different regions,
including the three illustrated in Figure 10-2. From front to back, they are the prefrontal
cortex, the premotor cortex, and the primary motor cortex.
At the top of this hierarchy, a function of the prefrontal cortex is to plan complex
behaviors. Such a plan might be deciding to get up at a certain hour to arrive at work
on time, deciding to stop at the library to return a book that is due, or deciding whether
an action is right or wrong and thus whether it should be performed at all. Humans
with prefrontal cortex injury often break social and legal rules not because they do not
know them or the consequences of breaking them but because their decision making
is faulty (Chapter 14). The prefrontal cortex does not specify the precise movements to
be made. Like a business executive, it simply specifies the goal toward which movements
should be directed.
To bring a plan to completion, the prefrontal cortex sends instructions to the premotor
cortex, which produces the complex movement sequences appropriate to the
task. If the premotor cortex is damaged, such sequences cannot be coordinated, and the
goal cannot be accomplished. For example, the monkey on the right in Figure 10-3 has
a lesion in the dorsal part of its premotor cortex. It has been given the task of extracting
a piece of food wedged in a hole in a table (Brinkman, 1984). If it simply pushes
the food with a finger, the food will drop to the floor and be lost.
The monkey has to catch the food by holding a palm beneath the hole as the food
is being pushed out. This brain-injured animal is unable to make the two complementary
movements together. It can push the food with a finger and extend an open
palm, but it cannot coordinate these actions of its two hands, as the normal monkey
on the left can. The premotor cortex also contains “mirror neurons” that discharge
when the subject performs an action, such as reaching for food, or when the subject
observes another individual performing the same movement. Mirror neurons allow
a subject to make, observe, and represent movement sequences (Fadiga & Craighero,
2004).
The premotor cortex organizes movement sequences but does not specify how
each movement is to be carried out. Those details are the responsibility of the primary
motor cortex, which executes skilled movements. To understand its
role, consider some of the movements that we use to pick up objects.
Figure 10-2: Initiating a Motor Sequence. The prefrontal cortex plans movements, the premotor cortex organizes movement sequences, and the motor cortex executes the specific actions.
Figure 10-3: Premotor Control. On a task requiring both hands, the normal monkey can push the peanut out of a hole with one hand and catch it in the other but, 5 months after lesioning of the premotor cortex, the experimental monkey cannot. Adapted from "Supplementary Motor Area of the Monkey's Cerebral Cortex: Short- and Long-Term Effects after Unilateral Ablation and the Effects of Subsequent Callosal Section," by C. Brinkman, 1984, Journal of Neuroscience, 4, p. 925.
In using the pincer grip (Figure 10-4A), we hold an object between the thumb and index finger. This grip not only allows small objects to be picked up easily but also allows us to use whatever is held with considerable skill. In contrast, in using the power grasp (Figure 10-4B), we hold an object much less dexterously but with more power by simply closing the fingers around it.

Clearly, the pincer grip is a more demanding movement because the two fingers must be placed precisely on the object. People with damage to the primary motor cortex have difficulty correctly shaping their fingers to perform the pincer grip. They also have difficulty in performing many skilled movements of the hands, arms, or trunk (Lang & Schieber, 2004).

In summary, the frontal lobe in each hemisphere plans, coordinates, and executes precise movements. The regions of the frontal cortex that perform these functions are hierarchically related. After the prefrontal cortex has formulated a plan of action, it instructs the premotor cortex to organize the appropriate sequence of behaviors, and the movements themselves are executed by the motor cortex.

The hierarchical organization of frontal-lobe areas in producing movements is supported by findings from studies of cerebral blood flow, which serves as an indicator of neural activity. Figure 10-5 shows the regions of the brain that were active as subjects in one such study performed different tasks (Roland, 1993).

As the subjects used a finger to push a lever, increased blood flow was limited to the primary somatosensory and primary motor cortex. As the subjects executed a sequence of finger movements, blood flow also increased in the premotor cortex. And, as the subjects used a finger to trace their way through a maze, a task that requires the coordination of movements in relation to a goal, blood flow increased in the prefrontal cortex as well.

Notice that blood flow did not increase throughout the entire frontal lobe as the subjects were performing these tasks. Blood flow increased only in those regions taking part in the required movements.

Figure 10-4: Getting a Grip. In a pincer grip (A), an object is held between the thumb and index finger. In a power grasp, or whole-hand grip (B), an object is held against the palm of the hand with the digits.

Figure 10-5: Hierarchical Control of Movement in the Brain. Blood flow increased in the hand area of the primary somatosensory and primary motor cortex when subjects used a finger to push a lever; in the dorsal premotor cortex when subjects performed a sequence of movements; and in the prefrontal and temporal cortex when subjects used a finger to find a route through a maze. Adapted from Brain Activation (p. 63), by P. E. Roland, 1993, New York: Wiley-Liss.

The Brainstem and Species-Typical Movement

Species-typical behaviors are actions displayed by every member of a species—the pecking of a robin, the hissing of a cat, or the breaching of a whale. In a series of studies, Swiss neuroscientist Walter Hess (1957) found that the brainstem controls
species-typical behaviors. Hess developed the technique of implanting
electrodes into the brains of cats and other animals and
cementing them in place. These electrodes could then be attached
to stimulating leads in the freely moving animal without causing
the animal much discomfort.
By stimulating the brainstem, Hess was able to elicit almost
every innate movement that the animal might be expected to make.
A resting cat could be induced to suddenly leap up with an arched
back and erect hair as though frightened by an approaching dog, for
example. The movements elicited began abruptly when the stimulating
current was turned on and ended equally abruptly when the
stimulating current was turned off. The animal subjects performed
such species-typical behaviors in a subdued manner when the stimulating
current was low, but they increased in vigor as the stimulating
current was turned up.
The actions varied, depending on the site that was stimulated.
Stimulating some sites produced head-turning, others produced
walking or running, and still others elicited displays of aggression or
fear. The animal's reaction toward a particular stimulus could be modified
accordingly. For instance, when shown a stuffed toy, a cat responded
to electrical stimulation of some sites by stalking the toy and
to stimulation of other sites with a fearful response and withdrawal.
Hess’s experiments have been confirmed and expanded by other
researchers with the use of many different animal species. For instance,
Experiment 10-1 shows the effects of brainstem stimulation
on a chicken under various conditions (von Holst, 1973). Notice the
effect of context: how the site stimulated interacts both with the presence
of an object to which to react and with the stimulation’s duration.
With stimulation of a certain site alone, the chicken displays
only restless behavior.When a fist is displayed, the same stimulation
elicits slightly threatening behavior. When the object displayed is
then switched from a fist to a stuffed polecat, the chicken responds
with vigorous threats. Finally, with continued stimulation in the
presence of the polecat, the chicken flees, screeching.
Such experiments show that an important function of the brainstem
is to produce species-typical adaptive behavior. Hess’s classic
experiments also gave rise to a sizable science-fiction literature in
which “mind control” induced by brain stimulation figures centrally
in the plot.
The brainstem also controls movements used in eating and
drinking and in sexual behavior. Animals can be induced to display
these survival-related behaviors when certain areas of the brainstem
are stimulated. An animal can even be induced to eat nonfood
objects, such as chips of wood, if the part of the brainstem that triggers
eating is sufficiently stimulated. The brainstem is also important
for maintaining posture, for standing upright, for making
coordinated movements of the limbs, for swimming and walking,
and for grooming the fur and making nests.
Experiment 10-1. Question: What are the effects of brainstem stimulation under different conditions? Procedure: a stimulating electrode is implanted in the brainstem of a chicken. Results: electrical stimulation alone produces restless behavior; electrical stimulation in the presence of a fist produces slight threat; electrical stimulation in the presence of a stuffed polecat (a type of weasel) produces vigorous threat; and continued electrical stimulation in the presence of the stuffed polecat produces flight and screeching. Conclusion: stimulation of some brainstem sites produces behavior that depends on context, suggesting that an important function of the brainstem is to produce appropriate species-typical behavior. Adapted from The Collected Papers of Erich von Holst (p. 121), translated by R. Martin, 1973, Coral Gables, FL: University of Miami Press.

Grooming is a particularly complex movement pattern coordinated mainly by the brainstem (Aldridge, 2005). A grooming rat sits back on its haunches, licks its paws, wipes its nose with its paws, then wipes its paws across its face, and finally turns to lick the fur on its body. These movements are always performed in the same order. The next
CH10.qxd 1/6/05 3:19 PM Page 348

ls
le
ll
The next time you dry off after a shower or swimming, note the "grooming sequence" that you
use. Humans' grooming sequence is very similar to the one that rats use.
The effects of damage to the brainstem regions that organize movement sequences
can be seen in cerebral palsy. A disorder primarily of motor
function, cerebral palsy is caused by brain trauma during fetal development
or at birth (recall "Cerebral Palsy" on page 000), and the trauma can
sometimes occur in early infancy as well.
We examined E. S., a man who suffered a cold and infection when he was about 6
months old. Subsequently, he had great difficulty coordinating his movements. As he
grew up, his hands and legs were almost useless, and his speech was extremely difficult
to understand. For most of his childhood, E. S. was considered retarded and was sent
to a custodial school.
When he was 13 years old, the school bought a computer, and one of his teachers
attempted to teach E. S. to use it by pushing the keys with a pencil that he held in his
mouth. Within a few weeks, the teacher realized that E. S. was extremely intelligent and
could communicate and complete school assignments on his computer. He was eventually
given a motorized wheelchair that he could control with finger movements of his
right hand.
Assisted by his computer and wheelchair, E. S. soon became almost self-sufficient
and eventually attended college, where he achieved excellent grades and became a student
leader. On graduation with a degree in psychology, he became a social worker and
worked with children who suffered from cerebral palsy.
Clearly, a brain injury that causes cerebral palsy can be extremely damaging to
movement while leaving sensory abilities and cognitive capacities unimpaired. Damage
to the brainstem can also cause changes in cognitive function, as occurs in
autism. The severe symptoms of this disorder include greatly impaired social interaction,
a bizarre and narrow range of interests, marked abnormalities in language and
communication, and fixed repetitive movements (see “Autism”).
The Spinal Cord and Executing Movement
On Memorial Day weekend in 1995, Christopher Reeve, a well-known actor who portrayed
Superman in four films, was thrown from his horse at the third jump of a riding
competition in Culpeper, Virginia. Reeve’s spinal cord was severed near its upper
end, at the C1–C2 level (see Figure 2-27). The injury left both Reeve's brain
and the spinal cord below the injury intact and functioning, but brain and
spinal cord were no longer connected.
As a result, other than movements of his head and slight movement in his shoulders,
Reeve’s body was completely paralyzed. He was even unable to breathe without
assistance. A generation ago such a severe injury would have been fatal, but modern
and timely medical treatment allowed Reeve to survive.
Reeve, who died in October 2004 owing to complications from an infection, leveraged
his celebrity to campaign for disabled people: fighting to prevent lifetime caps on
insurance compensation for spinal-cord injuries, raising money for spinal-cord research,
even testifying before Congress. Reeve's tireless pursuit of recovery included regaining
some ability to breathe on his own, recovering sensation over 70 percent of his body,
and making stepping movements when assisted. His progress offered dramatic new
insight into the possibilities for functional recovery after spinal-cord injury.
The spinal cord is sometimes viewed simply as a pathway for conveying information
between the brain and the rest of the body. It does serve this function. If the spinal
cord is severed, a person loses sensation and voluntary movements below the cut.
Cerebral palsy. Group of brain disorders that result from brain damage acquired perinatally.
Autism. Cognitive disorder with severe symptoms including greatly impaired social interaction, a bizarre and narrow range of interests, marked abnormalities in language and communication, and fixed, repetitive movements.
(Top) Christopher Reeve portraying Superman in a 1980 still from Superman II. (Bottom) As a result of his spinal-cord injury, Reeve had little movement below his neck but never stopped working toward the day that he would walk again. He is shown here with his wife, Dana, in 2002. Peter Morgan/Reuters/Corbis Photofest
If the cut is low, paraplegia results: paralysis and loss of sensation are confined to
the legs and lower body, as described in "Spinal-Cord Injury." A cut higher on the spinal
cord, such as the one Christopher Reeve survived, entails paralysis and loss of sensation
in the arms as well as the legs, a condition called quadriplegia.
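The rule relating the level of a complete cut to the resulting syndrome is simple enough to state as a function, sketched below in Python. Collapsing the cord into single-letter segment prefixes is a deliberate simplification; real outcomes depend on the exact segment and on how complete the injury is.

# Simplified rule from the text: a complete cut at a cervical level (such as
# Reeve's C1-C2 injury) paralyzes arms and legs (quadriplegia); a lower cut
# spares the arms (paraplegia). This is an illustration, not a clinical tool.

def syndrome(cut_level: str) -> str:
    """cut_level is a spinal segment label such as 'C2', 'T10', or 'L1'."""
    region = cut_level[0].upper()
    if region == "C":
        return "quadriplegia: paralysis and lost sensation in arms and legs"
    if region in ("T", "L", "S"):
        return "paraplegia: paralysis and lost sensation in legs and lower body"
    raise ValueError(f"unknown spinal level: {cut_level}")

print(syndrome("C2"))   # the level of Christopher Reeve's injury
print(syndrome("T10"))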
In addition to its role in transmitting messages to and from the brain, the spinal
cord is capable of producing many movements without any brain involvement. Movements
that depend on the spinal cord alone are collectively called spinal-cord reflexes.
Some of these automatic movements involve the limbs. For example, a light touch to the
surface of the foot causes the leg to extend reflexively to contact the object that is touching
it. This reflex aids the foot and leg in contacting the ground to bear weight in walking.
Other reflexes result in limb withdrawal. For instance, a noxious stimulus applied
to a hand causes the whole arm to reflexively pull back, thereby avoiding the injurious
object.
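The two reflexes just described can be pictured as a stimulus-to-response mapping computed entirely within the cord, with the brain nowhere in the loop. The Python sketch below is a cartoon of that idea, not a circuit model; the stimulus and response labels follow the text.

# Cartoon of spinal-cord reflexes: the cord maps certain stimuli directly
# to movements with no brain involvement. The dictionary is illustrative.

SPINAL_REFLEXES = {
    ("light touch", "sole of foot"): "extend the leg toward the contacting surface",
    ("noxious stimulus", "hand"):    "withdraw the whole arm",
}

def spinal_response(stimulus: str, location: str) -> str:
    # Note that nothing here consults a 'brain': the reflex arc is local.
    return SPINAL_REFLEXES.get((stimulus, location), "no reflex at this site")

print(spinal_response("noxious stimulus", "hand"))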
Focus on Disorders: Autism
Leo Kanner and Hans Asperger first used the term autism
(from the Greek autos, meaning “self”) in the 1940s to describe
children who seem to live in their own self-created
worlds. Although some of these children were classified
as mentally retarded, others’ intellectual functioning was
preserved.
An estimated 1 in 500 people has autism. Although it
knows neither racial nor ethnic nor social boundaries, autism
is four times as prevalent in boys as in girls. Many autistic
children are noticeably different from birth. To avoid physical
contact, these babies arch their backs and pull away from
their caregivers or they become limp when held. But approximately
one-third of autistic children develop normally
until somewhere between 1 and 3 years of age. Then the
autistic symptoms emerge.
Perhaps the most recognized characteristic of autism is
a failure to interact socially with other people. Some autistic
children do not relate to other people on any level. The attachments
that they do form are to inanimate objects, not to
other human beings.
Another common characteristic of autism is an extreme
insistence on sameness. Autistic children vehemently resist
even small modifications to their surroundings or their daily
routines. Objects must always be placed in exactly the same
locations, and tasks must always be carried out in precisely
the same ways.
One reason for this insistence on sameness may be an
inability to understand and cope with novel situations. Autistic
children also have marked impairments in language development.
Many do not speak at all, and others repeat
words aimlessly with little attempt to communicate or convey
meaning.
These children also exhibit what seem like endlessly
repetitive body movements, such as rocking or spinning.

[Figure: Medial views comparing a normal brainstem with the brainstem of a person with autism. Labeled structures: midbrain, pons, medulla, cerebellum, trapezoid body, superior olive, inferior olive, facial nucleus, and hypoglossal nucleus; distances of 1.1 mm and 0.2 mm are marked.]
Paraplegia. Paralysis of the legs due to spinal-cord injury.
Quadriplegia. Paralysis of the legs and arms due to spinal-cord injury.
Visit the Web site at www.worthpublishers.com/kolb/chapter10 for up-to-the-minute links to current research on spinal-cord injury.
Autism affects the brainstem, where several nuclei in the posterior pons are reduced in size, including the facial nucleus, superior olive, and trapezoid body.
