November 28, 2010

STRINGS OF THOUGHT By: RICHARD J. KOSCIEJEW

Although extensive research has been conducted on the brain’s role in cognition and memory, scientists have only recently begun to study the link between memory and emotions, particularly emotions such as fear. In this 1994 article for Scientific American, Joseph E. LeDoux explains what he and his fellow researchers have discovered about how animals learn fear and about how this understanding may help treat human patients with certain types of mental disorders.
 The neural routes underlying the formation of memories about primitive emotional experiences, such as fear, have been traced.
Despite millennia of preoccupation with every facet of human emotion, we are still far from explaining in a rigorous physiological sense this part of our mental experience. Neuroscientists have, in modern times, been especially concerned with the neural basis of such cognitive processes as perception and memory. They have for the most part ignored the brain's role in emotion. Yet in recent years, interest in this mysterious mental terrain has surged. Catalyzed by breakthroughs in understanding the neural basis of cognition and by an increasingly sophisticated knowledge of the anatomical organization and physiology of the brain, investigators have begun to tackle the problem of emotion.
 One quite rewarding area of research has been the inquiry into the relation between memory and emotion. Much of this examination has involved studies of one particular emotion - fear - and the manner in which specific events or stimuli come, through individual learning experiences, to evoke this state. Scientists, myself included, have been able to determine the way in which the brain shapes how we form memories about this basic, but significant, emotional event. We call this process "emotional memory."
 By uncovering the neural pathways through which a situation causes a creature to learn about fear, we hope to elucidate the general mechanisms of this form of memory. Because many human mental disorders - including anxiety, phobia, post-traumatic stress syndrome and panic attack - involve malfunctions in the brain's ability to control fear, studies of the neural basis of this emotion may help us further understand and treat these disturbances.
 Most of our knowledge about how the brain links memory and emotion has been gleaned through the study of so-called classical fear conditioning. In this process the subject, usually a rat, hears a noise or sees a flashing light that is paired with a brief, mild electric shock to its feet. After a few such experiences, the rat responds automatically to the sound or light, even in the absence of the shock. Its reactions are typical of any threatening situation: the animal freezes, its blood pressure and heart rate increase, and it startles easily. In the language of such experiments, the noise or flash is a conditioned stimulus, the foot shock is an unconditioned stimulus, and the rat's reaction is a conditioned response, which consists of readily measured behavioural and physiological changes.
 Conditioning of this kind happens quickly in rats—indeed, it takes place as rapidly as it does in humans. A single pairing of the shock with the sound or sight can bring on the conditioned effect. Once established, the fearful reaction is relatively permanent. If the noise or light is administered many times without an accompanying electric shock, the rat's response diminishes. This change is called extinction. But considerable evidence suggests that this behavioral alteration is the result of the brain's controlling the fear response rather than the elimination of the emotional memory. For example, an apparently extinguished fear response can recover spontaneously or can be reinstated by an irrelevant stressful experience. Similarly, stress can cause the reappearance of phobias in people who have been successfully treated. This resurrection demonstrates that the emotional memory underlying the phobia was rendered dormant rather than erased by treatment.
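 The acquisition-and-extinction curves described above can be caricatured with the classic Rescorla-Wagner learning rule. This is only an illustrative sketch, not anything proposed in the article: the function, the learning rate, and the trial counts are all assumptions. Note, too, that in this toy model extinction simply drives the associative strength back down, whereas the article's point is that in the brain the underlying memory is suppressed rather than erased.

```python
# Toy Rescorla-Wagner model of fear acquisition and extinction.
# Illustrative only: the article describes behaviour, not this equation;
# the learning rate and trial counts here are arbitrary assumptions.

def rescorla_wagner(trials, v0=0.0, alpha=0.3):
    """Update associative strength V after each trial.

    V moves toward 1.0 on trials where the shock accompanies the tone,
    and toward 0.0 on tone-alone trials (extinction).
    """
    v = v0
    history = []
    for us_present in trials:
        lam = 1.0 if us_present else 0.0   # asymptote set by the shock
        v += alpha * (lam - v)             # prediction-error update
        history.append(v)
    return history

acquisition = rescorla_wagner([True] * 5)                    # tone + shock
extinction = rescorla_wagner([False] * 10, v0=acquisition[-1])  # tone alone

print(f"after acquisition: {acquisition[-1]:.2f}")
print(f"after extinction:  {extinction[-1]:.2f}")
```

A handful of paired trials is enough to drive the association near its asymptote, which echoes the observation that a single pairing can establish the conditioned effect.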
 Fear conditioning has proved an ideal starting point for studies of emotional memory for several reasons. First, it occurs in nearly every animal group in which it has been examined: fruit flies, snails, birds, lizards, fish, rabbits, rats, monkeys and people. Although no one claims that the mechanisms are precisely the same in all these creatures, it seems clear from studies to date that the pathways are very similar in mammals and possibly in all vertebrates. We therefore are confident in believing that many of the findings in animals apply to humans. In addition, the kinds of stimuli most commonly used in this type of conditioning are not signals that rats - or humans, for that matter - encounter in their daily lives. The novelty and irrelevance of these lights and sounds help to ensure that the animals have not already developed strong emotional reactions to them. So researchers are clearly observing learning and memory at work. At the same time, such cues do not require complicated cognitive processing from the brain. Consequently, the stimuli permit us to study emotional mechanisms relatively directly. Finally, our extensive knowledge of the neural pathways involved in processing acoustic and visual information serves as an excellent starting point for examining the neurological foundations of fear elicited by such stimuli.
 Consider the cerebral roots of learned fear, specifically fear that has been induced in the rat by associating sounds with foot shock. As do most other investigators in the field, I assume that fear conditioning occurs because the shock modifies the way in which neurons in certain important regions of the brain interpret the sound stimulus. These critical neurons are thought to be located in the neural pathway through which the sound elicits the conditioned response.
 During the past 10 years, researchers in my laboratory, as well as in others, have identified major components of this system. Our study began at Cornell University Medical College, where I worked several years ago, when my colleagues and I asked a simple question: Is the auditory cortex required for auditory fear conditioning?
 In the auditory pathway, as in other sensory systems, the cortex is the highest level of processing; it is the culmination of a sequence of neural steps that starts with the peripheral sensory receptors, located, in this case, in the ear. If lesions in (or surgical removal of) parts of the auditory cortex interfered with fear conditioning, we could conclude that the region is indeed necessary for this activity. We could also deduce that the next step in the conditioning pathway would be an output from the auditory cortex. But our lesion experiments in rats confirmed what a series of other studies had already suggested: the auditory cortex is not needed in order to learn many things about simple acoustic stimuli.
 We went on to make lesions in the auditory thalamus and the auditory midbrain, sites lying immediately below the auditory cortex. Both these areas process auditory signals: the midbrain provides the major input to the thalamus; the thalamus supplies the major input to the cortex. Lesions in both regions completely eliminated the rat's susceptibility to conditioning. This discovery suggested that a sound stimulus is transmitted through the auditory system to the level of the auditory thalamus but that it does not have to reach the cortex for fear conditioning to occur.
 This possibility was somewhat puzzling. We knew that the primary nerve fibres that carry signals from the auditory thalamus extend to the auditory cortex. So David A. Ruggiero, Donald J. Reis and I looked again and found that, in fact, cells in some regions of the auditory thalamus also give rise to fibres that reach several subcortical locations. Could these neural projections be the connections through which the stimulus elicits the response we identify with fear? We tested this hypothesis by making lesions in each one of the subcortical regions with which these fibers connect. The damage had an effect in only one area: the amygdala.
 That observation suddenly created a place for our findings in an already accepted picture of emotional processing. For a long time, the amygdala has been considered an important brain region in various forms of emotional behavior. In 1979 Bruce S. Kapp and his colleagues at the University of Vermont reported that lesions in the amygdala's central nucleus interfered with a rabbit's conditioned heart rate response once the animal had been given a shock paired with a sound. The central nucleus connects with areas in the brain stem involved in the control of heart rate, respiration and vasodilation. Kapp's work suggested that the central nucleus was a crucial part of the system through which autonomic conditioned responses are expressed.
 In a similar vein, we found that lesions of this nucleus prevented a rat's blood pressure from rising and limited its ability to freeze in the presence of a fear-causing stimulus. We also demonstrated, in turn, that lesions of areas to which the central nucleus connects eliminated one or the other of the two responses. Michael Davis and his associates at Yale University determined that lesions of the central nucleus, as well as lesions of another brain stem area to which the central nucleus projects, diminished yet another conditioned response: the increased startle reaction that occurs when an animal is afraid.
 The findings from various laboratories studying different species and measuring fear in different ways all implicated the central nucleus as a pivotal component of fear-conditioning circuitry. It provides connections to the various brain stem areas involved in the control of a spectrum of responses. Despite our deeper understanding of this site in the amygdala, many details of the pathway remained hidden. Does sound, for example, reach the central nucleus directly from the auditory thalamus? We found that it does not. The central nucleus receives projections from thalamic areas next to, but not in, the auditory part of the thalamus. Indeed, an entirely different area of the amygdala, the lateral nucleus, receives inputs from the auditory thalamus. Lesions of the lateral nucleus prevented fear conditioning. Because this site gets information directly from the sensory system, we have come to think of it as the sensory interface of the amygdala in fear conditioning. In contrast, the central nucleus appears to be the interface with the systems that control responses.
 These findings seemed to place us on the threshold of being able to map the entire stimulus response pathway. But we still did not know how information received by the lateral nucleus arrived at the central nucleus. Earlier studies had suggested that the lateral nucleus projects directly to the central nucleus, but the connections were fairly sparse. Working with monkeys, David Amaral and Asla Pitkanen of the Salk Institute for Biological Studies in San Diego demonstrated that the lateral nucleus extends directly to an adjacent site, called the basal, or basolateral, nucleus, which, in turn, projects to the central nucleus.
 Collaborating with Lisa Stefanacci and other members of the Salk team, Claudia R. Farb and C. Genevieve Go in my laboratory at New York University found the same connections in the rat. We then showed that these connections form synaptic contacts—in other words, they communicate directly, neuron to neuron. Such contacts indicate that information reaching the lateral nucleus can influence the central nucleus via the basolateral nucleus. The lateral nucleus can also influence the central nucleus by way of the accessory basal or basomedial nucleus. Clearly, ample opportunities exist for the lateral nucleus to communicate with the central nucleus once a stimulus has been received.
 The emotional significance of such a stimulus is determined not only by the sound itself but by the environment in which it occurs. Rats must therefore learn not only that a sound or visual cue is dangerous, but under what conditions it is so. Russell G. Phillips and I examined the response of rats to the chamber, or context, in which they had been conditioned. We found that lesions of the amygdala interfered with the animals' response to both the tone and the chamber. But lesions of the hippocampus - a region of the brain involved in declarative memory - interfered only with response to the chamber, not the tone. (Declarative memory involves explicit, consciously accessible information, as well as spatial memory.) At about the same time, Michael S. Fanselow and Jeansok J. Kim of the University of California at Los Angeles discovered that hippocampal lesions made after fear conditioning had taken place also prevented the expression of responses to the surroundings.
 These findings were consistent with the generally accepted view that the hippocampus plays an important role in processing complex information, such as details about the spatial environment where activity is taking place. Phillips and I also demonstrated that the subiculum, a region of the hippocampus that projects to other areas of the brain, communicated with the lateral nucleus of the amygdala. This connection suggests that contextual information may acquire emotional significance in the same way that other events do—via transmission to the lateral nucleus.
 Although our experiments had identified a subcortical sensory pathway that gave rise to fear conditioning, we did not dismiss the importance of the cortex. The interaction of subcortical and cortical mechanisms in emotion remains a hotly debated topic. Some researchers believe cognition is a vital precursor to emotional experience; others think that cognition—which is presumably a cortical function—is necessary to initiate emotion or that emotional processing is a type of cognitive processing. Still others question whether cognition is necessary for emotional processing.
 It became apparent that the auditory cortex is involved in, though not crucial to, establishing the fear response, at least when simple auditory stimuli are applied. Norman M. Weinberger and his colleagues at the University of California at Irvine have performed elegant studies showing that neurons in the auditory cortex undergo specific physiological changes in their reaction to sounds as a result of conditioning. This finding indicates that the cortex is establishing its own record of the event.
 Experiments by Lizabeth M. Romanski in my laboratory have determined that in the absence of the auditory cortex, rats can learn to respond fearfully to a single tone. If, however, projections from the thalamus to the amygdala are removed, projections from the thalamus to the cortex and then to the amygdala are sufficient. Romanski went on to establish that the lateral nucleus can receive input from both the thalamus and the cortex. Her work in the rat complements earlier research in primates.
 Once we had a clear understanding of the mechanism through which fear conditioning is learned, we attempted to find out how emotional memories are established and stored on a molecular level. Farb and I showed that the excitatory amino acid transmitter glutamate is present in the thalamic cells that reach the lateral nucleus. Together with Chiye J. Aoki, we showed that it is also present at synapses in the lateral nucleus. Because glutamate transmission is implicated in memory formation, we seemed to be on the right track.
 Glutamate is involved in a process called long-term potentiation, or LTP, that has emerged as a model for the creation of memories. This process, which is most frequently studied in the hippocampus, involves a change in the efficiency of synaptic transmission along a neural pathway - in other words, signals travel more readily along this pathway once LTP has taken place. The mechanism seems to involve glutamate transmission and a class of postsynaptic excitatory amino acid receptors known as NMDA receptors. Various studies have found LTP in the fear-conditioning pathway. Marie-Christine Clugnet and I noted that LTP could be induced in the thalamo-amygdala pathway. Thomas H. Brown and Paul Chapman and their colleagues at Yale discovered LTP in a cortical projection to the amygdala. Other researchers, including Davis and Fanselow, have been able to block fear conditioning by blocking NMDA receptors in the amygdala. And Michael T. Rogan in my laboratory found that the processing of sounds by the thalamo-amygdala pathway is amplified after LTP has been induced. The fact that LTP can be demonstrated in a conditioning pathway offers new hope for understanding how LTP might relate to emotional memory.
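 The core idea of LTP, that signals travel more readily along a pathway after paired activity, can be caricatured as a synapse whose weight grows only when presynaptic and postsynaptic activity coincide, loosely in the spirit of the NMDA receptor's role as a coincidence detector. The class, the learning rate, and the numbers below are illustrative assumptions, not measurements from the studies cited.

```python
# Minimal coincidence-detection sketch of potentiation. The update rule
# and all numeric values are illustrative assumptions, not data from the
# experiments described in the text.

class Synapse:
    def __init__(self, weight=0.2):
        self.weight = weight   # synaptic efficiency

    def transmit(self, presynaptic):
        # Postsynaptic response scales with synaptic efficiency.
        return presynaptic * self.weight

    def pair(self, presynaptic, postsynaptic, rate=0.1):
        # Strengthen only when pre- and postsynaptic activity coincide,
        # mimicking NMDA-receptor-style coincidence detection.
        if presynaptic > 0 and postsynaptic > 0:
            self.weight = min(1.0, self.weight + rate)

syn = Synapse()
before = syn.transmit(1.0)
for _ in range(5):                     # repeated pairing, as in conditioning
    syn.pair(presynaptic=1.0, postsynaptic=1.0)
after = syn.transmit(1.0)
print(before, after)                   # the same input now evokes a larger response
```

The point of the sketch is only the shape of the phenomenon: after pairing, an unchanged input produces an amplified response, which is what Rogan's observation of amplified thalamo-amygdala sound processing after LTP describes.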
 In addition, recent studies by Fabio Bordi, also in my laboratory, have suggested hypotheses about what could be going on in the neurons of the lateral nucleus during learning. Bordi monitored the electrical state of individual neurons in this area when a rat was listening to the sound and receiving the shock. He and Romanski found that essentially every cell responding to the auditory stimuli also responded to the shock. The basic ingredient of conditioning is thus present in the lateral nucleus. Bordi was able to divide the acoustically stimulated cells into two classes: habituating and consistently responsive. Habituating cells eventually stopped responding to the repeated sound, suggesting that they might serve to detect any sound that was unusual or different. They could permit the amygdala to ignore a stimulus once it became familiar. Sound and shock pairing at these cells might reduce habituation, thereby allowing the cells to respond to, rather than ignore, significant stimuli.
 The consistently responsive cells had high-intensity thresholds: only loud sounds could activate them. That finding is interesting because of the role loudness plays in judging distance. Nearby sources of sound are presumably more dangerous than those that are far away. Sound coupled with shock might act on these cells to lower their threshold, increasing the cells' sensitivity to the same stimulus. Consistently responsive cells were also broadly tuned. The joining of a sound and a shock could make the cells responsive to a narrower range of frequencies, or it could shift the tuning toward the frequency of the stimulus. In fact, Weinberger has recently shown that cells in the auditory system do alter their tuning to approximate the conditioned stimulus. Bordi and I have detected this effect in lateral nucleus cells as well.
 The apparent permanence of these memories raises an important clinical question: Can emotional learning be eliminated, and, if not, how can it be toned down? As noted earlier, it is actually quite difficult to get rid of emotional memories, and at best we can hope only to keep them under wraps. Studies by Maria A. Morgan in my laboratory have begun to illuminate how the brain regulates emotional expressions. Morgan has shown that when part of the prefrontal cortex is damaged, emotional memory is very hard to extinguish. This discovery indicates that the prefrontal areas - possibly by way of the amygdala - normally control expression of emotional memory and prevent emotional responses once they are no longer useful. A similar conclusion was proposed by Edmund T. Rolls and his colleagues at the University of Oxford during studies of primates. The researchers studied the electrical activity of neurons in the frontal cortex of the animals.
 Functional variation in the pathway between this region of the cortex and the amygdala may make it more difficult for some people to change their emotional behavior. Davis and his colleagues have found that blocking NMDA receptors in the amygdala interferes with extinction. Those results hint that extinction is an active learning process. At the same time, such learning could be situated in connections between the prefrontal cortex and the amygdala. More experiments should disclose the answer.
 Placing a basic emotional memory process in the amygdalic pathway yields obvious benefits. The amygdala is a critical site of learning because of its central location between input and output stations. Each route that leads to the amygdala - sensory thalamus, sensory cortex and hippocampus - delivers unique information to the organ. Pathways originating in the sensory thalamus provide only a crude perception of the external world, but because they involve only one neural link, they are quite fast. In contrast, pathways from the cortex offer detailed and accurate representations, allowing us to recognize an object by sight or sound. But these pathways, which run from the thalamus to the sensory cortex to the amygdala, involve several neural links. And each link in the chain adds time. Conserving time may be the reason there are two routes - one cortical and one subcortical - for emotional learning. Animals, and humans, need a quick-and-dirty reaction mechanism. The thalamus activates the amygdala at about the same time as it activates the cortex. The arrangement may enable emotional responses to begin in the amygdala before we completely recognize what it is we are reacting to or what we are feeling. The thalamic pathway may be particularly useful in situations requiring a rapid response. Failing to respond to danger is more costly than responding inappropriately to a benign stimulus. For instance, the sound of rustling leaves is enough to alert us when we are walking in the woods without our having first to identify what is causing the sound. Similarly, the sight of a slender curved shape lying flat on the path ahead of us is sufficient to elicit defensive fear responses. We do not need to go through a detailed analysis of whether or not what we are seeing is a snake. Nor do we need to think about the fact that snakes are reptiles and that their skins can be used to make belts and boots. All these details are irrelevant and, in fact, detrimental to an efficient, speedy and potentially lifesaving reaction. The brain simply needs to be able to store primitive cues and detect them. Later, coordination of this basic information with the cortex permits verification (yes, this is a snake) or brings the response (screaming, sprinting) to a stop.
 Although the amygdala stores primitive information, we should not consider it the only learning center. The establishment of memories is a function of the entire network, not just of one component. The amygdala is certainly crucial, but we must not lose sight of the fact that its functions exist only by virtue of the system to which it belongs.
 Memory is generally thought to be the process by which we bring back to mind some earlier conscious experience. The original learning and the remembering, in this case, are both conscious events. Workers have determined that declarative memory is mediated by the hippocampus and the cortex. But removal of the hippocampus has little effect on fear conditioning - except conditioning to context.
 In contrast, emotional learning that comes about through fear conditioning is not declarative learning. Rather it is mediated by a different system, which in all likelihood operates independently of our conscious awareness. Emotional information may be stored within declarative memory, but it is kept there as a cold declarative fact. For example, if a person is injured in an automobile accident in which the horn gets stuck in the on position, he or she may later have a reaction when hearing the blare of car horns. The person may remember the details of the accident, such as where and when it occurred, who else was involved and how awful it was. These are declarative memories that are dependent on the hippocampus. The individual may also become tense, anxious and depressed, as the emotional memory is reactivated through the amygdalic system.
 The declarative system has stored the emotional content of the experience, but  it has done so as a fact. Emotional and declarative memories are stored and retrieved in parallel, and their activities are joined seamlessly in our conscious experience. That does not mean that we have direct conscious access to our emotional memory; it means instead that we have access to the consequences—such as the way we behave, the way our bodies feel. These consequences combine with current declarative memory to form a new declarative memory. Emotion is not just unconscious memory: it exerts a powerful influence on declarative memory and other thought processes. As James L. McGaugh and his colleagues at the University of California at Irvine have convincingly shown, the amygdala plays an essential part in modulating the storage and strength of memories.
 The distinction between declarative memory and emotional memory is an important one. W. J. Jacobs of the University of British Columbia and Lynn Nadel of the University of Arizona have argued that we are unable to remember traumatic events that take place early in life because the hippocampus has not yet matured to the point of forming consciously accessible memories. The emotional memory system, which may develop earlier, clearly forms and stores its unconscious memories of these events. And for this reason, the trauma may affect mental and behavioural functions in later life, albeit through processes that remain inaccessible to consciousness.
 Because pairing a tone and a shock can bring about conditioned responses in animals throughout the phyla, it is clear that fear conditioning cannot be dependent on consciousness. Fruit flies and snails, for example, are not creatures known for their conscious mental processes. My way of interpreting this phenomenon is to consider fear a subjective state of awareness brought about when brain systems react to danger. Only if the organism possesses a sufficiently advanced neural mechanism does conscious fear accompany bodily response. This is not to say that only humans experience fear but, rather, that consciousness is a prerequisite to subjective emotional states.
 Thus, emotions or feelings are conscious products of unconscious processes. It is crucial to remember that the subjective experiences we call feelings are not the primary business of the system that generates them. Emotional experiences are the result of triggering systems of behavioural adaptation that have been preserved by evolution. Subjective experience of any variety is challenging turf for scientists. We have, however, gone a long way toward understanding the neural system that underlies fear responses, and this same system may in fact give rise to subjective feelings of fear. If so, studies of the neural control of emotional responses may hold the key to understanding subjective emotion as well.
 Neuronal signals conveying everything that human beings sense and think, and every motion they make, follow nerve pathways in the human body as waves of ions (atoms or groups of atoms that carry electric charges). The intricacies of this electrochemical signalling process turn on one pivotal step, that in which a signal is conveyed from one nerve cell to another across the synapse. How does one nerve cell transmit the nerve impulse to another cell? Electron microscopy and other methods show that it does so by means of special extensions that deliver a squirt of transmitter substance.
 The human brain is the most highly organized form of matter known, and in complexity the brains of the other higher animals are not greatly inferior. For certain purposes it is expedient to regard the brain as being analogous to a machine. Even if it is so regarded, however, it is a machine of a totally different kind from those made by man. In trying to understand the workings of his own brain man meets his highest challenge. Nothing is given; there are no operating diagrams, no maker's instructions.
 The first step in trying to understand the brain is to examine its structure in order to discover the components from which it is built and how they are related to one another. After that one can attempt to understand the mode of operation of the simplest components. These two modes of investigation - the morphological and the physiological - have now become complementary. In studying the nervous system with today's sensitive electrical devices, however, it is all too easy to find physiological events that cannot be correlated with any known anatomical structure. Conversely, the electron microscope reveals many structural details whose physiological significance is obscure or unknown.
 At the close of the past century the Spanish anatomist Santiago Ramón y Cajal showed how all parts of the nervous system are built up of individual nerve cells of many different shapes and sizes. Like other cells, each nerve cell has a nucleus and a surrounding cytoplasm. Its outer surface consists of numerous fine branches - the dendrites - that receive nerve impulses from other nerve cells, and one relatively long branch - the axon - that transmits nerve impulses. Near its end the axon divides into branches that terminate at the dendrites or bodies of other nerve cells. The axon can be as short as a fraction of a millimetre or as long as a metre, depending on its place and function. It has many of the properties of an electric cable and is uniquely specialized to conduct the brief electrical waves called nerve impulses. In very thin axons these impulses travel at less than one metre per second; in others, for example in the large axons of the nerve cells that activate muscles, they travel as fast as 100 metres per second.
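 The range of conduction velocities quoted above translates into strikingly different travel times. A few lines of arithmetic make the contrast concrete; the one-metre axon length is simply taken from the example in the text.

```python
# Conduction-time arithmetic using the velocities quoted in the text:
# thin axons conduct at under 1 m/s, large motor axons at up to 100 m/s.

axon_length_m = 1.0                       # a long axon, as cited above

for velocity in (1.0, 100.0):             # metres per second
    travel_ms = axon_length_m / velocity * 1000
    print(f"{velocity:>5.0f} m/s -> {travel_ms:.0f} ms to traverse the axon")
```

An impulse in a thin axon thus takes on the order of a second to cover a metre, while one in a large motor axon arrives in about ten milliseconds, a hundredfold difference that matters for muscle control.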
 The electrical impulse that travels along the axon ceases abruptly when it comes to the point where the axon's terminal fibres make contact with another nerve cell. These junction points were given the name ‘synapses’ by Sir Charles Sherrington, who laid the foundations of what is sometimes called synaptology. If the nerve impulse is to continue beyond the synapse, it must be regenerated afresh on the other side. As recently as 15 years ago some physiologists held that transmission at the synapse was predominantly, if not exclusively, an electrical phenomenon. Now, however, there is abundant evidence that transmission is effectuated by the release of specific chemical substances that trigger a regeneration of the impulse. In fact, the first strong evidence showing that a transmitter substance acts across the synapse was provided more than 40 years ago by Sir Henry Dale and Otto Loewi.
 It has been estimated that the human central nervous system, which of course includes the spinal cord as well as the brain itself, consists of about 10 billion (10^10) nerve cells. With rare exceptions each nerve cell receives information directly in the form of impulses from many other nerve cells - often hundreds - and transmits information to a like number. Depending on its threshold of response, a given nerve cell may fire an impulse when stimulated by only a few incoming fibres or it may not fire until stimulated by many incoming fibres. It has long been known that this threshold can be raised or lowered by various factors. Moreover, it was conjectured some 60 years ago that some of the incoming fibres must inhibit the firing of the receiving cell rather than excite it. The conjecture was subsequently confirmed, and the mechanism of the inhibitory effect has now been clarified. This mechanism and its equally fundamental counterpart - nerve-cell excitation - are the topic at issue.
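 The interplay of threshold, excitation and inhibition described above can be sketched as a simple summation model: the cell fires only if total excitatory drive minus total inhibitory drive crosses its threshold. The function name, the threshold value, and the input weights are all arbitrary illustrative choices, not physiological quantities.

```python
# Toy threshold model of a nerve cell summing excitatory and inhibitory
# inputs. The threshold and input values are arbitrary illustrative
# assumptions, not measured physiological quantities.

def fires(excitatory_inputs, inhibitory_inputs, threshold=3.0):
    """Return True if net drive reaches the cell's firing threshold."""
    net = sum(excitatory_inputs) - sum(inhibitory_inputs)
    return net >= threshold

print(fires([1.0] * 4, []))            # enough excitation alone: fires
print(fires([1.0] * 4, [1.0, 1.0]))    # inhibition holds the cell below threshold
```

Raising or lowering `threshold` in this sketch mirrors the observation that a cell's threshold can be modulated by various factors, and the inhibitory term shows why some incoming fibres can silence a cell that would otherwise fire.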
 At the level of anatomy there are some clues to indicate how the fine axon terminals impinging on a nerve cell can make the cell regenerate a nerve impulse of its own … a nerve cell and its dendrites are covered by fine branches of nerve fibres that terminate in knoblike structures. These structures are the synapses.
 The electron microscope has revealed structural details of synapses that fit in nicely with the view that a chemical transmitter is involved in nerve transmission. Enclosed in the synaptic knob are many vesicles, or tiny sacs, which appear to contain the transmitter substances that induce synaptic transmission. Between the synaptic knob and the synaptic membrane of the adjoining nerve cell is a remarkably uniform space of about 20 millimicrons that is termed the synaptic cleft. Many of the synaptic vesicles are concentrated adjacent to this cleft; it seems plausible that the transmitter substance is discharged from the nearest vesicles into the cleft, where it can act on the adjacent cell membrane. This hypothesis is supported by the discovery that the transmitter is released in packets of a few thousand molecules.
 The study of synaptic transmission was revolutionized in 1951 by the introduction of delicate techniques for recording electrically from the interior of single nerve cells. This is done by inserting into the nerve cell an extremely fine glass pipette with a diameter of .5 micron - about a fifty-thousandth of an inch. The pipette is filled with an electrically conducting salt solution such as concentrated potassium chloride. If the pipette is carefully inserted and held rigidly in place, the cell membrane appears to seal quickly around the glass, thus preventing the flow of a short-circuiting current through the puncture in the cell membrane. Impaled in this fashion, nerve cells can function normally for hours. Although there is no way of observing the cells during the insertion of the pipette, the insertion can be guided by using as clues the electric signals that the pipette picks up when close to active nerve cells.
 When the nerve cell responds to the chemical synaptic transmitter, the response depends in part on characteristic features of ionic composition that are also concerned with the transmission of impulses in the cell and along its axon. When the nerve cell is at rest, its physiological makeup resembles that of most other cells in that the water solution inside the cell is quite different in composition from the solution in which the cell is bathed. The nerve cell is able to exploit this difference between external and internal composition and use it in quite different ways for generating an electrical impulse and for synaptic transmission.
 The composition of the external solution is well established because the solution is essentially the same as blood from which cells and proteins have been removed. The composition of the internal solution is known only approximately. Indirect evidence indicates that the concentrations of sodium and chloride ions outside the cell are respectively some 10 and 14 times higher than the concentrations inside the cell. In contrast, the concentration of potassium ions inside the cell is about 30 times higher than the concentration outside.
 How can one account for this remarkable state of affairs? Part of the explanation is that the inside of the cell is negatively charged with respect to the outside of the cell by about 70 millivolts. Since like charges repel each other, this internal negative charge tends to drive chloride ions (Cl-) outward through the cell membrane and, at the same time, to impede their inward movement. In fact, a potential difference of 70 millivolts is just sufficient to maintain the observed disparity in the concentration of chloride ions inside the cell and outside it; chloride ions diffuse inward and outward at equal rates. A drop of 70 millivolts across the membrane therefore defines the ‘equilibrium potential’ for chloride ions.
 To obtain a concentration of potassium ions (K+) that is 30 times higher inside the cell than outside would require that the interior of the cell membrane be about 90 millivolts negative with respect to the exterior. Since the actual interior is only 70 millivolts negative, it falls short of the equilibrium potential for potassium ions by 20 millivolts. Evidently the thirtyfold concentration can be achieved and maintained only if there is some auxiliary mechanism for ‘pumping’ potassium ions into the cell at a rate equal to their spontaneous net outward diffusion.
 The pumping mechanism has the still more difficult task of pumping sodium ions (Na+) out of the cell against a potential gradient of 130 millivolts. This figure is obtained by adding the 70 millivolts of internal negative charge to the equilibrium potential for sodium ions, which is 60 millivolts of internal positive charge. If it were not for this postulated pump, the concentration of sodium ions inside and outside the cell would be almost the reverse of what is observed.
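The equilibrium potentials quoted in the last three paragraphs all follow from the Nernst relation between a concentration ratio and a membrane voltage. The sketch below, in Python, is only an illustration: the concentration ratios come from the text, while the temperature and the use of placeholder absolute concentrations (only ratios matter in the formula) are my assumptions.

```python
import math

def nernst_mV(z, conc_out, conc_in, temp_K=310.0):
    """Nernst equilibrium potential (inside relative to outside), in millivolts."""
    R, F = 8.314, 96485.0   # gas constant (J/mol/K) and Faraday constant (C/mol)
    return 1000.0 * (R * temp_K) / (z * F) * math.log(conc_out / conc_in)

# Only the concentration *ratios* come from the text; absolute values are placeholders.
E_Cl = nernst_mV(-1, 14.0, 1.0)   # chloride: ~14 times higher outside
E_K  = nernst_mV(+1, 1.0, 30.0)   # potassium: ~30 times higher inside
E_Na = nernst_mV(+1, 10.0, 1.0)   # sodium: ~10 times higher outside

print(E_Cl, E_K, E_Na)  # approximately -70, -91 and +62 millivolts
```

Reassuringly, the three results reproduce the figures in the text: about -70 millivolts for chloride, about -90 for potassium, and about +60 for sodium, whose sum with the resting potential gives the 130-millivolt gradient the sodium pump must work against.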
 In their classic studies of nerve-impulse transmission in the giant axon of the squid, A. L. Hodgkin, A. F. Huxley and Bernhard Katz of Britain demonstrated that the propagation of the impulse coincides with abrupt changes in the permeability of the axon membrane. When a nerve impulse has been triggered in some way, what can be described as a gate opens and lets sodium ions pour into the axon during the advance of the impulse, making the interior of the axon locally positive. The process is self-reinforcing in that the flow of some sodium ions through the membrane opens the gate further and makes it easier for others to follow. The sharp reversal of the internal polarity of the membrane constitutes the nerve impulse, which moves like a wave until it has travelled the length of the axon. In the wake of the impulse the sodium gate closes and a potassium gate opens, thereby restoring the normal polarity of the membrane within a millisecond or less.
 With this understanding of the nerve impulse in hand, one is ready to follow the electrical events at the excitatory synapse. One might guess that if the nerve impulse results from an abrupt inflow of sodium ions and a rapid change in the electrical polarity of the axon's interior, something similar must happen at the body and dendrites of the nerve cell in order to generate the impulse in the first place. Indeed, the function of the excitatory synaptic terminals on the cell body and its dendrites is to depolarize the interior of the cell membrane essentially by permitting an inflow of sodium ions. When the depolarization reaches a threshold value, a nerve impulse is triggered.
 As a simple instance of this phenomenon we have recorded the depolarization that occurs in a single motoneuron activated directly by the large nerve fibres that enter the spinal cord from special stretch-receptors known as annulospiral endings. These receptors in turn are located in the same muscle that is activated by the motoneuron under study. Thus the whole system forms a typical reflex arc, such as the arc responsible for the patellar reflex, or ‘knee jerk.’
 To conduct the experiment we anesthetize an animal (most often a cat) and free by dissection a muscle nerve that contains these large nerve fibres. By applying a mild electric shock to the exposed nerve one can produce a single impulse in each of the fibres; since the impulses travel to the spinal cord almost synchronously they are referred to collectively as a volley. The number of impulses contained in the volley can be reduced by reducing the stimulation applied to the nerve. The volley strength is measured at a point just outside the spinal cord and is displayed on an oscilloscope. About half a millisecond after detection of a volley there is a wavelike change in the voltage inside the motoneuron that has received the volley. The change is detected by a microelectrode inserted in the motoneuron and is displayed on another oscilloscope.
 The recordings show that the negative voltage inside the cell becomes progressively less negative as more of the fibres impinging on the cell are stimulated to fire. This observed depolarization is in fact a simple summation of the depolarizations produced by each individual synapse. When the depolarization of the interior of the motoneuron reaches a critical point, a ‘spike’ suddenly appears on the second oscilloscope, showing that a nerve impulse has been generated. During the spike the voltage inside the cell changes from about 70 millivolts negative to as much as 30 millivolts positive. The spike regularly appears when the depolarization, or reduction of membrane potential, reaches a critical level, which is usually between 10 and 18 millivolts. The only effect of a further strengthening of the synaptic stimulus is to shorten the time needed for the motoneuron to reach the firing threshold. The depolarizing potentials produced in the cell membrane by excitatory synapses are called excitatory postsynaptic potentials, or EPSP's.
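The summation just described can be caricatured in a few lines. In this toy sketch, the resting potential and the 10-to-18-millivolt critical depolarization come from the text; the 2-millivolt contribution per synapse is an illustrative assumption, and real summation is neither perfectly linear nor timing-free.

```python
REST_mV = -70.0             # resting potential, from the text
THRESHOLD_DEPOL_mV = 14.0   # critical depolarization (the text gives 10 to 18 mV)
EPSP_PER_SYNAPSE_mV = 2.0   # illustrative size of one synapse's contribution

def membrane_potential(n_active):
    """Simple linear summation of individual excitatory postsynaptic potentials."""
    return REST_mV + n_active * EPSP_PER_SYNAPSE_mV

def spikes(n_active):
    """A spike is triggered once depolarization reaches the critical level."""
    return membrane_potential(n_active) - REST_mV >= THRESHOLD_DEPOL_mV

for n in (3, 7, 12):
    print(n, membrane_potential(n), spikes(n))  # more active fibres, closer to threshold
```

With these numbers, three active synapses leave the cell well below threshold, while seven or more fire it, echoing the observation that a given cell may need few or many incoming fibres depending on its threshold.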
 Through one barrel of a double-barreled microelectrode one can apply a background current to change the resting potential of the interior of the cell membrane, either increasing it or decreasing it. When the potential is made more negative, the EPSP rises more steeply to an earlier peak. When the potential is made less negative, the EPSP rises more slowly to a lower peak. Finally, when the charge inside the cell is reversed so as to be positive with respect to the exterior, the excitatory synapses give rise to an EPSP that is actually the reverse of the normal one.
 These observations support the hypothesis that excitatory synapses produce what amounts virtually to a short circuit in the synaptic membrane potential. When this occurs, the membrane no longer acts as a barrier to the passage of ions but lets them flow through in response to the differing electric potential on the two sides of the membrane. In other words, the ions are momentarily allowed to travel freely down their electrochemical gradients, which means that sodium ions flow into the cell and, to a lesser degree, potassium ions flow out. It is this net flow of positive ions that creates the excitatory postsynaptic potential. The flow of negative ions, such as the chloride ion, is apparently not involved. By artificially altering the potential inside the cell one can establish that there is no flow of ions, and therefore no EPSP, when the voltage drop across the membrane is zero.
 How is the synaptic membrane converted from a strong ionic barrier into an ion-permeable state? It is currently accepted that the agency of conversion is the chemical transmitter substance contained in the vesicles inside the synaptic knob. When a nerve impulse reaches the synaptic knob, some of the vesicles are caused to eject the transmitter substance into the synaptic cleft. The molecules of the substance would take only a few microseconds to diffuse across the cleft and become attached to specific receptor sites on the surface membrane of the adjacent nerve cell.
 Presumably the receptor sites are associated with fine channels in the membrane that are opened in some way by the attachment of the transmitter-substance molecules to the receptor sites. With the channels thus opened, sodium and potassium ions flow through the membrane thousands of times more readily than they normally do, thereby producing the intense ionic flux that depolarizes the cell membrane and produces the EPSP. In many synapses the current flows strongly for only about a millisecond before the transmitter substance is eliminated from the synaptic cleft, either by diffusion into the surrounding regions or as a result of being destroyed by enzymes. The latter process is known to occur when the transmitter substance is acetylcholine, which is destroyed by the enzyme acetylcholinesterase.
 The substantiation of this general picture of synaptic transmission requires the solution of many fundamental problems. Since we do not know the specific transmitter substance for the vast majority of synapses in the nervous system we do not know if there are many different substances or only a few. The only one identified with reasonable certainty in the mammalian central nervous system is acetylcholine. We know practically nothing about the mechanism by which a presynaptic nerve impulse causes the transmitter substance to be injected into the synaptic cleft. Nor do we know how the synaptic vesicles not immediately adjacent to the synaptic cleft are moved up to the firing line to replace the emptied vesicles. It is conjectured that the vesicles contain the enzyme systems needed to recharge themselves. The entire process must be swift and efficient: the total amount of transmitter substance in synaptic terminals is enough for only a few minutes of synaptic activity at normal operating rates. There are also knotty problems to be solved on the other side of the synaptic cleft. What, for example, is the nature of the receptor sites? How are the ionic channels in the membrane opened up?
 Let us turn now to the second type of synapse that has been identified in the nervous system. These are the synapses that can inhibit the firing of a nerve cell even though it may be receiving a volley of excitatory impulses. When inhibitory synapses are examined in the electron microscope, they look very much like excitatory synapses. (There are probably some subtle differences, but they need not concern us here.) Microelectrode recordings of the activity of single motoneurons and other nerve cells have now shown that the inhibitory postsynaptic potential (IPSP) is virtually a mirror image of the EPSP. Moreover, individual inhibitory synapses, like excitatory synapses, have a cumulative effect. The chief difference is simply that the IPSP makes the cell's internal voltage more negative than it is normally, which is in a direction opposite to that needed for generating a spike discharge.
 By driving the internal voltage of a nerve cell in the negative direction inhibitory synapses oppose the action of excitatory synapses, which of course drive it in the positive direction. Hence if the potential inside a resting cell is 70 millivolts negative, a strong volley of inhibitory impulses can drive the potential to 75 or 80 millivolts negative. One can easily see that if the potential is made more negative in this way the excitatory synapses find it more difficult to raise the internal voltage to the threshold point for the generation of a spike. Thus the nerve cell responds to the algebraic sum of the internal voltage changes produced by excitatory and inhibitory synapses.
 If, as in the experiment described earlier, the internal membrane potential is altered by the flow of an electric current through one barrel of a double-barreled microelectrode, one can observe the effect of such changes on the inhibitory postsynaptic potential. When the internal potential is made less negative, the inhibitory postsynaptic potential is deepened. Conversely, when the potential is made more negative, the IPSP diminishes; it finally reverses when the internal potential is driven below minus 80 millivolts.
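These reversal measurements behave as if each synapse pushed the membrane potential toward an equilibrium value of its own, with a force proportional to the distance from it. The sketch below is a minimal linear driving-force model, my own illustration rather than anything stated in the text; the conductance weights are arbitrary.

```python
E_EXC_mV = 0.0     # excitatory equilibrium potential (the EPSP vanishes at zero)
E_INH_mV = -80.0   # inhibitory equilibrium potential, from the text

def synaptic_drive_mV(v_mV, e_rev_mV, g=1.0):
    """Potential change pushed by a synapse at membrane potential v:
    positive values depolarize the cell, negative values hyperpolarize it."""
    return g * (e_rev_mV - v_mV)

# The IPSP deepens as the cell is made less negative, shrinks toward -80 mV
# and reverses sign below it, as in the microelectrode experiment:
ipsp_at_60 = synaptic_drive_mV(-60.0, E_INH_mV)   # a deeper IPSP
ipsp_at_70 = synaptic_drive_mV(-70.0, E_INH_mV)   # the normal IPSP at rest
ipsp_at_90 = synaptic_drive_mV(-90.0, E_INH_mV)   # reversed: now depolarizing

# The cell responds to the algebraic sum of excitatory and inhibitory drives:
net = synaptic_drive_mV(-70.0, E_EXC_mV, g=0.2) + ipsp_at_70
print(ipsp_at_60, ipsp_at_70, ipsp_at_90, net)
```

The last line illustrates the algebraic summation of the previous paragraph: a modest excitatory drive is partly cancelled by a simultaneous inhibitory one.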
 One can therefore conclude that inhibitory synapses share with excitatory synapses the ability to change the ionic permeability of the synaptic membrane. The difference is that inhibitory synapses enable ions to flow freely down an electrochemical gradient that has an equilibrium point at minus 80 millivolts rather than at zero, as is the case for excitatory synapses. This effect could be achieved by the outward flow of positively charged ions such as potassium or the inward flow of negatively charged ions such as chloride, or by a combination of negative and positive ionic flows such that the interior reaches equilibrium at minus 80 millivolts.
 To identify the ions responsible for the permeability changes associated with the inhibitory potential, one can alter the concentration of ions normally found in motoneurons and introduce a variety of other ions that are not normally present. This can be done by impaling nerve cells with micropipettes that are filled with a salt solution containing the ion to be injected. The actual injection is achieved by passing a brief current through the micropipette.
 If the concentration of chloride ions within the cell is in this way increased as much as three times, the inhibitory postsynaptic potential reverses and acts as a depolarizing current; that is, it resembles an excitatory potential. On the other hand, if the cell is heavily injected with sulfate ions, which are also negatively charged, there is no such reversal. This simple test shows that under the influence of the inhibitory transmitter substance, which is still unidentified, the subsynaptic membrane becomes permeable momentarily to chloride ions but not to sulfate ions. During the generation of the IPSP the outflow of chloride ions is so rapid that it more than outweighs the flow of other ions that generate the normal inhibitory potential.
 We have tested the effect of injecting motoneurons with more than 30 kinds of negatively charged ion. With one exception the hydrated ions (ions bound to water) to which the cell membrane is permeable under the influence of the inhibitory transmitter substance are smaller than the hydrated ions to which the membrane is impermeable. The exception is the formate ion (HCO2-), which may have an ellipsoidal shape and so be able to pass through membrane pores that block spherical ions of comparable size.
 Apart from the formate ion all the ions to which the membrane is permeable have a diameter not greater than 1.14 times the diameter of the potassium ion; that is, they are less than 2.9 angstrom units in diameter. Comparable investigations in other laboratories have found the same permeability effects, including the exceptional behaviour of the formate ion, in fishes, toads and snails. It may well be that the ionic mechanism responsible for synaptic inhibition is the same throughout the animal kingdom.
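The size criterion amounts to a simple sieve rule, restated here as a predicate. Everything in it is taken from the figures in the two preceding paragraphs except the 3.4-angstrom example diameter, which is purely illustrative, and the potassium diameter, which is back-computed from the 1.14 factor and the 2.9-angstrom cutoff.

```python
CUTOFF_A = 2.9                  # maximum permeable hydrated diameter, in angstroms
K_DIAMETER_A = CUTOFF_A / 1.14  # potassium diameter implied by the 1.14 factor (~2.5 A)

def passes_inhibitory_channel(hydrated_diameter_A, ellipsoidal=False):
    """Sieve rule from the text: an ion passes only if its hydrated diameter is
    at most 1.14 times that of potassium (under 2.9 angstroms); the possibly
    ellipsoidal formate ion is the lone exception."""
    return ellipsoidal or hydrated_diameter_A <= CUTOFF_A

print(passes_inhibitory_channel(K_DIAMETER_A))          # potassium-sized: passes
print(passes_inhibitory_channel(3.4))                   # too large: blocked
print(passes_inhibitory_channel(3.4, ellipsoidal=True)) # formate-like exception: passes
```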
 The significance of these and other studies is that they strongly indicate that the inhibitory transmitter substance opens the membrane to the flow of potassium ions but not to sodium ions. It is known that the sodium ion is somewhat larger than any of the negatively charged ions, including the formate ion, that are able to pass through the membrane during synaptic inhibition. It is not possible, however, to test the effectiveness of potassium ions by injecting excess amounts into the cell because the excess is immediately diluted by an osmotic flow of water into the cell.
 As indicated, the concentration of potassium ions inside the nerve cell is about 30 times greater than the concentration outside, and to maintain this large difference in concentration without the help of a metabolic pump the inside of the membrane would have to be charged 90 millivolts negative with respect to the exterior. This implies that if the membrane were suddenly made porous to potassium ions, the resulting outflow of ions would make the inside potential of the membrane even more negative than it is in the resting state, and that is just what happens during synaptic inhibition. The membrane must not simultaneously become porous to sodium ions, because they exist in much higher concentration outside the cell than inside and their rapid inflow would more than compensate for the potassium outflow. In fact, the fundamental difference between synaptic excitation and synaptic inhibition is that the membrane freely passes sodium ions in response to the former and largely excludes the passage of sodium ions in response to the latter.
 This fine discrimination between ions that are not very different in size must be explained by any hypothesis of synaptic action. It is most unlikely that the channels through the membrane are created afresh and accurately maintained for a thousandth of a second every time a burst of transmitter substance is released into the synaptic cleft. It is more likely that channels of at least two different sizes are built directly into the membrane structure. In some way the excitatory transmitter substance would selectively unplug the larger channels and permit the free inflow of sodium ions. Potassium ions would simultaneously flow out and thus would tend to counteract the large potential change that would be produced by the massive sodium inflow. The inhibitory transmitter substance would selectively unplug the smaller channels that are large enough to pass potassium and chloride ions but not sodium ions.
 To explain certain types of inhibition other features must be added to this hypothesis of synaptic transmission. In the simple hypothesis chloride and potassium ions can flow freely through pores of all inhibitory synapses. It has been shown, however, that the inhibition of the contraction of heart muscle by the vagus nerve is due almost exclusively to potassium-ion flow. On the other hand, in the muscles of crustaceans and in nerve cells in the snail's brain synaptic inhibition is due largely to the flow of chloride ions. This selective permeability could be explained if there were fixed charges along the walls of the channels. If such charges were negative, they would repel negatively charged ions and prevent their passage; if they were positive, they would similarly prevent the passage of positively charged ions. One can now suggest that the channels opened by the excitatory transmitter are negatively charged and so do not permit the passage of the negatively charged chloride ion, even though it is small enough to move through the channel freely.
 One might wonder if a given nerve cell can have excitatory synaptic action at some of its axon terminals and inhibitory action at others. The answer is no. Two different kinds of nerve cell are needed, one for each type of transmission and synaptic transmitter substance. This can readily be demonstrated by the effect of strychnine and tetanus toxin in the spinal cord; they specifically prevent inhibitory synaptic action and leave excitatory action unaltered. As a result the synaptic excitation of nerve cells is uncontrolled and convulsions result. The special types of cell responsible for inhibitory synaptic action are now being recognized in many parts of the central nervous system.
This account of communication between nerve cells is necessarily oversimplified, yet it shows that some significant advances are being made at the level of individual components of the nervous system. By selecting the most favourable situations we have been able to throw light on some details of nerve-cell behaviour. We can be encouraged by these limited successes. But the task of understanding in a comprehensive way how the human brain operates staggers the imagination.
 The study of the biochemistry of memory is another exciting scientific enterprise, but one that can only be touched upon here. Scientists estimate that an adult human brain contains about 100 billion neurons. Each of these is connected to hundreds or thousands of other neurons, forming trillions of neural connections. Neurons communicate by chemical messengers called neurotransmitters. An electrical signal travels along the neuron, triggering the release of neurotransmitters at the synapse, the small gap between neurons. The neurotransmitters travel across the synapse and act on the next neuron by binding with protein molecules called receptors. Most scientists believe that memories are somehow stored among the brain’s trillions of synapses, rather than in the neurons themselves.
 Scientists who study the biochemistry of learning and memory often focus on the marine snail Aplysia because its simple nervous system allows them to study the effects of various stimuli on specific synapses. A change in the snail’s behaviour due to learning can be correlated with a change at the level of the synapse. One exciting scientific frontier is discovering the changes in neurotransmitters that occur at the level of the synapse.
 Most of our knowledge about how the brain links memory and emotion has been gleaned through the study of so-called classical fear conditioning. In this process the subject, usually a rat, hears a noise or sees a flashing light that is paired with a brief, mild electric shock to its feet. After a few such experiences, the rat responds automatically to the sound or light, even in the absence of the shock. Its reactions are typical of its response to any threatening situation: the animal freezes, its blood pressure and heart rate increase, and it startles easily. In the language of such experiments, the noise or flash is a conditioned stimulus, the foot shock is an unconditioned stimulus, and the rat's reaction is a conditioned response, which consists of readily measured behavioural and physiological changes.
 Conditioning of this kind happens quickly in rats; indeed, it takes place as rapidly as it does in humans. A single pairing of the shock with the sound or sight can bring on the conditioned effect. Once established, the fearful reaction is relatively permanent. If the noise or light is administered many times without an accompanying electric shock, the rat's response diminishes. This change is called extinction. But considerable evidence suggests that this behavioural alteration is the result of the brain's controlling the fear response rather than the elimination of the emotional memory. For example, an apparently extinguished fear response can recover spontaneously or can be reinstated by an irrelevant stressful experience. Similarly, stress can cause the reappearance of phobias in people who have been successfully treated. This resurrection demonstrates that the emotional memory underlying the phobia was rendered dormant rather than erased by treatment.
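The acquisition and extinction curves just described are often caricatured with a delta-rule update. The sketch below assumes a Rescorla-Wagner-style model, which the article itself does not invoke, and it deliberately misses the article's deeper point: in the brain, extinction masks the emotional memory rather than erasing it, whereas this toy model simply drives its one number back toward zero.

```python
def trial(strength, shock_paired, rate=0.5):
    """One conditioning trial of a toy delta-rule update: associative strength
    moves toward 1 when the tone is paired with shock, toward 0 when the tone
    is presented alone (extinction). The learning rate is an assumption."""
    target = 1.0 if shock_paired else 0.0
    return strength + rate * (target - strength)

s = 0.0
s = trial(s, shock_paired=True)    # a single pairing conditions substantially
after_one_pairing = s
for _ in range(10):                # repeated tone-alone trials
    s = trial(s, shock_paired=False)
print(after_one_pairing, s)        # the measured response extinguishes
```

With a high learning rate, one shock-paired trial already produces a strong association (mirroring one-trial conditioning in rats), and repeated unreinforced presentations drive the measured response toward zero.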
 Fear conditioning has proved an ideal starting point for studies of emotional memory for several reasons. First, it occurs in nearly every animal group in which it has been examined: fruit flies, snails, birds, lizards, fish, rabbits, rats, monkeys and people. Although no one claims that the mechanisms are precisely the same in all these creatures, it seems clear from studies to date that the pathways are very similar in mammals and possibly in all vertebrates. We are therefore confident that many of the findings in animals apply to humans. In addition, the kinds of stimuli most commonly used in this type of conditioning are not signals that rats - or humans, for that matter - encounter in their daily lives. The novelty and irrelevance of these lights and sounds help to ensure that the animals have not already developed strong emotional reactions to them. So researchers are clearly observing learning and memory at work. At the same time, such cues do not require complicated cognitive processing from the brain. Consequently, the stimuli permit us to study emotional mechanisms relatively directly. Finally, our extensive knowledge of the neural pathways involved in processing acoustic and visual information serves as an excellent starting point for examining the neurological foundations of fear elicited by such stimuli.
 Our work focuses on the cerebral roots of learning fear, specifically fear that has been induced in the rat by associating sounds with foot shock. Like most other investigators in the field, we assume that fear conditioning occurs because the shock modifies the way in which neurons in certain important regions of the brain interpret the sound stimulus. These critical neurons are thought to be located in the neural pathway through which the sound elicits the conditioned response.
 During the past 10 years, researchers have identified major components of this system. The work began at Cornell University Medical College with a simple question: is the auditory cortex required for auditory fear conditioning?
 In the auditory pathway, as in other sensory systems, the cortex is the highest level of processing; it is the culmination of a sequence of neural steps that starts with the peripheral sensory receptors, located, in this case, in the ear. If lesions in (or surgical removal of) parts of the auditory cortex interfered with fear conditioning, we could conclude that the region is indeed necessary for this activity. We could also deduce that the next step in the conditioning pathway would be an output from the auditory cortex. But our lesion experiments in rats confirmed what a series of other studies had already suggested: the auditory cortex is not needed in order to learn many things about simple acoustic stimuli.
 We went on to make lesions in the auditory thalamus and the auditory midbrain, sites lying immediately below the auditory cortex. Both these areas process auditory signals: the midbrain provides the major input to the thalamus; the thalamus supplies the major input to the cortex. Lesions in both regions completely eliminated the rat's susceptibility to conditioning. This discovery suggested that a sound stimulus is transmitted through the auditory system to the level of the auditory thalamus but that it does not have to reach the cortex for fear conditioning to occur.
 This possibility was somewhat puzzling, because the primary nerve fibres that carry signals from the auditory thalamus extend to the auditory cortex. So David A. Ruggiero, Donald J. Reis and I looked again and found that, in fact, cells in some regions of the auditory thalamus also give rise to fibres that reach several subcortical locations. Could these neural projections be the connections through which the stimulus elicits the response we identify with fear? We tested this hypothesis by making lesions in each one of the subcortical regions with which these fibres connect. The damage had an effect in only one area: the amygdala.
 That observation suddenly created a place for our findings in an already accepted picture of emotional processing. For a long time, the amygdala has been considered an important brain region in various forms of emotional behaviour. In 1979 Bruce S. Kapp and his colleagues at the University of Vermont reported that lesions in the amygdala's central nucleus interfered with a rabbit's conditioned heart rate response once the animal had been given a shock paired with a sound. The central nucleus connects with areas in the brain stem involved in the control of heart rate, respiration and vasodilation. Kapp's work suggested that the central nucleus was a crucial part of the system through which autonomic conditioned responses are expressed.
 In a similar vein, we found that lesions of this nucleus prevented a rat's blood pressure from rising and limited its ability to freeze in the presence of a fear-causing stimulus. We also demonstrated, in turn, that lesions of areas to which the central nucleus connects eliminated one or the other of the two responses. Michael Davis and his associates at Yale University determined that lesions of the central nucleus, as well as lesions of another brain stem area to which the central nucleus projects, diminished yet another conditioned response: the increased startle reaction that occurs when an animal is afraid.
 The findings from various laboratories studying different species and measuring fear in different ways all implicated the central nucleus as a pivotal component of fear-conditioning circuitry. It provides connections to the various brain stem areas involved in the control of a spectrum of responses.
 Despite our deeper understanding of this site in the amygdala, many details of the pathway remained hidden. Does sound, for example, reach the central nucleus directly from the auditory thalamus? We found that it does not. The central nucleus receives projections from thalamic areas next to, but not in, the auditory part of the thalamus. Indeed, an entirely different area of the amygdala, the lateral nucleus, receives inputs from the auditory thalamus. Lesions of the lateral nucleus prevented fear conditioning. Because this site gets information directly from the sensory system, we have come to think of it as the sensory interface of the amygdala in fear conditioning. In contrast, the central nucleus appears to be the interface with the systems that control responses.
 Again, these findings seemed to place us on the threshold of being able to map the entire stimulus response pathway. But we still did not know how information received by the lateral nucleus arrived at the central nucleus. Earlier studies had suggested that the lateral nucleus projects directly to the central nucleus, but the connections were fairly sparse. Working with monkeys, David Amaral and Asla Pitkanen of the Salk Institute for Biological Studies in San Diego demonstrated that the lateral nucleus extends directly to an adjacent site, called the basal or basolateral nucleus, which, in turn, projects to the central nucleus.
 Collaborating with Lisa Stefanacci and other members of the Salk team, Claudia R. Farb and C. Genevieve Go in my laboratory at New York University found the same connections in the rat. We then showed that these connections form synaptic contacts—in other words, they communicate directly, neuron to neuron. Such contacts indicate that information reaching the lateral nucleus can influence the central nucleus via the basolateral nucleus. The lateral nucleus can also influence the central nucleus by way of the accessory basal or basomedial nucleus. Clearly, ample opportunities exist for the lateral nucleus to communicate with the central nucleus once a stimulus has been received.
 The emotional significance of such a stimulus is determined not only by the sound itself but by the environment in which it occurs. Rats must therefore learn not only that a sound or visual cue is dangerous, but under what conditions it is so. Russell G. Phillips and I examined the response of rats to the chamber, or context, in which they had been conditioned. We found that lesions of the amygdala interfered with the animals' response to both the tone and the chamber. But lesions of the hippocampus - a region of the brain involved in declarative memory - interfered only with response to the chamber, not the tone. (Declarative memory involves explicit, consciously accessible information, as well as spatial memory.) At about the same time, Michael S. Fanselow and Jeansok J. Kim of the University of California at Los Angeles discovered that hippocampal lesions made after fear conditioning had taken place also prevented the expression of responses to the surroundings.
 These findings were consistent with the generally accepted view that the hippocampus plays an important role in processing complex information, such as details about the spatial environment where activity is taking place. Phillips and I also demonstrated that the subiculum, a region of the hippocampus that projects to other areas of the brain, communicated with the lateral nucleus of the amygdala. This connection suggests that contextual information may acquire emotional significance in the same way that other events do—via transmission to the lateral nucleus.
 Although these experiments had identified a subcortical sensory pathway that gave rise to fear conditioning, we did not dismiss the importance of the cortex. The interaction of subcortical and cortical mechanisms in emotion remains a hotly debated topic. Some researchers believe cognition is a vital precursor to emotional experience; others think that cognition - which is presumably a cortical function - is necessary to initiate emotion or that emotional processing is a type of cognitive processing. Still others question whether cognition is necessary for emotional processing.
 It seems clear that the auditory cortex is involved in, though not crucial to, establishing the fear response, at least when simple auditory stimuli are applied. Norman M. Weinberger and his colleagues at the University of California at Irvine have performed elegant studies showing that neurons in the auditory cortex undergo specific physiological changes in their reaction to sounds as a result of conditioning. This finding indicates that the cortex is establishing its own record of the event.
 Experiments by Lizabeth M. Romanski in my laboratory have determined that in the absence of the auditory cortex, rats can learn to respond fearfully to a single tone. If, instead, the direct projections from the thalamus to the amygdala are removed, the projections from the thalamus to the cortex and then to the amygdala are sufficient to support conditioning. Romanski went on to establish that the lateral nucleus can receive input from both the thalamus and the cortex. Her work in the rat complements earlier research in primates.
 Once we had a better understanding of the mechanism through which fear conditioning is learned, we attempted to find out how emotional memories are established and stored at the molecular level. Farb showed that the excitatory amino acid transmitter glutamate is present in the thalamic cells that reach the lateral nucleus. Together with Chiye J. Aoki, we showed that it is also present at synapses in the lateral nucleus. Because glutamate transmission is implicated in memory formation, we seemed to be on the right track.
 Glutamate is involved in a process called long-term potentiation, or LTP, which has emerged as a model for the creation of memories. This process, which is most frequently studied in the hippocampus, involves a change in the efficiency of synaptic transmission along a neural pathway—in other words, signals travel more readily along this pathway once LTP has taken place. The mechanism seems to involve glutamate transmission and a class of postsynaptic excitatory amino acid receptors known as NMDA receptors.
 Various studies have found LTP in the fear-conditioning pathway. Marie-Christine Clugnet and I noted that LTP could be induced in the thalamo-amygdala pathway. Thomas H. Brown and Paul Chapman and their colleagues at Yale discovered LTP in a cortical projection to the amygdala. Other researchers, including Davis and Fanselow, have been able to block fear conditioning by blocking NMDA receptors in the amygdala. And Michael T. Rogan in my laboratory found that the processing of sounds by the thalamo-amygdala pathway is amplified after LTP has been induced. The fact that LTP can be demonstrated in a conditioning pathway offers new hope for understanding how LTP might relate to emotional memory.
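The synaptic change described above can be sketched as a simple Hebbian weight update, in which repeated pairing of presynaptic input with strong postsynaptic activity potentiates the connection so that the same signal later travels more readily. This is an illustrative toy model, not the actual NMDA-receptor mechanism, and all quantities are in arbitrary units.

```python
def hebbian_ltp(weight, pre, post, learning_rate=0.1):
    """Strengthen the synapse when pre- and postsynaptic activity coincide."""
    return weight + learning_rate * pre * post

weight = 0.2      # initial synaptic efficacy (arbitrary units)
stimulus = 1.0    # presynaptic activity evoked by the tone

before = weight * stimulus  # postsynaptic response before LTP induction

# Repeatedly pair the presynaptic input with strong postsynaptic
# depolarization, as during LTP induction.
for _ in range(10):
    post = weight * stimulus + 1.0  # extra depolarization during induction
    weight = hebbian_ltp(weight, stimulus, post)

after = weight * stimulus  # the same tone now evokes a larger response
```

After the pairing loop, the unchanged stimulus evokes a larger response, which is the defining signature of potentiation.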
 In addition, recent studies by Fabio Bordi suggested hypotheses about what could be going on in the neurons of the lateral nucleus during learning. Bordi monitored the electrical state of individual neurons in this area while a rat was listening to the sound and receiving the shock. He and Romanski found that essentially every cell responding to the auditory stimuli also responded to the shock. The basic ingredient of conditioning is thus present in the lateral nucleus.
 Bordi was able to divide the acoustically stimulated cells into two classes: habituating and consistently responsive. Habituating cells eventually stopped responding to the repeated sound, suggesting that they might serve to detect any sound that was unusual or different. They could permit the amygdala to ignore a stimulus once it became familiar. Sound and shock pairing at these cells might reduce habituation, thereby allowing the cells to respond to, rather than ignore, significant stimuli.
 The consistently responsive cells had high-intensity thresholds: only loud sounds could activate them. That finding is interesting because of the role loudness plays in judging distance. Nearby sources of sound are presumably more dangerous than those that are far away. Sound coupled with shock might act on these cells to lower their threshold, increasing the cells' sensitivity to the same stimulus. Consistently responsive cells were also broadly tuned. The joining of a sound and a shock could make the cells responsive to a narrower range of frequencies, or it could shift the tuning toward the frequency of the stimulus. In fact, Weinberger has recently shown that cells in the auditory system do alter their tuning to approximate the conditioned stimulus. Bordi has detected this effect in lateral nucleus cells as well.
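A hypothetical sketch can make the two proposed changes in consistently responsive cells concrete: conditioning lowers the cell's intensity threshold and shifts (and narrows) its frequency tuning toward the conditioned stimulus. All of the decibel and kilohertz values below are invented for illustration.

```python
def responds(intensity_db, frequency_khz, threshold_db, best_freq_khz, bandwidth_khz):
    """A cell fires only if the sound is loud enough and near its preferred frequency."""
    loud_enough = intensity_db >= threshold_db
    in_tuning_range = abs(frequency_khz - best_freq_khz) <= bandwidth_khz
    return loud_enough and in_tuning_range

# A moderately loud 8 kHz tone - the hypothetical conditioned stimulus.
tone_db, tone_khz = 60, 8.0

# Before conditioning: high intensity threshold, broad tuning.
before = responds(tone_db, tone_khz, threshold_db=70, best_freq_khz=12.0, bandwidth_khz=10.0)

# After sound-shock pairing: threshold lowered, tuning shifted toward
# the conditioned frequency and narrowed.
after = responds(tone_db, tone_khz, threshold_db=55, best_freq_khz=8.0, bandwidth_khz=3.0)
```

Before conditioning the tone falls below threshold and the cell stays silent; afterward the same tone drives it.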
 The apparent permanence of these memories raises an important clinical question: Can emotional learning be eliminated, and, if not, how can it be toned down? As noted earlier, it is actually quite difficult to get rid of emotional memories, and at best we can hope only to keep them under wraps. Studies by Maria A. Morgan in my laboratory have begun to illuminate how the brain regulates emotional expressions. Morgan has shown that when part of the prefrontal cortex is damaged, emotional memory is very hard to extinguish. This discovery indicates that the prefrontal areas—possibly by way of the amygdala—normally control expression of emotional memory and prevent emotional responses once they are no longer useful. A similar conclusion was proposed by Edmund T. Rolls and his colleagues at the University of Oxford during studies of primates. The researchers studied the electrical activity of neurons in the frontal cortex of the animals.
 Functional variation in the pathway between this region of the cortex and the amygdala may make it more difficult for some people to change their emotional behaviour. Davis and his colleagues have found that blocking NMDA receptors in the amygdala interferes with extinction. Those results hint that extinction is an active learning process. At the same time, such learning could be situated in connections between the prefrontal cortex and the amygdala. More experiments should disclose the answer.
 Placing a basic emotional memory process in the amygdala and its pathways yields obvious benefits. The amygdala is a critical site of learning because of its central location between input and output stations. Each route that leads to the amygdala - sensory thalamus, sensory cortex and hippocampus - delivers unique information to the structure. Pathways originating in the sensory thalamus provide only a crude perception of the external world, but because they involve only one neural link, they are quite fast. In contrast, pathways from the cortex offer detailed and accurate representations, allowing us to recognize an object by sight or sound. But these pathways, which run from the thalamus to the sensory cortex to the amygdala, involve several neural links. And each link in the chain adds time.
 Conserving time may be the reason there are two routes—one cortical and one subcortical — for emotional learning. Animals, and humans, need a quick-and-dirty reaction mechanism. The thalamus activates the amygdala at about the same time as it activates the cortex. The arrangement may enable emotional responses to begin in the amygdala before we completely recognize what it is we are reacting to or what we are feeling.
 The thalamic pathway may be particularly useful in situations requiring a rapid response. Failing to respond to danger is more costly than responding inappropriately to a benign stimulus. For instance, the sound of rustling leaves is enough to alert us when we are walking in the woods without our having first to identify what is causing the sound. Similarly, the sight of a slender curved shape lying flat on the path ahead of us is sufficient to elicit defensive fear responses. We do not need to go through a detailed analysis of whether or not what we are seeing is a snake. Nor do we need to think about the fact that snakes are reptiles and that their skins can be used to make belts and boots. All these details are irrelevant and, in fact, detrimental to an efficient, speedy and potentially lifesaving reaction. The brain simply needs to be able to store primitive cues and detect them. Later, coordination of this basic information with the cortex permits verification (yes, this is a snake) or brings the response (screaming, sprinting) to a stop.
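The timing argument in the preceding paragraphs reduces to a simple count of synaptic links: each link adds delay, so the one-link thalamo-amygdala route activates the amygdala before the multi-link cortical route does, even though the cortical route carries the more accurate representation. The per-link delay below is an assumed round number, not a measured latency.

```python
LINK_DELAY_MS = 10  # assumed delay per synaptic link, not a measured value

# The "quick and dirty" subcortical route has a single link; the
# detailed cortical route has two.
subcortical_route = ["thalamus -> amygdala"]
cortical_route = ["thalamus -> sensory cortex", "sensory cortex -> amygdala"]

def latency_ms(route):
    """Total delay grows with the number of links in the chain."""
    return len(route) * LINK_DELAY_MS

quick_and_dirty = latency_ms(subcortical_route)   # crude input, arrives first
slow_and_detailed = latency_ms(cortical_route)    # accurate input, arrives later
```

Whatever the true per-link delay, the shorter chain always wins the race, which is the point of the quick-and-dirty mechanism.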
 Although the amygdala stores primitive information, we should not consider it the only learning center. The establishment of memories is a function of the entire network, not just of one component. The amygdala is certainly crucial, but we must not lose sight of the fact that its functions exist only by virtue of the system to which it belongs.
 Memory is generally thought to be the process by which we bring back to mind some earlier conscious experience. The original learning and the remembering, in this case, are both conscious events. Workers have determined that declarative memory is mediated by the hippocampus and the cortex. But removal of the hippocampus has little effect on fear conditioning - except conditioning to context.
 In contrast, emotional learning that comes about through fear conditioning is not declarative learning. Rather it is mediated by a different system, which in all likelihood operates independently of our conscious awareness. Emotional information may be stored within declarative memory, but it is kept there as a cold declarative fact. For example, if a person is injured in an automobile accident in which the horn gets stuck in the on position, he or she may later have a reaction when hearing the blare of car horns. The person may remember the details of the accident, such as where and when it occurred, who else was involved and how awful it was. These are declarative memories that are dependent on the hippocampus. The individual may also become tense, anxious and depressed, as the emotional memory is reactivated through the amygdalic system. The declarative system has stored the emotional content of the experience, but it has done so as a fact.
 Emotional and declarative memories are stored and retrieved in parallel, and their activities are joined seamlessly in our conscious experience. That does not mean that we have direct conscious access to our emotional memory; it means instead that we have access to the consequences—such as the way we behave, the way our bodies feel. These consequences combine with current declarative memory to form a new declarative memory. Emotion is not just unconscious memory: it exerts a powerful influence on declarative memory and other thought processes. As James L. McGaugh and his colleagues at the University of California at Irvine have convincingly shown, the amygdala plays an essential part in modulating the storage and strength of memories.
 The distinction between declarative memory and emotional memory is an important one. W. J. Jacobs of the University of British Columbia and Lynn Nadel of the University of Arizona have argued that we are unable to remember traumatic events that take place early in life because the hippocampus has not yet matured to the point of forming consciously accessible memories. The emotional memory system, which may develop earlier, clearly forms and stores its unconscious memories of these events. And for this reason, the trauma may affect mental and behavioural functions in later life, albeit through processes that remain inaccessible to consciousness.
 Because pairing a tone and a shock can bring about conditioned responses in animals throughout the phyla, it is clear that fear conditioning cannot be dependent on consciousness. Fruit flies and snails, for example, are not creatures known for their conscious mental processes. One way of interpreting this phenomenon is to consider fear a subjective state of awareness brought about when brain systems react to danger. Only if the organism possesses a sufficiently advanced neural mechanism does conscious fear accompany bodily response. This is not to say that only humans experience fear but, rather, that consciousness is a prerequisite to subjective emotional states.
 Thus, emotions or feelings are conscious products of unconscious processes. It is crucial to remember that the subjective experiences we call feelings are not the primary business of the system that generates them. Emotional experiences are the result of triggering systems of behavioural adaptation that have been preserved by evolution. Subjective experience of any variety is challenging turf for scientists. We have, however, gone a long way toward understanding the neural system that underlies fear responses, and this same system may in fact give rise to subjective feelings of fear. If so, studies of the neural control of emotional responses may hold the key to understanding subjective emotion as well.
 In the nervous system, a message travels from one end of a nerve cell to the other as an electrical impulse. When the impulse reaches the terminal end of the nerve cell, it triggers tiny sacs called presynaptic vesicles to release their contents, chemical messengers called neurotransmitters. The neurotransmitters float across the synapse, or gap between adjacent nerve cells. When they reach the neighbouring nerve cell, the neurotransmitters fit into specialized receptor sites much as a key fits into a lock, causing that nerve cell to ‘fire,’ or generate an electrical message-carrying impulse. As the message continues through the nervous system, the presynaptic cell absorbs the excess neurotransmitters and repackages them in presynaptic vesicles in a process called neurotransmitter reuptake.
 Neurotransmitter, chemical made by neurons, or nerve cells. Neurons send out neurotransmitters as chemical signals to activate or inhibit the function of neighboring cells.
 Within the central nervous system, which consists of the brain and the spinal cord, neurotransmitters pass from neuron to neuron. In the peripheral nervous system, which is made up of the nerves that run from the central nervous system to the rest of the body, the chemical signals pass between a neuron and an adjacent muscle or gland cell.
 Nine chemical compounds - belonging to three chemical families - are widely recognized as neurotransmitters. In addition, certain other body chemicals, including adenosine, histamine, enkephalins, endorphins, and epinephrine, have neurotransmitter-like properties. Experts believe that there are many more neurotransmitters as yet undiscovered.
 The first of the three families is composed of amines, a group of compounds containing molecules of carbon, hydrogen, and nitrogen. Among the amine neurotransmitters are acetylcholine, norepinephrine, dopamine, and serotonin. Acetylcholine is the most widely used neurotransmitter in the body, and neurons that leave the central nervous system (for example, those running to skeletal muscle) use acetylcholine as their neurotransmitter; neurons that run to the heart, blood vessels, and other organs may use acetylcholine or norepinephrine. Dopamine is involved in the movement of muscles, and it controls the secretion of the pituitary hormone prolactin, which triggers milk production in nursing mothers.
 The second neurotransmitter family is composed of amino acids, organic compounds containing both an amino group (NH2) and a carboxylic acid group (COOH). Amino acids that serve as neurotransmitters include glycine, glutamic and aspartic acids, and gamma-amino butyric acid (GABA). Glutamic acid and GABA are the most abundant neurotransmitters within the central nervous system, and especially in the cerebral cortex, which is largely responsible for such higher brain functions as thought and interpreting sensations.
 The third neurotransmitter family is composed of peptides, which are compounds that contain at least 2, and sometimes as many as 100 amino acids. Peptide neurotransmitters are poorly understood, but scientists know that the peptide neurotransmitter called substance P influences the sensation of pain.
 In general, each neuron uses only a single compound as its neurotransmitter. However, some neurons outside the central nervous system are able to release both an amine and a peptide neurotransmitter.
 Neurotransmitters are manufactured from precursor compounds like amino acids, glucose, and the dietary amine called choline. Neurons modify the structure of these precursor compounds in a series of reactions with enzymes. Neurotransmitters that come from amino acids include serotonin, which is derived from tryptophan; dopamine and norepinephrine, which are derived from tyrosine; and glycine, which is derived from threonine. Among the neurotransmitters made from glucose are glutamate, aspartate, and GABA. Choline serves as the precursor for acetylcholine.
 Neurotransmitters are released into a microscopic gap, called a synapse, that separates the transmitting neuron from the cell receiving the chemical signal. The cell that generates the signal is called the presynaptic cell, while the receiving cell is termed the postsynaptic cell.
 After their release into the synapse, neurotransmitters combine chemically with highly specific protein molecules, termed receptors, that are embedded in the surface membranes of the postsynaptic cell. When this combination occurs, the voltage, or electrical force, of the postsynaptic cell is either increased (excited) or decreased (inhibited).
 When a neuron is in its resting state, its voltage is about -70 millivolts. An excitatory neurotransmitter alters the membrane of the postsynaptic neuron, making it possible for ions (electrically charged molecules) to move back and forth across the neuron’s membranes. This flow of ions makes the neuron’s voltage rise toward zero. If enough excitatory receptors have been activated, the postsynaptic neuron responds by firing, generating a nerve impulse that causes its own neurotransmitter to be released into the next synapse. An inhibitory neurotransmitter causes different ions to pass back and forth across the postsynaptic neuron’s membrane, lowering the nerve cell’s voltage to -80 or -90 millivolts. The drop in voltage makes it less likely that the postsynaptic cell will fire.
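The summation just described can be sketched as a toy threshold model: excitatory inputs depolarize the postsynaptic cell from its -70 millivolt resting level toward a firing threshold, while inhibitory inputs drive the voltage down toward -90 millivolts. The threshold value and the per-input voltage steps below are illustrative assumptions, not physiological measurements.

```python
RESTING_MV = -70
THRESHOLD_MV = -55     # assumed firing threshold
EXCITATORY_STEP = 5    # assumed depolarization per excitatory input (mV)
INHIBITORY_STEP = -5   # assumed hyperpolarization per inhibitory input (mV)

def postsynaptic_fires(n_excitatory, n_inhibitory):
    """Return True if the summed inputs push the voltage past threshold."""
    voltage = (RESTING_MV
               + n_excitatory * EXCITATORY_STEP
               + n_inhibitory * INHIBITORY_STEP)
    voltage = max(voltage, -90)  # inhibition bottoms out around -90 mV
    return voltage >= THRESHOLD_MV
```

Under these assumed numbers, three excitatory inputs reach threshold and the cell fires; two do not; and added inhibition can cancel otherwise sufficient excitation.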
 If the postsynaptic cell is a muscle cell rather than a neuron, an excitatory neurotransmitter will cause the muscle to contract. If the postsynaptic cell is a gland cell, an excitatory neurotransmitter will cause the cell to secrete its contents.
 While most neurotransmitters interact with their receptors to create new electrical nerve impulses that energize or inhibit the adjoining cell, some neurotransmitter interactions do not generate or suppress nerve impulses. Instead, they interact with a second type of receptor that changes the internal chemistry of the postsynaptic cell by either causing or blocking the formation of chemicals called second messenger molecules. These second messengers regulate the postsynaptic cell’s biochemical processes and enable it to conduct the maintenance necessary to continue synthesizing neurotransmitters and conducting nerve impulses. Examples of second messengers, which are formed and entirely contained within the postsynaptic cell, include cyclic adenosine monophosphate, diacylglycerol, and inositol phosphates.
 Once neurotransmitters have been secreted into synapses and have passed on their chemical signals, the presynaptic neuron clears the synapse of neurotransmitter molecules. For example, acetylcholine is broken down by the enzyme acetylcholinesterase into choline and acetate. Neurotransmitters like dopamine, serotonin, and GABA are removed by a physical process called reuptake. In reuptake, a protein in the presynaptic membrane acts as a sort of sponge, causing the neurotransmitters to reenter the presynaptic neuron, where they can be broken down by enzymes or repackaged for reuse.
 Neurotransmitters are known to be involved in a number of disorders, including Alzheimer’s disease. Victims of Alzheimer’s disease suffer from loss of intellectual capacity, disintegration of personality, mental confusion, hallucinations, and aggressive - even violent - behaviour. These symptoms are the result of progressive degeneration in many types of neurons in the brain. Forgetfulness, one of the earliest symptoms of Alzheimer’s disease, is partly caused by the destruction of neurons that normally release the neurotransmitter acetylcholine. Medications that increase brain levels of acetylcholine have helped restore short-term memory and reduce mood swings in some Alzheimer’s patients.
 Neurotransmitters also play a role in Parkinson disease, which slowly attacks the nervous system, causing symptoms that worsen over time. Fatigue, mental confusion, a masklike facial expression, stooping posture, shuffling gait, and problems with eating and speaking are among the difficulties suffered by Parkinson victims. These symptoms have been partly linked to the deterioration and eventual death of dopamine-manufacturing neurons that run from the base of the brain to the basal ganglia, a collection of nerve cells that helps control movement. The reasons why such neurons die are not yet understood, but the related symptoms can be alleviated. L-dopa, or levodopa, widely used to treat Parkinson disease, acts as a supplementary precursor for dopamine. It causes the surviving dopamine-producing neurons to increase their output, thereby compensating to some extent for the disabled neurons.
 Many other effective drugs have been shown to act by influencing neurotransmitter behaviour. Some drugs work by interfering with the interactions between neurotransmitters and intestinal receptors. For example, belladonna decreases intestinal cramps in such disorders as irritable bowel syndrome by blocking acetylcholine from combining with receptors. This process reduces nerve signals to the bowel wall, which prevents painful spasms.
 Other drugs block the reuptake process. One well-known example is the drug fluoxetine (Prozac), which blocks the reuptake of serotonin. Serotonin then remains in the synapse for a longer time, and its ability to act as a signal is prolonged, which contributes to the relief of depression and the control of obsessive-compulsive behaviours.
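The effect of blocking reuptake can be sketched as follows: transporter proteins clear a fraction of the transmitter from the synapse at each time step, and a reuptake inhibitor such as fluoxetine lowers that fraction, so the transmitter lingers and its signal is prolonged. The clearance rates below are illustrative, not pharmacological values.

```python
def transmitter_in_synapse(released, clearance_per_step, steps):
    """Track transmitter left in the synapse after each time step of reuptake."""
    remaining = released
    history = []
    for _ in range(steps):
        remaining *= (1 - clearance_per_step)  # transporters clear a fraction per step
        history.append(remaining)
    return history

normal = transmitter_in_synapse(100.0, clearance_per_step=0.5, steps=5)
blocked = transmitter_in_synapse(100.0, clearance_per_step=0.1, steps=5)  # reuptake inhibited
```

With reuptake slowed, far more transmitter remains in the synapse at every step, which is the prolonged signal the text describes.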
 Comparative Anatomy, scientific study of the similarities and differences in the structure of living things. Comparative anatomy helps to show how organisms function, how they develop, and how they are linked by evolution, the process by which organisms change over many generations. The theory of evolution, one of the fundamental tenets of modern biology, states that new types of organisms develop from common ancestral types over long periods. Studying the body structures of various organisms often helps scientists determine how different species, or distinct kinds of organisms, are related to each other, as well as how and when they diverged from a common ancestor.
 Comparative anatomy can be used to investigate plants and simple microorganisms, but its most important role is in the study of animals. In animals, comparative anatomy usually focuses on living species, but scientists also investigate extinct species by examining fossils, body remains trapped in sediment or amber. With extinct animals, anatomists rarely have a chance to study soft body parts because these parts normally decay before they have a chance to fossilize. With living species, the entire body can be examined, giving a much fuller picture of how it functions. Anatomists also compare existing species with fossils to trace the path that evolution has followed and to gather information that is used in animal classification. Anatomical studies usually involve adult animals, but anatomists also investigate the way animal bodies reach their adult shape in a field of study called developmental anatomy.
 Many important physical features can be seen on the outside of animal bodies, but often the most revealing ones are hidden inside. These hidden features provide valuable clues about an animal's distant ancestors. For example, an endangered reptile from New Zealand called the tuatara looks much like a lizard, and it was originally classified as a lizard back in the early 19th century. But in 1867 anatomist Albert Gunther, working at the British Museum in London, noticed that tuataras have some unusual features. Among these features are teeth that are permanently fused to their jaws rather than separate from the jaw like the teeth of lizards. From this evidence and other anatomical observations, he concluded that tuataras are not lizards at all, but sphenodonts - the only surviving members of an ancient group of reptiles that flourished alongside the dinosaurs.
 Most comparative anatomy studies involve gross anatomy, which deals with structures that are big enough to be seen with the naked eye. Smaller structures, such as individual cells, may also be investigated using the magnifying power of various types of microscopes. This field of study is called microscopic anatomy. In recent years, progress in molecular biology has enabled scientists to investigate still smaller structures, particularly deoxyribonucleic acid (DNA), the hereditary material in all living cells. DNA is made up of strings of four different subunits called nucleotide bases. Anatomists sometimes study the arrangement or sequence of these nucleotide bases in the DNA from different animals, looking for similarities and differences that provide clues to evolutionary family trees.
 Comparative anatomy is used in the study of all animal groups, but more work has been carried out on some animals than on others. Among invertebrates, or animals that lack a backbone, anatomists focus on a few major groups. Arthropods, which include crustaceans and insects, draw the attention of anatomists who are interested in finding out how the same basic body plan of a segmented body and jointed legs could give rise to such a stunning array of variations. The mollusks, a group of invertebrates that includes snails, clams, squids, and octopuses, have also been thoroughly studied. Squids and octopuses are of particular interest to scientists because they have the most highly developed nervous systems of all invertebrates and large eyes that work very much like those of humans. The anatomy of flatworms and roundworms has been thoroughly investigated because these two groups include many parasitic species, including some that infect humans.
 Although invertebrates make up over 95 percent of animal species on Earth, work on their anatomy is still dwarfed by the studies carried out on vertebrates, animals that have internal bony skeletons. This is partly because these bony skeletons have left a fossil record of unparalleled richness, which anatomists draw on when comparing one species with another. In addition, vertebrates are the group to which humans belong, so anatomists are interested in studying them to find out how humans evolved.
 Anatomical studies of vertebrates show how the same underlying body systems have adapted to life in water, on land, and in the air. In laboratory studies, a handful of animals - including the dogfish (a type of small shark), frog, pigeon, and rat - are used as standard examples of vertebrate anatomy. Anatomists also have studied thousands of other species in detail in an effort to piece together exactly how vertebrates have evolved.
 Some of that history has been pieced together by studying tunicates and lancelets, sea-dwelling invertebrates that are closely related to vertebrates. Despite being brainless and boneless, tunicates and lancelets clearly show some of the physical characteristics that were key to vertebrate success. For example, tunicates use a series of slits for feeding. These slits developed into gills in fish, resulting in an efficient mechanism for extracting oxygen from the water. Lancelets have a stiff structure called a notochord, which enables them to swim efficiently. The vertebral column, which has replaced the notochord in vertebrates, is even more efficient. Such characteristics allowed vertebrates to diversify rapidly and become the most complex animals on Earth.
 Despite the variety and complexity of animal life, several key anatomical features divide up the animal world. One of these features is symmetry, meaning that an animal’s body parts are the same in size, shape, and position on either side of a dividing line or central axis. Several groups of marine animals, including the cnidarians (jellyfish, sea anemones, and corals), comb jellies, and echinoderms, are radially symmetrical. Their body parts are arranged around a central axis like spokes in a wheel. Almost all other animals, including vertebrates, are bilaterally symmetrical, with two halves arranged on either side of a central dividing line.
 Bilateral symmetry is often not quite as perfect as it seems. The human body looks more or less symmetrical from outside, but many internal organs are arranged in an asymmetrical way. For example, the liver lies mostly on the right side of the body’s dividing line, while the stomach is mostly on the left. In some animals, asymmetry goes much further. A sperm whale has a single blowhole on the left side of its head, while a fish known as a winter flounder has both eyes on the right side. Male fiddler crabs have one small pincer, which is used for feeding, and one giant one, which is used for signalling during courtship. This giant pincer can be either on the right or the left, and it often weighs as much as the rest of the body put together.
 Some bilateral animals, notably annelid worms (such as earthworms) and arthropods, show a characteristic known as segmentation. Segments, known to biologists as metameres, repeat from front to back of the animal’s body. The segments are all built on the same plan: Each one of an earthworm’s segments contains nerves, blood vessels, and excretory organs called nephridia arranged in the same pattern.
 Many bilaterally symmetrical animals also show a feature known as cephalization, a trend toward ‘front-end’ development. Some animals with only rudimentary cephalization simply have a distinct front end that leads the way when the animal moves. But in other animals, the front region, or head, has become the part of the body that houses the brain and most of the sense organs. Particularly noticeable in arthropods and vertebrates, cephalization gives active animals the earliest possible information about food, danger, and other aspects of the environment ahead.
 In comparing two species, anatomists have to be careful to differentiate between homologous structures, which are ones that have evolved from a shared ancestor, and analogous structures, which have developed from different origins. Homologous structures are built on the same underlying plan. A human arm, a bat’s wing, and a whale’s flipper look quite different from the outside, but the bones inside reveal that these limbs all have the same basic structure. Analogous structures, by contrast, often look similar, but their similarities are only skin deep. A fish's tail fin and a whale's flukes are analogous structures - they look similar from the outside and perform similar functions, but their underlying structures are quite different. Homologous structures are evidence that two species have a shared ancestry. However, analogous structures most often indicate that two unrelated species evolved in a similar environment where both developed structures to perform the same function.
 A complete anatomical study probes more than a dozen different body systems, from the skeletal and muscular systems, which support and move the body, to the nervous and sensory systems, which enable an animal to interact with its surroundings. To anatomists interested in evolutionary relationships, the underlying structure of each system is often more significant than the exact size or shape of its parts. Evolution changes the individual parts of a system more rapidly than the underlying pattern of how a system or animal is put together. Thus, such underlying patterns often remain intact, providing clues to how species are related.
 An animal's integumentary system is the external covering that shields its body from the outside world. In addition to protecting the animal from physical damage, it can help the animal prevent loss of body heat or water. The integumentary system is particularly important for animals that live on land because air can quickly dry out and kill living cells.
 Simple invertebrates, such as sponges and cnidarians, typically have an outer body covering that is just a single cell thick. More-complex animals, including annelid worms, nematodes, and arthropods, are often protected by a nonliving outer layer called a cuticle. In worms this outer layer is thin enough to be flexible, but in arthropods it forms a rigid case around the entire animal.
 Instead of a cuticle, vertebrates have a multilayered tissue called skin. Although skin sometimes feels soft, its layered growth makes it much tougher than it may seem. In most land-dwelling species, the outermost layer of the skin, called the epidermis, is covered by a thin sheet of dead cells that acts as a weatherproof barrier. These dead cells are constantly worn away, but new cells from the epidermis below rapidly replace them, so the skin never wears through. Underneath the epidermis is the dermis, an elastic layer that contains nerves and blood vessels. Beneath the dermis is the subcutaneous layer, which often contains deposits of insulating fat.
 During their long history, vertebrates have evolved a wide range of external structures that help the skin to do its protective work. Most fish are covered by scales, which are rough in sharks and rays, but smooth and slippery in most other species. This slipperiness comes from mucus, which is produced by glands in the skin. Mucus makes it easier for a fish to slide through the water, but it also has other uses. At night, a tropical parrot fish rests in a ‘sleeping bag’ of mucus that makes it harder for predators to attack. At dawn, the fish eats its mucus bag before it swims away.
 Reptiles also have scales, which serve primarily to help prevent water loss. Birds have scales on their legs and feet, and a few mammals, such as the pangolin or scaly anteater, also rely on this form of body armor. However, birds and mammals have largely abandoned scales in favor of feathers or hair over most of the body. Unlike fish, amphibians, and reptiles, whose body temperature depends on that of the environment, birds and mammals maintain a constant, warm body temperature. Feathers and hair help them retain the heat their bodies generate. Feathers are essentially modified reptilian scales, while hair grows from a follicle within the skin. Although feathers originally evolved to retain body heat, they later developed an additional use in flight. The only major group of vertebrates with bare skin is the amphibians. Although amphibians lack the protection afforded by an outer covering of scales, feathers, or hair, their bare skin has a use of its own: unlike other vertebrates, amphibians use their skin to breathe.
 Most fish scales are made of bone, but scales in other animals, as well as feathers and hair, are made of a tough and versatile protein called keratin. Keratin is packed into the dead cells on the surface of skin, and also makes up much tougher structures, such as nails, claws, and horns. These structures grow throughout an animal's life. In Asian water buffalo the horns can reach a length of over 1.5 m (5 feet), making these the largest horns in the world.
 A skeleton is a framework that supports an animal's body and that helps the animal move by giving its muscles something to pull against. Most skeletons are made of hard materials, although the simplest type, called a hydrostatic skeleton, is found in animals that have no hard body parts at all.
 Hydrostatic skeletons work by pressure, and they need two main components to function: a body cavity that is completely filled with fluid, and a body wall that contains wraparound sheets of muscle. The fluid pushes outward against the body wall, helping maintain the animal’s shape. When the muscles in the body wall contract, fluid is forced into other regions of the animal’s body, much as squeezing a balloon filled with water causes it to change shape. This process enables an animal with a hydrostatic skeleton to move.
 Hydrostatic skeletons are common in aquatic animals, such as jellyfish, sea anemones, and tunicates, and are also found in some small land-dwelling invertebrates, such as earthworms and onychophorans (also known as velvetworms). But although this kind of skeleton works well in water, it is not strong enough to support large animals on land—a fact demonstrated by the way jellyfish collapse when stranded out of water by the tide.
 Hard skeletons enable large animals to counteract the pull of gravity. These skeletons are of two main types. An exoskeleton supports the body from the outside and doubles as a protective barrier, while an endoskeleton supports the body from within. During the course of evolution, animals have created these frameworks from a range of different building materials, including a glasslike material called silica, various calcium-containing compounds, and a tough, waterproof carbohydrate called chitin.
 Exoskeletons are commonly built from calcium compounds, especially in sea-dwelling animals. Corals, simple invertebrates that are related to jellyfish, build their cases out of calcium carbonate; in fact, a coral reef is really the skeletons of millions of simple animals. Mollusk shells are also made of calcium carbonate, which is secreted by an area of the body surface known as the mantle.
 But the most complex exoskeletons by far are formed by arthropods. An arthropod's skeleton is built of curved or tubular plates, which hinge against each other at flexible joints. The skeleton completely covers the outer surface of the body, including the eyes, antennae, and feet, but its thickness varies from place to place, so that it provides exactly the right amount of support and protection for each part of the body. Skeletons like these allow arthropods to run, jump, swim, and fly. But these skeletons have one major disadvantage: They cannot keep growing once they have been formed. For this reason, as an arthropod grows it must periodically molt, or shed its exoskeleton, growing a new, larger version in its place.
 Unlike an exoskeleton, an endoskeleton can reach a large size without becoming too heavy and cumbersome to carry around. Endoskeletons have a wide variety of different structures and are built from many different materials. Sponges are supported by an internal network of spicules, small, pointed structures made of silica or calcium compounds. Echinoderms have internal skeletons made of small, chalky plates. Vertebrates are the only animals that have internal skeletons made of bone. Bone is a living tissue that grows in step with the rest of the body.
 The earliest vertebrates lived in water, but as they emerged onto land, their skeletons adapted to the increased effects of gravity and the demands of moving about on legs. In general, their bones became denser and stronger, and in dinosaurs and some extinct mammals the bones reached colossal sizes. But not all groups of vertebrates have followed this trend. To help them stay aloft, birds have jettisoned as much surplus weight as possible, evolving hollow, air-filled bones. Their skeletons typically make up about 4 percent of their body weight, compared with 6 percent for mammals of a similar size. Frigate birds have carried this weight saving to an extreme: they have a wingspan of 2.1 m (7 ft), but their skeletons weigh just 115 g (4 oz).
 Nearly all groups of animals, including relatively simple animals such as jellyfish and flatworms, have muscle cells, which are specialized to move parts of the body. Muscles can move an entire animal - a process called locomotion - and they play an important part in the body's internal life, helping other systems to function.
 Muscle cells, also known as muscle fibres, are usually arranged in bundles or sheets. They work by contracting, and they are triggered into action by nerves, hormones, or their own in-built rhythms. Some muscles relax almost immediately after they have contracted, while others can stay contracted for a long time. A notable example of this extended contraction is seen in clams and other bivalve mollusks, which use muscle power to keep their shells tightly shut at low tide. Once the shell-closing muscles have contracted, they can remain locked for hours without tiring. In contrast, one of the strangest forms of muscle tissue, known as electroplaque, has completely lost its power to contract. Found in electric eels, torpedo rays, and other electric fish, this kind of muscle acts as an on-board battery pack, generating an electric current. In electric eels it can deliver a 600-volt shock - enough to stun or kill fish nearby.
 Vertebrates possess three different types of muscle tissue. Skeletal muscles, of which there are over 400 in the human body, are attached to bones and move parts of the skeleton in relation to each other. These muscles are under conscious or voluntary control—that is, an animal decides when to use them. Skeletal muscles are used in running, jumping, lifting, or other movements of the body. A second type of vertebrate muscle, called smooth or visceral muscle, is not voluntarily controlled. Smooth muscle lines many hollow internal structures, such as the blood vessels and intestines, and it changes the shape of these structures when it contracts. Smooth muscle contractions push food through the digestive system and carry out other functions, such as adjusting the diameter of blood vessels to regulate blood pressure. The third type of muscle is cardiac muscle, found exclusively in the heart. Unlike skeletal and smooth muscle, cardiac muscle contracts spontaneously without needing any trigger from outside. This pattern of three distinctive muscle types has endured throughout vertebrate evolution, but the arrangement of muscles has changed in many ways. In fish, which resemble the earliest vertebrates, most of the skeletal muscles fan out from either side of the backbone. This feature is easy to see when a fish has been cooked. Muscle often makes up 60 percent of a fish’s body weight, and almost all of the muscles are involved in moving the tail and spine, with very few operating other parts of the body.
 When vertebrates took up life on land, the down-the-spine muscle plan gradually began to change because more muscle power was needed for moving the limbs. Limb muscles became not only bigger but also longer. Some muscle fibres in a frog's hind legs can be a quarter as long as the frog’s body, much longer than any muscle fibres in fish. Another important change came about in the chest, where muscles were needed for breathing. In mammals, this trend eventually led to the development of a diaphragm, a dome of muscle that separates the chest from the abdomen and helps to suck air into the lungs.
 For an animal to survive, the cells that make up its body must function in a coordinated way. In most animals coordination is achieved through two body systems: the endocrine system (described in detail in a later section) and the nervous system. The endocrine system works through relatively slow-acting chemical messengers. The nervous system transmits fast-moving signals through specialized nerve cells or neurons.
 Nerve cells are never preserved in fossils, so there is no direct evidence of how nervous systems developed. However, living animals show a range of different plans that suggest how these systems might have evolved. The simplest plan is the nerve net, in which neurons are scattered roughly equally over the body. Nerve nets are found in cnidarians and, in a more elaborate form, in echinoderms. In a nerve net, the neurons are more or less identical, and there are relatively few of them. There is little coordination of impulses from different parts of the body. Even so, this rudimentary system permits simple patterns of behaviour, such as when jellyfish pull in their tentacles if prodded or extend them if they sense food.
 Invertebrates with a distinct head have nervous systems more like those of humans. These systems are divided into two parts: a central nervous system and a peripheral nervous system. The central nervous system acts as a coordination centre and a main highway for nerve signals. The peripheral nervous system carries signals to and from all parts of the body. In this two-part kind of nervous system, the neurons are specialized and work in different ways. Sensory neurons respond to stimuli from outside the body, while motor neurons trigger responses, usually by making muscles contract. For example, sensory neurons in a bee’s eye might pick up information about flowers nearby, and motor neurons might then send impulses to various muscles in order to move the bee toward the food source. Connecting sensory to motor neurons are association neurons or interneurons, which process signals before they are passed on.
 This kind of nervous system enables animals to behave in complex ways, carrying out what look like purposeful, thought-out movements, such as mating behaviours, strategies for avoiding predators and catching prey, and communication with other animals. However, some invertebrates, particularly arthropods, are not quite as intelligent as they seem. Many of their movements are triggered not by the brain itself, but by ganglia, clusters of neurons positioned at intervals down the body. Even if the brain stops working, these animals will often continue to move, although in an uncoordinated way.
 In vertebrates, the nervous system is dominated by the brain, which controls and monitors almost all of the body's activities. The spinal cord acts primarily as a relay system, although it can activate some movements on its own. One example is the withdrawal reflex, which makes us pull our hands away from anything painful, such as a hot stove. This reflex occurs so quickly that we are often aware of it only after it has happened. In these situations, if pain impulses had to travel to the brain for processing, a burning injury could result before a message to pull away could travel from the brain to the hands.
 All vertebrates have a brain with three main parts: the hindbrain, midbrain, and forebrain. During the course of evolution, the relative proportions of these brain regions have altered dramatically, and so have some of the functions that each part performs. The hindbrain, which is responsible for basic, involuntary functions such as breathing, has changed least. However, in birds and mammals one part of the hindbrain, the cerebellum, has expanded to coordinate balance and movement. The cerebellum is particularly important in birds, because flight requires faster decision-making than any other kind of movement.
 In mammals, the forebrain has undergone an almost explosive expansion. Its folded upper region, called the cerebrum, has become so big that most of the rest of the brain is hidden beneath it. This large mass of brain tissue—the cerebrum makes up 85 percent of the brain’s weight—carries out a wide range of tasks, including processing signals from the eyes and ears, triggering voluntary movements, and storing and analyzing information.
 For a nervous system to be useful, it must enable an animal to sense changes in its environment and react to them in an appropriate way. The task of detecting such changes is carried out by specialized cells called receptors, which pass signals on to sensory neurons. Some senses, such as touch, involve receptors that are scattered over the body, while others, such as vision, involve receptors that are clustered together in a particular sense organ.
 Humans are often said to have five senses, but our sensory abilities, like those of most animals, are actually wider than this. In addition to vision, hearing, taste, smell, and touch, we also have a sense of balance or equilibrium, provided by receptors in the inner ear. This sense makes us aware of movement and the pull of gravity. We have skin receptors that respond to cold and heat, and internal receptors that assess the temperature, pressure, and chemical composition of the blood. Internal receptors also monitor our posture—essential information for any organism that walks by balancing on two feet.
 Other animals share many of the senses that we have, and some can detect additional factors that we cannot. For example, sharks and rays detect the weak electrical fields that other animals generate, while snakes detect heat given off by their prey. Both of these senses help guide predators toward their prey, allowing the animals to attack in murky water or total darkness. In rattlesnakes, the thermal sense works through a pair of heat-sensitive pits on either side of the head, and these animals can detect a temperature difference of just 0.2°C (0.36°F).
 During the history of animal life, evolution has produced many designs for sense organs. The simplest light-sensing organs, for example, consist of a bundle of neurons backed by spots of dark pigment. ‘Eyes’ like these, which are found in flatworms, simply tell an animal what direction light is coming from, so that it can either creep toward the light or move away. Image-forming eyes are much more complex and follow one of two basic patterns. Compound eyes, which are found in crustaceans and insects, are divided into hundreds or thousands of small units called ommatidia. Each unit contributes a small part of the complete picture. By contrast, the eyes of vertebrates and cephalopod mollusks have only a single unit with one lens, although the lens can change shape to focus on objects at varying distances.
 Complex sense organs such as the vertebrate eye take millions of years to develop, but they are soon abandoned if they cease to be useful. Vertebrate species such as cave salamanders that have taken up life in dark places have often lost the use of their eyes. Further back in evolutionary history, an entire sensory system was lost as animals took up life on land. This sensory system, known as the lateral line, consists of a row of sensory pits along each side of a fish’s body. The lateral line enables fish to detect pressure waves in water, but has disappeared in land vertebrates.
 Most animals rely on nerves to coordinate their responses to the world around them. To coordinate internal processes, such as metabolism, growth, and development, animals use chemical messengers called hormones. Compared to nerve impulses, hormones travel through the body and take effect fairly slowly. But their effects also last longer than nerve impulses, shaping events in the body over minutes, hours, or even weeks, rather than mere seconds.
 Hormones are produced by the endocrine system, a diverse collection of glands that empty into the bloodstream or into other body fluids. Once a hormone has been released, it travels through the body until it contacts its target cell. When this happens, the hormone triggers biochemical changes in the target cell, altering the way the cell works. Although hormones can have far-reaching effects, these chemicals are usually present in tiny amounts. In adult humans, for example, the entire bloodstream contains less than 0.0005 g (0.00002 oz) of thyroxine, the hormone that controls the body's overall metabolic rate.
 Hormones play an important role in the lives of invertebrates, but many of the substances involved are still poorly understood. One group of hormones that has been studied in detail is the one that controls growth and molting in insects. In most insects, molting is promoted by a hormone called ecdysone, which is produced by glands in the thorax (the middle part of the body). Molting is inhibited, or prevented, by juvenile hormone, which is produced by glands in the head. The slowly changing levels of these opposing hormones make an insect molt periodically as it grows up. If any of these glands is removed, the control system breaks down. This can either make an immature insect molt too many times, so that it never grows up, or make the insect race through childhood, so that it turns into a miniature adult prematurely.
 More than 50 hormones have been identified in vertebrates. In addition to the metabolism-regulating hormone thyroxine, other key hormones include insulin, which helps regulate blood sugar; antidiuretic hormone, which adjusts the blood's water content; and growth hormone, which speeds up cell division. These hormones are released all the time. Others come into play only in particular circumstances, or at certain stages of life. For example, epinephrine or adrenaline is a hormone that is released in moments of stress. Unlike most hormones, its effects are almost instantaneous. Release of adrenaline causes the heart rate to increase, as well as other changes that prepare an animal for emergency action when it is faced by danger. Vertebrates share many hormones, although a hormone may have different effects in different species. Thyroxine from a cow, for example, can affect a tadpole. However, instead of regulating the tadpole’s metabolism, it triggers the tadpole’s metamorphosis into a frog.
 Oxygen is an essential requirement for all animal life because animal cells need it to break down food molecules and generate energy in a process called cellular respiration. Respiratory systems help animals extract oxygen from the surrounding air or water and enable animals to get rid of the waste gas carbon dioxide.
 Some animals are able to obtain enough oxygen without any specialized respiratory systems at all. Oxygen simply diffuses through the body surface, eventually reaching all the body’s cells. However, this way of obtaining oxygen works only in animals such as flatworms, which have small, thin bodies and low oxygen demand. In larger, more active animals, the body's surface is not big enough to take aboard all the oxygen that the animal needs. Extra surfaces are required, and these are provided by respiratory organs. Respiratory organs are found in both land and water animals, but the physical differences between water and air mean that respiratory systems have evolved in different ways in these two environments.
 In aquatic animals, the most common respiratory organs are gills. Gills are outgrowths of the body, and the simplest of them, seen in sea slugs and some worms, are little more than tufts that protrude into the water. In other animals, such as bivalve mollusks and fish, gills are much more elaborate, with sets of parallel plates arranged to intercept the water flow. The animal actively pumps water over these plates, which are supplied with many tiny blood vessels to pick up oxygen from the water and carry it to the rest of the body. Gills of this type are extremely delicate, and they are usually hidden away inside shells or behind protective flaps, making them invisible from outside.
 Even though air contains more oxygen than water, gills rarely work on land. This is because without water for support, the flaps of the gills stick together or collapse. Instead, all land vertebrates, together with an assortment of land-dwelling invertebrates, breathe with the help of lungs.
 While gills are outgrowths, lungs are infoldings of the body surface. In spiders and their relatives, the folds are arranged like the leaves of a book, giving these respiratory organs the name book lungs. A land snail's single lung is a simple cavity underneath its shell that works passively, capturing enough oxygen to match the snail’s sluggish lifestyle.
 Vertebrates need a much larger oxygen supply for their active lifestyles than lungs built on an invertebrate plan could provide. The air spaces in vertebrate lungs divide many times, creating a spongy tissue containing a dense network of blood vessels. The membrane separating the air and blood is often just two cells—or about 0.5 micrometers (about 0.00002 in)—thick, which means that oxygen and carbon dioxide can easily travel across this barrier. In addition, vertebrates use muscle power to pump air in and out of their lungs. Reptiles and mammals suck air into their lungs and then blow it out again, but in birds the air travels straight through the lungs in a one-way flow. Bird lungs are connected to a set of air sacs that change shape to pump the air, while the lungs stay the same shape. This unique system is an extremely effective way of extracting oxygen from the air, and it enables birds to fly at altitudes that would leave mammals gasping for breath.
 In the insect world, a very different kind of respiratory system has evolved, based not on lungs but on air-filled tubes known as tracheae. These tubes reach deep into the insect’s body from openings called spiracles, supplying oxygen directly to all the internal organs. In small insects, the airflow is completely passive, but in large ones, such as grasshoppers, it is helped by body movements. Insects that live in water also have tracheae, indicating that they originally evolved on land. Some come to the surface to breathe, but others, such as dragonfly and mayfly larvae, have developed gills connected to their tracheae. Oxygen flows through their gills and into their tracheae, and then into their bodies.
 For an animal's body to work properly, vital substances have to move about within it. These substances include oxygen, carbon dioxide, food, waste products, and hormones, as well as factors used to fight disease. In a few of the simplest animals, the body is small and thin enough that substances can simply diffuse from cell to cell. In other animals, the body is too thick for diffusion to work effectively—cells in the centre of the body would starve. In these animals, the task of collecting various substances and delivering them to different parts of the body is carried out by the circulatory system.
 Circulatory systems have two main components: a fluid that does the carrying, and a mechanism for channeling the fluid around the body. The fluid, generally known as blood, varies enormously throughout the animal world. In vertebrates, the blood contains billions of red blood cells or erythrocytes. These cells receive their color from the pigment hemoglobin, which carries oxygen. Earthworms also have hemoglobin, although it is not contained in red blood cells, while crustaceans and cephalopods have a blue-colored pigment called hemocyanin that carries oxygen. Insects have no blood pigments at all. Their blood is clear or yellow - the red color of the fluids from some squashed insects comes from blood they have eaten, not from their own blood.
 In insects, blood travels forward along the body through a tube called the dorsal vessel, which contains a long muscular section that acts as a heart, propelling blood along the vessel. Once the blood has left this vessel, it flows back through the body spaces, bathing all the internal organs. In this kind of system, called an open circulation, blood makes up a large percentage of the animal's total weight. The blood flows slowly, sometimes taking over an hour to complete its circuit around the body.
 Open circulatory systems are found in many invertebrate groups, including crustaceans, snails, and clams. All vertebrates, as well as annelid worms, octopuses, and squids, have an alternative pattern called closed circulation. In a closed circulation, the blood travels within a system of blood vessels. It is pumped out of the heart into a branching network of thick-walled arteries, and it returns to the heart through a network of veins. The arteries and veins are linked by microscopic vessels called capillaries, which bring the blood into close contact with all the body's cells. The capillaries have such thin walls that oxygen and other substances can easily diffuse through them and into the tissues nearby. In this kind of circulatory system, the blood volume is relatively small, but it is under high pressure. As a result, it moves quickly. Human blood, for example, speeds around the entire body in a minute or less.
 In the simplest vertebrate circulatory systems, seen in fish, blood flows in a single loop, traveling from the heart to the gills, and then on around the body. In other vertebrates, a more complex pattern has gradually evolved. Instead of flowing in a single circuit, blood flows in a double loop, first through the lungs, and then back to the heart before moving on through the rest of the body. The circuit that takes blood to and from the lungs is known as the pulmonary circulation, while the circuit that takes blood around the rest of the body is called the systemic circulation. The advantage of this double system is that the blood receives an extra push from the heart after it has picked up oxygen in the lungs. This extra push causes the blood to flow swiftly and deliver oxygen to the body more effectively.
 Along with these changes in the structure of the circulatory system have come changes in the anatomy of the vertebrate heart. A fish's heart has two main chambers, while the hearts of amphibians and most reptiles have three chambers. Birds, mammals, and crocodiles have four-chambered hearts. Their hearts work like two hearts side by side, keeping the pulmonary and systemic circulations completely separate. This separation prevents oxygenated and deoxygenated blood from mixing and enables the circulatory system to deliver more oxygen to the body tissues - a crucial feature for sustaining the high metabolic rate characteristic of birds and mammals.
 When blood flows through capillaries, some of its fluid is squeezed out and into the surrounding tissues. Without drainage, this fluid would gradually build up, and the blood's volume would steadily drop. In vertebrates, the excess fluid is collected and returned to the blood by the lymphatic system, a collection of thin-walled tubes that extend throughout the body, often shadowing the blood vessels.
 Unlike blood vessels, which form a continuous circuit throughout the body, lymphatic vessels begin as closed, fingerlike tubes throughout the body’s tissues. The fluid they contain, called lymph, is channelled through the system and eventually emptied into veins near the heart. In mammals, lymph is kept on the move by the contraction of the body's muscles, and valves at intervals along lymphatic vessels prevent the lymph from flowing backward. Other vertebrates also have these valves, and they often have lymph hearts as well, which pump the fluid along. Some birds have two lymph hearts, while frogs can have nearly a hundred.
 In addition to draining the body's tissues, the lymphatic system carries out several other functions. For example, it transports some hormones, and it ferries microscopic globules of fat from the intestines to the bloodstream, which delivers these high-energy particles to the body’s cells. But one of its most important secondary roles is in fighting disease. In mammals, this work is carried out in bean-shaped swellings called lymph nodes in which lymph is filtered and any foreign matter is engulfed and removed.
 For bacteria and other microorganisms, animal bodies can be ideal places to live. Inside an animal body, microorganisms are sheltered from the physical environment, and the animal body provides a ready source of food. But when microorganisms enter an animal’s body and become established there, many can cause disease. To counter the threat posed by such microorganisms, animals have evolved methods of keeping intruders out, and of destroying any that do manage to enter the body. Collectively, these strategies make up the immune system—a battery of defenses so complex that they are not yet fully understood.
 An animal's skin or body covering is the simplest part of the immune system and the first line of defence against microbial invaders. The body covering helps keep harmful microorganisms out, but it is not germ-free. Instead, it often harbors billions of harmless bacteria. This collection of harmless microbes, known as the animal's bacterial flora, makes it harder for dangerous species to become established because they cannot compete for space or resources with the harmless skin-dwelling bacteria.
 If any microorganisms breach this outer barrier and enter the body itself, the immune system immediately reacts. Many animals, including all vertebrates, have wandering cells called phagocytes that home in on intruders and engulf them. In humans and other vertebrates, the phagocytes circulate in the blood, reaching the site of any infection by squeezing through the walls of capillaries into the surrounding tissues. Phagocytes act rapidly and are nonspecific, meaning that they respond to any kind of foreign substance or organism in much the same way.
Vertebrates also have a much more sophisticated defence mechanism that targets invaders with far greater precision. Known as the adaptive immune system, this mechanism enables the body to ‘memorize’ the chemical identity of any alien substance. If the same substance appears a second time, the immune system attacks it with much greater speed and efficiency than the first time. The main components of the adaptive immune system are proteins called antibodies and cells called killer T lymphocytes. Antibodies circulate in the blood and help other parts of the immune system recognize and destroy foreign substances. Lymphocytes are found in the blood, lymph nodes, spleen, and thymus gland.
 The remarkable feature of the adaptive immune system is its ability to target a vast range of alien substances, without targeting the body's own cells. So far, nothing like this mechanism has been found in the invertebrate world, and it is not clear how this elaborate system originally evolved. However, even the very simplest animals show a clear ability to distinguish between their own cells and cells from some other organism. This ability is demonstrated by experiments in which sponges are passed through fine sieves so that their cells are separated from each other. If the cells from two species of sponges are mixed together, they slowly crawl apart, forming small single-species sponges once more.
 During vertebrate evolution, the antibody system has gradually become more complex and diverse. Fish have just one class of antibodies, all sharing a common biochemical backbone. By contrast, reptiles have two classes, while humans have five. This increasing diversity has probably come about through the biological equivalent of an arms race, as animals and disease-causing organisms each struggle to gain the upper hand.
 Animals need food for energy and for raw materials to build their bodies, but before they can use food, they have to break it down into basic component molecules. This process is called digestion, and it can be carried out in two quite different ways. In one method, known as intracellular digestion, an animal's cells engulf nearby particles of food. Once a cell has swallowed a food particle, it is stored in a fluid-filled compartment called a vacuole, where digestive chemicals break it down. In the second digestive method, called extracellular digestion, the food never enters the body’s cells directly. Instead, it is broken down outside the cells, and only the digested products are absorbed.
Intracellular digestion is widespread in protozoans but much rarer among multicellular animals. Sponges use intracellular digestion, as do cnidarians and flatworms, although they use extracellular digestion as well. Most other animals, including all vertebrates, rely on extracellular digestion alone. The great advantage of this method is that it enables an animal to tackle much bulkier kinds of food: In the deep sea, for example, gulper eels sometimes catch and digest animals that are bigger than themselves.
 The work of breaking down and absorbing food is carried out by an animal's digestive system. The simplest kind of digestive system, seen in cnidarians and flatworms, has just a single opening or mouth that leads to a space inside the body. This arrangement means that after food has been digested, any undigested remains have to be expelled the same way they came in. A much more common plan is based on a hollow tube, called the gut or alimentary canal, that runs right through the body. Food enters the tube through one opening, the mouth, and leftover waste leaves it through another, the anus. A key feature of this system is that it works like a production line: Different parts of the tube carry out different digestive tasks while food is on the move.
Mammals show how evolution has adapted this production-line digestive system to deal with different foods. In most mammals, apart from those that eat only insects and plankton, teeth play an important part in collecting food and getting it ready to be digested. Teeth can stab, slice, or chew food to break it into smaller pieces before it goes to the stomach. When teeth take the form of tusks, as in elephants, they can even be used as digging tools or to help bring food closer to the mouth. Once food has been swallowed, it begins its journey through the digestive system.
 In carnivorous and omnivorous mammals - those that eat meat or that have a varied diet - the first stop is the stomach, where powerful acids and enzymes begin to break the food down. The food then moves on to the small intestine, where more digestive enzymes are added to the mix. Some of these enzymes come from cells lining the intestine, and many others come from the pancreas, one of several organs attached to the alimentary canal. The small intestine absorbs useful substances as the digested food travels through it. Finally, after a journey that may last several hours in larger animals, the undigested waste arrives in the large intestine, where any surplus water is absorbed. After the undigested waste passes through the large intestine, it is eliminated from the body through the anus.
 In herbivorous, or plant-eating, mammals, digestion is usually more complicated. This is because mammals do not have any enzymes that can digest cellulose, the tough, structural substance that makes up plant cell walls. To survive, they rely on symbiotic microorganisms to do this work for them. Certain hoofed mammals called ruminants store millions of these microbes in the rumen, the biggest part of their three- or four-chambered stomachs and the first stop for food after these animals swallow it. After the microorganisms have had time to break down the cellulose in the food, the animals regurgitate the food and chew it a second time. When the food is swallowed after this second chewing, it continues its journey through the stomach and into the intestines. This roundabout process allows these mammals to obtain more energy and nutrients from their food than they would be able to if cellulose passed through their bodies undigested, as it does in humans.
Unlike mammals, birds do not have teeth, which means that they cannot chew. To grind up plant food, they have a gizzard, a muscular stomach with a hardened lining. Gizzards are also found in crocodiles and their relatives, and in some insects. Birds and crocodiles often swallow stones or pieces of grit to help the gizzard do its work. Fossilized remains from New Zealand show that moas, which were among the largest birds that have ever lived and became extinct within a few centuries of human settlement, carried over 2 kg (4.4 lb) of these stones, some as big as golf balls. Where stones are hard to find, crocodiles have been known to swallow glass or pottery instead.
Animals generate two kinds of bodily waste. The first is waste from the digestive system, matter that has travelled through the alimentary canal without being absorbed. This type of waste is fairly easy to eliminate. It simply passes out of the body through the anus. The second type is chemical waste that is generated inside the body itself - substances including carbon dioxide, nitrogen-containing compounds, a variety of salts, and surplus water. This internal waste is potentially dangerous because if it accumulates in the body, it can poison living tissue. Animals have developed a variety of strategies for excretion, the process of removing this chemical waste.
 Carbon dioxide is a product of the chemical reactions living things use to release energy. In very small animals, carbon dioxide seeps out of the body before it has a chance to build up. But in larger animals, including humans, carbon dioxide can quickly cause problems if it is not removed. The work of removing this waste gas is carried out by the circulatory and respiratory systems. In birds and mammals in particular, the level of carbon dioxide in the blood is controlled very carefully. If it rises even slightly, a part of the brain triggers faster and deeper breathing, which speeds up carbon dioxide loss through the lungs. This continues until the normal carbon dioxide level is restored.
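The carbon dioxide regulation described above is a classic negative-feedback loop: a rise in the gas triggers faster breathing, which removes it until the normal level returns. A minimal sketch of such a loop, with all quantities in arbitrary illustrative units rather than physiological measurements:

```python
def regulate_co2(level, setpoint=1.0, gain=0.5, production=0.1, steps=20):
    """Toy negative-feedback loop: breathing rate rises when the CO2
    level exceeds the setpoint, which speeds removal through the lungs."""
    for _ in range(steps):
        error = level - setpoint
        breathing_rate = max(0.0, 1.0 + gain * error)  # faster, deeper breathing
        removal = 0.1 * breathing_rate                 # loss through the lungs
        level = level + production - removal           # new blood CO2 level
    return level

# Starting above the setpoint, the level settles back toward 1.0.
print(round(regulate_co2(1.5), 3))
```

At equilibrium, production exactly balances removal, so the level rests at the setpoint; any disturbance decays away step by step, mirroring how the brain restores the normal carbon dioxide level.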
 Nitrogen-containing, or nitrogenous, waste is a by-product of the chemical breakdown of proteins in food. This waste can be highly toxic, and it has to be removed from the body without delay. The nitrogen-containing compound ammonia is often an end product of the breakdown of proteins, so the simplest way to dispose of nitrogen is in this form. However, ammonia is so poisonous that it must be diluted in generous amounts of water to prevent it from causing harm. Aquatic animals generally excrete nitrogenous waste in the form of ammonia, because they can easily obtain enough water from the environment to safely flush this compound away.
 Land animals have evolved ways of converting ammonia into less dangerous compounds, such as urea and uric acid. Urea is water-soluble, and mammals excrete this substance in their urine. Uric acid is a much less soluble compound that can be disposed of in a semi-solid form. Birds, reptiles, and insects all excrete nitrogen-containing waste in the form of uric acid. This characteristic is linked to the fact that these animals lay shelled eggs: unlike urea, uric acid can be stowed away in an egg without poisoning the animal developing inside.
 In vertebrates, nitrogenous compounds, together with salts and other waste products, are removed from the body by the kidneys. Kidneys work like filters, removing waste and water from the blood, then returning most of the water to the bloodstream. In mammals, this waste is expelled via the bladder. In birds and reptiles, the waste is emptied into the cloaca, a chamber that serves as an exit point for the digestive and reproductive systems as well as the excretory system. Salt can also escape from the body in other ways. Some mammals, including humans, excrete salt in sweat, while seabirds and crocodiles have special glands that exude it in a salty fluid. In seabirds, these glands are behind the nostrils, and in crocodiles they are at the back of the tongue.
 Kidneys originally evolved to control the body's water balance, and this is still an important part of their function. Animals that live in dry habitats have developed highly efficient kidneys that keep water loss to a minimum. The waste substances in human urine are usually about 4 times as concentrated as they are in the blood. By contrast, in desert animals, such as kangaroo rats, waste substances can be over 15 times as concentrated. This means that a kangaroo rat uses much less water to dispose of the same amount of waste.
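The water saving implied by those concentration factors follows from simple proportionality: for a fixed amount of waste, the volume of urine required is inversely proportional to how concentrated the kidney can make it. A back-of-the-envelope sketch, using the 4-fold and 15-fold factors from the text and an arbitrary illustrative waste amount:

```python
def urine_volume(waste_amount, concentration_factor):
    """Volume of urine needed to carry a given amount of waste, relative
    to the blood volume holding the same amount. A kidney that
    concentrates waste n-fold needs only 1/n of the volume."""
    return waste_amount / concentration_factor

human = urine_volume(60.0, 4)           # human urine: ~4x blood concentration
kangaroo_rat = urine_volume(60.0, 15)   # kangaroo rat: over 15x

print(human, kangaroo_rat)       # 15.0 4.0
print(human / kangaroo_rat)      # 3.75: the rat uses under a third of the water
```

The same relationship explains why desert mammals can survive on the water in their food alone.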
 Invertebrates do not have kidneys, but they do have organs that work in similar ways to remove and excrete waste from the bloodstream. These organs include nephridia in earthworms and Malpighian tubules in insects. The nephridia of earthworms are arranged in pairs, with one pair per body segment, and they open to the outside of the body through microscopic pores. Malpighian tubules attach to an insect's gut and empty directly into it, so that nitrogenous wastes leave the body through the anus. Insects that have a high-protein diet, such as blood-sucking flies, must dispose of a large amount of nitrogenous waste, and their Malpighian tubules are particularly well developed.
 While land animals must try to conserve water, surplus water can be a serious hazard for some kinds of freshwater life. Their body fluids contain more salts, proteins, and other substances than the water around them does, so water is driven into their bodies by osmosis. This process moves water molecules across cell membranes until the concentration of dissolved substances is the same on both sides of the membrane. Without a water disposal system to counteract osmosis, these animals would run the real risk of exploding. Protozoans get rid of surplus water by using contractile vacuoles, internal reservoirs that fill up with water and then pump it out of the cell. Freshwater sponges also bail out water in this way. Fish, meanwhile, use their kidneys to filter excess water from the blood. Freshwater fish never drink, except when swallowing food, but even so they have to excrete water all the time.
 Reproduction is the most important task that any animal undertakes. Reproduction ensures that the species will continue to survive even though individual animals grow old and die. It also gives species an opportunity to increase their numbers and to evolve as time goes by.
 The simplest method of reproduction involves a single parent and no specialized body parts at all. The parent divides into two or more similar pieces, each of which becomes a new animal. This method of multiplying is known as asexual reproduction. It is carried out by some simple invertebrates, including sponges, sea anemones, and flatworms, but it reaches a high point among ribbon worms. These worms periodically disintegrate into a dozen or more sections, each of which grows a new head and tail.
 Although it is a simple, reliable way of multiplying, asexual reproduction has one very important disadvantage. Only one parent is involved, so all the offspring are genetically identical, both to each other and to the parent. This lack of genetic variability means that the offspring are all equally vulnerable to disease or other hazards. If faced with some disease or change in the physical environment, all the offspring may die. This is the reason why most animals reproduce sexually, a process that ensures genetic variability in the offspring. If the offspring have different combinations of genes, it is more likely that at least some of them will have traits that enable them to survive a disease or other hazard in the environment.
Sexual reproduction is a much more complex process than asexual reproduction and requires two partners. For sexual reproduction to occur, specialized sex cells or gametes are also needed. These cells are made in sex organs called gonads. Male sex cells, or sperm, are produced in testes, while female sex cells, or eggs, are produced in ovaries. Sex cells have half the number of chromosomes, the units that contain hereditary material, found in normal body cells. During sexual reproduction, an egg cell joins with a sperm in a process called fertilization, creating a cell with the normal number of chromosomes. The cell then divides to become an embryo and eventually develops into a fully formed animal. The combination of chromosomes from the sperm and egg gives the offspring a new and unique genetic makeup, different from that of either parent.
 Egg cells are typically large compared to normal body cells. The size disparity is particularly marked in reptiles and birds. In these groups, an egg cell sometimes weighs over a billion times as much as a body cell. Sperm cells, on the other hand, are little more than packages of genes, typically powered by a hairlike flagellum that pushes them along. In many animals, sperm are produced in much larger quantities than eggs are.
Fertilization must take place in watery surroundings because otherwise sex cells would soon dry out and die. For animals that live in water, this requirement poses no problems. Most of them release their sex cells into the water, so fertilization takes place outside their bodies. On land, almost all animals use internal fertilization, in which the male introduces his sperm directly into the female's body. This need for direct physical contact has generated a vast range of complex patterns of behaviour. The male and female have to locate each other, and each has to demonstrate suitability as a partner. Elaborate courtship rituals have evolved to defuse an animal’s instinctive fear of being approached, and in animals that pair up for life, ritual behaviour maintains the bond between the two partners (see Animal Courtship and Mating).
 Most animals are either male or female, but this is not always the case. Many earthworms and snails are hermaphrodites, which means that each worm has both male and female sex organs. The advantage of this system is that any two partners can mate. To reproduce, an earthworm need not find another earthworm of a specific sex - any earthworm will do. In some animals, particularly sap-sucking insects such as aphids, females are able to produce young without having their eggs fertilized by a male. This method of asexual reproduction, called parthenogenesis, allows the animals to boost their numbers very quickly when environmental conditions are beneficial. However, few animals that are capable of parthenogenesis rely solely on this way of reproducing. Most also have a sexual phase in their life cycles, which creates genetic variety in their young.
Some animals, including humans, hyenas, and domesticated cattle and pigs, breed all year round. Most, however, breed in step with the seasons, so their reproductive systems are used only at particular times of the year. In many animals, seasonal changes trigger the release of hormones that bring the reproductive system into action. Hormones may trigger mating behaviour, such as territorial behaviour in males or nesting behaviour in females. They may also trigger other changes in a female that signal to males that she is fertile, such as the release of pheromones or the genital swelling seen in some female primates. One of the most dramatic effects of reproductive hormones is seen in male starlings. In these birds, the testes can grow 200 times bigger at the onset of the breeding season, shrinking again once the season is over. By shutting down the reproductive system when it is not needed, a starling sheds surplus weight, saving energy when it flies.
Knowledge of anatomy began in prehistoric times, when people cut up carcasses of animals they hunted, fished, or herded before cooking them. Primitive artists made crude drawings of animals, such as the images preserved in cave paintings, but very little of this ancient knowledge was recorded in writing. The ancient Egyptians probably had some knowledge of the internal anatomies of humans, cats, and other species because they mummified these animals. In the practice of mummifying, the Egyptians removed the internal organs from a dead body and filled the internal cavities of the body with materials that retard decay.
The first scholar to produce a large body of writing on comparative anatomy was the Greek philosopher Aristotle, who described and classified about 540 different kinds of animals during the 300s BC. Most other early writings on anatomy dealt primarily with the human body. However, much of the information in these writings was gathered by dissecting animals, so the first writers on human anatomy were, in fact, comparative anatomists. Early anatomists relied on animal dissections in part because the human body was held to be sacred by many ancient peoples, and its dissection was forbidden by law. Although the Greeks began to ease many of these restrictions after about 400 BC, much knowledge of human anatomy was still gleaned from dissections of domestic animals and monkeys. The preeminent anatomist of the ancient world, the Greek physician Galen, probably never dissected a human body. He dissected various domestic animals, monkeys, apes, and even some exotic species killed in the gladiatorial ring. His writings remained the primary authority on human anatomy for nearly 1,500 years, until the Belgian anatomist Andreas Vesalius pointed out that many of Galen’s observations on human anatomy were inaccurate because they were based on animal dissections.
The Renaissance in Europe (14th to 16th centuries AD) was a period of rapidly increasing knowledge about human anatomy, but some influential scientists continued to be interested in comparative anatomy. English physician William Harvey, best known for his studies on the circulation of the blood, also dissected many animals and advocated the study of comparative anatomy.
 The term comparative anatomy was first used by English scientist Nehemiah Grew, who published a book in 1681 describing the anatomy of stomachs and intestines in several different species. During the 18th century, knowledge of comparative anatomy advanced rapidly. The French naturalist Louis Jean-Marie Daubenton compared the anatomies of many different animals in a section of Buffon’s Natural History (a 36-volume work published between 1749 and 1789 that contained observations about the mineralogical, botanical, and zoological characteristics of the Earth).
 During the 19th century, comparative anatomy studies helped British scientist Charles Darwin to develop the modern theory of evolution. On a voyage to the Galápagos Islands off the western coast of South America, Darwin saw more than a dozen different species of finches living on various islands. All the finches were similar in size and in their dull, blackish or brownish gray coloring, but their beaks varied widely in size and shape. These similarities and differences suggested to Darwin that the various finch species might be related to one another and that they had all arisen from the same ancestral species.
 Around the same time, modern concepts of comparative anatomy were developing from the work of many great zoologists. Richard Owen, a British biologist known for his studies of the fossil birdlike dinosaur Archaeopteryx, published the third edition of his Comparative Anatomy in 1871. He also developed the concepts of homology and analogy. Thomas H. Huxley, another British biologist, published his Comparative Anatomy of Vertebrated Animals in 1871. He also established the modern concept of the evolution of the vertebrate skull. German biologist Ernst H. Haeckel contributed to the knowledge of the three germ layers that are found in the early embryos of most animals and develop into the organs of adults. He also established the biogenetic law, which states that during their development from fertilized egg to adult, animals pass through stages that recapitulate their evolutionary development. Although it is now known that this law does not hold absolutely (Haeckel constructed evolutionary trees based entirely on embryology that are now known to be false), Haeckel’s idea has remained profoundly influential.
 Anatomical research constantly refines our knowledge of how animals are related. Until recently, anatomists relied almost entirely on the evidence of physical features to understand evolutionary relationships, but today they use information from DNA as well. This biochemical evidence has helped to answer several questions about animal evolution. For example, in the late 1980s some scientists put forward a theory that large fruit-eating bats evolved separately from other bats. However, DNA evidence suggests that all bats evolved from primitive insect-eating ancestors, contradicting that theory.
 The explosive growth in molecular biology has also increased our understanding of how animal bodies develop, and how cells and tissues become specialized, a process known as differentiation. Differentiation has been studied in meticulous detail in one particular animal, a tiny, transparent nematode worm called Caenorhabditis elegans. Scientists have also identified genes in this animal that control the timing of differentiation in separate groups of cells. Another interesting discovery from this research is that cell death is a normal part of forming the body's organs. In Caenorhabditis elegans, over 100 of the animal's cells are programmed to die before the adult body is complete.
In 1995, three biologists - Edward B. Lewis and Eric F. Wieschaus of the United States, and Christiane Nüsslein-Volhard of Germany - were awarded the Nobel Prize in Physiology or Medicine for their discovery of master genes that control the position of different body parts. If one of these master genes is defective, the wrong kind of body part may develop, or the same part may be duplicated in several different places. Their work was originally carried out on fruit flies, but it has since been discovered that similar master genes occur in a wide range of animals, including nematode worms, frogs, and humans. Many anatomists believe that these genes may turn out to play an important part in animal evolution.
 The study of the biochemistry of memory is another exciting scientific enterprise, but one that can only be touched upon here. Scientists estimate that an adult human brain contains about 100 billion neurons. Each of these is connected to hundreds or thousands of other neurons, forming trillions of neural connections. Neurons communicate by chemical messengers called neurotransmitters. An electrical signal travels along the neuron, triggering the release of neurotransmitters at the synapse, the small gap between neurons. The neurotransmitters travel across the synapse and act on the next neuron by binding with protein molecules called receptors. Most scientists believe that memories are somehow stored among the brain’s trillions of synapses, rather than in the neurons themselves.
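The "trillions of synapses" figure follows directly from the counts given: about 100 billion neurons, each connected to hundreds or thousands of others. A quick order-of-magnitude check, taking a rough illustrative figure of a thousand connections per neuron:

```python
neurons = 100_000_000_000        # ~1e11 neurons in an adult human brain
connections_per_neuron = 1_000   # rough order-of-magnitude estimate

synapses = neurons * connections_per_neuron
print(f"{synapses:.0e}")  # 1e+14, i.e. about 100 trillion synapses
```

Even with this conservative per-neuron figure, the total comfortably reaches the trillions the text describes, which is why most scientists look to synapses rather than individual neurons as the likely substrate of memory.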
Scientists who study the biochemistry of learning and memory often focus on the marine snail Aplysia because its simple nervous system allows them to study the effects of various stimuli on specific synapses. A change in the snail’s behaviour due to learning can be correlated with a change at the level of the synapse. One exciting frontier of this research is identifying the specific neurotransmitter changes that accompany such learning.
Memory, in the psychological sense, refers to the processes by which people and other organisms encode, store, and retrieve information. Encoding refers to the initial perception and registration of information. Storage is the retention of encoded information over time. Retrieval refers to the processes involved in using stored information. Whenever people successfully recall a prior experience, they must have encoded, stored, and retrieved information about the experience. Conversely, memory failure - for example, forgetting an important fact - reflects a breakdown in one of these stages of memory.
 Memory is critical to humans and all other living organisms. Practically all of our daily activities - talking, understanding, reading, socializing - depend on our having learned and stored information about our environments. Memory allows us to retrieve events from the distant past or from moments ago. It enables us to learn new skills and to form habits. Without the ability to access past experiences or information, we would be unable to comprehend language, recognize our friends and family members, find our way home, or even tie a shoe. Life would be a series of disconnected experiences, each one new and unfamiliar. Without any sort of memory, humans would quickly perish.
 Philosophers, psychologists, writers, and other thinkers have long been fascinated by memory. Among their questions: How does the brain store memories? Why do people remember some bits of information but not others? Can people improve their memories? What is the capacity of memory? Memory also is frequently a subject of controversy because of questions about its accuracy. An eyewitness’s memory of a crime can play a crucial role in determining a suspect’s guilt or innocence. However, psychologists agree that people do not always recall events as they actually happened, and sometimes people mistakenly recall events that never happened.
 Memory and learning are closely related, and the terms often describe roughly the same processes. The term learning is often used to refer to processes involved in the initial acquisition or encoding of information, whereas the term memory more often refers to later storage and retrieval of information. However, this distinction is not hard and fast. After all, information is learned only when it can be retrieved later, and retrieval cannot occur unless information was learned. Thus, psychologists often refer to the learning/memory process as a means of incorporating all facets of encoding, storage, and retrieval.
 Although the English language uses a single word for memory, there are actually many different kinds. Most theoretical models of memory distinguish three main systems or types: sensory memory, short-term or working memory, and long-term memory. Within each of these categories are further divisions.
 Sensory memory refers to the initial, momentary recording of information in our sensory systems. When sensations strike our eyes, they linger briefly in the visual system. This kind of sensory memory is called iconic memory and refers to the usually brief visual persistence of information as it is being interpreted by the visual system. Echoic memory is the name applied to the same phenomenon in the auditory domain: the brief mental echo that persists after information has been heard. Similar systems are assumed to exist for other sensory systems (touch, taste, and smell), although researchers have studied these senses less thoroughly.
 American psychologist George Sperling demonstrated the existence of sensory memory in an experiment in 1960. Sperling asked subjects in the experiment to look at a blank screen. Then he flashed an array of 12 letters on the screen for one-twentieth of a second, arranged in three rows of four letters each.
 Subjects were then asked to recall as many letters from the image as they could. Most could only recall four or five letters accurately. Subjects knew they had seen more letters, but they were unable to name them. Sperling hypothesized that the entire letter-array image registered briefly in sensory memory, but the image faded too quickly for subjects to ‘see’ all the letters. To test this idea, he conducted another experiment in which he sounded a tone immediately after flashing the image on the screen. A high tone directed subjects to report the letters in the top row, a medium tone cued subjects to report the middle row, and a low tone directed subjects to report letters in the bottom row. Sperling found that subjects could accurately recall the letters in each row most of the time, no matter which row the tone specified. Thus, all of the letters were momentarily available in sensory memory.
 Sensory memory systems typically function outside of awareness and store information for only a very short time. Iconic memory seems to last less than a second. Echoic memory probably lasts a bit longer; estimates range up to three or four seconds. Usually sensory information coming in next replaces the old information. For example, when we move our eyes, new visual input masks or erases the first image. The information in sensory memory vanishes unless it captures our attention and enters working memory.
 Psychologists originally used the term short-term memory to refer to the ability to hold information in mind over a brief period of time. As conceptions of short-term memory expanded to include more than just the brief storage of information, psychologists created new terminology. The term working memory is now commonly used to refer to a broader system that both stores information briefly and allows manipulation and use of the stored information.
 We can keep information circulating in working memory by rehearsing it. For example, suppose you look up a telephone number in a directory. You can hold the number in memory almost indefinitely by saying it over and over to yourself. But if something distracts you for a moment, you may quickly lose it and have to look it up again. Forgetting can occur rapidly from working memory.
 Psychologists often study working memory storage by examining how well people remember a list of items. In a typical experiment, people are presented with a series of words, one every few seconds. Then they are instructed to recall as many of the words as they can, in any order. Most people remember the words at the beginning and end of the series better than those in the middle. This phenomenon is called the serial position effect because the chance of recalling an item is related to its position in the series. The results from one such experiment are shown in the accompanying chart entitled ‘Serial Position Effect.’ In this experiment, recall was tested either immediately after presentation of the list items or after 30 seconds. Subjects in both conditions demonstrated what is known as the primacy effect, which is better recall of the first few list items. Psychologists believe this effect occurs because people tend to process the first few items more than later items. Subjects in the immediate-recall condition also showed the recency effect, or better recall of the last items on the list. The recency effect occurs because people can store recently presented information temporarily in working memory. When the recall test is delayed for 30 seconds, however, the information in working memory fades, and the recency effect disappears.
 Working memory has a basic limitation: It can hold only a limited amount of information at one time. Early research on short-term storage of information focused on memory span—how many items people can correctly recall in order. Researchers would show people increasingly long sequences of digits or letters and then ask them to recall as many of the items as they could. In 1956 American psychologist George Miller reviewed many experiments on memory span and concluded that people could hold an average of seven items in short-term memory. He referred to this limit as ‘the magical number seven, plus or minus two’ because the results of the studies were so consistent. More recent studies have attempted to separate true storage capacity from processing capacity by using tests more complex than memory span. These studies have estimated a somewhat lower short-term storage capacity than did the earlier experiments. People can overcome such storage limitations by grouping information into chunks, or meaningful units. This topic is discussed in the Encoding and Recoding section of this article.
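The chunking idea above is easy to make concrete. The sketch below is illustrative only; the chunk size and the digit string are arbitrary choices, not taken from the studies cited:

```python
def chunk(digits, size=3):
    """Regroup a flat digit string into fixed-size chunks (illustrative sketch)."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

# Treated digit by digit, a 12-digit string exceeds the "seven, plus or minus
# two" span; regrouped, it becomes just four units to hold in mind.
print(chunk("149162536496"))  # ['149', '162', '536', '496']
```

Phone numbers exploit the same trick: a 10-digit number is far easier to hold as three chunks (area code, prefix, line number) than as ten separate digits.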
 Working memory is critical for mental work, or thinking. Suppose you are trying to solve the arithmetic problem 64 × 9 in your head. You probably would need to perform some intermediate calculations in your head before arriving at the final answer. The ability to carry out these kinds of calculations depends on working memory capacity, which varies individually. Studies have also shown that working memory changes with age. As children grow older, their working memory capacity increases. Working memory declines in old age and in some types of brain diseases, such as Alzheimer’s disease.
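The intermediate calculations mentioned above can be made explicit. The decomposition below is just one of many possible mental strategies, written out as code:

```python
# Compute 64 x 9 via easier intermediate results, as one might in one's head.
step1 = 64 * 10      # 640: multiplying by ten is easy
step2 = step1 - 64   # 576: subtract one 64 to correct for the extra ten

# Each intermediate value (640, then 576) must be held in working memory
# until the next step consumes it.
assert step2 == 64 * 9
print(step2)  # 576
```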
 Working memory capacity is correlated with intelligence (as measured by intelligence tests). This correlation has led some psychologists to argue that working memory abilities are essentially those that underlie general intelligence. The more capacity people have to hold information in mind while they think, the more intelligent they are. In addition, research suggests that there are different types of working memory. For example, the ability to hold visual images in mind seems independent from the ability to retain verbal information.
 The term long-term memory is somewhat of a catch-all phrase because it can refer to facts learned a few minutes ago, personal memories many decades old, or skills learned with practice. Generally, however, long-term memory describes a system in the brain that can store vast amounts of information on a relatively enduring basis. When you play soccer, remember what you had for lunch yesterday, recall your first birthday party, play a trivia game, or sing along to a favorite song, you draw on information and skills stored in long-term memory.
 Psychologists have different theories about how information enters long-term memory. The traditional view is that information enters short-term memory and, depending on how it is processed, may then transfer to long-term memory. However, another view is that short-term memory and long-term memory are arranged in a parallel rather than sequential fashion. That is, information may be registered simultaneously in the two systems.
 There seems to be no finite capacity to long-term memory. People can learn and retain new facts and skills throughout their lives. Although older adults may show a decline in certain capacities - for example, recalling recent events - they can still profit from experience even in old age. For example, vocabulary increases over the entire life span. The brain remains plastic and capable of new learning throughout one’s lifetime, at least under normal conditions. Certain neurological diseases, such as Alzheimer’s disease, can greatly diminish the capacity for new learning.
 Psychologists once thought of long-term memory as a single system. Today, most researchers distinguish three long-term memory systems: episodic memory, semantic memory, and procedural memory.
 Episodic memory refers to memories of specific episodes in one’s life and is what most people think of as memory. Episodic memories are connected with a specific time and place. If you were asked to recount everything you did yesterday, you would rely on episodic memory to recall the events. Similarly, you would draw on episodic memory to describe a family vacation, the way you felt when you won an award, or the circumstances of a childhood accident. Episodic memory contains the personal, autobiographical details of our lives.
 Semantic memory refers to our general knowledge of the world and all of the facts we know. Semantic memory allows a person to know that the chemical symbol for salt is NaCl, that dogs have four legs, that Thomas Jefferson was president of the United States, that 3 × 3 equals 9, and thousands of other facts. Semantic memories are not tied to the particular time and place of learning. For example, in order to remember that Thomas Jefferson was president, people do not have to recall the time and place that they first learned this fact. The knowledge transcends the original context in which it was learned. In this respect, semantic memory differs from episodic memory, which is closely related to time and place. Semantic memory also seems to have a different neural basis than episodic memory. Brain-damaged patients who have great difficulties remembering their own recent personal experiences often can access their permanent knowledge quite readily. Thus, episodic memory and semantic memory seem to represent independent capacities.
 Procedural memory refers to the skills that humans possess. Tying shoelaces, riding a bicycle, swimming, and hitting a baseball are examples of procedural memory. Procedural memory is often contrasted with episodic and semantic memory. Episodic and semantic memory are both classified as types of declarative memory because people can consciously recall facts, events, and experiences and then verbally declare or describe their recollections. In contrast, nondeclarative, or procedural, memory is expressed through performance and typically does not require a conscious effort to recall.
 Could you learn how to tie your shoelaces or to swim through purely declarative means - say, by reading or listening to descriptions of how to do it? If it were possible at all, the process would be slow, difficult, and unnatural. People best gain procedural knowledge by practicing the procedures directly, not via instructions given in words. Verbal coaching in sports is partly a case of trying to impart procedural knowledge through declarative means, although coaching by example (and videotape) may work better. Still, in most cases there is no substitute for practice. Procedural learning may take considerable effort, and improvements can occur over a long period of time. The accompanying chart, entitled ‘Practice and Speed in Cigar-Making,’ shows the effect of practice on Cuban factory workers making cigars. The performance of the workers continued to improve even after they had produced more than 100,000 cigars.
 Although long-term episodic, semantic, and procedural memory all represent independent systems, it would usually be wrong to think of a particular task as relying exclusively on one type. The examples used above (remembering yesterday’s events, knowing that Thomas Jefferson was president, or tying shoes) represent relatively pure cases. However, most human activities rely on the interaction of long-term memory systems. Consider the expression of social skills or, more specifically, table manners. If you know to set the dinner table with the fork to the left of each plate, is this an example of procedural memory, semantic memory, or even episodic memory from having witnessed a past example? Probably the answer is some blend of all three. In addition, procedural memory does not apply only to physical skills, as in the previous examples. Complex cognitive behavior, such as reading or remembering, also has a procedural component - the mental procedures we execute to perform these activities. Thus, the separation of procedural and declarative memory from one another is not clear-cut in all cases.
 Encoding is the process of perceiving information and bringing it into the memory system. Encoding is not simply copying information directly from the outside world into the brain. Rather, the process is properly conceived as recoding, or converting information from one form to another. The human visual system provides an example of how information can change forms. Light from the outside world enters the eye in the form of waves of electromagnetic radiation. The retina of the eye transduces (converts) this radiation to bioelectrical signals that the brain interprets as visual images. Similarly, when people encode information into memory, they convert it from one form to another to help them remember it later. For example, a simple digit, such as 7, can be recoded in many ways: as the word seven, the Roman numeral VII, a prime number, the square root of 49, and so on. Recoding is routine in memory. Each of us has a unique background and set of experiences that help or hinder us in learning new information. An ornithologist could learn a list of obscure bird names much more easily than most of us due to his or her prior knowledge about birds, which would permit efficient recoding.
 Recoding is often the key to efficient remembering. To understand the concept of recoding, first try to remember the following series of numbers by reading it once out loud, closing your eyes, and trying to recall the items in their correct order: one, four, nine, one, six, two, five, three, six, four, nine, six, four, eight, one. Test yourself now. If you are like most people, you might have recalled around 7 of the 15 digits in their correct order. However, a simple recoding strategy would have helped you to recall them effortlessly. Write the numbers out in digits and you may notice that they represent the squares of the numbers 1 to 9: 1, 4, 9, 16, 25, 36, 49, 64, 81. That is, 1 squared is 1, 2 squared is 4, 3 squared is 9, 4 squared is 16, and so on. Recoding the series of numbers as a meaningful rule (the squares of the numbers 1 to 9) would have permitted you to remember all 15 digits. Although this example is contrived, the principle that underlies it is universally valid: How well a person remembers information depends on how the information is recoded. Recoding is sometimes called chunking, because separate bits of information can be grouped into meaningful units, or chunks. For example, the five letters e, t, s, e, and l can be rearranged into sleet and one word remembered instead of five individual units.
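The recoding rule above can be verified in a few lines: regenerating the squares of 1 through 9 and concatenating their digits reproduces the full 15-digit series from the passage.

```python
# Regenerate the series from the rule "squares of the numbers 1 to 9".
squares = [n * n for n in range(1, 10)]
digit_string = "".join(str(s) for s in squares)

print(squares)                          # [1, 4, 9, 16, 25, 36, 49, 64, 81]
print(digit_string, len(digit_string))  # 149162536496481 15
```

Instead of 15 arbitrary digits, only the short rule needs to be stored; the digits can always be regenerated from it.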
 Psychologists have studied many different recoding strategies. One common strategy that people often use to remember items of information is to rehearse them, or to repeat them mentally. However, simply repeating information over and over again rarely aids long-term retention - although it works perfectly well to hold information, such as a phone number, in working memory. A more effective way to remember information is through effortful or elaborative processing, which involves thinking about information in a meaningful way and associating it with existing information in long-term memory.
 One effective form of effortful processing is turning information into mental imagery. For example, one experiment compared two groups of people that were given different instructions on how to encode a list of words into memory. Some people were told to repeat the words over and over, and some were told to form mental pictures of the words. For words referring to concrete objects, such as truck and volleyball, forming mental images of each object led to better later recall than did rote rehearsal.
 Thinking about the meaning of information is also a good technique for most memory tasks. Studies have found that the more deeply we process information, the more likely we are to recall it later. In 1975 Canadian psychologists Fergus Craik and Endel Tulving conducted a set of experiments that demonstrated this effect. The experimenters asked subjects to answer questions about a series of words, such as bear, which were flashed one at a time. For each word, subjects were asked one of three types of questions, each requiring a different level of processing or analysis. Sometimes subjects were asked about the word’s visual appearance: ‘Is the word in upper case letters?’ For other words, subjects were asked to focus on the sound of the word: ‘Does it rhyme with chair?’ The third type of question required people to think about the meaning of the word: ‘Is it an animal?’ When subjects were later given a recognition test for the words they had seen, they were poor at recognizing words they had encoded superficially by visual appearance or sound. They were far better at recognizing words they had encoded for meaning. (See the accompanying chart entitled ‘Depth of Processing and Memory.’)
 Although some information requires deliberate, effortful processing to store in long-term memory, a vast amount of information is encoded automatically, without effort or awareness. Every day each of us encodes and stores thousands of events and facts, most of which we will never need to recall. For example, people do not have to make a conscious effort to remember the face of a person they meet for the first time. They can easily recognize the person’s face in future encounters. Studies have shown that people also encode information about spatial locations, time, and the frequency of events without intending to. For instance, people can recognize how many times a certain word was presented in a long series of words with relative accuracy.
 People have developed many elaborate and imaginative recoding strategies, known as mnemonic devices, to aid them in remembering information.
 Encoding and storage are necessary to acquire and retain information. But the crucial process in remembering is retrieval, without which we could not access our memories. Unless we retrieve an experience, we do not really remember it. In the broadest sense, retrieval refers to the use of stored information.
 For many years, psychologists considered memory retrieval to be the deliberate recollection of facts or past experiences. However, in the early 1980s psychologists began to realize that people can be influenced by past experiences without any awareness that they are remembering. For example, a series of experiments showed that brain-damaged amnesic patients - who lose certain types of memory function - were influenced by previously viewed information even though they had no conscious memory of having seen the information before. Based on these and other findings, psychologists now distinguish two main classes of retrieval processes: explicit memory and implicit memory.
 Explicit memory refers to the deliberate, conscious recollection of facts and past experiences. If someone asked you to recall everything you did yesterday, this task would require explicit memory processes. There are two basic types of explicit memory tests: recall tests and recognition tests.
 In recall tests, people are asked to retrieve memories without the benefit of any hints or cues. A request to remember everything that happened to you yesterday or to recollect all the words in a list you just heard would be an example of a recall test. Suppose you were briefly shown a series of words: cow, prize, road, gem, hobby, string, weather. A recall test would require you to write down or say as many of the words as you could. If you were instructed to recall the words in any order, the test would be one of free recall. If you were directed to recall the words in the order they were presented, the test would be one of serial recall or ordered recall. Another type of test is cued recall, in which people are given cues or prompts designed to aid recall. Using the above list as an example, a cued recall test might ask, ‘What word on the list was related to car?’ In school, tests that require an essay or fill-in-the-blank response are examples of recall tests. All recall tests require people to explicitly retrieve events from memory.
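The scoring difference between free and serial recall can be sketched in code. The word list comes from the passage; the scoring functions and the sample responses are hypothetical illustrations, not any standard procedure:

```python
presented = ["cow", "prize", "road", "gem", "hobby", "string", "weather"]

def free_recall_score(responses):
    # Free recall: order is ignored; count distinct presented words produced.
    return len(set(responses) & set(presented))

def serial_recall_score(responses):
    # Serial recall: a word counts only if produced in its original position.
    return sum(1 for i, word in enumerate(responses)
               if i < len(presented) and word == presented[i])

responses = ["prize", "cow", "road", "gem"]
print(free_recall_score(responses))    # 4: all four words were on the list
print(serial_recall_score(responses))  # 2: only "road" and "gem" are in position
```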
 Recognition tests require people to examine a list of items and identify those they have seen before, or to determine whether they have seen a single item before. Multiple-choice and true-false exams are types of recognition tests. For example, a recognition test on the list of words above might ask, ‘Which of the following words appeared on the list? (a) plant (b) driver (c) string (d) radio.’ People can often recognize items that they cannot recall. You have probably had the experience of not being able to answer a question but then recognizing an answer as correct when someone else supplies it. Likewise, adults shown yearbook pictures of their high-school classmates often have difficulty recalling the classmates’ names, but they can easily pick the classmates’ names out of a list.
 In some cases, recall can be better than recognition. For example, if asked, ‘Do you know a famous person named Cooper?’ you might answer ‘no.’ However, given the cue ‘James Fenimore,’ you might recall American writer James Fenimore Cooper, even though you did not recognize the surname by itself.
 Implicit memory refers to using stored information without trying to retrieve it. People often retain and use prior experiences without realizing it. For example, suppose that the word serendipity is not part of your normal working vocabulary, and one day you hear the word used in a conversation. A day later you find yourself using the word in conversation and wonder why. The earlier exposure to the word primed you to retrieve it automatically in the right situation without intending to do so.
 Another example of implicit memory in everyday life is unintentional plagiarism. That is, people can copy the ideas of others without being aware they are doing so. The most famous case involved British singer-songwriter George Harrison, formerly of the Beatles. Harrison was sued because his 1970 hit song ‘My Sweet Lord’ sounded strikingly similar to ‘He’s So Fine,’ a 1963 hit by The Chiffons. Harrison denied that he had intentionally copied the earlier song but admitted that he had heard it before writing ‘My Sweet Lord.’ In 1976 a judge ruled against Harrison, concluding that the singer had been unconsciously influenced by his memory.
 Psychologists use the term priming to describe the relatively automatic change in performance resulting from prior exposure to information. Priming occurs even when people do not consciously remember being exposed to the information. One way to look for evidence of implicit memory, therefore, is to measure priming effects. In typical implicit memory experiments, subjects study a long list of words, such as assassin and boyhood. Later, subjects are presented with a series of word fragments (such as a_ _a_ _in and b_ _ho_d) or word ‘stems’ (as______ or bo_____) and are instructed to complete the fragment or stem with the first word that comes to mind. The subjects are not explicitly asked to recall the list words. Nevertheless, the previous presentation of assassin and boyhood primes subjects to complete the fragments with these words more often than would be expected by guessing. This priming effect occurs even if the subjects do not remember studying the words before—strong evidence of implicit memory. The hallmark of all implicit memory tests is that people are not required to remember; rather, they are given a task, and past experience is expressed on the test relatively automatically.
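For concreteness, checking whether a word completes a fragment like a_ _a_ _in can be done mechanically by treating each blank as a wildcard. The helper below is a hypothetical sketch, not part of any published experimental procedure:

```python
import re

def completes(fragment, word):
    """True if `word` fits a fragment such as 'a__a__in' ('_' marks a blank)."""
    # Each underscore becomes a single-character wildcard; fullmatch anchors
    # the pattern so the word must match the fragment's length exactly.
    return re.fullmatch(fragment.replace("_", "."), word) is not None

print(completes("a__a__in", "assassin"))  # True
print(completes("b__ho_d", "boyhood"))    # True
print(completes("a__a__in", "alarming"))  # False: fixed letters don't line up
```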
 Remarkably, even amnesic individuals show implicit memory. In one experiment, amnesic patients and normal subjects studied lists of words and then were given both an explicit memory test (free recall) and an implicit memory test (word-stem completion). Relative to control subjects, the amnesic patients failed miserably at the free-recall test. Due to their memory disorder, they could consciously remember very few of the list words. On the implicit test, however, the amnesic patients performed as well as or better than the normal subjects (see the accompanying chart entitled ‘Word Memory in Amnesia’). Even though the amnesic patients could not consciously access the desired information, they expressed prior learning in the form of priming on the implicit memory test. They retained the information without knowing it.
 Studies have found that a person’s performance on implicit memory tests can be relatively independent of his or her performance on explicit tests. Some factors that have large effects on explicit memory test performance have no effect—or even the opposite effect—on implicit memory test performance. For example, whether people pay attention to the appearance, the sound, or the meaning of words has a huge effect on how well they can explicitly recall the words later. But this variable has practically no effect on their implicit memory test performance; these implicit tests seem to tap a different form of memory.
 One fascinating feature of remembering is how a cue from the external world can cause us to suddenly remember something from years ago. For example, returning to where you once lived or went to school may bring back memories of events experienced long ago. Sights, sounds, and smells can all trigger recall of long dormant events. These experiences point to the critical nature of retrieval in remembering.
 A retrieval cue is any stimulus that helps us recall information in long-term memory. The fact that retrieval cues can provoke powerful recollections has led some researchers to speculate that perhaps all memories are permanent. That is, perhaps nearly all experiences are recorded in memory for a lifetime, and all forgetting is due not to the actual loss of memories but to our inability to retrieve them. This idea is an interesting one, but most memory researchers believe it is probably wrong.
 Two general principles govern the effectiveness of retrieval cues. One is called the encoding specificity principle. According to this principle, stimuli may act as retrieval cues for an experience if they were encoded with the experience. Pictures, words, sounds, or smells will cause us to remember an experience to the extent that they are similar to the features of the experience that we encoded into memory. For example, the smell of cotton candy may trigger your memory of a specific amusement park because you smelled cotton candy there.
 Distinctiveness is another principle that determines the effectiveness of retrieval cues. Suppose a group of people is instructed to study a list of 100 items. Ninety-nine are words, but one item in the middle of the list is a picture of an elephant. If people were given the retrieval cue ‘Which item was the picture?’ almost everyone would remember the elephant. However, suppose another group of people was given a different 100-item list in which the elephant picture appeared in the same position, but all the other items were also pictures of other objects and animals. Now the retrieval cue would not enable people to recall the picture of the elephant because the cue is no longer distinctive. Distinctive cues specify one or a few items of information.
 Overt cues such as sights and sounds can clearly induce remembering. But evidence indicates that more subtle cues, such as moods and physiological states, can also influence our ability to recall events. State-dependent memory refers to the phenomenon in which people can retrieve information better if they are in the same physiological state as when they learned the information. The initial observations that aroused interest in state-dependent memory came from therapists working with alcoholic patients. When sober, patients often could not remember some act they performed when intoxicated. For example, they might put away a paycheck while intoxicated and then forget where they put it. This memory failure is not surprising, because alcohol and other depressant drugs (such as marijuana, sedatives, and even antihistamines) are known to impair learning and memory. However, in the case of the alcoholics, if they got drunk again after a period of abstinence, they sometimes recovered the memory of where the paycheck was. This observation suggested that perhaps drug-induced states function as retrieval cues.
 A number of studies have confirmed this hypothesis. In one typical experiment, volunteers drank an alcoholic or nonalcoholic beverage before studying a list of words. A day later, the same subjects were asked to recall as many of the words as they could, either in the same state as they were in during the learning phase (intoxicated or sober) or in a different state. Not surprisingly, individuals intoxicated during learning but sober during the test did worse at recall than those sober during both phases. In addition, people who studied material sober and then were tested while intoxicated did worse than those sober for both phases. The most interesting finding, however, was that people intoxicated during both the learning and test phase did much better at recall than those who were intoxicated only during learning, showing the effect of state-dependent memory (see the chart entitled ‘State-Dependent Memory’). When people are in the same state during study and testing, their recall is better than those tested in a different state. However, one should not conclude that alcohol improves memory. As noted, alcohol and other depressant drugs usually impair memory and most other cognitive processes. Those who had alcohol during both phases remembered less than those who were sober during both phases.
 Psychologists have also studied the topic of mood-dependent memory. If people are in a sad mood when exposed to information, will they remember it better later if they are in a sad mood when they try to retrieve it? Although experiments testing this idea have produced mixed results, most find evidence for mood-dependent memory. Recall tests are usually more sensitive to mood- and state-dependent effects than are recognition or implicit memory tests. Recognition tests may provide powerful retrieval cues that overshadow the effects of more subtle state and mood cues.
 Mood- and state-dependent memory effects are further examples of the encoding specificity principle. If mood or drug state is encoded as part of the learning experience, then providing this cue during retrieval enhances performance.
 Psychologists have explored several puzzling phenomena of retrieval that nearly everyone has experienced. These include déjà vu, jamais vu, flashbulb memories, and the tip-of-the-tongue state.
 The sense of déjà vu (French for ‘already seen’) is the strange sensation of having been somewhere before, or experienced your current situation before, even though you know you have not. One possible explanation of déjà vu is that aspects of the current situation act as retrieval cues that unconsciously evoke an earlier experience, resulting in an eerie sense of familiarity. Another puzzling phenomenon is the sense of jamais vu (French for ‘never seen’). This feeling arises when people feel they are experiencing something for the first time, even though they know they must have experienced it before. The encoding specificity principle may partly explain jamais vu; despite the overt similarity of the current and past situations, the cues of the current situation do not match the encoded features of the earlier situation.
 A flashbulb memory is an unusually vivid memory of an especially emotional or dramatic past event. For example, the death of Princess Diana in 1997 created a flashbulb memory for many people. People remember where they were when they heard the news, whom they heard it from, and other seemingly fine details of the event and how they learned of it. Examples of other public events for which many people have flashbulb memories are the assassination of U.S. President John F. Kennedy in 1963, the explosion of the space shuttle Challenger in 1986, and the bombing of the Oklahoma City federal building in 1995. Flashbulb memories may also be associated with vivid emotional experiences in one’s own life: the death of a family member or close friend, the birth of a baby, being in a car accident, and so on.
 Are flashbulb memories as accurate as they seem? In one study, people were asked the day after the Challenger explosion to report how they learned about the news. Two years later the same people were asked the same question. One-third of the people gave answers different from the ones they originally reported. For example, some people initially reported hearing about the event from a friend, but then two years later claimed to have gotten the news from television. Therefore, flashbulb memories are not as faultless as is often supposed.
 Flashbulb memories may seem particularly vivid for a variety of reasons. First, the events are usually quite distinctive and hence memorable. In addition, many studies show that events causing strong emotion (either positive or negative) are usually well remembered. Finally, people often think about and discuss striking events with others, and this periodic rehearsal may help to increase retention of the memory.
 Another curious phenomenon is the tip-of-the-tongue state. This term refers to the situation in which a person tries to retrieve a relatively familiar word, name, or fact, but cannot quite do so. Although the missing item seems almost within grasp, its retrieval eludes the person for some time. The feeling has been described as like being on the brink of a sneeze. Most people regard the tip-of-the-tongue state as mildly unpleasant and its eventual resolution, if and when it comes, as a relief. Studies have shown that older adults are more prone to the tip-of-the-tongue phenomenon than are younger adults, although people of all ages report the experience.
 Often when a person cannot retrieve the correct bit of information, some other wrong item intrudes into his or her thoughts. For example, in trying to remember the name of a short, slobbering breed of dog with long ears and a sad face, a person might repeatedly retrieve beagle but know that it is not the right answer. Eventually the person might recover the sought-after name, basset hound.
 One theory of the tip-of-the-tongue state is that the intruding item essentially clogs the retrieval mechanism and prevents retrieval of the correct item. That is, the person cannot think of basset hound because beagle gets in the way and blocks retrieval of the correct name. Another idea is that the phenomenon occurs when a person has only partial information that is simply insufficient to retrieve the correct item, so the failure is one of activation of the target item (basset hound in this example). Both the partial activation theory and the blocking theory could be partly correct in explaining the tip-of-the-tongue phenomenon.
 One of the most controversial issues in the study of memory is the accuracy of recollections, especially over long periods of time. We would like to believe that our cherished memories of childhood and other periods in our life are faithful renditions of the past. However, several case studies and many experiments show that memories - even when held with confidence - can be quite erroneous.
 The Swiss psychologist Jean Piaget reported a striking case from his own past. He had a firm memory from early childhood of his nurse fending off an attempted kidnapping, with himself as the potential victim. He remembered his nanny pushing him in his carriage when a man came up and tried to kidnap him. He had a detailed memory of the man, of the location of the event, of scratches that his nanny received when she fended off the villain, and finally, of a police officer coming to the rescue. However, when Piaget was 15 years old, his nanny decided to confess her past sins. One of these was that she had made up the entire kidnapping story to attract sympathy and scratched herself to make it seem real. The events Piaget so vividly remembered from his childhood had never actually occurred! Piaget concluded that the false memory was probably implanted by the nanny’s frequent retelling of the original story over the years. Eventually, the scene became rooted in Piaget’s memory as an actual event.
 Psychologists generally accept the idea that long-term memories are reconstructive. That is, rather than containing an exact and detailed record of our past, like a video recording, our memories are instead more generic. As a better analogy, consider paleontologists who must reconstruct a dinosaur from bits and pieces of actual bones. They begin with a general idea or scheme of what the dinosaur looked like and then fit the bits and pieces into the overall framework. Likewise, in remembering, we begin with general themes about past events and later weave in bits and pieces of detail to develop a coherent story. Whether the narrative that we weave today can faithfully capture the distant past is a matter of dispute. In many cases psychologists have discovered that recollections can deviate greatly from the way the events actually occurred, just as in the anecdote about Piaget.
 Sir Frederic Bartlett, a British psychologist, argued for the reconstructive nature of memory in the 1930s. He introduced the term schema and its plural form schemata to refer to the general themes that we retain of experience. For example, if you wanted to remember a new fairy tale, you would try to integrate information from the new tale into your general schema for what a fairy tale is. Many researchers have shown that schemata can distort the memories that people form of events. That is, people will sometimes omit details of an experience from memory if they do not fit well with the schema. Similarly, people may confidently remember details that did not actually occur because they are consistent with the schema.
 Another way our cognitive system introduces error is by means of inference. Whenever humans encode information, they tend to make inferences and assumptions that go beyond the literal information given. For example, one study showed that if people read a sentence such as ‘The karate champion hit the cinder block,’ they would often remember the sentence as ‘The karate champion broke the cinder block.’ The remembered version of the events is implied by the original sentence but is not literally stated there (the champion may have hit the block and not broken it). Many memory distortions arise from these errors of encoding, in which the information encoded into memory is not literally what was perceived but is some extension of it.
 The question of memory distortion has particular importance in the courtroom. Each year thousands of people are charged with crimes solely on the basis of eyewitness testimony, and in many trials an eyewitness’s testimony is the main evidence by which juries decide a suspect’s guilt or innocence. Are eyewitnesses’ memories accurate? Although eyewitness testimony is often correct, psychologists agree that witnesses are not always accurate in their recollections of events. We have already described how people often remember events in a way that fits with their expectations or schema for a situation. In addition, evidence shows that memories may be distorted after an event has occurred. After experiencing or seeing a crime, an eyewitness is exposed to a great deal of further information related to the crime. The witness may be interrogated by police, by attorneys, and by friends. He or she may also read information related to the case. Such information, coming weeks or months after the crime, can cause witnesses to reconstruct their memory of the crime and change what they say on the witness stand.
 American psychologist Elizabeth Loftus has conducted many experiments that demonstrate how eyewitnesses can reconstruct their memories based on misleading information. In one study, subjects watched a videotape of an automobile accident involving two cars. Later they were given a questionnaire about the incident, one item of which asked, ‘About how fast were the cars going when they hit each other?’ For some groups of subjects, however, the verb hit was replaced by smashed, collided, bumped, or contacted. Although all subjects viewed the same videotape, their speed estimates differed considerably as a function of how the question was asked. The average speed estimate was 32 mph when the verb was contacted, 34 mph when it was hit, 38 mph when it was bumped, 39 mph when it was collided, and 41 mph when it was smashed. In a follow-up study, subjects were asked a week later whether there was any broken glass at the accident scene. In reality, the film showed no broken glass. Those questioned with the word smashed were more than twice as likely to ‘remember’ broken glass than those asked the question with hit. The information coming in after the original event was integrated with that event, causing it to be remembered in a different way.
 This study, and dozens of others like it, shows the power of leading questions: The form in which the question is asked helps determine its answer. Our memories are not encapsulated little packets lying in the brain undisturbed until they are needed for retrieval. Rather, people are prone to the misinformation effect - the tendency to distort one’s memory of an event when later exposed to misleading information about it. Eyewitnesses’ testimony can be tainted and altered by information they hear or see after the critical event in question. Therefore, in court cases one must carefully consider whether the testimony of an eyewitness could possibly have been altered through misleading suggestions provided between the time of the crime and the court case.
 The problem of determining whether memories are accurate is even more difficult when children are the witnesses. Research shows that in some situations children are more prone to memory distortions than are young adults. In addition, older adults (over 70 years of age) often show a greater tendency to memory distortion than do younger adults.
 Even though psychologists have shown that memories can be distorted and that people can remember things that never occurred, our memories are certainly not totally faulty. Usually memory does capture the gist of events that have occurred to us, even if details may be readily distorted.
 Can people recover memories of childhood experiences in adulthood, ones that they had never thought about since childhood? Can a powerful retrieval cue suddenly trigger a memory for some long-lost event? Although these questions are interesting, scientific evidence does not yet exist to answer them convincingly. Of course, people often do remember childhood experiences quite clearly, but these memories are usually of significant events that have been repeatedly retrieved over the years. The questions above, on the other hand, pertain to unique events that have not been repeatedly retrieved. Can people remember something when they are 40 years old that happened to them when they were 10 years old—something that they have never thought about during the intervening 30 years?
 Such questions take on renewed relevance in what is called the recovered memory controversy. Although the term recovered memory could be applied to retrieval of any memory from the distant past, it is normally used to refer to a particular type of case in contemporary psychology: the long-delayed recovery of memories of childhood sexual abuse. In a typical case, a person - often, but not always, undergoing psychotherapy - claims to recover a memory of some horrific childhood event. The prototypical case involves an adult woman recovering a memory of being sexually abused by a male figure from her childhood, such as being raped by a father, uncle, or teacher. Sometimes the memory is recovered suddenly, but often the recovery is gradual, occurring over days and weeks. After recovering the memory, the person may confront and accuse the individual deemed responsible, or even take the person to court. The accused person almost always vehemently denies the allegation and claims the events never took place. Who is to be believed?
 A huge debate swirls over the accuracy of recovered memories. Proponents of their accuracy believe in the theory of repression, which is discussed in a subsequent section of this article. According to this theory, memories for terrible events (especially of a sexual nature) can be repressed, or banished to an unconscious state. The memories may lie dormant for years, but with great effort and appropriate cues, they can be retrieved with relative accuracy. Critics point out that there is little evidence supporting the concept of repression, aside from some reports on individual cases. The critics believe that the processes that give rise to false memories - suggestion and imagination—may better explain the phenomenon of recovered memories.
 Without corroborating evidence, there is no way to check the accuracy of recovered memories. Thus, even though people may sincerely believe they have recovered a memory of an event from their distant past, the event usually remains a matter of belief, not of fact. Because psychologists know so little about recovery of distant memories, even of normal experiences, the debate over recovered memories is not likely to be resolved soon. For more detail on the recovered memory controversy, see the sidebar ‘Recovered Memories and False Memories’ that accompanies this article.
 Forgetting is defined as the loss of information over time. Under most conditions, people recall information better soon after learning it than after a long delay; as time passes, they forget some of the information. We have all failed to remember some bit of information when we need it, so we often see forgetting as a bother. However, forgetting can also be useful because we need to continually update our memories. When we move and receive a new telephone number, we need to forget the old one and learn the new one. If you park your car every day on a large lot, you need to remember where you parked it today and not yesterday or the day before. Thus, forgetting can have an adaptive function.
 The subject of forgetting is one of the oldest topics in experimental psychology. German psychologist Hermann Ebbinghaus initiated the scientific study of human memory in experiments that he began in 1879 and published in 1885 in his book, On Memory. Ebbinghaus developed an ingenious way to measure forgetting. In order to avoid the influence of familiar material, he created dozens of lists of nonsense syllables, which consisted of pronounceable but meaningless three-letter combinations such as XAK or CUV. He would learn a list by repeating the items in it over and over, until he could recite the list once without error. He would note how many trials or how long it took him to learn the list. He then tested his memory of the list after an interval ranging from 20 minutes to 31 days. He measured how much he had forgotten by the amount of time or the number of trials it took him to relearn the list. By conducting this experiment with many lists, Ebbinghaus found that the rate of forgetting was relatively consistent. Forgetting occurred rapidly at first and then seemed to level off over time (see the accompanying chart entitled ‘Forgetting Curve’). Other psychologists have since confirmed that the general shape of the forgetting curve holds true for many different types of material. Some researchers have argued that with very well learned material, the curve eventually flattens out, showing no additional forgetting over time.
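 The shape of this curve can be reproduced from the descriptive formula Ebbinghaus himself fit to his savings data, b = 100k / ((log t)^c + k), with t in minutes and the constants k = 1.84 and c = 1.25 that he reported. The Python sketch below is purely illustrative; it evaluates that historical curve fit, not a modern model of memory:

```python
import math

def savings(t_minutes, k=1.84, c=1.25):
    """Ebbinghaus's descriptive fit to his relearning ('savings') data:
    b = 100k / ((log10 t)^c + k), with t in minutes.
    k and c are the constants he reported; treat this as a historical
    curve fit, not a theory of forgetting."""
    return 100 * k / (math.log10(t_minutes) ** c + k)

# Savings at some of Ebbinghaus's retention intervals.
intervals = {"20 minutes": 20, "1 hour": 60, "1 day": 1440, "31 days": 44640}
curve = {label: round(savings(t), 1) for label, t in intervals.items()}
# The drop is steep at first (about 57% savings after 20 minutes)
# and then levels off (about 21% savings after 31 days).
```

Note how the formula captures the chart's qualitative shape: the loss between 20 minutes and one day is far larger than the loss between one day and 31 days.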
 Ebbinghaus’s forgetting curve illustrated the loss of information from long-term memory. Researchers have also studied rate of forgetting for short-term or working memory. In one experiment, subjects heard an experimenter speak a three-letter combination (such as CYG or FTQ). The subjects’ task was to repeat back the three letters after a delay of 3, 6, 9, 12, 15, or 18 seconds. To prevent subjects from mentally rehearsing the letters during the delay, they were instructed to count backward by threes from a random three-digit number, such as 361, until signaled to recall the letters. As shown in the accompanying chart entitled ‘Duration of Working Memory,’ forgetting occurs very rapidly in this situation. Nevertheless, it follows the same general pattern as in long-term memory, with sharp forgetting at first and then a declining rate of forgetting. Psychologists have debated for many years whether short-term and long-term forgetting have similar or different explanations.
 The oldest idea about forgetting is that it is simply caused by decay. That is, memory traces are formed in the brain when we learn information, and they gradually disintegrate over time. Although decay theory was accepted as a general explanation of forgetting for many years, most psychologists do not lend it credence today for several reasons. First, decay theory does not really provide an explanation of forgetting, but merely a description. That is, time by itself is not a causative agent; rather, processes operating over time cause effects. Consider a bicycle left out in the rain that has rusted. If someone asked why it rusted, he or she would not be satisfied with the answer of ‘time out in the rain.’ A more accurate explanation would refer to oxidation processes operating over time as the cause of the rusty bicycle. Likewise, memory decay merely describes the fact of forgetting, not the processes that cause it.
 The second problem for decay theory is the phenomenon of reminiscence, the fact that sometimes memories actually recover over time. Experiments confirm an observation experienced by most people: One can forget some information at one point in time and yet be able to retrieve it perfectly well at a later point. This feat would be impossible if memories inevitably decayed further over time. A final reason that decay theory is no longer accepted is that researchers accumulated support for a different theory—that interference processes cause forgetting.
 According to many psychologists, forgetting occurs because of interference from other information or activities over time. A now-classic experiment conducted in 1924 by two American psychologists, John Jenkins and Karl Dallenbach, provided the first evidence for the role of interference in forgetting. The experimenters enlisted two students to learn lists of nonsense syllables either late at night (just before going to bed) or the first thing in the morning (just after getting up). The researchers then tested the students’ memories of the syllables after one, two, four, or eight hours. If the students learned the material just before bed, they slept during the time between the study session and the test. If they learned the material just after waking, they were awake during the interval before testing. The researchers’ results are shown in the accompanying chart entitled ‘Forgetting in Sleep and Waking.’ The students forgot significantly more while they were awake than while they were asleep. Even when wakened from a sound sleep, they remembered the syllables better than when they returned to the lab for testing during the day. If decay of memories occurred automatically with the passage of time, the rate of forgetting should have been the same during sleep and waking. What seemed to cause forgetting was not time itself, but interference from activities and events occurring over time.
 There are two types of interference. Proactive interference occurs when prior learning or experience interferes with our ability to recall newer information. For example, suppose you studied Spanish in tenth grade and French in eleventh grade. If you then took a French vocabulary test much later, your earlier study of Spanish vocabulary might interfere with your ability to remember the correct French translations. Retroactive interference occurs when new information interferes with our ability to recall earlier information or experiences. For example, try to remember what you had for lunch five days ago. The lunches you have had for the intervening four days probably interfere with your ability to remember this event. Both proactive and retroactive interference can have devastating effects on remembering.
 Another possible cause of forgetting resides in the concept of repression, which refers to forgetting an unpleasant event or piece of information due to its threatening quality. The idea of repression was introduced in the late 19th century by Austrian physician Sigmund Freud, the founder of psychoanalysis. According to Freudian theory, people banish unpleasant events into their unconscious mind. However, repressed memories may continue to unconsciously influence people’s attitudes and behaviors and may result in unpleasant side effects, such as unusual physical symptoms and slips of speech. A simple example of repression might be forgetting a dentist appointment or some other unpleasant daily activity. Some theorists believe that it is possible to forget entire episodes of the past—such as being sexually abused as a child—due to repression. The concept of repression is complicated and difficult to study scientifically. Most evidence exists in the form of case studies that are usually open to multiple interpretations. For this reason, many memory researchers are skeptical of repression as an explanation of forgetting, although this verdict is by no means unanimous. For further information on repressed memories, see the sidebar ‘Recovered Memories and False Memories’ that accompanies this article.
 One of the most exciting topics of scientific investigation lies in cognitive neuroscience: How do physical processes in the brain give rise to our psychological experiences? In particular, a great deal of research is trying to uncover the biological basis of learning and memory. How does the brain code experience so that it can be later remembered? Where do memory processes occur in the brain?
 In the early and mid-1900s, psychologists engaged in the ‘search for the engram.’ They used the term engram to refer to the physical change in the nervous system that occurs as a result of experience. (Today most psychologists use the term memory trace to describe the same thing.) The researchers hoped to find some particular location in the brain where memories were stored. This early work, conducted mostly with animals, failed to find a specific locus of memory in the brain. For example, American psychologist Karl Lashley trained rats to solve a maze, then surgically removed various parts of the rats’ brains. No matter what part of the brain he removed, the rats always retained at least some ability to solve the maze. From such research, psychologists concluded that memory is distributed across the brain, not localized in one place.
 Modern research confirms the hypothesis that memories are not localized in one place in the brain, but rather involve interacting circuits operating across the brain. Many of the neural regions used in perceiving and attending to information seem also to be involved in the encoding and subsequent retrieval of information. Thus, although different brain regions perform different memory-related processes, the memories themselves do not appear to reside in any particular place.
 The hippocampus is thought to be one of the most important brain structures involved in memory. The case of the patient H.M. (only his initials were used to preserve his anonymity), one of the most famous case studies in neuropsychology, strikingly demonstrates the importance of the hippocampus. In 1953, as a 27-year-old man, H.M. underwent brain surgery to control severe epileptic seizures. The surgeons removed his medial temporal lobes, which included most of the hippocampus, the amygdala, and surrounding structures. Although the operation successfully controlled H.M.’s seizures, it had an altogether unexpected and devastating side effect: H.M. was unable to form new long-term memories that he could later retrieve. That is, he could not remember anything that happened to him after the surgery. His memory of events prior to the surgery was mostly intact, and his reasoning and thinking skills remained strong. But he could not remember new people he met or new experiences for more than a few minutes. Researchers concluded that the hippocampus and its surrounding structures in the medial temporal lobe play a critical role in the encoding of episodic memories, especially in binding elements of memories together to locate the memories in particular times and places.
 Further evidence for the importance of the hippocampus and other regions of the brain in human memory has been provided by advanced brain imaging techniques, such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). Brain imaging methods allow researchers to see the activity of the living human brain on a computer screen as a person engages in different types of cognitive tasks, such as reading, solving math problems, or memorizing a list of words. These scanning methods take advantage of the fact that when a brain region becomes active, the rate at which neurons (brain cells) fire increases within this region. Increased neuronal firing in a region causes an increase in blood flow to that region, which the scanners can measure. Therefore, if a person is encoding new information into memory and the hippocampus is active during encoding, we would expect to see increased blood flow to the hippocampus. This is exactly the pattern observed in most studies.
 Neuroimaging techniques have revealed other brain regions involved in memory. The frontal lobes play an important role in encoding and retrieving memories. For example, certain areas of the left frontal lobe seem especially active during encoding of memories, whereas those in the right frontal lobe are more active during retrieval. An area in the right anterior prefrontal cortex becomes active when a person is trying to retrieve a previously experienced episode. Some evidence indicates that this region may be even more active when the retrieval attempt is successful - that is, when the person not only attempts to remember but is able to remember some previous occurrence.
 The study of the biochemistry of memory is another exciting scientific enterprise, but one that can only be touched upon here. Scientists estimate that an adult human brain contains about 100 billion neurons. Each of these is connected to hundreds or thousands of other neurons, forming trillions of neural connections. Neurons communicate by chemical messengers called neurotransmitters. An electrical signal travels along the neuron, triggering the release of neurotransmitters at the synapse, the small gap between neurons. The neurotransmitters travel across the synapse and act on the next neuron by binding with protein molecules called receptors. Most scientists believe that memories are somehow stored among the brain’s trillions of synapses, rather than in the neurons themselves.
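 The ‘trillions of neural connections’ figure follows from simple arithmetic on the numbers above; the per-neuron connection count used here is a rough assumption, taken from the ‘hundreds or thousands’ range in the text:

```python
neurons = 100e9                 # ~100 billion neurons, the estimate cited above
connections_per_neuron = 1000   # assumed midpoint of "hundreds or thousands"
synapses = neurons * connections_per_neuron
# ~1e14 connections: on the order of a hundred trillion synapses,
# which is why memories are thought to be stored across synapses
# rather than in any single neuron.
```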
 Scientists who study the biochemistry of learning and memory often focus on the marine snail Aplysia because its simple nervous system allows them to study the effects of various stimuli on specific synapses. A change in the snail’s behavior due to learning can be correlated with a change at the level of the synapse. One exciting scientific frontier is discovering the changes in neurotransmitters that occur at the level of the synapse.
 Other researchers have implicated glucose (a sugar) and insulin (a hormone secreted by the pancreas) as important to learning and memory. Humans and other animals given these substances show an improved capacity to learn and remember. Typically, when animals or humans ingest glucose, the pancreas responds by increasing insulin production, so it is difficult to determine which substance contributes to improved performance. Some studies in humans that have systematically varied the amount of glucose and insulin in the blood have shown that insulin may be the more important of the two substances for learning.
 Scientists also have examined the influence of genes on learning and memory. In one study, scientists bred strains of mice with extra copies of a gene that helps build part of a protein called the NMDA (N-methyl-D-aspartate) receptor. This protein acts as a receptor for certain neurotransmitters. The genetically altered mice outperformed normal mice on a variety of tests of learning and memory. In addition, other studies have found that chemically blocking NMDA receptors impairs learning in laboratory rats. Future discoveries from genetic and biochemical studies may lead to treatments for memory deficits from Alzheimer’s disease and other conditions that affect memory.
 Amnesia means loss of memory. There are many different types of amnesias, but they fall into two major classes according to their cause: functional amnesia and organic amnesia. Functional amnesia refers to memory disorders that seem to result from psychological trauma, not an injury to the brain. Organic amnesia involves memory loss caused by specific malfunctions in the brain. Another type of amnesia is infantile amnesia, which refers to the fact that most people lack specific memories of the first few years of their life.
 Severe psychological trauma can sometimes cause functional amnesia. People with functional amnesia seem to have nothing physically wrong with their brain, even though the traumatic event presumably affects their brain in some way. In dissociative amnesia (sometimes called limited amnesia), a person loses memory of some important past experiences. For example, a person victimized by a crime may lose his or her memory for the event. Soldiers returning from battle sometimes experience similar symptoms.
 Another type of functional amnesia is dissociative fugue, also referred to as functional retrograde amnesia. People with this disorder have much more extensive forgetting that may obscure their whole past. They commonly forget their personal identity and personal memories, and they often unexpectedly wander away from home. Typically the fugue state ends by itself within a few days or weeks. Often, after recovery the individual fails to remember anything that occurred during the fugue state.
 Dissociative identity disorder, also called multiple personality disorder, is a type of amnesia in which a person appears to have two or more distinct personal identities. These identities alternate in their control of the individual’s conscious experiences, thoughts, and actions. In many cases, the person’s primary identity cannot recall what happened while the individual was controlled by another identity.
 Although functional amnesias are a recurrent theme of television shows and movies, relatively few well-documented cases exist in the scientific literature. Most experts believe that these conditions do exist, but that they are exceedingly rare.
 Organic amnesia refers to memory loss produced by specific brain damage. Typically, these amnesias occur as part of brain disorders caused by tumors, strokes, head trauma, or degenerative diseases, such as Alzheimer’s disease. In addition, certain psychoactive drugs (drugs affecting mood or behavior) can cause amnesia, as can certain dietary deficiencies and electroconvulsive therapy for depression. Organic amnesias may be temporary or permanent. Amnesia resulting from a mild concussion or from electroconvulsive therapy is usually temporary, whereas severe head injuries may lead to permanent memory loss.
 The case of the patient H.M., described earlier in this article, is an example of organic amnesia. In 1953 brain surgery for epilepsy left H.M. with dramatic anterograde amnesia, meaning he was unable to remember new information and events that occurred after his operation. Somewhat surprisingly, this severe impairment in the ability to learn new information was accompanied by no detectable impairment in his general intellectual ability or in his ability to use or understand language. H.M. also showed some retrograde amnesia, or inability to remember events before his surgery. For example, he could not recall that his favourite uncle had died three years earlier. Still, most of his general knowledge was intact, and he performed well on a test of famous faces (of people who had become famous prior to 1950).
 Studies of H.M. and other amnesic patients have provided surprising insights into the workings of memory. One remarkable finding is that even though H.M. had severe anterograde amnesia, he (and other amnesic patients like him) still performed normally on tests of implicit memory. For example, H.M. could learn new motor skills, even though he would have no conscious memory of doing so. Even in dense, or severe, amnesias, not all memory abilities are impaired. For more information on implicit memory, see the Implicit Memory section of this article.
 Korsakoff’s syndrome, also called Korsakoff’s psychosis, is a disorder that produces severe and often permanent amnesia. In this condition, years of chronic alcoholism and thiamine (vitamin B1) deficiency cause brain damage, particularly to the thalamus, which helps process sensory information, and to the mammillary bodies, which lie beneath the thalamus. Some patients also have damage to the cortex and cerebellum. Korsakoff’s patients show severe anterograde amnesia, or difficulty learning anything new. In addition, most suffer from retrograde amnesia ranging from mild to severe and typically cannot remember recent experiences. The condition is also associated with other intellectual deficits, such as confusion and disorientation. Korsakoff’s syndrome is named after Sergei Korsakov (Korsakoff), the Russian neurologist who first described it in the late 19th century.
 Amnesia also occurs in Alzheimer’s disease, a condition in which the neurons in the brain gradually degenerate, hindering brain function. Damage to the hippocampus and frontal lobes impairs memory. Many other types of organic amnesias exist. For example, in large doses, most depressant drugs can cause acute loss of memory. With severe alcohol or marijuana intoxication, people often forget events that occurred while under the influence of the drug.
 Infantile amnesia, also called childhood amnesia, refers to the fact that people can remember very little about the first few years of their life. Surveys have shown that most people report their earliest memory to be between their third and fourth birthdays. Furthermore, people’s memories of childhood generally do not become a continuous narrative until after about seven years of age.
 Psychologists do not know what causes infantile amnesia, but they have several theories. One view is that brain structures critical to memory are too immature during the first few years of life to record long-term memories. Another theory is that children cannot remember events that occurred before they mastered language. In this view, language provides a system of symbolic representation by which people develop narrative stories of their lives. Such a narrative framework may be necessary for people to remember autobiographical events in a coherent context.
 The phenomenon of infantile amnesia does not mean that infants and young children cannot learn. After all, babies learn to stand, walk, and talk. Scientific evidence indicates that even young infants can learn and retain information well. For example, one experiment found that three-month-old babies could learn that kicking their legs moves a mobile over their crib. Up to a month later, the babies could still demonstrate their knowledge that kicking moved the mobile. Infants and toddlers seem to retain implicit memories of their experiences.
 All people differ somewhat in their ability to remember information. However, some individuals have remarkable memories and perform feats that normal individuals could never hope to achieve. These individuals, sometimes called mnemonists (pronounced ‘nih-MAHN-ists’), are considered to have exceptional memory.
 Psychologists have described several cases of exceptional memory. Aleksandr R. Luria, a Russian neuropsychologist, described one of the most famous cases in his book The Mind of a Mnemonist (1968). Luria recounted the abilities of S. V. Shereshevskii, a man he called S. Luria studied Shereshevskii over many years and watched him perform remarkable memory feats. However, until Luria began studying these feats, Shereshevskii was unaware of how extraordinary his talents were. For example, Shereshevskii could study a blackboard full of nonsense material and then reproduce it at will years later. He could also memorize long lists of nonsense syllables, extremely complex scientific formulas, and numbers more than 100 digits long. In each case, Shereshevskii could recall the information flawlessly, even if asked to produce it in reverse order. Luria reported one instance in which Shereshevskii was able to recall a 50-word list when the test was given without warning 15 years after presentation of the list! He recalled all 50 words without a single error.
 The primary technique Shereshevskii used was mental imagery. He generated very rich mental images to represent information. In addition, part of his ability might have been due to his astonishing capacity for synesthesia. Synesthesia occurs when information coming into one sensory modality, such as sound, evokes a sensation in another sensory modality, such as sight, taste, smell, or touch. All people have synesthesia to a slight degree. For example, certain colours may ‘feel’ warm or cool. However, Shereshevskii’s synesthesia was extremely vivid and unusual. For example, Shereshevskii once told a colleague of Luria’s, ‘What a crumbly yellow voice you have.’ He also associated numbers with shapes, colors, and even people. Synesthetic reactions probably improved Shereshevskii’s memory because he could encode events in a very elaborate way. But they often caused him confusion, too. For example, reading was difficult because each word in a sentence evoked its own mental image, interfering with comprehension of the sentence as a whole.
 A second case of exceptional memory illustrates the talent some people display for remembering certain types of material. In a series of tests in the 1980s and 1990s, Rajan Srinavasen Mahadevan (known as Rajan) demonstrated a remarkable talent for remembering numbers, but for other types of material, his memory ability tested in the normal range. Rajan memorized the mathematical constant pi, which begins 3.14159 and continues indefinitely with no known pattern, to nearly 32,000 decimal places! If given a string of digits, within a few seconds he could accurately say whether or not the string appears in the first 32,000 digits of pi. He could also rapidly identify any of the first 10,000 digits of pi when given a specific decimal place. For example, he could tell what digit is in decimal place 6,243 in about 12 seconds, and he rarely made errors on this task. Rajan demonstrated great skill at learning new numerical information.
 Shereshevskii and Rajan scored in the normal range on standard intelligence tests. Another group of people, those with savant syndrome (formerly called idiot savants), usually score low on intelligence tests but have one ‘island’ of outstanding cognitive ability. Many children and adults who are deemed savants have extraordinary memory. Psychologists have studied many cases of savant syndrome, but its nature remains a mystery.
 Cases of exceptional memory stand as remarkable puzzles whose implications for normal memory functioning are unclear. In some cases the remarkable talents exemplify techniques (such as mental imagery) that are known to magnify normal memory ability. These striking cases have not been integrated well into the scientific study of memory, but generally stand apart as curiosities that cannot yet be explained in any meaningful way.
 Memory improvement techniques are called mnemonic devices or simply mnemonics. Mnemonics have been used since the time of the ancient Greeks and Romans. In ancient times, before writing was easily accomplished, educated people were trained in the art of memorizing. For example, orators had to remember points they wished to make in long speeches. Many of the techniques developed thousands of years ago are still used today. Modern research has allowed psychologists to better understand and refine the techniques.
 All mnemonic devices depend upon two basic principles discussed earlier in this article: (1) recoding of information into forms that are easy to remember, and (2) supplying oneself with excellent retrieval cues to recall the information when it is needed. For example, many schoolchildren learn the colors of the visible spectrum by learning the imaginary name ROY G. BIV, which stands for red, orange, yellow, green, blue, indigo, violet. Similarly, to remember the names of the Great Lakes, remember HOMES (Huron, Ontario, Michigan, Erie, and Superior). Both of these examples illustrate the principle of recoding. Several bits of information are repackaged into an acronym that is easier to remember. The letters of the acronym serve as retrieval cues that enable recall of the desired information.
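 The two principles are easy to see in a short sketch. The code below is purely illustrative (the list and names are not part of any standard mnemonic): it recodes the Great Lakes into the acronym HOMES, then uses each letter as a retrieval cue.

```python
# First-letter mnemonic: recode a list into an acronym,
# then use each letter as a retrieval cue for the full item.
lakes = ["Huron", "Ontario", "Michigan", "Erie", "Superior"]

# Recoding: compress the list into an easy-to-remember acronym.
acronym = "".join(lake[0] for lake in lakes)

# Retrieval: each letter cues the matching item.
cues = {lake[0]: lake for lake in lakes}

print(acronym)    # HOMES
print(cues["E"])  # Erie
```

 The acronym is the compact recoding; the letter-to-item dictionary plays the role of the retrieval cues.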
 Psychologists and others have devised much more elaborate recoding and retrieval schemes. Three of the most common mnemonic techniques are the method of loci, the pegword method, and the PQ4R method. Research has shown that mnemonic devices such as these permit greater recall than do strategies that people usually use, such as ordinary rehearsal (repeating information to oneself).
 One of the oldest mnemonics is the method of loci (loci is a Latin word meaning ‘places’). This method involves forming vivid interactive images between specific locations and items to be remembered. The first step is to learn a set of places. For instance, you might familiarize yourself with various locations around your house: the front sidewalk, the front doorstep, the front door, the foyer and so on. Once you have permanently memorized the locations, you can then use them to recode experiences for later recall. You can use the method of loci to remember any set of information, such as a grocery list or points in a speech. The best strategy is to convert each item of information into a vivid mental image by putting it at a familiar location where it can be ‘seen’ in the mind. So, for example, you might remember a grocery list as bread on the front sidewalk, milk on the front porch, bananas hanging from the front door, and so on. When you are at the grocery store and need to remember the list, you can mentally walk through the house and see what object is in each spot. The locations serve as retrieval cues for the desired information. Although this technique may seem far-fetched, with a little practice it can prove quite effective. In fact, the amount of information one can remember using this method is limited only by the number of locations one has memorized.
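 A toy sketch of the method shows how the memorized route supplies ordered retrieval cues. The specific locations and grocery items below are just examples, not part of the classical technique.

```python
# Method of loci: pair each item with a pre-memorized location,
# then "walk" the locations in order to retrieve the items.
locations = ["front sidewalk", "front porch", "front door", "foyer"]
groceries = ["bread", "milk", "bananas", "eggs"]

# Encoding: place one vivid image at each location along the route.
memory_palace = dict(zip(locations, groceries))

# Retrieval: mentally walk the route; each location cues its item.
for place in locations:
    print(f"{place} -> {memory_palace[place]}")
```

 The route order is what makes recall reliable: the walk itself guarantees that no item is skipped.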
 Another mnemonic that relies on the power of visual imagery is called the pegword method. There are many variations on the pegword method, but they are all based on the same general principle. People learn a series of words that serve as ‘pegs’ on which memories can be ‘hung.’ In one popular scheme, the pegwords rhyme with numbers to make the words easy to remember: One is a gun, two is a shoe, three is a tree, four is a door, five is a hive, six is sticks, seven is heaven, eight is a plate, nine is wine, and ten is a hen. To learn the same grocery list, one might associate gun and bread by imagining the gun shooting the bread. Two is a shoe, so one would imagine a milk carton sitting in a giant shoe, and so on. When you need to remember the list of groceries, you simply recall the pegwords associated with each number; the pegwords then serve as retrieval cues for the groceries. Peg methods such as this one permit more flexible access to information than does the method of loci. For example, if you want to recite the items backwards for some reason, you can do so just as easily as in the forward direction. If you need to know the eighth item, you can say ‘eight is a plate’ and mentally look at your image for the item on the plate.
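 The numbered access that pegwords provide can be sketched as follows; the grocery items are made up for illustration.

```python
# Pegword method: rhyming pegs give numbered "hooks" for items,
# allowing direct access to any position by number.
pegs = {1: "gun", 2: "shoe", 3: "tree", 4: "door", 5: "hive",
        6: "sticks", 7: "heaven", 8: "plate", 9: "wine", 10: "hen"}
items = ["bread", "milk", "bananas", "apples", "cheese",
         "rice", "tea", "butter", "jam", "eggs"]

# Encoding: imagine each item interacting with its peg image.
associations = {n: (pegs[n], item) for n, item in enumerate(items, start=1)}

# Retrieval: "eight is a plate" cues the eighth item directly.
peg, item = associations[8]
print(f"eight is a {peg}: {item}")  # eight is a plate: butter
```

 Unlike the method of loci, there is no need to walk the whole route: any position can be cued directly from its number.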
 The PQ4R method is a mnemonic technique used for remembering text material. The name is itself a mnemonic device for the steps involved. If you are interested in better remembering a chapter from a textbook, you should first Preview the information by skimming quickly through the chapter and looking at the headings. The next step is to form Questions about the information. One way to do this is by simply converting headings to questions. Using this article as an example, you might ask, ‘What are the ways to improve memory?’ The third step is to Read the text carefully trying to answer the questions. After reading, the next step is to Reflect on the material. One way would be to create your own examples of how the principles you are reading could be applied. The next step is to Recite the material after reading it. That is, put the book aside or look away and try to recall or to recite what you have just read. If you cannot bring it to mind now, you will have little chance later. The last step in PQ4R is to Review. After you have read the entire chapter, go through it again trying to recall and to summarize its main points.
 Tests of the PQ4R method of reading text material have shown its advantages over the way people normally read. However, the PQ4R method slows reading considerably, so students may not use the technique, even though it is more effective. Most mnemonic devices involve additional work, but they are well worth the investment for improving memory.
 The principles of encoding, recoding, and retrieval discussed elsewhere in this article suggest other ways that memory can be improved. For example, encoding information in an elaborate, meaningful way helps in retention. There are many ways to encode information meaningfully. When possible, try to convert verbal information into mental images. When learning about events and facts, try to focus on their meaning rather than their superficial characteristics. Relating new information to your personal experiences or to what you already know also makes it easier to retain the information.
 Spacing out study sessions is another way to improve your memory. That is, if you are going to read a chapter twice before a test, retention is better if you allow some time to pass between readings, instead of reading the chapter twice in one sitting. Overall, spaced learning or spaced practice (learning opportunities that are spread out in time) is better than massed practice (back-to-back practice, in immediate succession) for retaining facts and skills over longer intervals. However, if a test occurs soon after learning, massed practice is as good as or better than spaced practice.
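 One common way to turn spaced practice into a concrete schedule is an expanding-interval plan, in which each gap between reviews doubles. The doubling rule is a convention assumed here for illustration, not something the text prescribes.

```python
from datetime import date, timedelta

# Spaced practice: schedule reviews at expanding intervals rather
# than massing them back-to-back. Each gap doubles: 1, 2, 4, 8, ... days.
def review_dates(start, first_gap_days=1, sessions=5):
    gap = first_gap_days
    day = start
    dates = []
    for _ in range(sessions):
        day = day + timedelta(days=gap)
        dates.append(day)
        gap *= 2
    return dates

for d in review_dates(date(2010, 11, 28)):
    print(d)
```

 Starting from November 28, this schedule yields reviews on November 29, December 1, 5, 13, and 29: early repetitions come quickly, later ones are spread out.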
 If you are having difficulty retrieving facts from your memory, try to remember the setting in which you originally learned them. This advice capitalizes on the encoding specificity principle. The more similar the retrieval environment is to the learning environment, the easier it will be to retrieve the information learned.
 Neural Network, in computer science, highly interconnected network of information-processing elements that mimics the connectivity and functioning of the human brain. Neural networks address problems that are often difficult for traditional computers to solve, such as speech and pattern recognition. They also provide some insight into the way the human brain works. One of the most significant strengths of neural networks is their ability to learn from a limited set of examples.
 Neural networks were initially studied by computer and cognitive scientists in the late 1950s and early 1960s in an attempt to model sensory perception in biological organisms. Neural networks have been applied to many problems since they were first introduced, including pattern recognition, handwritten character recognition, speech recognition, financial and economic modeling, and next-generation computing models.
 Neural networks fall into two categories: artificial neural networks and biological neural networks. Artificial neural networks are modelled on the structure and functioning of biological neural networks. The most familiar biological neural network is the human brain. The human brain is composed of approximately 100 billion nerve cells called neurons that are massively interconnected. Typical neurons in the human brain are connected to on the order of 10,000 other neurons, with some types of neurons having more than 200,000 connections. The extensive number of neurons and their high degree of interconnectedness are part of the reason that the brains of living creatures are capable of making a vast number of calculations in a short amount of time.
 Biological neurons have a fairly simple large-scale structure, although their operation and small-scale structure are immensely complex. Neurons have three main parts: a central cell body, called the soma, and two different types of branched, treelike structures that extend from the soma, called dendrites and axons. Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the soma, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons.
 Artificial neurons, like their biological counterparts, have simple structures and are designed to mimic the function of biological neurons. The main body of an artificial neuron is called a node or unit. Artificial neurons may be physically connected to one another by wires that mimic the connections between biological neurons, if, for instance, the neurons are simple integrated circuits. However, neural networks are usually simulated on traditional computers, in which case the connections between processing nodes are not physical but are instead virtual.
 Artificial neurons may be either discrete or continuous. Discrete neurons send an output signal of 1 if the sum of received signals is above a certain critical value, called the threshold, and an output signal of 0 otherwise. Continuous neurons are not restricted to sending output values of only 1s and 0s; instead they send an output value between 0 and 1 depending on the total amount of input that they receive - the stronger the received signal, the stronger the signal sent out from the node, and vice versa. Continuous neurons are the most commonly used in actual artificial neural networks.
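 The two kinds of unit can be sketched directly. The logistic sigmoid used for the continuous neuron below is one common choice of smooth activation function, assumed here for illustration; the text does not name a specific function.

```python
import math

# Discrete neuron: a hard threshold - output 1 if the summed
# input exceeds the threshold, otherwise 0.
def discrete_neuron(total_input, threshold=0.0):
    return 1 if total_input > threshold else 0

# Continuous neuron: a smooth squashing function (here the logistic
# sigmoid) maps any input to a value between 0 and 1.
def continuous_neuron(total_input):
    return 1.0 / (1.0 + math.exp(-total_input))

print(discrete_neuron(0.4))              # 1
print(discrete_neuron(-2.0))             # 0
print(round(continuous_neuron(0.0), 2))  # 0.5
print(round(continuous_neuron(3.0), 2))  # 0.95
```

 The continuous unit's graded output is what makes gradual weight adjustment (and hence learning) possible.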
 The architecture of a neural network is the specific arrangement and connections of the neurons that make up the network. One of the most common neural network architectures has three layers. The first layer is called the input layer and is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called a hidden layer. The hidden layer extracts relevant features or patterns from the received signals. Those features or patterns that are considered important are then directed to the output layer, the final layer of the network. Sophisticated neural networks may have several hidden layers, feedback loops, and time-delay elements, which are designed to make the network as efficient as possible in discriminating relevant features or patterns from the input layer.
 Neural networks differ greatly from traditional computers (for example personal computers, workstations, mainframes) in both form and function. While neural networks use a large number of simple processors to do their calculations, traditional computers generally use one or a few extremely complex processing units. Neural networks also do not have a centrally located memory, nor are they programmed with a sequence of instructions, as are all traditional computers.
 The information processing of a neural network is distributed throughout the network in the form of its processors and connections, while the memory is distributed in the form of the weights given to the various connections. The distribution of both processing capability and memory means that damage to part of the network does not necessarily result in processing dysfunction or information loss. This ability of neural networks to withstand limited damage and continue to function well is one of their greatest strengths.
 Neural networks also differ greatly from traditional computers in the way they are programmed. Rather than using programs that are written as a series of instructions, as do all traditional computers, neural networks are ‘taught’ with a limited set of training examples. The network is then able to ‘learn’ from the initial examples to respond to information sets that it has never encountered before. The resulting values of the connection weights can be thought of as a ‘program’.
 Neural networks are usually simulated on traditional computers. The advantage of this approach is that computers can easily be reprogrammed to change the architecture or learning rule of the simulated neural network. Since the computation in a neural network is massively parallel, the processing speed of a simulated neural network can be increased by running it on massively parallel computers - computers that link together hundreds or thousands of CPUs to achieve very high processing speeds.
 In all biological neural networks the connections between particular dendrites and axons may be reinforced or discouraged. For example, connections may become reinforced as more signals are sent down them, and may be discouraged when signals are infrequently sent down them. The reinforcement of certain neural pathways, or dendrite-axon connections, results in a higher likelihood that a signal will be transmitted along that path, further reinforcing the pathway. Paths between neurons that are rarely used slowly atrophy, or decay, making it less likely that signals will be transmitted along them.
 The role of connection strengths between neurons in the brain is crucial; scientists believe they determine, to a great extent, the way in which the brain processes the information it takes in through the senses. Neuroscientists studying the structure and function of the brain believe that various patterns of neurons firing can be associated with specific memories. In this theory, the strength of the connections between the relevant neurons determines the strength of the memory. Important information that needs to be remembered may cause the brain to constantly reinforce the pathways between the neurons that form the memory, while relatively unimportant information will not receive the same degree of reinforcement.
 To mimic the way in which biological neurons reinforce certain axon-dendrite pathways, the connections between artificial neurons in a neural network are given adjustable connection weights, or measures of importance. When signals are received and processed by a node, they are multiplied by a weight, added up, and then transformed by a nonlinear function. The effect of the nonlinear function is to cause the sum of the input signals to approach some value, usually +1 or 0. If the signals entering the node add up to a positive number, the node sends an output signal that approaches +1 out along all of its connections, while if the signals add up to a negative value, the node sends a signal that approaches 0. This is similar to a simplified model of how a biological neuron functions - the larger the input signal, the larger the output signal.
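 The computation just described - multiply by weights, sum, then squash - can be written out for a single node. The input signals and weight values below are made up for illustration, and the sigmoid is one common choice of nonlinear function.

```python
import math

# One artificial node: inputs are multiplied by connection weights,
# summed, and passed through a nonlinear (sigmoid) function so the
# output approaches 1 for strongly positive sums and 0 for negative.
def node_output(inputs, weights):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))

signals = [0.9, 0.2, 0.7]
weights = [2.0, -1.0, 1.5]  # illustrative weights, not learned

print(round(node_output(signals, weights), 3))  # about 0.934
```

 Here the weighted sum is positive (2.65), so the output is close to 1; flipping the weight signs would push it toward 0.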
 Computer scientists teach neural networks by presenting them with desired input-output training sets. The input-output training sets are related patterns of data. For instance, a sample training set might consist of ten different photographs for each of ten different faces. The desired output would be for the network to activate a different neuron in its output layer for each face. Beginning with equal or random connection weights between the neurons, the photographs are digitally entered into the input layer of the neural network, and an output signal is computed and compared to the target output. Small adjustments are then made to the connection weights to reduce the difference between the actual output and the target output. Because the network will usually choose the incorrect output neuron during the first few presentations, the input-output set is presented again and further adjustments are made to the connection weights. After repeating the weight-adjustment process many times for all input-output patterns in the training set, the network learns to respond in the desired manner.
 A neural network is said to have learned when it can correctly perform the tasks for which it has been trained. Neural networks are able to extract the important features and patterns of a class of training examples and generalize from these to correctly process new input data that they have not encountered before. For a neural network trained to recognize a series of photographs, generalization would be demonstrated if a new photograph presented to the network resulted in the correct output neuron being signalled.
 A number of different neural network learning rules, or algorithms, exist and use various techniques to process information. Common arrangements use some sort of system to adjust the connection weights between the neurons automatically. The most widely used scheme for adjusting the connection weights is called error back-propagation, developed independently by American computer scientists Paul Werbos (in 1974), David Parker (in 1984/1985), and David Rumelhart, Ronald Williams, and others (in 1985). The back-propagation learning scheme compares a neural network’s calculated output to a target output and calculates an error adjustment for each of the nodes in the network. The neural network adjusts the connection weights according to the error values assigned to each node, beginning with the connections between the last hidden layer and the output layer. After the network has made adjustments to this set of connections, it calculates error values for the preceding layer and makes adjustments. The back-propagation algorithm continues in this way, adjusting all of the connection weights between the hidden layers until it reaches the input layer. At this point it is ready to calculate another output.
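 A minimal sketch of back-propagation follows, training a tiny network on the exclusive-or problem (the classic task that requires a hidden layer). The network size, learning rate, and epoch count are illustrative choices, not values from the text.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny 2-3-1 network: two inputs, three hidden units, one output.
# Each weight list carries an extra entry for a bias connection.
n_in, n_hid, lr = 2, 3, 1.0
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    h = [sigmoid(sum(w * v for w, v in zip(ws, x + [1.0]))) for ws in w_hid]
    y = sigmoid(sum(w * v for w, v in zip(w_out, h + [1.0])))
    return h, y

# Exclusive-or: output 1 exactly when the two inputs differ.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

start_error = total_error()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Error term for the output node, then propagated back to the hidden layer.
        d_out = (t - y) * y * (1 - y)
        d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        # Adjust connection weights: output layer first, then hidden layer.
        for j in range(n_hid):
            w_out[j] += lr * d_out * h[j]
        w_out[n_hid] += lr * d_out           # bias weight
        for j in range(n_hid):
            for i in range(n_in):
                w_hid[j][i] += lr * d_hid[j] * x[i]
            w_hid[j][n_in] += lr * d_hid[j]  # bias weight

print(round(start_error, 3), "->", round(total_error(), 3))
```

 Each pass computes the output error, assigns a share of it to the hidden nodes, and nudges every weight in the direction that reduces the error, exactly the layer-by-layer adjustment described above.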
 Neural networks have been applied to many tasks that are easy for humans to accomplish, but difficult for traditional computers. Because neural networks mimic the brain, they have shown much promise in so-called sensory processing tasks such as speech recognition, pattern recognition, and the transcription of hand-written text. In some settings, neural networks can perform as well as humans. Neural-network-based backgammon software, for example, rivals the best human players.
 While traditional computers still outperform neural networks in most situations, neural networks are superior in recognizing patterns in extremely large data sets. Furthermore, because neural networks have the ability to learn from a set of examples and generalize this knowledge to new situations, they are excellent for work requiring adaptive control systems. For this reason, the United States National Aeronautics and Space Administration (NASA) has extensively studied neural networks to determine whether they might serve to control future robots sent to explore planetary bodies in our solar system. In this application, robots could be sent to other planets, such as Mars, to carry out significant and detailed exploration autonomously.
 An important advantage that neural networks have over traditional computer systems is that they can sustain damage and still function properly. This design characteristic of neural networks makes them very attractive candidates for future aircraft control systems, especially in high performance military jets. Another potential use of neural networks for civilian and military use is in pattern recognition software for radar, sonar, and other remote-sensing devices.
 Pain, unpleasant sensory and emotional experience caused by real or potential injury or damage to the body or described in terms of such damage. Scientists believe that pain evolved in the animal kingdom as a valuable three-part warning system. First, it warns of injury. Second, pain protects against further injury by causing a reflexive withdrawal from the source of injury. Finally, pain leads to a period of reduced activity, enabling injuries to heal more efficiently.
 Pain is difficult to measure in humans because it has an emotional, or psychological, component as well as a physical component. Some people express extreme discomfort from relatively small injuries, while others show little or no pain even after suffering severe injury. Sometimes pain is present even though no injury is apparent at all, or pain lingers long after an injury appears to have healed.
 The signals that warn the body of tissue damage are transmitted through the nervous system. In this system, the basic unit is the nerve cell or neuron. A nerve cell is composed of three parts: a central cell body, a single major branching fiber called an axon, and a series of smaller branching fibers known as dendrites. Each nerve cell meets other nerve cells at certain points on the axons and dendrites, forming a dense network of interconnected nerve fibers that transmit sensory information about touch, pressure, or warmth, as well as pain.
 Sensory information is transmitted from the different parts of the body to the brain via the spinal cord, which is a complex set of nerves that extends from the brain down along the back, protected by the bones of the spine. About as wide as a finger, the spinal cord is like a cable packed with many bundles of wires. The bundles are nerve pathways for transmitting information. But the spinal cord is more than just a message transmitter; it is also an extension of the brain. It contains neurons that process incoming sensory information, and it generates messages to be sent back down to cells in other parts of the body.
 Information being transmitted between and within the brain and spinal cord travels through the nervous system using both chemical and electrical mechanisms. A message-carrying impulse travels from one end of a nerve cell to another by means of an electric signal. When the electric signal reaches the terminal end of a nerve cell, a gap called a synapse prevents the electric signal from crossing to the next cell. The electric signal triggers the cell to release chemicals called neurotransmitters, which float across the synapse to the neighboring nerve cell. These neurotransmitters fit into specialized receptors found on the adjacent nerve cell, much as a key fits into a lock, generating an electric impulse in the neighboring cell. This new impulse travels to the end of the long cell, in turn triggering the release of neurotransmitters to carry the message across the next synapse. Not all neurotransmitters initiate a message in a neighboring nerve cell. Some specialize in preventing neighboring cells from generating an electrical signal, while others function as helpers, facilitating the message's journey to the brain.
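The interplay of message-initiating and message-preventing neurotransmitters can be pictured as a simple summing-to-threshold rule. The sketch below is a deliberately crude model (real neurons integrate signals continuously in time); the weights and threshold are illustrative numbers of my own:

```python
# Toy model of synaptic integration: excitatory neurotransmitters push
# a neuron toward firing (positive weights), inhibitory ones push it
# away (negative weights). The neuron "fires" only if the net input
# reaches its threshold.

def neuron_fires(synaptic_inputs, threshold=1.0):
    """Each input is a (weight, active) pair; sum the active ones."""
    net = sum(weight for weight, active in synaptic_inputs if active)
    return net >= threshold

# Two active excitatory synapses are enough to pass the message on...
print(neuron_fires([(0.6, True), (0.6, True)]))                 # True
# ...but one active inhibitory synapse can block it.
print(neuron_fires([(0.6, True), (0.6, True), (-0.5, True)]))   # False
```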
While most of the sensory nerves in the skin and other body tissues have special structures covering their nerve endings, those nerves that signal injury have free nerve endings. These simple nerve endings specialize in detecting noxious stimuli—a catchall term for injury-causing stimuli such as intense heat, extreme pressure, or sharp pricks or cuts. The nerve endings that detect pain are called nociceptors, and the process of transmitting pain signals when harmful stimulation occurs is called nociception. Several million nociceptors are interlaced through the tissues and organs of the body.
 An injury triggers pain signals in two types of nociceptors, one with large, insulated axons known as A-delta fibers and one with small, uninsulated axons known as C fibers. The large A-delta fibers conduct signals quickly, and the smaller C fibers transmit information slowly. The difference in the functions of these two fibers becomes obvious to a person who stubs a toe. At first the injured person is aware of a sharp, flashing pain at the point of injury. Generated by the A-delta fibers, this short-lived pain intrudes upon the thoughts and perceptions occurring in the brain. Just as this first pain subsides, a second pain begins that is vague, throbbing, and persistent. This sensation is derived from the C fibers.
 Pain information from the A-delta and C fibers travels through the spinal cord to the brain. When it receives the pain message, the spinal cord generates impulses that travel back down to muscles, which lead to a reflexive contraction that pulls the body away from the source of injury. Other reflexes may affect skin temperature, blood flow, sweating, and other changes.
 While this reflex action is underway, the pain message continues up the spinal cord to relay centers in the brain. The sensory information is routed to many other parts of the brain, including the cortex, where thinking processes occur.
 When messages from pain-generating nerve endings finally reach higher centers in the brain, they are processed much like other forms of perception—that is, the sensory information is integrated with memories, expectations, emotions, and thoughts in order to form a complete perceptual experience. While it seems convenient to think of pain as a simple message that sounds an alarm in the brain, contemporary understanding stresses that pain is much more complicated. The emotional aspects of an injury may be more significant than the extent of the physical damage in determining the perceived intensity of pain.
 Each person perceives pain a little differently, and as a result, each person also responds to painful stimulation differently. Pain research specialists have observed a wide variety of subtle variations in pain response. For instance, children are quicker to cry after a relatively minor injury than are adults. Learned cultural behaviors often dominate the way individuals express pain. Older children and young adults are often taught that crying, sometimes viewed as a sign of weakness, is inappropriate behavior, while younger children have no such understanding. Some people are more willing to express pain than others, but this does not mean they hurt more.
 Broad cultural differences in pain responsiveness have also been documented. In some aboriginal societies, extreme tissue injury is often incurred willingly by people undergoing important rituals, and typically, pain is not expressed. Aboriginal men in Australia, for instance, traditionally celebrated passage into manhood with a ritual that involved circumcision, extensive scarring of the chest, and extraction of the two upper front teeth. The initiate was expected to show no reaction to the injury. It may be that the person undergoing the rite managed to suppress expressions of suffering, but it may also be that the individual was able to perceive less pain by making use of natural pain control mechanisms.
 The body has many mechanisms that amplify or reduce pain. When cells are damaged, they release chemicals, such as bradykinins and prostaglandins. These chemicals intensify pain sensation both by making nociceptor nerve endings more sensitive and by causing inflammation around the damaged cells. Without these chemicals, nociceptors would cease transmitting pain information as soon as the source of injury was removed. Some scientists suspect that bradykinins activate nociceptors in the first place.
 Other mechanisms reduce pain sensation by blocking, or inhibiting, the transmission of the pain message to the brain. To alter the pain sensation, the brain and spinal cord release specialized neurotransmitters called endorphins and enkephalins. These chemicals interfere with pain impulse transmission by occupying the nerve cell receptors required to send the impulse across the synapse. By making the pain impulse travel less efficiently, endorphins and enkephalins can significantly lessen the perception of pain. In extreme circumstances, they can even make severe injuries nearly painless. An athlete injured at the height of competition, or a soldier wounded in combat, may not realize the injury has occurred until after the stressful situation has ended. This happens because the brain produces abnormally high levels of endorphins or enkephalins in periods of intense stress or excitement.
 In addition to the body’s own mechanisms, humans have devised many different ways to manipulate the body’s ability to control pain. Drugs that relieve pain, known as analgesics, usually interfere with pain impulse transmission in the nervous system. Narcotic analgesics, such as codeine, have chemical structures that are similar to the pain-blocking neurotransmitter endorphin. Other drugs that relieve pain alter the way damaged nerves transmit information. Nonsteroidal anti-inflammatory drugs, such as aspirin and ibuprofen, are analgesics that reduce pain by inhibiting the synthesis of prostaglandins, the body chemicals that intensify pain and cause inflammation.
 Another way humans control pain is by injection of drugs that temporarily deaden the nerves that transmit pain signals. These drugs bring about anesthesia, a loss of sensation that renders the body completely or partially insensitive to pain, or even touch. Local anesthetics, such as procaine, deaden nerves in a particular area of the body but interfere little with other body functions. General anesthesia renders people unconscious so they do not feel pain at all. People who undergo general anesthesia also have no memory of events that occurred while they were unconscious.
 Many people learn to control their pain with strategies that do not rely on drugs or surgery. Some people control the normally involuntary components of pain message transmission using a behavior modification technique called biofeedback. Acupuncture is widely used for pain relief. Many scientists now believe that this ancient medical procedure may trigger the release of endorphins and enkephalins, the body’s own pain-inhibiting neurotransmitters. Others suspect that the pain-relieving attributes of acupuncture are due, in part, to a patient’s expectation of relief. Although the mechanism is not completely understood, physicians and pain specialists have found that when a person suffering from pain expects that a particular procedure - in this case acupuncture - will make their pain subside, it actually does.
 In cases where no treatment effectively relieves pain, doctors may recommend a surgical procedure in which pain-transmitting nerves in the brain or spinal cord are severed. Only a small fraction of pain sufferers need such surgical treatment. Another pain-relieving procedure involves placing electrical stimulators on the skin, nerves, spinal cord, or brain to reduce pain sensation.
 Some injuries take a long time to heal, and even then, pain does not always completely subside. People suffering from this condition, known as chronic pain, may continue to experience debilitating pain for years, without having any apparent tissue damage. This may be the result of permanent damage to the nervous system. There is new evidence that the nerves in the spinal cord and brain can alter their connections after severe pain - that is, even after healing, the nervous system never returns to normal. Pain that subsides and then returns periodically, such as headaches or low back pain, also falls under the category of chronic pain. In their search for pain relief, many chronic pain sufferers become dependent on strong painkilling medicines, and they often fall into an endless cycle of pain, depression, and inactivity.
 The complexity of human pain often requires a combination of pain therapies to achieve relief. Pain management specialists are usually medical doctors with specialized training in neurology, psychiatry, or surgery who have restricted their practice to the analysis and treatment of pain. Psychologists are usually important members of a pain management team. Many people are turning to alternative healthcare practitioners, such as those that specialize in acupuncture or chiropractic, for pain relief. Often, pain management specialists and practitioners of alternative pain therapies join forces in multidisciplinary pain clinics.
 Attention-Deficit Hyperactivity Disorder: recent research has indicated that ADHD is indeed caused by impaired functioning of the brain pathways governing inhibition and self-control. Contrary to earlier theories that implicated dietary sugar or parenting techniques, ADHD appears most often to be genetically based and inheritable.
 Attention-Deficit Hyperactivity Disorder may be explained by a new theory suggesting that the disorder results from a failure in self-control. ADHD may arise when key brain circuits do not develop properly, perhaps because of an altered gene or genes.
 Since the 1940s, psychiatrists have applied various labels to children who are hyperactive and inordinately inattentive and impulsive. Such youngsters have been considered to have "minimal brain dysfunction," "brain-injured child syndrome," "hyperkinetic reaction of childhood," "hyperactive child syndrome" and, most recently, "attention-deficit disorder." The frequent name changes reflect how uncertain researchers have been about the underlying causes of, and even the precise diagnostic criteria for, the disorder.
 Within the past several years, however, those of us who study ADHD have begun to clarify its symptoms and causes and have found that it may have a genetic underpinning. Today's view of the basis of the condition is strikingly different from that of just a few years ago. We are finding that ADHD is not a disorder of attention per se, as had long been assumed. Rather it arises as a developmental failure in the brain circuitry that underlies inhibition and self-control. This loss of self-control in turn impairs other important brain functions crucial for maintaining attention, including the ability to defer immediate rewards for later, greater gain.
ADHD involves two sets of symptoms: inattention and a combination of hyperactive and impulsive behaviors. Most children are more active, distractible and impulsive than adults. And they are more inconsistent, affected by momentary events and dominated by objects in their immediate environment. The younger the children, the less able they are to be aware of time or to give priority to future events over more immediate wants. Such behaviors are signs of a problem, however, when children display them significantly more than their peers do….
 To help children (and adults) with ADHD, psychiatrists and psychologists must better understand the causes of the disorder. Because researchers have traditionally viewed ADHD as a problem in the realm of attention, some have suggested that it stems from an inability of the brain to filter competing sensory inputs, such as sights and sounds. But recently scientists led by Joseph A. Sergeant of the University of Amsterdam have shown that children with ADHD do not have difficulty in that area; instead they cannot inhibit their impulsive motor responses to such input. Other researchers have found that children with ADHD are less capable of preparing motor responses in anticipation of events and are insensitive to feedback about errors made in those responses. For example, in a commonly used test of reaction time, children with ADHD are less able than other children to ready themselves to press one of several keys when they see a warning light. They also do not slow down after making mistakes in such tests in order to improve their accuracy.
 No one knows the direct and immediate causes of the difficulties experienced by children with ADHD, although advances in neurological imaging techniques and genetics promise to clarify this issue over the next five years. Already they have yielded clues, albeit ones that do not yet fit together into a coherent picture.
 Imaging studies over the past decade have indicated which brain regions might malfunction in patients with ADHD and thus account for the symptoms of the condition. That work suggests the involvement of the prefrontal cortex, part of the cerebellum, and at least two of the clusters of nerve cells deep in the brain that are collectively known as the basal ganglia. In a 1996 study F. Xavier Castellanos, Judith L. Rapoport and their colleagues at the National Institute of Mental Health found that the right prefrontal cortex and two basal ganglia called the caudate nucleus and the globus pallidus are significantly smaller than normal in children with ADHD. Earlier this year Castellanos's group found that the vermis region of the cerebellum is also smaller in ADHD children.
 The imaging findings make sense because the brain areas that are reduced in size in children with ADHD are the very ones that regulate attention. The right prefrontal cortex, for example, is involved in "editing" one's behavior, resisting distractions and developing an awareness of self and time. The caudate nucleus and the globus pallidus help to switch off automatic responses to allow more careful deliberation by the cortex and to coordinate neurological input among various regions of the cortex. The exact role of the vermis region is unclear, but early studies suggest it may play a role in regulating motivation.
 What causes these structures to shrink in the brains of those with ADHD? No one knows, but many studies have suggested that mutations in several genes that are normally very active in the prefrontal cortex and basal ganglia might play a role. Most researchers now believe that ADHD is a polygenic disorder—that is, that more than one gene contributes to it.
Early tips that faulty genetics underlie ADHD came from studies of the relatives of children with the disorder. For instance, the siblings of children with ADHD are between five and seven times more likely to develop the syndrome than children from unaffected families. And the children of a parent who has ADHD have up to a 50 percent chance of experiencing the same difficulties.
 The most conclusive evidence that genetics can contribute to ADHD, however, comes from studies of twins. Jacquelyn J. Gillis, then at the University of Colorado, and her colleagues reported in 1992 that the ADHD risk of a child whose identical twin has the disorder is between 11 and 18 times greater than that of a nontwin sibling of a child with ADHD; between 55 and 92 percent of the identical twins of children with ADHD eventually develop the condition.
 One of the largest twin studies of ADHD was conducted by Helene Gjone and Jon M. Sundet of the University of Oslo with Jim Stevenson of the University of Southampton in England. It involved 526 identical twins, who inherit exactly the same genes, and 389 fraternal twins, who are no more alike genetically than siblings born years apart. The team found that ADHD has a heritability approaching 80 percent, meaning that up to 80 percent of the differences in attention, hyperactivity and impulsivity between people with ADHD and those without the disorder can be explained by genetic factors.
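A heritability estimate like the one above can be illustrated with Falconer's classic twin formula, which compares trait correlations in identical versus fraternal twins. The correlation values below are made-up numbers chosen to reproduce an 80 percent estimate; they are not the Oslo study's actual data:

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are
# the trait correlations for identical (monozygotic) and fraternal
# (dizygotic) twin pairs. Illustrative inputs only.

def falconer_heritability(r_identical, r_fraternal):
    return 2.0 * (r_identical - r_fraternal)

h2 = falconer_heritability(r_identical=0.80, r_fraternal=0.40)
print(f"estimated heritability: {h2:.0%}")  # estimated heritability: 80%
```

The intuition: identical twins share all their genes and fraternal twins about half, so doubling the gap between the two correlations estimates how much of the trait's variation genes explain.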
 Nongenetic factors that have been linked to ADHD include premature birth, maternal alcohol and tobacco use, exposure to high levels of lead in early childhood and brain injuries - especially those that involve the prefrontal cortex. But even together, these factors can account for only between 20 and 30 percent of ADHD cases among boys; among girls, they account for an even smaller percentage. (Contrary to popular belief, neither dietary factors, such as the amount of sugar a child consumes, nor poor child-rearing methods have been consistently shown to contribute to ADHD.)
 Which genes are defective? Perhaps those that dictate the way in which the brain uses dopamine, one of the chemicals known as neurotransmitters that convey messages from one nerve cell, or neuron, to another. Dopamine is secreted by neurons in specific parts of the brain to inhibit or modulate the activity of other neurons, particularly those involved in emotion and movement. The movement disorders of Parkinson's disease, for example, are caused by the death of dopamine-secreting neurons in a region of the brain underneath the basal ganglia called the substantia nigra.
 Some impressive studies specifically implicate genes that encode, or serve as the blueprint for, dopamine receptors and transporters; these genes are very active in the prefrontal cortex and basal ganglia. Dopamine receptors sit on the surface of certain neurons. Dopamine delivers its message to those neurons by binding to the receptors. Dopamine transporters protrude from neurons that secrete the neurotransmitter; they take up unused dopamine so that it can be used again. Mutations in the dopamine receptor gene can render receptors less sensitive to dopamine. Conversely, mutations in the dopamine transporter gene can yield overly effective transporters that scavenge secreted dopamine before it has a chance to bind to dopamine receptors on a neighboring neuron.
 In 1995 Edwin H. Cook and his colleagues at the University of Chicago reported that children with ADHD were more likely than others to have a particular variation in the dopamine transporter gene DAT1. Similarly, in 1996 Gerald J. LaHoste of the University of California at Irvine and his co-workers found that a variant of the dopamine receptor gene D4 is more common among children with ADHD. But each of these studies involved 40 or 50 children—a relatively small number—so their findings are now being confirmed in larger studies.
 How do the brain-structure and genetic defects observed in children with ADHD lead to the characteristic behaviors of the disorder? Ultimately, they might be found to underlie impaired behavioral inhibition and self-control, which I have concluded are the central deficits in ADHD.
 Self-control - or the capacity to inhibit or delay one's initial motor (and perhaps emotional) responses to an event - is a critical foundation for the performance of any task. As most children grow up, they gain the ability to engage in mental activities, known as executive functions, that help them deflect distractions, recall goals and take the steps needed to reach them. To achieve a goal in work or play, for instance, people need to be able to remember their aim (use hindsight), prompt themselves about what they need to do to reach that goal (use forethought), keep their emotions reined in and motivate themselves. Unless a person can inhibit interfering thoughts and impulses, none of these functions can be carried out successfully.
 In the early years, the executive functions are performed externally: children might talk out loud to themselves while remembering a task or puzzling out a problem. As children mature, they internalize, or make private, such executive functions, which prevents others from knowing their thoughts. Children with ADHD, in contrast, seem to lack the restraint needed to inhibit the public performance of these executive functions.
 The executive functions can be grouped into four mental activities. One is the operation of working memory - holding information in the mind while working on a task, even if the original stimulus that provided the information is gone. Such remembering is crucial to timeliness and goal-directed behavior: it provides the means for hindsight, forethought, preparation and the ability to imitate the complex, novel behavior of others—all of which are impaired in people with ADHD.
 The internalization of self-directed speech is another executive function. Before the age of six, most children speak out loud to themselves frequently, reminding themselves how to perform a particular task or trying to cope with a problem, for example. ("Where did I put that book? Oh, I left it under the desk.") In elementary school, such private speech evolves into inaudible muttering; it usually disappears by age 10. Internalized, self-directed speech allows one to reflect to oneself, to follow rules and instructions, to use self-questioning as a form of problem solving and to construct "meta-rules," the basis for understanding the rules for using rules—all quickly and without tipping one's hand to others. Laura E. Berk and her colleagues at Illinois State University reported in 1991 that the internalization of self-directed speech is delayed in boys with ADHD.
 A third executive mental function consists of controlling emotions, motivation and state of arousal. Such control helps individuals achieve goals by enabling them to delay or alter potentially distracting emotional reactions to a particular event and to generate private emotions and motivation. Those who rein in their immediate passions can also behave in more socially acceptable ways.
 The final executive function, reconstitution, actually encompasses two separate processes: breaking down observed behaviors and combining the parts into new actions not previously learned from experience. The capacity for reconstitution gives humans a great degree of fluency, flexibility and creativity; it allows individuals to propel themselves toward a goal without having to learn all the needed steps by rote. It permits children as they mature to direct their behavior across increasingly longer intervals by combining behaviors into ever longer chains to attain a goal. Initial studies imply that children with ADHD are less capable of reconstitution than are other children.
 Like self-directed speech, the other three executive functions become internalized during typical neural development in early childhood. Such privatization is essential for creating visual imagery and verbal thought. As children grow up, they develop the capacity to behave covertly, to mask some of their behaviors or feelings from others. Perhaps because of faulty genetics or embryonic development, children with ADHD have not attained this ability and therefore display too much public behavior and speech. It is my assertion that the inattention, hyperactivity and impulsivity of children with ADHD are caused by their failure to be guided by internal instructions and by their inability to curb their own inappropriate behaviors.
 If, as I have outlined, ADHD is a failure of behavioral inhibition that delays the ability to privatize and execute the four executive mental functions I have described, the finding supports the theory that children with ADHD might be helped by a more structured environment. Greater structure can be an important complement to any drug therapy the children might receive. Currently children (and adults) with ADHD often receive drugs such as Ritalin that boost their capacity to inhibit and regulate impulsive behaviors. These drugs act by inhibiting the dopamine transporter, increasing the time that dopamine has to bind to its receptors on other neurons.
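The reuptake-inhibition mechanism just described can be caricatured with a simple decay model: the transporter clears a fraction of synaptic dopamine each time step, and slowing that clearance leaves dopamine in the synapse longer to bind receptors. All quantities here are arbitrary illustrative units, not pharmacological data:

```python
# Toy model of dopamine reuptake. A transporter removes a fixed fraction
# of synaptic dopamine per time step; a reuptake inhibitor (the action
# described for psychostimulants above) lowers that fraction, so
# dopamine stays above a receptor-binding floor for more steps.

def time_in_synapse(initial_dopamine, reuptake_rate, floor=0.1):
    """Count time steps until dopamine falls below the binding floor."""
    level, steps = initial_dopamine, 0
    while level >= floor:
        level *= (1.0 - reuptake_rate)
        steps += 1
    return steps

normal = time_in_synapse(1.0, reuptake_rate=0.5)          # fast clearance
with_inhibitor = time_in_synapse(1.0, reuptake_rate=0.2)  # slowed clearance
print(normal, with_inhibitor)  # prints: 4 11
```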
 Such compounds (which, despite their inhibitory effects, are known as psychostimulants) have been found to improve the behavior of between 70 and 90 percent of children with ADHD older than five years. Children with ADHD who take such medication not only are less impulsive, restless and distractible but are also better able to hold important information in mind, to be more productive academically, and to have more internalized speech and better self-control. As a result, they tend to be liked better by other children and to experience less punishment for their actions, which improves their self-image.
 In addition to psychostimulants - and perhaps antidepressants, for some children - treatment for ADHD should include training parents and teachers in specific and more effective methods for managing the behavioral problems of children with the disorder. Such methods involve making the consequences of a child's actions more frequent and immediate and increasing the external use of prompts and cues about rules and time intervals. Parents and teachers must aid children with ADHD by anticipating events for them, breaking future tasks down into smaller and more immediate steps, and using artificial immediate rewards. All these steps serve to externalize time, rules and consequences as a replacement for the weak internal forms of information, rules and motivation of children with ADHD.
 In some instances, the problems of ADHD children may be severe enough to warrant their placement in special education programs. Although such programs are not intended as a cure for the child's difficulties, they typically do provide a smaller, less competitive and more supportive environment in which the child can receive individual instruction. The hope is that once children learn techniques to overcome their deficits in self-control, they will be able to function outside such programs.
 There is no cure for ADHD, but much more is now known about effectively coping with and managing this persistent and troubling developmental disorder. The day is not far off when genetic testing for ADHD may become available and more specialized medications may be designed to counter the specific genetic deficits of the children who suffer from it.
 And finally, the brain portion of the central nervous system is contained within the skull. The brain is the control center for movement, sleep, hunger, thirst, and virtually every other vital activity necessary to survival. All human emotions - including love, hate, fear, anger, elation, and sadness - are controlled by the brain. It also receives and interprets the countless signals that are sent to it from other parts of the body and from the external environment. The brain makes us conscious, emotional, and intelligent.
 The adult human brain is a 1.3-kg (3-lb) mass of pinkish-gray jellylike tissue made up of approximately 100 billion nerve cells, or neurons; neuroglia (supporting-tissue) cells; and vascular (blood-carrying) and other tissues.
 Between the brain and the cranium - the part of the skull that directly covers the brain - are three protective membranes, or meninges. The outermost membrane, the dura mater, is the toughest and thickest. Below the dura mater is a middle membrane, called the arachnoid layer. The innermost membrane, the pia mater, consists mainly of small blood vessels and follows the contours of the surface of the brain.
 A clear liquid, the cerebrospinal fluid, bathes the entire brain and fills a series of four cavities, called ventricles, near the center of the brain. The cerebrospinal fluid protects the internal portion of the brain from varying pressures and transports chemical substances within the nervous system.
 From the outside, the brain appears as three distinct but connected parts: the cerebrum (the Latin word for brain)—two large, almost symmetrical hemispheres; the cerebellum (“little brain”) - two smaller hemispheres located at the back of the cerebrum; and the brain stem - a central core that gradually becomes the spinal cord, exiting the skull through an opening at its base called the foramen magnum. Two other major parts of the brain, the thalamus and the hypothalamus, lie in the midline above the brain stem underneath the cerebrum.
 The brain and the spinal cord together make up the central nervous system, which communicates with the rest of the body through the peripheral nervous system. The peripheral nervous system consists of 12 pairs of cranial nerves extending from the cerebrum and brain stem; a system of other nerves branching throughout the body from the spinal cord; and the autonomic nervous system, which regulates vital functions not under conscious control, such as the activity of the heart muscle, smooth muscle (involuntary muscle found in the skin, blood vessels, and internal organs), and glands.
 Most high-level brain functions take place in the cerebrum. Its two large hemispheres make up approximately 85 percent of the brain's weight. The exterior surface of the cerebrum, the cerebral cortex, is a convoluted, or folded, grayish layer of cell bodies known as the gray matter. The gray matter covers an underlying mass of fibers called the white matter. The convolutions are made up of ridgelike bulges, known as gyri, separated by small grooves called sulci and larger grooves called fissures. Approximately two-thirds of the cortical surface is hidden in the folds of the sulci. The extensive convolutions enable a very large surface area of brain cortex - about 1.5 m2 (16 ft2) in an adult - to fit within the cranium. The pattern of these convolutions is similar, although not identical, in all humans.
 The two cerebral hemispheres are partially separated from each other by a deep fold known as the longitudinal fissure. Communication between the two hemispheres is through several concentrated bundles of axons, called commissures, the largest of which is the corpus callosum.
 Several major sulci divide the cortex into distinguishable regions. The central sulcus, or Rolandic fissure, runs from the middle of the top of each hemisphere downward, forward, and toward another major sulcus, the lateral (“side”), or Sylvian, sulcus. These and other sulci and gyri divide the cerebrum into five lobes: the frontal, parietal, temporal, and occipital lobes and the insula.
 The frontal lobe is the largest of the five and consists of all the cortex in front of the central sulcus. Broca's area, a part of the cortex related to speech, is located in the frontal lobe. The parietal lobe consists of the cortex behind the central sulcus to a sulcus near the back of the cerebrum known as the parieto-occipital sulcus. The parieto-occipital sulcus, in turn, forms the front border of the occipital lobe, which is the rearmost part of the cerebrum. The temporal lobe is to the side of and below the lateral sulcus. Wernicke's area, a part of the cortex related to the understanding of language, is located in the temporal lobe. The insula lies deep within the folds of the lateral sulcus.
 The cerebrum receives information from all the sense organs and sends motor commands (signals that result in activity in the muscles or glands) to other parts of the brain and the rest of the body. Motor commands are transmitted by the motor cortex, a strip of cerebral cortex extending from side to side across the top of the cerebrum just in front of the central sulcus. The sensory cortex, a parallel strip of cerebral cortex just in back of the central sulcus, receives input from the sense organs.
 Many other areas of the cerebral cortex have also been mapped according to their specific functions, such as vision, hearing, speech, emotions, language, and other aspects of perceiving, thinking, and remembering. Cortical regions known as associative cortex are responsible for integrating multiple inputs, processing the information, and carrying out complex responses.
 The cerebellum coordinates body movements. Located at the lower back of the brain beneath the occipital lobes, the cerebellum is divided into two lateral (side-by-side) lobes connected by a wormlike bundle of white fibers called the vermis (Latin for “worm”). The outer layer, or cortex, of the cerebellum consists of fine folds called folia. As in the cerebrum, the outer layer of cortical gray matter surrounds a deeper layer of white matter and nuclei (groups of nerve cells). Three fiber bundles called cerebellar peduncles connect the cerebellum to the three parts of the brain stem - the midbrain, the pons, and the medulla oblongata.
 The cerebellum coordinates voluntary movements by fine-tuning commands from the motor cortex in the cerebrum. The cerebellum also maintains posture and balance by controlling muscle tone and sensing the position of the limbs. All motor activity, from hitting a baseball to fingering a violin, depends on the cerebellum.
 The thalamus and the hypothalamus lie underneath the cerebrum and connect it to the brain stem. The thalamus consists of two rounded masses of gray tissue lying within the middle of the brain, between the two cerebral hemispheres. The thalamus is the main relay station for incoming sensory signals to the cerebral cortex and for outgoing motor signals from it. All sensory input to the brain, except that of the sense of smell, connects to individual nuclei of the thalamus.
 The hypothalamus lies beneath the thalamus on the midline at the base of the brain. It regulates or is involved directly in the control of many of the body's vital drives and activities, such as eating, drinking, temperature regulation, sleep, emotional behavior, and sexual activity. It also controls the function of internal body organs by means of the autonomic nervous system, interacts closely with the pituitary gland, and helps coordinate activities of the brain stem.
 The brain stem is evolutionarily the most primitive part of the brain and is responsible for sustaining the basic functions of life, such as breathing and blood pressure. It includes three main structures lying between and below the two cerebral hemispheres - the midbrain, pons, and medulla oblongata.
 The topmost structure of the brain stem is the midbrain. It contains major relay stations for neurons transmitting signals to the cerebral cortex, as well as many reflex centers - pathways carrying sensory (input) information and motor (output) commands. Relay and reflex centers for visual and auditory (hearing) functions are located in the top portion of the midbrain. A pair of nuclei called the superior colliculi controls reflex actions of the eye, such as blinking, opening and closing the pupil, and focusing the lens. A second pair of nuclei, called the inferior colliculi, controls auditory reflexes, such as adjusting the ear to the volume of sound. At the bottom of the midbrain are reflex and relay centers relating to pain, temperature, and touch, as well as several regions associated with the control of movement, such as the red nucleus and the substantia nigra.
 Continuous with and below the midbrain and directly in front of the cerebellum is a prominent bulge in the brain stem called the pons. The pons consists of large bundles of nerve fibers that connect the two halves of the cerebellum and also connect each side of the cerebellum with the opposite-side cerebral hemisphere. The pons serves mainly as a relay station linking the cerebral cortex and the medulla oblongata.
 The long, stalklike lowermost portion of the brain stem is called the medulla oblongata. At the top, it is continuous with the pons and the midbrain; at the bottom, it makes a gradual transition into the spinal cord at the foramen magnum. Sensory and motor nerve fibers connecting the brain and the rest of the body cross over to the opposite side as they pass through the medulla. Thus, the left half of the brain communicates with the right half of the body, and the right half of the brain with the left half of the body.
 Running up the brain stem from the medulla oblongata through the pons and the midbrain is a netlike formation of nuclei known as the reticular formation. The reticular formation controls respiration, cardiovascular function (see Heart), digestion, levels of alertness, and patterns of sleep. It also determines which parts of the constant flow of sensory information into the body are received by the cerebrum.
 There are two main types of brain cells: neurons and neuroglia. Neurons are responsible for the transmission and analysis of all electrochemical communication within the brain and other parts of the nervous system. Each neuron is composed of a cell body called a soma, a major fiber called an axon, and a system of branches called dendrites. Axons, also called nerve fibers, convey electrical signals away from the soma and can be up to 1 m (3.3 ft) in length. Most axons are covered with a protective sheath of myelin, a substance made of fats and protein, which insulates the axon. Myelinated axons conduct neuronal signals faster than do unmyelinated axons.
 Dendrites convey electrical signals toward the soma, are shorter than axons, and are usually multiple and branching.
 Neuroglial cells are twice as numerous as neurons and account for half of the brain's weight. Neuroglia (from glia, Greek for “glue”) provide structural support to the neurons. Neuroglial cells also form myelin, guide developing neurons, take up chemicals involved in cell-to-cell communication, and contribute to the maintenance of the environment around neurons.
 Twelve pairs of cranial nerves arise symmetrically from the base of the brain and are numbered, from front to back, in the order in which they arise. They connect mainly with structures of the head and neck, such as the eyes, ears, nose, mouth, tongue, and throat. Some are motor nerves, controlling muscle movement; some are sensory nerves, conveying information from the sense organs; and others contain fibers for both sensory and motor impulses. The first and second pairs of cranial nerves - the olfactory (smell) nerve and the optic (vision) nerve - carry sensory information from the nose and eyes, respectively, to the undersurface of the cerebral hemispheres. The other ten pairs of cranial nerves originate in or end in the brain stem.
 The brain functions by complex neuronal, or nerve cell, circuits (see Neurophysiology). Communication between neurons is both electrical and chemical and always travels from the dendrites of a neuron, through its soma, and out its axon to the dendrites of another neuron.
 Dendrites of one neuron receive signals from the axons of other neurons through chemicals known as neurotransmitters. The neurotransmitters set off electrical charges in the dendrites, which then carry the signals electrochemically to the soma. The soma integrates the information, which is then transmitted electrochemically down the axon to its tip.
 At the tip of the axon, small, bubblelike structures called vesicles release neurotransmitters that carry the signal across the synapse, or gap, between two neurons. There are many types of neurotransmitters, including norepinephrine, dopamine, and serotonin. Neurotransmitters can be excitatory (that is, they excite an electrochemical response in the dendrite receptors) or inhibitory (they block the response of the dendrite receptors).
 One neuron may communicate with thousands of other neurons, and many thousands of neurons are involved with even the simplest behavior. It is believed that these connections and their efficiency can be modified, or altered, by experience.
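 The flow just described - excitatory and inhibitory inputs weighted at the synapses, summed at the soma, and converted into a spike when a threshold is reached - is the basis of the classic "leaky integrate-and-fire" model used in computational neuroscience. A minimal sketch in Python follows; all parameter values are illustrative, not physiological measurements:

```python
# Minimal leaky integrate-and-fire sketch (illustrative only; real neurons
# are vastly more complex). The membrane potential integrates weighted input
# from many synapses, decays passively between steps, and a spike fires
# whenever a threshold is crossed.

def simulate(inputs, weights, threshold=1.0, leak=0.9):
    """Return the time steps at which the model neuron spikes.

    inputs  : list of per-step lists, one 0/1 activity value per synapse
    weights : synaptic weights; positive = excitatory, negative = inhibitory
    """
    potential = 0.0
    spikes = []
    for t, step in enumerate(inputs):
        potential *= leak                               # passive decay
        potential += sum(w * x for w, x in zip(weights, step))
        if potential >= threshold:                      # threshold crossed
            spikes.append(t)
            potential = 0.0                             # reset after firing
    return spikes

# Two excitatory synapses and one inhibitory synapse: simultaneous
# excitatory input fires the neuron; the inhibitory input can block it.
spikes = simulate([[1, 1, 0], [1, 1, 1], [0, 0, 1]], [0.6, 0.6, -0.4])
```

The leak term captures, very roughly, why inputs must arrive close together in time to drive a spike - a single weak input decays away before the next one arrives.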
 Scientists have used two primary approaches to studying how the brain works. One approach is to study brain function after parts of the brain have been damaged. Functions that disappear or that are no longer normal after injury to specific regions of the brain can often be associated with the damaged areas. The second approach is to study the response of the brain to direct stimulation or to stimulation of various sense organs.
 Neurons are grouped by function into collections of cells called nuclei. These nuclei are connected to form sensory, motor, and other systems. Scientists can study the function of somatosensory (pain and touch), motor, olfactory, visual, auditory, language, and other systems by measuring the physiological (physical and chemical) changes that occur in the brain when these senses are activated. For example, electroencephalography (EEG) measures the electrical activity of specific groups of neurons through electrodes attached to the surface of the skull. Electrodes inserted directly into the brain can give readings of individual neurons. Changes in blood flow, glucose (sugar), or oxygen consumption in groups of active cells can also be mapped.
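 As an illustration of the kind of frequency analysis such EEG recordings make possible, the hedged Python sketch below estimates the power of a sampled signal in the conventional EEG frequency bands (delta, theta, alpha, beta) using a naive discrete Fourier transform. The band edges and the synthetic test signal are assumptions chosen for the example, not clinical values:

```python
import math

# Conventional EEG band edges in Hz (approximate; conventions vary).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, sample_rate):
    """Sum spectral power into each band via a naive DFT (O(n^2), fine for
    short illustrative signals)."""
    n = len(signal)
    powers = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):                    # skip DC, positive freqs only
        freq = k * sample_rate / n
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        power = (re * re + im * im) / n
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[name] += power
    return powers

# A pure 10 Hz oscillation sampled at 128 Hz lies in the alpha band.
signal = [math.sin(2 * math.pi * 10 * t / 128) for t in range(128)]
powers = band_powers(signal, 128)
dominant = max(powers, key=powers.get)            # dominant == "alpha"
```

A real EEG pipeline would use a fast Fourier transform and averaging over windows (e.g., Welch's method), but the principle - mapping electrical activity of neuron populations into frequency bands - is the same.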
 Although the brain appears symmetrical, how it functions is not. Each hemisphere is specialized and dominates the other in certain functions. Research has shown that hemispheric dominance is related to whether a person is predominantly right-handed or left-handed (see Handedness). In most right-handed people, the left hemisphere processes arithmetic, language, and speech. The right hemisphere interprets music, complex imagery, and spatial relationships and recognizes and expresses emotion. In left-handed people, the pattern of brain organization is more variable.
 Hemispheric specialization has traditionally been studied in people who have sustained damage to the connections between the two hemispheres, as may occur with stroke, an interruption of blood flow to an area of the brain that causes the death of nerve cells in that area. The division of functions between the two hemispheres has also been studied in people who have had to have the connection between the two hemispheres surgically cut in order to control severe epilepsy, a neurological disease characterized by convulsions and loss of consciousness.
 The visual system of humans is one of the most advanced sensory systems in the body. More information is conveyed visually than by any other means. In addition to the structures of the eye itself, several cortical regions - collectively called primary visual and visual associative cortex - as well as the midbrain are involved in the visual system. Conscious processing of visual input occurs in the primary visual cortex, but reflexive - that is, immediate and unconscious - responses occur at the superior colliculus in the midbrain. Associative cortical regions - specialized regions that can associate, or integrate, multiple inputs - in the parietal and frontal lobes along with parts of the temporal lobe are also involved in the processing of visual information and the establishment of visual memories.
 Language involves specialized cortical regions in a complex interaction that allows the brain to comprehend and communicate abstract ideas. The motor cortex initiates impulses that travel through the brain stem to produce audible sounds. Neighboring regions of motor cortex, called the supplemental motor cortex, are involved in sequencing and coordinating sounds. Broca's area of the frontal lobe is responsible for the sequencing of language elements for output. The comprehension of language is dependent upon Wernicke's area of the temporal lobe. Other cortical circuits connect these areas.
 Memory is usually considered a diffusely stored associative process - that is, it puts together information from many different sources. Although research has failed to identify specific sites in the brain as locations of individual memories, certain brain areas are critical for memory to function. Immediate recall - the ability to repeat short series of words or numbers immediately after hearing them - is thought to be located in the auditory associative cortex. Short-term memory - the ability to retain a limited amount of information for up to an hour - is located in the deep temporal lobe. Long-term memory probably involves exchanges between the medial temporal lobe, various cortical regions, and the midbrain.
 The autonomic nervous system regulates the life support systems of the body reflexively - that is, without conscious direction. It automatically controls the muscles of the heart, digestive system, and lungs; certain glands; and homeostasis - that is, the equilibrium of the internal environment of the body (see Physiology). The autonomic nervous system itself is controlled by nerve centers in the spinal cord and brain stem and is fine-tuned by regions higher in the brain, such as the midbrain and cortex. Reactions such as blushing indicate that cognitive, or thinking, centers of the brain are also involved in autonomic responses.
 The brain is guarded by several highly developed protective mechanisms. The bony cranium, the surrounding meninges, and the cerebrospinal fluid all contribute to the mechanical protection of the brain. In addition, a filtration system called the blood-brain barrier protects the brain from exposure to potentially harmful substances carried in the bloodstream.
 Brain disorders have a wide range of causes, including head injury, stroke, bacterial diseases, complex chemical imbalances, and changes associated with aging.
 Head injury can initiate a cascade of damaging events. After a blow to the head, a person may be stunned or may become unconscious for a moment. This injury, called a concussion, usually leaves no permanent damage. If the blow is more severe and hemorrhage (excessive bleeding) and swelling occur, however, severe headache, dizziness, paralysis, a convulsion, or temporary blindness may result, depending on the area of the brain affected. Damage to the cerebrum can also result in profound personality changes.
 Damage to Broca's area in the frontal lobe causes difficulty in speaking and writing, a problem known as Broca's aphasia. Injury to Wernicke's area in the left temporal lobe results in an inability to comprehend spoken language, called Wernicke's aphasia.
 An injury or disturbance to a part of the hypothalamus may cause a variety of different symptoms, such as loss of appetite with an extreme drop in body weight; increase in appetite leading to obesity; extraordinary thirst with excessive urination (diabetes insipidus); failure in body-temperature control, resulting in either low temperature (hypothermia) or high temperature (fever); excessive emotionality; and uncontrolled anger or aggression. If the relationship between the hypothalamus and the pituitary gland is damaged (see Endocrine System), other vital bodily functions may be disturbed, such as sexual function, metabolism, and cardiovascular activity.
 Injury to the brain stem is even more serious because it houses the nerve centers that control breathing and heart action. Damage to the medulla oblongata usually results in immediate death.
 A stroke is damage to the brain due to an interruption in blood flow. The interruption may be caused by a blood clot (see Embolism; Thrombosis), constriction of a blood vessel, or rupture of a vessel accompanied by bleeding. A pouchlike expansion of the wall of a blood vessel, called an aneurysm, may weaken and burst, for example, because of high blood pressure.
 Sufficient quantities of glucose and oxygen, transported through the bloodstream, are needed to keep nerve cells alive. When the blood supply to a small part of the brain is interrupted, the cells in that area die and the function of the area is lost. A massive stroke can cause a one-sided paralysis (hemiplegia) and sensory loss on the side of the body opposite the hemisphere damaged by the stroke.
 Epilepsy is a broad term for a variety of brain disorders characterized by seizures, or convulsions. Epilepsy can result from a direct injury to the brain at birth or from a metabolic disturbance in the brain at any time later in life.
 Some brain diseases, such as multiple sclerosis and Parkinson disease, are progressive, becoming worse over time. Multiple sclerosis damages the myelin sheath around axons in the brain and spinal cord. As a result, the affected axons cannot transmit nerve impulses properly. Parkinson disease destroys the cells of the substantia nigra in the midbrain, resulting in a deficiency in the neurotransmitter dopamine that affects motor functions.
 Cerebral palsy is a broad term for brain damage sustained close to birth that permanently affects motor function. The damage may take place either in the developing fetus, during birth, or just after birth and is the result of the faulty development or breaking down of motor pathways. Cerebral palsy is nonprogressive - that is, it does not worsen with time.
 A bacterial infection in the cerebrum or in the coverings of the brain (see Meningitis), swelling of the brain (see Edema), or an abnormal growth of healthy brain tissue can all cause an increase in intracranial pressure and result in serious damage to the brain.
 Scientists are finding that certain brain chemical imbalances are associated with mental disorders such as schizophrenia and depression. Such findings have changed scientific understanding of mental health and have resulted in new treatments that chemically correct these imbalances.
 During childhood development, the brain is particularly susceptible to damage because of the rapid growth and reorganization of nerve connections. Problems that originate in the immature brain can appear as epilepsy or other brain-function problems in adulthood.
 Several neurological problems are common in aging. Alzheimer's disease damages many areas of the brain, including the frontal, temporal, and parietal lobes. The brain tissue of people with Alzheimer's disease shows characteristic patterns of damaged neurons, known as plaques and tangles. Alzheimer's disease produces a progressive dementia (see Senile Dementia), characterized by symptoms such as failing attention and memory, loss of mathematical ability, irritability, and poor orientation in space and time.
 Several commonly used diagnostic methods give images of the brain without invading the skull. Some portray anatomy - that is, the structure of the brain - whereas others measure brain function. Two or more methods may be used to complement each other, together providing a more complete picture than would be possible by one method alone.
 Magnetic resonance imaging (MRI), introduced in the early 1980s, beams high-frequency radio waves into the brain in a highly magnetized field that causes the protons that form the nuclei of hydrogen atoms in the brain to reemit the radio waves. The reemitted radio waves are analyzed by computer to create thin cross-sectional images of the brain. MRI provides the most detailed images of the brain and is safer than imaging methods that use X rays. However, MRI is a lengthy process and also cannot be used with people who have pacemakers or metal implants, both of which are adversely affected by the magnetic field.
 Computed tomography (CT), developed in the early 1970s, X-rays the brain from many different angles, feeding the information into a computer that produces a series of cross-sectional images, commonly called CT scans. CT is particularly useful for diagnosing blood clots and brain tumors. It is a much quicker process than magnetic resonance imaging and is therefore advantageous in certain situations - for example, with people who are extremely ill.
 Changes in brain function due to brain disorders can be visualized in several ways. Magnetic resonance spectroscopy measures the concentration of specific chemical compounds in the brain that may change during specific behaviors. Functional magnetic resonance imaging (fMRI) maps changes in oxygen concentration that correspond to nerve cell activity.
 Positron emission tomography (PET), developed in the mid-1970s, uses computed tomography to visualize radioactive tracers (see Isotopic Tracer), radioactive substances introduced into the brain intravenously or by inhalation. PET can measure such brain functions as cerebral metabolism, blood flow and volume, oxygen use, and the formation of neurotransmitters. Single photon emission computed tomography (SPECT), developed in the 1950s and 1960s, uses radioactive tracers to visualize the circulation and volume of blood in the brain.
 Brain-imaging studies have provided new insights into sensory, motor, language, and memory processes, as well as brain disorders such as epilepsy; cerebrovascular disease; Alzheimer's, Parkinson, and Huntington's diseases (see Chorea); and various mental disorders, such as schizophrenia.
 In lower vertebrates, such as fish and reptiles, the brain is often tubular and bears a striking resemblance to the early embryonic stages of the brains of more highly evolved animals. In all vertebrates, the brain is divided into three regions: the forebrain (prosencephalon), the midbrain (mesencephalon), and the hindbrain (rhombencephalon). These three regions further subdivide into different structures, systems, nuclei, and layers.
 The more highly evolved the animal, the more complex is the brain structure. Human beings have the most complex brains of all animals. Evolutionary forces have also resulted in a progressive increase in the size of the brain. In vertebrates lower than mammals, the brain is small. In meat-eating animals and, especially, in primates, the brain increases dramatically in size.
 The cerebrum and cerebellum of higher mammals are highly convoluted in order to fit the most gray matter surface within the confines of the cranium. Such highly convoluted brains are called gyrencephalic. Many lower mammals have a smooth, or lissencephalic (“smooth head”), cortical surface.
 There is also evidence of evolutionary adaptation of the brain. For example, many birds depend on an advanced visual system to identify food at great distances while in flight. Consequently, their optic lobes and cerebellum are well developed, giving them keen sight and outstanding motor coordination. Rodents, on the other hand, as nocturnal animals, do not have a well-developed visual system. Instead, they rely more heavily on other sensory systems, such as a highly developed sense of smell and facial whiskers.
 Recent research in brain function suggests that there may be sexual differences in both brain anatomy and brain function. One study indicated that men and women may use their brains differently while thinking. Researchers used functional magnetic resonance imaging to observe which parts of the brain were activated as groups of men and women tried to determine whether sets of nonsense words rhymed. Men used only Broca's area in this task, whereas women used Broca's area plus an area on the right side of the brain.