The title of this lecture is at best premature and more likely absurd, but I have adopted it for two reasons. In the first place, I want to emphasize the continuing tension between biologic and psychologic explanations of behavior. Secondly, I want to consider the simplistic but perhaps useful idea that the ultimate level of resolution for understanding how psychotherapeutic intervention works is identical with the level at which we are currently seeking to understand how psychopharmacologic intervention works—the level of individual nerve cells and their synaptic connections.
I will discuss the second issue later. First, I should like to consider the tension within psychiatry. Although this tension is longstanding and almost universal, I first encountered it in 1960, when I entered psychiatric residency training at the Massachusetts Mental Health Center. In looking about I was struck by the fact that our residency cohort, a very congenial and intelligent group, was nonetheless split in a fundamental way on one basic issue: the degree to which we accepted the current psychoanalytic view of the mind as providing the adequate conceptual framework for future work in psychiatry. On this issue we were divided into two groups: the hard-nosed and the soft-nosed.
The hard-nosed residents, many of whom were attracted to the humane and existential aspects of the analytic perspective, thought that the psychoanalytic view of the mind was vague, difficult to verify (or refute) and therefore limited in its explanatory power. The hard-nosed yearned for more substantial knowledge and were drawn to new ways of thought. In particular, many were drawn to biology. By contrast, most soft-nosed residents had little direct interest in the biology of the brain, which they thought had promised much to psychiatry but delivered little. The soft-nosed saw the future of psychiatry not simply in the development of a better body of knowledge but in the development of better therapists—therapists qualified to provide more effective treatment to very disturbed patients. Needless to say, this distinction is drawn too boldly. Many residents then held, and probably still hold, aspects of both views. But the distinction does draw attention to a fundamental tension, a difference in world view that existed in the psychiatric world around us as well as in ourselves. I think that most of us at that time simply failed to appreciate two aspects of the relation between biology and psychiatry: we failed to appreciate that the conflicted relation between biology and psychiatry is not unique but is characteristic of the interaction between closely related fields of science, and we did not know that in other fields of science this relation has often aided the advancement of knowledge. People (and noses) may fall by the wayside, but the related scientific disciplines usually profit and move on.
As pointed out by a number of students of science, most recently by the biologist E. O. Wilson,1 there exists for most parent disciplines in science an antidiscipline. The antidiscipline generates creative tension within the parent discipline by challenging the precision of its methods and its claims. For example, for my own parent discipline, cellular neurobiology, there stands at a more fundamental level the antidiscipline of molecular biology, and for molecular biology there stands at a more fundamental level structural (physical) chemistry. In this context it is clear that neurobiology is the new antidiscipline for which psychology in general and psychiatry in particular are the parent disciplines.
I say "new" antidiscipline because, as knowledge advances and scientific disciplines change, so do the disciplines impinging on them. In the period from 1920 to 1960 psychiatry derived its main intellectual impetus from psychoanalysis. During this phase its most powerful antidisciplines were philosophy and the social sciences.2 Since 1960, psychiatry has begun (again) to derive its main intellectual challenge from biology, with the result that neurobiology has been thrust into the position of the new antidiscipline for psychiatry. Modern neurobiology had its first impact on psychiatry when it provided insights into the actions of psychotherapeutic drugs. But most of us believe that this is only the beginning and that in the near future neurobiology will address a matter of more general and fundamental importance: the biology of human mental processes. When it comes to mental function, however, biologists are badly in need of guidance. It is here that psychiatry, as guide and tutor of its antidiscipline, can make a particularly valuable contribution to neurobiology. Psychology and psychiatry can illuminate and define for biology the mental functions that need to be studied if we are to have a meaningful and sophisticated understanding of the biology of the human mind.
Given the potential power of neurobiology and the vision of psychiatry, we may well ask why this type of complementarity was not viable before. The answer to that question is surprisingly simple. The relevant branches of biology—ethology and neurobiology—were, until recently, simply not mature enough, either technically or philosophically, to address higher-order problems related to mental processes. On the appropriate level of resolution, the cellular level, neurobiology has only recently become capable of accomplishing for psychology and psychiatry what other antidisciplines have traditionally accomplished for their parent disciplines—to expand and enlighten the discipline by providing a new level of mechanistic understanding.
I hasten to emphasize that I do not mean "to displace." As Wilson has pointed out, an antidiscipline is usually narrower in scope than its parent discipline. The antidiscipline can succeed in revitalizing and reorienting the parent discipline. It forces a new set of approaches, new methodologies and new insights, but it does not provide a broader, more coherent framework; it does not produce richer paradigms. Although neurobiology can provide key insights into the human mind, psychology and psychoanalysis are potentially deeper in content. The hard-nosed propositions of neurobiology, although scientifically more satisfying, have considerably less existential meaning than do the soft-nosed propositions of psychiatry. If neurobiology is at all equal to the task, the sciences of the mind are likely to absorb the relevant techniques and ideas generated by neurobiology and, having absorbed them, move on.
This very dichotomy of antidiscipline and parent discipline indicates how the two disciplines can most fruitfully interact. In this interaction psychiatry has a double role. On the one hand, it must seek answers to questions on its own level—questions related to the diagnosis and treatment of mental disorders. On the other hand, psychiatry must pose the questions that its antidiscipline need answer. One of the powers of psychology and psychiatry, it seems to me, lies in their perspective and, most of all, in their paradigms, their specific views of certain interrelated variables.
I would like to consider the synergistic interaction between psychiatry and biology by describing two paradigms that psychology and psychiatry have defined for neurobiology and that are now being addressed on the cellular level: the effects on later development of certain types of social and sensory deprivation in early life, and the mechanisms of learning.
These two classes of studies are paradigmatic in several senses. In purely behavioral terms, the studies represent examples of the sorts of issues that behavioral science in general and psychiatry in particular must summarize and call to the attention of neurobiology. In addition, the studies are interesting from a methodologic point of view because they illustrate how behavioral models must be simplified and redefined so that they can be effectively tackled on progressively more mechanistic levels.
Experiments ranging from complex ones in the human infant to simple ones in laboratory animals have documented the existence of a set of critical stages for normal psychologic development. During these stages the subject must interact with a normal social and perceptual environment if development is to proceed normally. Unless animals and human beings are raised for the first year (or longer) in what the psychoanalyst Heinz Hartmann3 first called "an average expectable environment," later social and sensory development is disrupted, sometimes disastrously.
Before formal studies on maternal deprivation were performed, a few anecdotal examples of social isolation were collected by anthropologists and clinicians. From time to time children had been discovered living in an attic or a cellar, with minimal social contact, perhaps spending only a few minutes a day with a caretaker, a nurse or a parent. Children so deprived in early childhood are often later found to be speechless and lacking in social responsiveness. It is difficult, however, to analyze exactly what went wrong with these children. One often does not know whether the child was severely retarded mentally from the beginning. In addition, one does not know the nature or degree of social isolation. But further information on isolation has been gained from studies of children reared in public institutions.
In a classic series of studies, the psychoanalyst Rene Spitz4—6 compared the development of infants raised in a foundling home for abandoned children with the development of infants raised in a nursing home attached to a women's prison. Both institutions were reasonably clean and provided adequate food and medical care. The babies in the nursing home were all cared for by their mothers. Because they were in prison and away from their families, the mothers tended to pour affection onto their infants in the time allotted each day. By contrast, in the foundling home, the infants were cared for by nurses, each of whom was responsible for seven infants. As a result, the children in the foundling home had much less contact with other human beings than did those in the nursing home. The two institutions also differed in another respect. In the nursing home the cribs were open, and the infants could readily watch the activity in the ward. They could see other babies playing and observe the mothers and staff going about their business. In the foundling home the bars of the cribs were covered with sheets that prevented the infants from seeing outside and thus dramatically reduced the sensory environment. In short, the children in the foundling home lived under conditions of sensory, as well as social, deprivation.
Spitz followed a group of infants at the two institutions from birth through their early years. At the end of the first four months of life the children in the foundling home scored better than those in the nursing home on a number of developmental indexes. This difference suggested to Spitz that genetic factors did not favor the infants in the nursing home. However, eight months later, at the end of the first year, the children in the foundling home had fallen far below those in the nursing home, and syndromes developed that Spitz, like Eckstein-Schlossmann before him,7 called "hospitalism" (now often called "anaclitic depression"). The children were withdrawn, they showed little curiosity or gaiety, and they were highly susceptible to infection. In the second and third years of life, when the children in the nursing home were walking and talking like family-reared children, the children in the foundling home were retarded in their development and showed slowed reactions to external stimuli. Only two of 26 children in the foundling home were able to walk, only these two spoke out at all, and even they could say only a few words. Normal children at this age are fairly agile, speak hundreds of words and can construct sentences.8
Although Spitz's studies have been criticized for their methodologic weakness,9 several aspects of the studies have been confirmed.10—13 For example, in a study of an orphanage in Teheran where social and sensory stimulation was minimal, Dennis12 found that 60 per cent of the two-year-olds were not capable of sitting up unassisted, and 85 per cent of the four-year-olds were not yet walking on their own. The studies of Spitz thus stand as a landmark; they define a paradigm that has since been studied repeatedly and profitably.
The next step was to develop an animal model of infant social isolation. This step was taken accidentally by Margaret and Harry Harlow, two psychologists working at the University of Wisconsin. In an attempt to raise a stock of sturdy and disease-free monkeys for experimental work, the Harlows separated the infant monkeys from their mothers a few hours after birth, to feed them a special formula and rear them with special hygienic precautions. The newborn monkeys were fed daily by remote control and observed through one-way mirrors. Monkeys reared in isolation for a year proved to be seriously impaired socially and psychologically. When returned to the monkey colony, an isolated monkey did not play with other monkeys, and its grooming and other social interactions were minimal. When attacked, the monkey did not defend itself. Much of its activity was self-directed and consisted of self-clasping, self-mouthing and self-mutilating acts, such as chewing on its fingers and toes. It also tended to crouch in a corner and rock back and forth in a manner reminiscent of autistic children. When these monkeys reached sexual maturity they did not mate, and several mature females that were artificially inseminated ignored their offspring. This profound social and psychologic damage resulted from only six months of total isolation during the first year of life. Comparable periods of isolation in later life had little effect on social behavior. These findings suggest that in monkeys, as in human beings, there is a critical period for social development.14—16
The Harlows next sought to determine what ingredients had to be introduced into the isolation experience to prevent the development of the isolation syndrome. They found that giving the isolated monkey a surrogate mother, a cloth-covered wooden dummy, elicited clinging behavior in the isolate but was insufficient to allow the emergence of normal social behavior. Social development occurred normally only if, in addition to a surrogate mother, the isolated monkey had contact, for a few hours each day, with a peer who spent the rest of its day in the monkey colony. Recently, Suomi and Harlow16 have found that the syndrome can sometimes be fully reversed by certain monkey psychotherapists—monkeys with certain specific characterologic traits. However, unlike the traits that Dr. Semrad nurtured in his residents, the characteristics of a successful monkey psychotherapist include an obstinate and truculent pursuit, an unmitigated insistence on continued interaction with the socially withdrawn monkey, until the isolate responds, after six months of "therapy," with an apparent flight into health—almost, as it were, out of desperation.
Even restricted sensory deprivation has dire consequences, which, once again, were first revealed through clinical studies. In 1932 von Senden summarized the literature on children born with congenital cataracts that were removed much later in life.17 The cataracts deprived these children of patterned visual experience but allowed them to see diffuse light. Tested after removal of the cataracts in the teen-age years or later, they could not discriminate patterns well. They learned readily to recognize color but had only a limited ability to discriminate forms. Some required months to distinguish a square from a circle. Some never learned to recognize people whom they saw daily.18
Similar results were later obtained in monkeys by Austin Riesen and his colleagues,19 who reared newborn chimpanzees in the dark. By three to four months of age, the normal chimpanzee readily learns to discriminate among visual stimuli and between friends and strangers. The infant chimpanzee recognizes and welcomes its caretaker but shows fear and avoidance of strangers. A chimpanzee reared in the dark for over a year and then restored to a normal environment does not learn readily to recognize and avoid objects and cannot discriminate vertical from horizontal lines. Only after weeks of living in a normal environment does the animal learn to distinguish friend from foe. These abnormal responses are not due simply to the absence of sensory stimulation early in life but are due to the absence of patterns of stimulation. A chimpanzee brought up with sensory stimulation in the form of an unbroken field of light, produced by enclosing the head in a translucent dome of plastic that permits normal intensity of stimulation without the contours of the normal visual environment, is just as blind as the animal reared in darkness. Thus, the development of normal perception—that is, the capacity to distinguish between objects in the visual world—requires exposure to patterned visual stimulation early in infancy.
How is this accomplished? Can we begin to relate the interaction between the perceptual environment and the brain during the critical period to the function of individual nerve cells? In an imaginative series of studies in newborn kittens and monkeys, Hubel and Wiesel20—23 examined the effects of visual deprivation on cellular responses in the primary visual (striate) cortex. They found that a normal adult monkey has good binocular interaction. Most cells in the cortex respond to an appropriate stimulus presented to either the left or right eye; only a small proportion respond exclusively to one eye or the other (F1 and F2). However, if a monkey is raised from birth to three months with one eyelid sutured closed, the animal will be permanently blind in that eye. Electrical recordings made from single nerve cells in the striate cortex after removal of the occluding sutures show that the affected eye has lost its ability to control cortical neurons. Only a very few cells can be driven from the deprived eye. Similar visual deprivation in an adult has no effect on vision.
Hubel and Wiesel next found that visual deprivation in newborn monkeys profoundly alters the organization of the ocular-dominance columns. Normally, the fibers from the lateral geniculate nucleus for each eye end in separate and alternating areas of the cortex, giving rise to equal-sized columns dominated alternately by one or the other eye (F3). The radioautographic data of Hubel and Wiesel show that after deprivation, the columns receiving input from the normal eye are much widened at the expense of those receiving input from the deprived eye (F3). As indicated in F3, these changes may occur because the geniculate cells that receive input from the closed eye regress and lose their connections with cortical cells, whereas the geniculate cells that receive input from the opened eye sprout and connect to cortical cells previously occupied by input from the other eye.
These studies have provided direct evidence that sensory deprivation early in life can alter the structure of the cerebral cortex. What I find particularly interesting is that Hubel and Wiesel had physiologic evidence for the effect of sensory deprivation in 1965. Using standard techniques, they failed at that time to find any evidence of structural changes in the cortex. Only in 1970, with the development of new radioautographic labeling techniques for mapping connections among neurons,24 were they able to demonstrate the disturbance anatomically. Thus, in a larger sense, their studies make us realize that we are just beginning to explore the structural organization of the brain and the alterations that may be caused by experience and by disease. It is no wonder that an understanding of the biologic basis of most forms of mental illness has been beyond our reach until now.
It will be interesting, in the future, to see whether social deprivation of the sort studied by Harlow leads to deterioration or distortion of connections in other areas of the brain.
The effect of patterning of environmental experience on brain function is, of course, not limited to early development. Sensory and social stimuli constantly impinge on the brain and produce consequences of varying intensity and duration. The most clear-cut and best understood of these consequences is learning. Learning is defined as a prolonged or even relatively permanent change in behavior that results from repeated exposure to a pattern of stimulation.25 I use learning as my second example of the effects of patterning because I believe that the mechanisms of learning represent a key to an understanding of character development and of the amelioration of characterologic disorders produced by psychotherapeutic intervention.
The ability to learn from experience is certainly the most remarkable aspect of human behavior. We are in many ways the embodiment of what we have learned. In man as well as other animals, most forms of behavior involve some aspects of learning and memory. Moreover, many psychologic and emotional problems are thought to be learned—that is, they are thought to result, at least in part, from experience. And insofar as psychotherapeutic intervention is successful in treating mental disorders, it presumably succeeds by creating an experience that allows people to change.
As in studies of social and sensory deprivation, the major questions in biologic studies of behavior and learning were first posed 70 years ago, but the ability to answer them was gained only recently. Here, as in investigations of the critical developmental period, this ability came with progressively simpler experimental systems. The most consistent progress has resulted from studies of two simple forms of nonassociative learning: habituation and sensitization. Each of these forms is evident in human beings but can also be explored effectively in a variety of simple animal models. I will first consider habituation.
Habituation, perhaps the simplest form of learning, is a decrease in a behavioral response resulting from repeated presentation of the initiating stimulus. A common example is the habituation of an "orienting response" to a new stimulus. When a novel stimulus such as a loud noise is presented for the first time, one's attention is immediately drawn to it, and one's heart rate and respiratory rate increase. If the same noise is repeated, one rapidly learns to recognize the sound and one's attention and bodily responses gradually diminish (that is why one can become accustomed to working in a noisy office). In this sense, habituation is learning to recognize and to ignore stimuli that have lost novelty or meaning. Besides being important in its own right, habituation is frequently involved in more complex learning, which includes not only acquiring new responses but also eliminating incorrect responses.
The first approach to an animal model of habituation was made by Sherrington in 1906.26 In the course of studying the behavior underlying posture and locomotion, he observed that habituation of certain reflex forms of behavior, such as the flexion withdrawal of a limb to stimulation of the skin, occurred with repeated stimulation, and that recovery occurred only after many seconds of rest. With characteristic prescience, Sherrington suggested that the habituation of the withdrawal reflex was due to a functional decrease in the effectiveness of the set of synapses through which the motor neurons for the behavior were repeatedly activated. This problem was subsequently reinvestigated by Spencer, Thompson and Neilson,27 who found close parallels between habituation of the spinal reflexes in the cat and habituation of more complex behavioral responses in man. Moreover, by recording intracellularly from motor neurons, Spencer and his colleagues began the modern study of habituation. They found, as Sherrington had suggested, that the depression of the behavior was due to a decrease in the synaptic drive converging onto the motor cells. However, the central synaptic pathways of the flexion-withdrawal reflex in the cat are complex, involving many as yet unspecified connections through interneurons. As a result, further analysis of habituation has required still simpler systems in which the behavioral response can be reduced to one or a series of monosynaptic connections.
My colleagues and I have extended the analyses of habituation and sensitization in studies of the marine snail Aplysia californica. This animal has a defensive withdrawal reflex of its respiratory organ, the gill, which is similar to the defensive reflexes of mammals, and habituation of this reflex shows all the features that characterize habituation in vertebrates, including man.28,29 Moreover, the wiring diagram of this behavior is remarkably simple, consisting of six identified motor neurons that mediate the behavior and a group of 24 sensory neurons that connect directly onto the motor neurons. There are also several interneurons that receive input from the sensory neurons and converge on the motor neurons (F4). Activity in a sensory neuron leads to release of a chemical transmitter substance that interacts with the receptors on the external membrane of the motor cell and reduces its membrane potential. If the membrane potential is reduced sufficiently, the motor cell will fire an action potential. The synaptically produced reduction in membrane potential is therefore called an excitatory synaptic potential.30 In response to the first stimulus, the sensory neurons produce large excitatory synaptic potentials in the motor cells, causing these cells to discharge rapidly and produce a brisk withdrawal. With habituation training the synaptic potential in the motor cell gradually becomes smaller; it produces fewer spikes, and the behavior is reduced. Finally, the synaptic potential becomes very small, at which point no behavior is produced. After a single training session involving 10 stimuli, the memory for this event (as evidenced by a reduced synaptic potential and behavior) is short, persisting for only minutes or hours. However, after four repeated training sessions spaced over consecutive days, the memory for habituation is prolonged, persisting for more than three weeks.
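For readers who find a computational sketch helpful, the relation just described between synaptic potentials and firing can be expressed as a toy calculation: each active sensory neuron contributes an excitatory synaptic potential to the motor cell, and the motor cell fires when the summed depolarization crosses threshold. The number of sensory neurons follows the text; the resting potential, threshold and EPSP size are illustrative assumptions, not values measured in Aplysia.

```python
# Toy sketch of the gill-withdrawal circuit: sensory neurons converge on a
# motor neuron, and their summed excitatory synaptic potentials (EPSPs)
# determine whether the motor cell fires.
# All voltage values are illustrative assumptions, not physiological data.

N_SENSORY = 24          # sensory neurons contacting the motor cell (per the text)
EPSP_MV = 1.2           # depolarization contributed by one sensory neuron (assumed)
REST_MV = -60.0         # resting membrane potential (assumed)
THRESHOLD_MV = -45.0    # firing threshold (assumed)

def motor_response(active_sensory_neurons: int) -> bool:
    """True if the summed EPSPs depolarize the motor cell past threshold."""
    membrane_mv = REST_MV + active_sensory_neurons * EPSP_MV
    return membrane_mv >= THRESHOLD_MV

# A strong stimulus recruiting many sensory neurons fires the motor cell
# and produces withdrawal; a weak stimulus does not.
print(motor_response(20))   # strong stimulus: True
print(motor_response(5))    # weak stimulus: False
```

The point of the sketch is only the summation logic: anything that shrinks the individual EPSPs, as habituation does, moves the motor cell away from threshold and weakens the behavior.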
The critical change underlying short-term habituation occurs at the excitatory chemical synapses that the sensory neurons make on the motor neurons. With repeated stimulation, these synapses become less effective functionally because they release progressively less transmitter. Transmitter release depends on the influx of calcium into the terminals with each action potential. Analyses of the mechanisms that produce habituation indicate that the reduced output of neurotransmitter and the resultant depression of synaptic transmission are caused by a prolonged decrease in calcium influx.31
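This mechanism can be sketched numerically under two simplifying assumptions: that transmitter release (and hence EPSP amplitude) is proportional to calcium influx, and that each stimulus depresses the influx by a fixed fraction. The depression factor and the initial EPSP amplitude below are invented for illustration; only the direction of the effect comes from the experiments.

```python
# Minimal sketch of short-term habituation as presynaptic depression:
# each action potential admits less calcium than the last, so progressively
# less transmitter is released and the synaptic potential shrinks.
# The depression factor is an illustrative assumption, not a measured value.

CA_DEPRESSION = 0.75   # fraction of calcium influx remaining after each stimulus

def habituation_series(n_stimuli: int, initial_epsp_mv: float = 10.0) -> list:
    """EPSP amplitudes over one session, assuming release tracks calcium influx."""
    ca_influx = 1.0            # normalized calcium influx per action potential
    epsps = []
    for _ in range(n_stimuli):
        epsps.append(initial_epsp_mv * ca_influx)
        ca_influx *= CA_DEPRESSION   # prolonged decrease in calcium influx
    return epsps

# Ten stimuli, as in one training session described in the text:
for trial, epsp in enumerate(habituation_series(10), start=1):
    print(f"stimulus {trial:2d}: EPSP = {epsp:.2f} mV")
```

Run with ten stimuli, the series falls monotonically, mirroring the shrinking synaptic potential and the fading behavior over a training session.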
What are the limits of plasticity? How much can the effectiveness of a given synapse change, and how long can such a change endure? Can long-term habituation produce a complete and prolonged inactivation of a previously functioning synapse? In an effort to answer these questions, the connections between the sensory neurons and a given motor neuron were compared in control animals and animals examined after the acquisition of long-term habituation.32 In the control animals, 90 per cent of sensory neurons produced detectable connections to the major motor cells (F5). By contrast, after long-term habituation, only 30 per cent of the sensory neurons produced detectable connections onto the motor cell, and this effect lasted for over a week; these connections were only partially restored at three weeks. Thus, fully functioning synaptic connections were inactivated for over a week as a result of a simple learning experience—several brief sessions of habituation training of 10 trials each.
Thus, whereas short-term habituation involves a transient decrease in synaptic efficacy, long-term habituation leads to prolonged and profound functional inactivation of a previously existing connection. These data provide direct evidence that long-term change in synaptic efficacy can underlie a specific instance of long-term memory. Moreover, at a critical synapse such as this one, relatively few stimuli produce long-term synaptic depression.
Sensitization, the opposite of habituation, is the process whereby an animal learns to increase a given reflex response as a result of a noxious or novel stimulus. Thus, sensitization requires the animal to attend to stimuli that potentially produce painful or dangerous consequences. Like habituation, sensitization can last from minutes to days and weeks, depending on the pattern of stimulation.33 In this discussion, I will focus on the short-term form.
At the cellular level, sensitization also involves altered transmission at the synapses made by the sensory neurons on their central target cells. Specifically, sensitization involves a mechanism called presynaptic facilitation, whereby the neurons mediating sensitization end on the terminals of the sensory neurons and enhance their ability to release transmitter (F6). Thus, the same synaptic locus is regulated in opposite ways by opposing forms of learning: it is depressed by habituation and enhanced by sensitization. The transmitter released by the neurons that mediate presynaptic facilitation (which is thought to be serotonin) acts on the terminals of the sensory neurons to increase the level of cyclic AMP. Cyclic AMP, in turn, acts (perhaps through phosphorylation of a membrane channel) to increase calcium influx and thereby enhance transmitter release (F7).31,34—38
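Keeping the simplifying assumption that transmitter release varies in proportion to calcium influx, presynaptic facilitation can be sketched as a multiplicative, cyclic AMP-dependent gain on that influx at the same terminal. The particular gain values below are invented for illustration; the sketch captures only the opposing directions of regulation.

```python
# Sketch of presynaptic facilitation: a facilitating (serotonergic) input
# raises cyclic AMP in the sensory terminal, which increases calcium influx
# and hence transmitter release at the very synapse depressed by habituation.
# All numerical values are illustrative assumptions, not data.

def epsp_mv(ca_influx: float, camp_gain: float, scale_mv: float = 10.0) -> float:
    """EPSP amplitude, assuming release tracks calcium influx times a cAMP gain."""
    return scale_mv * ca_influx * camp_gain

ca = 0.3       # calcium influx depressed by prior habituation training (assumed)
gain = 1.0     # baseline cAMP-dependent gain

print(epsp_mv(ca, gain))       # depressed synapse: small EPSP

# A noxious (sensitizing) stimulus activates the facilitating neurons,
# raising cyclic AMP and with it the calcium influx per action potential:
gain *= 4.0                    # illustrative facilitation factor
print(epsp_mv(ca, gain))       # facilitated synapse: enlarged EPSP
```

The same synaptic locus is thus pushed in opposite directions by the two forms of learning: habituation lowers the effective calcium influx, sensitization raises the gain on it.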
How effective a restoring force is sensitization? Can it restore the completely inactivated synaptic connections produced by long-term habituation? We have found that sensitization not only reversed the depressed behavior but restored the effectiveness of synapses that had been functionally disconnected and would have remained so for over a week (F6).39
Thus, in these simple instances, learning does not involve a dramatic anatomic rearrangement in the nervous system. No nerve cells or even synapses are created or destroyed. Rather, learning of habituation and sensitization changes the functional effectiveness of previously existing chemical synaptic connections and, in these instances, does so simply by modulating calcium influx in the presynaptic terminals. Thus, a new dimension is introduced in thinking about the brain. These complex pathways, which are genetically determined, appear to be interrupted not by disease but by experience, and they can also be restored by experience.
The finding that dramatic and enduring alterations in the effectiveness of connections result from sensory deprivation and learning leads to a new way of viewing the relation between social and biologic processes in the generation of behavior. There is a tendency in psychiatry to think that biologic determinants of behavior act on a different "level of the mind" than do social and functional determinants. For example, it is still customary to classify psychiatric illnesses into two major categories: organic and functional. The organic mental illnesses include the dementias and the toxic psychoses; the functional illnesses include the various depressive syndromes, the schizophrenias and the neuroses. This distinction stems from studies in the 19th century, when neuropathologists examined the brains of patients at autopsy and found a disturbance in brain architecture in some diseases and a lack of disturbance in others. The diseases that produced clear (gross) evidence of brain lesions were called organic, and those that lacked these features were called functional. Studies of the critical developmental period and of learning have shown that this distinction is artificial. Sensory deprivation and learning have profound biologic consequences, causing effective disruption of synaptic connections under some circumstances and reactivation of connections under others. Instead of distinguishing between mental disorders along biologic and nonbiologic lines, it might be more appropriate to ask, in each type of mental illness, to what degree is this biologic process determined by genetic and developmental factors, to what degree is it due to infectious or toxic agents, and to what degree is it socially determined? In each case, even in the most socially determined neurotic illness, the end result is biologic. Ultimately, all psychologic disturbances reflect specific alterations in neuronal and synaptic function. 
And insofar as psychotherapy works, it works by acting on brain functions: not, perhaps, on single synapses, but on synapses nevertheless. Clearly, a shift is needed from a neuropathology based only on structure to one based also on function.
Cellular studies of the critical stages of development and of learning have shown that genetic and developmental processes determine the connections between neurons; what they leave unspecified is the strength of the connections. It is this factor—the long-term efficacy of synaptic connections—that is played on by environmental effects such as learning. What learning accomplishes in the instances so far studied is to alter the effectiveness of preexisting pathways, thereby leading to the expression of new patterns of behavior. As a result, when I speak to someone and he or she listens to me, we not only make eye contact and voice contact but the action of the neuronal machinery in my brain is having a direct and, I hope, long-lasting effect on the neuronal machinery in his or her brain, and vice versa. Indeed, I would argue that it is only insofar as our words produce changes in each other's brains that psychotherapeutic intervention produces changes in patients' minds. From this perspective the biologic and psychologic approaches are joined. I would hope that the deep-seated dualism that once caused psychiatry and neurobiology to split into hard-nosed and soft-nosed attitudes will prove to be only a transient interlude in the history of psychiatry. Certainly, in their day, Meynert, Wagner-Jauregg and Freud had little difficulty in appreciating philosophically what my residency cohort and I lost sight of and what we can now again assert, perhaps with slightly more sophistication: what we conceive of as our mind is an expression of the functioning of our brain.