
Clinical and experimental evidence suggests that the left hemisphere of the brain is specialized for speech activity and the right hemisphere is specialized for many nonlinguistic functions. Jackson 1 related the hemispheric linguistic differences to differences in cognitive activity, suggesting that the left hemisphere is specialized for analytical organization, while the right hemisphere is adapted for direct associations among stimuli and responses. Modern researchers have substantially generalized this differentiation to encompass a wide range of behaviors in normal subjects. 2, 3

Experimental 4–6 and clinical 7, 8 investigators of hemispheric asymmetry appear to agree on the fundamental nature of the processing differences between the two sides of the brain: the left hemisphere is specialized for propositional, analytic, and serial processing of incoming information, while the right hemisphere is more adapted for the perception of appositional, holistic, and synthetic relations.

Up to now, the perception of music has been a well-documented exception to this differentiation. Melodies are composed of an ordered series of pitches, and hence should be processed by the left hemisphere rather than the right. Yet the recognition of simple melodies has been reported to be better in the left ear than the right. 9 , 10 This finding is prima facie evidence against the functional differentiation of the hemispheres proposed by Jackson; rather, it seems to support the view that the hemispheres are specialized according to stimulus-response modality, with speech in the left, vision and music in the right, and so forth. 10 , 11 In this report we present evidence that such conclusions are simplistic since they do not consider the different kinds of processing strategies that listeners use as a function of their musical experience. 12

Psychological and musicological analysis of processing strategies resolves the difficulty for a general theory of hemispheric differentiation posed by music perception. It has long been recognized that the perception of melodies can be a gestalt phenomenon. That is, the fact that a melody is composed of a series of isolated tones is not relevant for naive listeners—rather, they focus on the overall melodic contour. 13 The view that musically experienced listeners have learned to perceive a melody as an articulated set of relations among components rather than as a whole is suggested directly by Werner: 14, p. 54 “In advanced musical apprehension a melody is understood to be made up of single tonal motifs and tones which are distinct elements of the whole construction.” This is consistent with Meyer’s 15 view that recognition of “meaning” in music is a function not only of perception of whole melodic forms but also of concurrent appreciation of the way in which the analyzable components of the whole forms are combined. If a melody is normally treated as a gestalt by musically naive listeners, then the functional account of the differences between the two hemispheres predicts that melodies will be processed predominantly in the right hemisphere for such subjects. It is significant that the investigator who failed to find a superiority of the left ear for melody recognition used college musicians as subjects; 16 the subjects in other studies were musically naive (or unclassified).

If music perception is dominant in the right hemisphere only insofar as musical form is treated holistically by naive listeners, then the generalization of Jackson’s proposals about the differential functioning of the two hemispheres can be maintained. To establish this we conducted a study with subjects of varied levels of musical sophistication that required them to attend to both the internal structure of a tone sequence and its overall melodic contour.

We found that musically sophisticated listeners could accurately recognize isolated excerpts from a tone sequence, whereas musically naive listeners could not. However, musically naive people could recognize the entire tone sequences, and did so better when the stimuli were presented in the left ear; musically experienced people recognized the entire sequence better in the right ear. This is the first demonstration of the superiority of the right ear for music and shows that it depends on the listener’s being experienced; it explains the previously reported superiority of the left ear as being due to the use of musically naive subjects, who treat simple melodies as unanalyzed wholes. It is also the first report of ear differences for melodies with monaural stimulation.

We recruited two groups of right-handed subjects 17 (15 to 30 years old) from the New York area. Fourteen were musically naive listeners, who had had less than 3 years of music lessons, ending at least 5 years before the study; 22 were musically experienced (but nonprofessional) listeners, who had had at least 4 years of music lessons and were currently playing or singing. Each group of subjects was balanced for sex.

The listener’s task is outlined in Fig. 1 . The two-note excerpt recognition task provided a measure of whether the listener could analyze the internal structure of a melody. The sequence recognition task provided a measure of the listener’s ability to discriminate the entire configuration of the tone sequence. Each listener responded to a set of 36 tonal melodies ranging in length from 12 to 18 notes, and a parallel set of materials in which the tone sequences were a rearrangement of the notes in each melody so that the melodic line was disrupted somewhat. A well-tempered 1½-octave scale was used (starting from the note C with a frequency of 256 hertz). Each tone in a melodic sequence was exactly 300 msec long, and was equal in intensity to the other tones. Two seconds after each stimulus melody there was a two-note excerpt; three-fourths of the excerpts were drawn from the stimulus sequence, one-fourth were not. One-fourth of the melodies reoccurred as later stimuli—as the next stimulus, two stimuli later, or three stimuli later.
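The well-tempered scale used for the stimuli can be reproduced numerically. The sketch below is our own construction, not the authors' materials: it assumes standard equal temperament (each semitone multiplies frequency by 2^(1/12)) and takes "1½ octaves" to mean 19 tones (18 semitone steps) above the stated base of C = 256 hertz.

```python
# Illustrative reconstruction of the stimulus scale described in the text:
# an equal-tempered 1.5-octave scale starting at C = 256 Hz, each tone
# lasting exactly 300 msec. The tone count (19) is our assumption.

BASE_FREQ_HZ = 256.0   # starting note C, as stated in the text
TONE_MSEC = 300        # duration of every tone in a sequence
N_TONES = 19           # 18 semitone steps = 1.5 octaves, inclusive of the base

def scale_frequencies(base=BASE_FREQ_HZ, n=N_TONES):
    """Equal-tempered frequencies: each semitone multiplies by 2**(1/12)."""
    return [round(base * 2 ** (k / 12), 2) for k in range(n)]

freqs = scale_frequencies()
print(freqs[0], freqs[12])  # 256.0 at the base, 512.0 one octave up
```

Note that each 12th step exactly doubles the frequency, so the octave above C 256 falls at 512 hertz, consistent with the "well-tempered" description.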

FIGURE 1. Experimental procedure for each trial. Each subject heard and responded in 72 such trials, in two blocks of 36.

Subjects were asked to listen to each stimulus sequence, to write down whether the following two-note excerpt was in the stimulus sequence, and then to write down whether they had heard the sequence before in the experiment. The stimuli were played over earphones at a comfortable listening level, either all to the right ear or all to the left ear for each subject. One-half of the subjects in each group heard the 36 melodic sequences first, and then the 36 rearranged sequences, with a rest period between the groups. Before each set of materials there was a recorded set of instructions which included four practice stimuli.

The musically experienced subjects discriminated the presence of the two-note excerpts in both ears (see Table 1) [ P < .01 across subjects and across stimuli, on scores corrected for guessing 18 ]. No significant differences occurred according to whether the sequence was melodic or rearranged. The musically naive subjects did not discriminate the excerpts in either ear.

All groups of subjects successfully discriminated instances when a sequence was a repetition from instances when it was not. However, this discrimination was better in the right ear for experienced listeners ( P < .01 across subjects and P < .05 across stimuli) and better in the left ear for inexperienced listeners ( P < .025 across subjects and P < .001 across stimuli). These differences were numerically consistent for both melodic and rearranged sequences. Most of the differences between naive and experienced listeners can be attributed to the superior performance of the right ear in experienced listeners ( P < .025 across subjects and P < .025 across stimuli); performance in the left ear does not differ significantly between the two groups of subjects.
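The by-subject comparisons reported here rest on the Fisher exact test described in note 18. As a minimal sketch of how such a test works, the pure-Python function below computes a two-tailed Fisher exact p-value for a 2 × 2 table; the counts used in the example are invented placeholders, not the study's data.

```python
import math

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table."""
    def prob(x):
        # Probability of the same-margin table with x in the top-left cell.
        return (math.comb(a + b, x) * math.comb(c + d, (a + c) - x)
                / math.comb(a + b + c + d, a + c))
    p_obs = prob(a)
    lo = max(0, (a + c) - (c + d))   # smallest feasible top-left count
    hi = min(a + b, a + c)           # largest feasible top-left count
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Placeholder table: rows = listener groups, columns = count of listeners
# whose right (vs. left) ear score was higher. Not the study's counts.
p = fisher_exact_two_tailed(9, 2, 3, 8)
print(round(p, 4))  # about 0.03 for these placeholder counts
```

Such a test is appropriate here because each subject contributes a single categorical outcome (which ear was better), and the group sizes are small.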

Confirming the results of previous studies, the musically naive subjects have a left ear superiority for melody recognition. However, the subjects who are musically sophisticated have a right ear superiority. Our interpretation is that musically sophisticated subjects can organize a melodic sequence in terms of the internal relation of its components. This is supported by the fact that only the experienced listeners could accurately recognize the two-note excerpts as part of the complete stimuli. Dominance of the left hemisphere for such analytic functions would explain dominance of the right ear for melody recognition in experienced listeners: as their capacity for musical analysis increases, the left hemisphere becomes increasingly involved in the processing of music. This raises the possibility that being musically sophisticated has real neurological concomitants, permitting the utilization of a different strategy of musical apprehension that calls on left hemisphere functions.

We did not find a significant right ear superiority in excerpt recognition among experienced listeners. This may be due to the overall difficulty of the task and the insensitivity of excerpt recognition as a response measure. Support for this interpretation comes from a more recent study in which we compared the response time for excerpt recognition in boys aged 9 to 13 who sing in a church choir 19 with the response time in musically naive boys. In this study, recognition accuracy did not differ by ear, but response times were faster in the right ear than the left for choirboys. Furthermore, the relative superiority of the right ear in choirboys compared with other boys of the same age increased progressively with experience in the choir.

In sum, our subjects have demonstrated that it is the kind of processing applied to a musical stimulus that can determine which hemisphere is dominant. This means that music perception is now consistent with the generalization suggested initially by Jackson that the left hemisphere is specialized for internal stimulus analysis and the right hemisphere for holistic processing.

Department of Psychology, Columbia University, New York 10027

3 December 1973; revised 5 February 1974

References

1. J. Taylor, Ed., Selected Writings of John Hughlings Jackson (Hodder & Stoughton, London, 1932), vol. 2, p. 130 ff.

2. J. Levy, Nature (Lond.) 224, 614 (1969); R. Ornstein, The Psychology of Consciousness (Viking, New York, 1973); J. Semmes, Neuropsychologia 6, 11 (1968).

3. B. Milner, Br. Med. Bull. 27, 272 (1971).

4. Perception of patterns: D. Kimura, Neuropsychologia 4, 273 (1966). Letter arrays: G. Cohen, J. Exp. Psychol. 97, 349 (1973). Face recognition: J. Levy et al. (5); G. Rizzolatti, C. Umilta, G. Berlucchi, Brain 94, 431 (1971); G. Geffen, J. L. Bradshaw, G. Wallace, J. Exp. Psychol. 87, 415 (1971). Spatial configurations: D. Kimura, Can. J. Psychol. 23, 445 (1969); M. Durnford and D. Kimura, Nature (Lond.) 231, 394 (1971). Chords: H. W. Gordon (6); D. Molfese, paper presented at the 84th meeting of the Acoustical Society of America, Miami Beach, Florida, 1 December 1972. Environmental sounds: F. L. King and D. Kimura, Can. J. Psychol. 26, 2 (1972). Pitch and intensity: D. C. Doehring, ibid., p. 106. Emotional tone of voice: M. P. Haggard, Q. J. Exp. Psychol. 23, 168 (1971). Also, recalled words ordered in sentences show right ear dominance, and unordered word strings do not: D. Bakker, Cortex 5, 36 (1969); T. G. Bever, in Biological and Social Factors in Psycholinguistics, J. Morton, Ed. (Univ. of Illinois Press, Urbana, 1971); A. Frankfurther and R. P. Honeck, Q. J. Exp. Psychol. 25, 138 (1973).

5. J. Levy, C. Trevarthen, R. W. Sperry, Brain 95, 61 (1972).

6. H. W. Gordon, Cortex 6, 387 (1970).

7. D. Shankweiler, J. Comp. Physiol. Psychol. 62, 115 (1966); M. S. Gazzaniga and R. W. Sperry, Brain 90, 131 (1967); J. E. Bogen, Bull. Los Ang. Neurol. Soc. 34, 135 (19XX); J. Levy-Agresti and R. W. Sperry, Proc. Natl. Acad. Sci. U.S.A. 61, 1151 (1968); R. D. Nebes, thesis, California Institute of Technology (1970); Cortex 4, 333 (1971); B. Milner and L. Taylor, Neuropsychologia 10, 1 (1972); J. Bogen, in Drugs and Cerebral Function, W. L. Smith, Ed. (Thomas, Springfield, Ill., 1972), pp. 36–37.

8. B. Milner, in Interhemispheric Relations and Cerebral Dominance, V. B. Mountcastle, Ed. (Johns Hopkins Univ. Press, Baltimore, 1961).

9. We follow the common assumption that contralateral hemisphere-periphery neurological connections are dominant over ipsilateral connections; that is, the left hemisphere is functionally connected to the right ear, and the right hemisphere is functionally connected to the left ear [D. Kimura, Q. J. Exp. Psychol. 16, 355 (1964); C. F. Darwin, ibid. 23, 46 (1971); F. J. Spellacy and S. Blumstein, J. Acoust. Soc. Am. 49, 87 (19XX); O. Spreen, F. Spellacy, J. Reid, Neuropsychologia 8, 243 (1970); D. Kimura (10)]. See also J. Bogen and H. Gordon [Nature (Lond.) 230, 524 (1971)] for clinical evidence for the involvement of right hemisphere functioning in singing.

10. D. Kimura, Cortex 3, 163 (1967).

11. This modality view is explored by D. Kimura (10); Sci. Am. 229, 70 (March 1973).

12. For a similar differentiation of hemispheric function in vision and language, see J. Levy et al. (5) and B. Milner (3).

13. Melody perception is a classic gestalt demonstration [C. von Ehrenfels, Vierteljahrsschr. Wiss. Philos. (1890), vol. 14; Z. Angew. Psychol. 26, 101 (1926); H. Meissner, Zur Entwicklung Des Musikalischen Sinns Beim Kind Waehrend Des Schulalters (Trorvitzsch, Berlin, 1914); F. Brehmer, Beih. Z. Angew. Psychol. (1925), pp. 36 and 37; H. Werner, J. Psychol. 10, 149 (1940)]. For recent investigations, see: W. J. Dowling, Percept. Psychophys. 9, 348 (1971); D. Deutsch, ibid. 11, 411 (1972).

14. H. Werner, Comparative Psychology of Mental Development (International Universities Press, New York, 1948).

15. L. Meyer, Emotion and Meaning in Music (Univ. of Chicago Press, Chicago, 1956).

16. H. W. Gordon (6). The subjects in this study were probably intermediate in musical sophistication; accordingly, they did not show a consistent left or right ear superiority. We would expect individual differences in such a population to be quite large.

17. Right-handedness was checked by a modified questionnaire from H. Hecaen and J. Ajuriaguerra, Left-Handedness: Manual Superiority and Cerebral Dominance (Grune & Stratton, New York, 1964).

18. The formula used was [True positives (%) − False positives (%)] / [1 − False positives (%)]. The results are tested nonparametrically across subjects and stimuli separately for reasons outlined by H. Clark [J. Verb. Learn. Verb. Behav. 12, 4 (1973)]. In each case, the by-subject test is a Fisher exact test, and the by-stimulus test is a Wilcoxon matched-pairs, signed-ranks, two-tailed test. There were no significant differences between ears in guessing rates by either measure.
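On our reading, the correction for guessing in this note is the standard one: corrected score = (true-positive rate − false-positive rate) / (1 − false-positive rate). A minimal sketch, with rates expressed as proportions rather than percentages (an equivalent rescaling), and invented example values:

```python
# Sketch of the guessing correction described in note 18 (our reading,
# not the authors' code). Inputs are proportions in [0, 1].

def corrected_score(true_pos: float, false_pos: float) -> float:
    """Recognition score corrected for guessing:
    (hit rate - false-alarm rate) / (1 - false-alarm rate)."""
    if false_pos >= 1.0:
        raise ValueError("false-positive rate must be below 1")
    return (true_pos - false_pos) / (1.0 - false_pos)

# Invented example: 80% hits with a 20% false-alarm rate.
print(round(corrected_score(0.8, 0.2), 6))  # 0.75
```

The correction discounts the portion of hits attributable to a listener's bias toward answering "yes": a subject who said "yes" to everything would score 0, not 100%.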

19. The 20 choirboys were in the choir of the Cathedral of Saint John the Divine in New York City. The choir is of professional quality: the boys sing and rehearse about 14 hours a week. The nonchoir, nonmusical boys were drawn from the same school (the Cathedral School) and matched the choirboys in age and school grade (T. Bever, R. Chiarello, L. Kellar, in preparation).

20. We thank A. Handel of Columbia University, J. Barlow and A. Strong of Wesleyan University, and S. Neff of Barnard College for their assistance. Supported by grants from the Grant Foundation and the National Institutes of Health.