Biomedical Imaging Group, STI

Decoding of Emotional Information in Voice-Sensitive Cortices

T. Ethofer, D. Van De Ville, K. Scherer, P. Vuilleumier

Current Biology, vol. 19, no. 12, pp. 1028-1033, June 23, 2009.

The ability to correctly interpret emotional signals from others is crucial for successful social interaction. Previous neuroimaging studies showed that voice-sensitive auditory areas [1, 2, 3] activate to a broad spectrum of vocally expressed emotions more than to neutral speech melody (prosody). However, this enhanced response occurs irrespective of the specific emotion category, making it impossible to distinguish different vocal emotions with conventional analyses [4, 5, 6, 7, 8]. Here, we presented pseudowords spoken in five prosodic categories (anger, sadness, neutral, relief, joy) during event-related functional magnetic resonance imaging (fMRI), then employed multivariate pattern analysis [9, 10] to discriminate between these categories on the basis of the spatial response pattern within the auditory cortex. Our results demonstrate successful decoding of vocal emotions from fMRI responses in bilateral voice-sensitive areas, which could not be obtained by using averaged response amplitudes only. Pairwise comparisons showed that each category could be classified against all other alternatives, indicating for each emotion a specific spatial signature that generalized across speakers. These results demonstrate for the first time that emotional information is represented by distinct spatial patterns that can be decoded from brain activity in modality-specific cortical areas.
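The core of the method described above is multivariate pattern analysis: rather than comparing averaged response amplitudes, a classifier is trained on the spatial pattern of voxel responses and tested on held-out trials, with pairwise comparisons between emotion categories. The sketch below illustrates this idea on synthetic data; the voxel counts, trial counts, and the correlation-based nearest-mean decoder are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of pairwise MVPA decoding on synthetic "fMRI" patterns.
# All data and parameters here are hypothetical, chosen only to show the
# logic: each emotion category has its own weak spatial signature buried
# in noise, and a simple decoder recovers it from multi-voxel patterns.
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 100       # voxels in a hypothetical voice-sensitive ROI
TRIALS_PER_CAT = 40  # trials per prosodic category
CATEGORIES = ["anger", "sadness", "neutral", "relief", "joy"]

# Each category gets a fixed spatial signature; trials add Gaussian noise.
signatures = {c: rng.normal(0, 1, N_VOXELS) for c in CATEGORIES}
data = {
    c: signatures[c] * 0.5 + rng.normal(0, 1, (TRIALS_PER_CAT, N_VOXELS))
    for c in CATEGORIES
}

def nearest_mean_classify(train_a, train_b, test):
    """Correlation-based nearest-mean classifier (a simple MVPA decoder):
    assign each test pattern to the class whose mean training pattern it
    correlates with most strongly."""
    mean_a, mean_b = train_a.mean(axis=0), train_b.mean(axis=0)
    preds = []
    for pattern in test:
        r_a = np.corrcoef(pattern, mean_a)[0, 1]
        r_b = np.corrcoef(pattern, mean_b)[0, 1]
        preds.append(0 if r_a >= r_b else 1)
    return np.array(preds)

def pairwise_accuracy(a, b, n_folds=5):
    """Cross-validated decoding accuracy for one pair of categories."""
    X = np.vstack([data[a], data[b]])
    y = np.array([0] * TRIALS_PER_CAT + [1] * TRIALS_PER_CAT)
    idx = rng.permutation(len(y))
    correct = 0
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        preds = nearest_mean_classify(
            X[train][y[train] == 0], X[train][y[train] == 1], X[fold])
        correct += (preds == y[fold]).sum()
    return correct / len(y)

# Pairwise comparisons: each pair should decode well above chance (0.5)
# whenever category-specific spatial patterns exist.
for i, a in enumerate(CATEGORIES):
    for b in CATEGORIES[i + 1:]:
        print(f"{a} vs {b}: {pairwise_accuracy(a, b):.2f}")
```

The pairwise setup mirrors the analysis in the abstract: showing that every category can be classified against every alternative is stronger evidence for category-specific spatial signatures than a single multi-class accuracy score.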


  1. P. Belin, R.J. Zatorre, P. Lafaille, P. Ahad, B. Pike, "Voice-Selective Areas in Human Auditory Cortex," Nature, vol. 403, no. 6767, pp. 309-312, January 20, 2000.

  2. C. Kayser, T. Steudel, K. Whittingstall, M. Augath, N.K. Logothetis, C.I. Petkov, "A Voice Region in the Monkey Brain," Nature Neuroscience, vol. 11, no. 3, pp. 367-374, February 10, 2008.

  3. T. Zähle, E. Geiser, K. Alter, L. Jancke, M. Meyer, "Segmental Processing in the Human Auditory Dorsal Stream," Brain Research, vol. 1220, pp. 179-190, July 18, 2008.

  4. S.A. Kotz, M. Meyer, K. Alter, M. Besson, D.Y. von Cramon, A.D. Friederici, "On the Lateralization of Emotional Prosody: An Event-Related functional MR Investigation," Brain and Language, vol. 86, no. 3, pp. 366-376, September 2003.

  5. D. Sander, G. Pourtois, S. Schwartz, M.L. Seghier, K.R. Scherer, P. Vuilleumier, D. Grandjean, "The Voices of Wrath: Brain Responses to Angry Prosody in Meaningless Speech," Nature Neuroscience, vol. 8, no. 2, pp. 145-146, January 23, 2005.

  6. T. Ethofer, S. Anders, S. Wiethoff, M. Erb, C. Herbert, R. Saur, W. Grodd, D. Wildgruber, "Effects of Prosodic Emotional Intensity on Activation of Associative Auditory Cortex," NeuroReport, vol. 17, no. 3, pp. 249-253, February 27, 2006.

  7. T. Ethofer, S. Wiethoff, S. Anders, B. Kreifelts, W. Grodd, D. Wildgruber, "The Voices of Seduction: Cross-Gender Effects in Processing of Erotic Prosody," Social Cognitive and Affective Neuroscience, vol. 2, no. 4, pp. 334-337, December 2007.

  8. S. Wiethoff, D. Wildgruber, B. Kreifelts, H. Becker, C. Herbert, W. Grodd, T. Ethofer, "Cerebral Processing of Emotional Prosody—Influence of Acoustic Parameters and Arousal," NeuroImage, vol. 39, no. 2, pp. 885-893, January 15, 2008.

  9. J.-D. Haynes, G. Rees, "Decoding Mental States from Brain Activity in Humans," Nature Reviews Neuroscience, vol. 7, no. 7, pp. 523-534, July 2006.

  10. K.A. Norman, S.M. Polyn, G.J. Detre, J.V. Haxby, "Beyond Mind-Reading: Multi-Voxel Pattern Analysis of fMRI Data," Trends in Cognitive Sciences, vol. 10, no. 9, pp. 424-430, September 2006.

AUTHOR="Ethofer, T. and Van De Ville, D. and Scherer, K. and
        Vuilleumier, P.",
TITLE="Decoding of Emotional Information in Voice-Sensitive Cortices",
JOURNAL="Current Biology",
YEAR="2009",
volume="19",
number="12",
pages="1028--1033",
month="June 23,",

© 2009 Elsevier. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from Elsevier.
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.