Previous research has documented perceptual and neural differences between spontaneous and volitional emotional vocalizations. However, the time course of emotional authenticity processing remains unclear. We used event-related potentials (ERPs) to address this question, focusing on the processing of laughter and crying. We additionally tested whether the neural encoding of authenticity is influenced by attention, by manipulating task focus (authenticity versus emotional category) and visual condition (with versus without visual deprivation). ERPs were recorded from 43 participants while they listened to vocalizations and evaluated either their authenticity (volitional versus spontaneous) or their emotional meaning (sad versus amused). Twenty-two participants were blindfolded and tested in a dark room; the remaining 21 were tested under standard visual conditions. Compared to volitional vocalizations, spontaneous ones were associated with reduced N1 amplitude in the case of laughter and increased P2 amplitude in the case of crying. At later cognitive processing stages, more positive amplitudes were observed for spontaneous (versus volitional) laughs and cries (1000-1400 msec), with earlier effects for laughs (700-1000 msec). Visual condition affected brain responses to emotional authenticity at both early (P2 range) and late processing stages (middle and late LPP ranges). Task focus did not influence neural responses to authenticity. Our findings suggest that authenticity information is encoded early and automatically during vocal emotional processing. They also point to a potentially faster encoding of authenticity in laughter than in crying.