Prosody and semantics are separate but not separable channels in the perception of emotional speech: Test for Rating of Emotions in Speech (T-RES)
Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. To this end, we introduce a novel tool, the Test for Rating of Emotions in Speech (T-RES). Eighty native English speakers were presented with spoken sentences conveying different combinations of five discrete emotions (anger, fear, happiness, sadness, and neutral) in the prosody and the semantics. Listeners were asked either to rate each sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody/semantics).
Our results show three main trends: Supremacy of congruency — sentences presenting the same emotion in both speech channels were rated highest; Failure of selective attention — listeners were unable to selectively attend to one channel even when instructed to do so; Prosodic dominance — prosodic information plays a larger role than semantics in the processing of emotional speech.
Conclusions: Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. The findings indicate that the T-RES can reveal specific aspects of the processing of emotional speech and may prove useful in future work for understanding deficits in emotion processing in pathological populations.