TY - JOUR
T1 - The identification of affective-prosodic stimuli by left- and right-hemisphere-damaged subjects
T2 - All errors are not created equal
AU - Van Lancker, D.
AU - Sidtis, J. J.
PY - 1992
Y1 - 1992
N2 - Impairments in listening tasks that require subjects to match affective-prosodic speech utterances with appropriate facial expressions have been reported after both left- and right-hemisphere damage. In the present study, both left- and right-hemisphere-damaged patients were found to perform poorly compared to a nondamaged control group on a typical affective-prosodic listening task using four emotional types (happy, sad, angry, surprised). To determine if the two brain-damaged groups were exhibiting a similar pattern of performance with respect to their use of acoustic cues, the 16 stimulus utterances were analyzed acoustically, and the results were incorporated into an analysis of the errors made by the patients. A discriminant function analysis using acoustic cues alone indicated that fundamental frequency (F0) variability, mean F0, and syllable durations most successfully distinguished the four emotional sentence types. A similar analysis that incorporated the misclassifications made by the patients revealed that the left-hemisphere-damaged and right-hemisphere-damaged groups were utilizing these acoustic cues differently. The results of this and other studies suggest that rather than being lateralized to a single cerebral hemisphere in a fashion analogous to language, prosodic processes are made up of multiple skills and functions distributed across cerebral systems.
UR - http://www.scopus.com/inward/record.url?scp=0026592167&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0026592167&partnerID=8YFLogxK
M3 - Article
C2 - 1447930
AN - SCOPUS:0026592167
SN - 0022-4685
VL - 35
SP - 963
EP - 970
JO - Journal of Speech and Hearing Research
JF - Journal of Speech and Hearing Research
IS - 5
ER -