Authors
- Chloé Stoll, Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes
- Helen Rodger, Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Junpeng Lao, Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Anne-Raphaëlle Richoz, Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Olivier Pascalis, Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes
- Matthew Dye, National Technical Institute for the Deaf, Rochester Institute of Technology
- Roberto Caldara, Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
Abstract
We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
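For readers unfamiliar with this kind of psychophysical paradigm, the sketch below illustrates in principle how neutral-to-expression morphs and noise-embedded stimuli can be parameterized by intensity and signal level. It is not the authors' code: the linear blending scheme, function names, and toy arrays are assumptions made purely for illustration.

```python
# Illustrative sketch only, not the study's implementation. Shows one
# plausible way to parameterize a neutral-to-expression morph continuum
# and a noise-to-full-signal continuum for grayscale face images.
import numpy as np

def morph(neutral: np.ndarray, expressive: np.ndarray, intensity: float) -> np.ndarray:
    """Alpha-blend a neutral face toward an expressive one.

    intensity in [0, 1]: 0 yields the neutral image, 1 the full expression.
    """
    return (1.0 - intensity) * neutral + intensity * expressive

def embed_in_noise(image: np.ndarray, signal: float, rng: np.random.Generator) -> np.ndarray:
    """Mix an image with uniform noise; signal in [0, 1], 1 = full signal."""
    noise = rng.uniform(image.min(), image.max(), size=image.shape)
    return signal * image + (1.0 - signal) * noise

# Toy demo with random arrays standing in for face images.
rng = np.random.default_rng(0)
neutral_face = rng.random((64, 64))
disgust_face = rng.random((64, 64))
stimulus = embed_in_noise(morph(neutral_face, disgust_face, 0.5), 0.7, rng)
print(stimulus.shape)  # (64, 64)
```

Under this framing, an observer's threshold would be the smallest intensity or signal value at which an expression is reliably recognized; the Bayesian modeling mentioned in the abstract compares such quantities between the deaf and hearing groups.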
Published in
The Journal of Deaf Studies and Deaf Education 24 (4), 346-355, 2019-07-04
Oxford University Press (OUP)