Visual speech speeds up the neural processing of auditory speech
-
- Virginie van Wassenhove
- Ken W. Grant
- David Poeppel
-
- Neuroscience and Cognitive Science Program and Departments of Biology and Linguistics, University of Maryland, College Park, MD 20742; and Auditory-Visual Speech Laboratory, Walter Reed Army Medical Center, Washington, DC 20307
Description
Synchronous presentation of stimuli to the auditory and visual systems can modify the formation of a percept in either modality. For example, perception of auditory speech is improved when the speaker's facial articulatory movements are visible. Neural convergence onto multisensory sites exhibiting supra-additivity has been proposed as the principal mechanism for integration. Recent findings, however, have suggested that putative sensory-specific cortices are responsive to inputs presented through a different modality. Consequently, when and where audiovisual representations emerge remains unsettled. In combined psychophysical and electroencephalography experiments we show that visual speech speeds up the cortical processing of auditory signals early (within 100 ms of signal onset). The auditory–visual interaction is reflected as an articulator-specific temporal facilitation (as well as a nonspecific amplitude reduction). The latency facilitation systematically depends on the degree to which the visual signal predicts possible auditory targets. The observed auditory–visual data support the view that there exist abstract internal representations that constrain the analysis of subsequent speech inputs. This is evidence for the existence of an "analysis-by-synthesis" mechanism in auditory–visual speech perception.
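The latency facilitation and amplitude reduction described in the abstract can be made concrete with a small analysis sketch. The snippet below is not the authors' pipeline; it is a minimal illustration, assuming two averaged ERP traces (auditory-only and audiovisual) from a single electrode, that locates the N1-like negative peak in each within a 50–200 ms window and reports the audiovisual latency shift and amplitude change. All variable names and the synthetic traces are hypothetical.

```python
import numpy as np

def n1_peak(erp, times, window=(0.05, 0.20)):
    """Return (latency_s, amplitude) of the most negative point
    of `erp` within `window` (an N1-like auditory peak)."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask])          # most negative sample in window
    return times[mask][idx], erp[mask][idx]

# Synthetic example traces (hypothetical, for illustration only):
# 1 kHz sampling, -100 ms to 400 ms around auditory onset.
times = np.arange(-0.1, 0.4, 0.001)
gauss = lambda mu, sigma: np.exp(-0.5 * ((times - mu) / sigma) ** 2)
erp_a  = -5.0 * gauss(0.100, 0.02)      # auditory-only N1 near 100 ms
erp_av = -4.0 * gauss(0.085, 0.02)      # audiovisual N1: earlier, smaller

lat_a,  amp_a  = n1_peak(erp_a,  times)
lat_av, amp_av = n1_peak(erp_av, times)

print(f"latency facilitation: {(lat_a - lat_av) * 1000:.0f} ms")
print(f"amplitude reduction:  {abs(amp_a) - abs(amp_av):.1f} uV")
```

On the synthetic traces this prints a 15 ms facilitation and a 1.0 uV reduction; the paper's point is that the size of such a latency shift varies with how strongly the visible articulator predicts the auditory target.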
Journal
-
- Proceedings of the National Academy of Sciences, 102 (4), 1181–1186, 2005-01-12
Details
-
- CRID
- 1360574096540576768
-
- ISSN
- 1091-6490
- 0027-8424
-
- Data Source
- Crossref
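
Because the record's data source is Crossref, the same bibliographic metadata can be retrieved from Crossref's public REST API. The sketch below is only illustrative: `query.bibliographic` and `query.container-title` are standard Crossref field queries, but the record has no DOI listed here, so this searches by title and journal and simply takes the top hit, which is not guaranteed to be this exact item.

```python
import requests

# Query Crossref's public REST API for this record (illustrative sketch).
resp = requests.get(
    "https://api.crossref.org/works",
    params={
        "query.bibliographic": "Visual speech speeds up the neural "
                               "processing of auditory speech",
        "query.container-title": "Proceedings of the National Academy of Sciences",
        "rows": 1,
    },
    timeout=30,
)
resp.raise_for_status()
item = resp.json()["message"]["items"][0]   # best-scoring match
print(item["DOI"], item["title"][0], item.get("ISSN"))
```

The returned `ISSN` list should contain the print and electronic ISSNs shown above (0027-8424 and 1091-6490).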