Largely distinct networks mediate perceptually-relevant auditory and visual speech representations

Keitel A, Gross J, Kayser C (2019)
bioRxiv.

Preprint | Published | English
 
Download
No files have been uploaded. Publication record only.
Author(s)
Keitel, Anne; Gross, Joachim; Kayser, Christoph (UniBi)
Abstract / Remarks
Visual speech is an integral part of communication. Yet it remains unclear whether semantic information carried by movements of the lips or tongue is represented in the same brain regions that mediate acoustic speech representations. Behaviourally, our ability to understand acoustic speech seems independent of our ability to understand visual speech, but neuroimaging studies suggest that acoustic and visual speech representations largely overlap. To resolve this discrepancy, and to understand whether acoustic and lip-reading speech comprehension are mediated by the same cerebral representations, we systematically probed where the brain represents acoustically and visually conveyed word identities in a human MEG study. We designed a single-trial classification paradigm to dissociate where cerebral representations merely reflect the sensory stimulus and where they are predictive of the participant’s percept. In general, those brain regions allowing for the highest word classification were distinct from those in which cerebral representations were predictive of the participant’s percept. Across the brain, word representations were largely modality-specific, and auditory and visual comprehension were mediated by distinct left-lateralised ventral and dorsal fronto-temporal regions, respectively. Only within the inferior frontal gyrus and the anterior temporal lobe did auditory and visual representations converge. These results provide a neural explanation for why acoustic speech comprehension is a poor predictor of lip-reading skills and suggest that those cerebral speech representations that encode word identity may be more modality-specific than often assumed.
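The dissociation between stimulus-driven and percept-predictive representations described in the abstract can be illustrated with a toy single-trial decoding analysis. The sketch below is not the authors' MEG pipeline: the synthetic data, the feature layout, the LDA classifier, and the split by behavioural accuracy are illustrative assumptions used only to show the logic of comparing overall word classification with classification conditioned on the participant's report.

```python
# Minimal sketch of a single-trial word-classification analysis, loosely
# following the paradigm described in the abstract. NOT the authors'
# pipeline: data, dimensions, and classifier choice are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features, n_words = 240, 50, 4          # assumed dimensions
X = rng.normal(size=(n_trials, n_features))          # stand-in for source-level MEG features
word_id = rng.integers(0, n_words, size=n_trials)    # which word was presented
correct = rng.random(n_trials) < 0.7                 # whether the participant reported it correctly

# inject a weak word-related signal so the example is not pure noise
X[:, 0] += 0.8 * word_id

clf = LinearDiscriminantAnalysis()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# 1) Stimulus classification: can word identity be decoded from the features at all?
stim_acc = cross_val_score(clf, X, word_id, cv=cv).mean()

# 2) Perceptual relevance: is decoding better on trials the participant got right?
acc_correct = cross_val_score(clf, X[correct], word_id[correct], cv=cv).mean()
acc_incorrect = cross_val_score(clf, X[~correct], word_id[~correct], cv=cv).mean()

print(f"stimulus classification accuracy: {stim_acc:.2f}")
print(f"correct trials: {acc_correct:.2f}, incorrect trials: {acc_incorrect:.2f}")
```

In this toy setup, a region whose features classify the word well regardless of behaviour would count as stimulus-reflecting, whereas one whose classification tracks whether the word was correctly reported would count as perceptually relevant.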
Publication Year
2019
Journal Title
bioRxiv
Page URI
https://pub.uni-bielefeld.de/record/2936796

Cite

Keitel, A., Gross, J., & Kayser, C. (2019). Largely distinct networks mediate perceptually-relevant auditory and visual speech representations. bioRxiv. https://doi.org/10.1101/661405

Link(s) to Full Text(s)
Access Level
Open Access

Sources

Preprint: 10.1101/661405
