Publication: Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension
Files: Hintz_2-b172r165dy7z1.pdf
Date: 2023-04
Authors: Hintz, Florian; Khoe, Yung Han; Strauß, Antje; Psomakas, Adam Johannes Alfredo; Holler, Judith
Journal ISSN: 1530-7026
Electronic ISSN: 1531-135X
Publisher: Springer
URI (citable link): https://kops.uni-konstanz.de/handle/123456789/69881
DOI (citable link): 10.3758/s13415-023-01074-8
License: Attribution 4.0 International (http://creativecommons.org/licenses/by/4.0/)
Publication type: Journal article
Publication status: Published
Published in: Cognitive, Affective, & Behavioral Neuroscience. 2023, 23(2), pp. 340-353
Abstract
In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded the electroencephalogram (EEG) from 60 Dutch adults while they watched videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke preceded the onset of the spoken target by 130 ms. Our event-related potential (ERP) analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing, where listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
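For illustration only, and not the authors' actual analysis pipeline: a minimal sketch of how the 2 x 2 design described above (discourse predictability x gesture iconicity) could be epoched around target-noun onset and contrasted with MNE-Python. The recording file name and the trigger codes are hypothetical.

# Minimal 2 x 2 ERP contrast sketch with MNE-Python (hypothetical file and trigger codes).
import mne

raw = mne.io.read_raw_brainvision("sub-01.vhdr", preload=True)   # hypothetical EEG recording
raw.filter(0.1, 30.0)                                            # typical ERP band-pass

events, _ = mne.events_from_annotations(raw)
# Hypothetical trigger codes, one per cell of the design, time-locked to target-noun onset.
event_id = {"pred/iconic": 11, "pred/meaningless": 12,
            "unpred/iconic": 21, "unpred/meaningless": 22}

epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.0, baseline=(-0.2, 0.0), preload=True)

# Condition averages (ERPs), then linear contrasts over the four cells.
evokeds = {cond: epochs[cond].average() for cond in event_id}
predictability = mne.combine_evoked(
    [evokeds["unpred/iconic"], evokeds["unpred/meaningless"],
     evokeds["pred/iconic"], evokeds["pred/meaningless"]],
    weights=[0.5, 0.5, -0.5, -0.5])
interaction = mne.combine_evoked(
    [evokeds["unpred/meaningless"], evokeds["unpred/iconic"],
     evokeds["pred/meaningless"], evokeds["pred/iconic"]],
    weights=[1.0, -1.0, -1.0, 1.0])

The weight vectors in the last two calls implement the main effect of predictability and the predictability x iconicity interaction as linear contrasts over the four cell averages.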
Cite
ISO 690
HINTZ, Florian, Yung Han KHOE, Antje STRAUSS, Adam Johannes Alfredo PSOMAKAS, Judith HOLLER, 2023. Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. In: Cognitive, Affective, & Behavioral Neuroscience. Springer. 2023, 23(2), pp. 340-353. ISSN 1530-7026. eISSN 1531-135X. Available at: doi: 10.3758/s13415-023-01074-8
BibTeX
@article{Hintz2023-04Elect-69881,
  author  = {Hintz, Florian and Khoe, Yung Han and Strauß, Antje and Psomakas, Adam Johannes Alfredo and Holler, Judith},
  title   = {Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension},
  journal = {Cognitive, Affective, \& Behavioral Neuroscience},
  year    = {2023},
  volume  = {23},
  number  = {2},
  pages   = {340--353},
  issn    = {1530-7026},
  doi     = {10.3758/s13415-023-01074-8}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/69881">
    <dcterms:title>Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension</dcterms:title>
    <dc:language>eng</dc:language>
    <dcterms:issued>2023-04</dcterms:issued>
    <dc:creator>Hintz, Florian</dc:creator>
    <dc:creator>Khoe, Yung Han</dc:creator>
    <dc:creator>Strauß, Antje</dc:creator>
    <dc:creator>Psomakas, Adam Johannes Alfredo</dc:creator>
    <dc:creator>Holler, Judith</dc:creator>
    <dc:contributor>Hintz, Florian</dc:contributor>
    <dc:contributor>Khoe, Yung Han</dc:contributor>
    <dc:contributor>Strauß, Antje</dc:contributor>
    <dc:contributor>Psomakas, Adam Johannes Alfredo</dc:contributor>
    <dc:contributor>Holler, Judith</dc:contributor>
    <dc:rights>Attribution 4.0 International</dc:rights>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by/4.0/"/>
    <dcterms:abstract>In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded electroencephalogram from 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke slightly preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing where listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.</dcterms:abstract>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/45"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/45"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/69881"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/69881/1/Hintz_2-b172r165dy7z1.pdf"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/69881/1/Hintz_2-b172r165dy7z1.pdf"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-04-29T10:21:03Z</dc:date>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-04-29T10:21:03Z</dcterms:available>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
  </rdf:Description>
</rdf:RDF>
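A sketch of how this RDF/XML record could be read programmatically with Python's rdflib, assuming the block above has been saved locally (the file name record.rdf is hypothetical):

# Read the repository record's RDF/XML and pull out a few Dublin Core fields.
from rdflib import Graph
from rdflib.namespace import DC, DCTERMS

g = Graph()
g.parse("record.rdf", format="xml")               # hypothetical local copy of the RDF above

record = next(g.subjects(DCTERMS.title, None))    # the single rdf:Description node
print("title: ", g.value(record, DCTERMS.title))
print("issued:", g.value(record, DCTERMS.issued))

# dc:creator is repeated, one triple per author; triple order is not guaranteed in RDF.
for creator in g.objects(record, DC.creator):
    print("creator:", creator)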