Publication: Cued Speech Enhances Speech-in-Noise Perception
Date: 2019
Authors: Clémence Bayard, Laura Machart, Antje Strauß, Silvain Gerber, Vincent Aubanel, Jean-Luc Schwartz
Journal ISSN: 1081-4159
Electronic ISSN: 1465-7325
Publisher: Oxford University Press (OUP)
DOI (citable link): 10.1093/deafed/enz003
Publication type: Journal article
Language: English
Published in: Journal of Deaf Studies and Deaf Education, 24(3), 2019, pp. 223-233
Abstract
Speech perception in noise remains challenging for Deaf/Hard of Hearing (D/HH) people, even when fitted with hearing aids or cochlear implants. The perception of sentences in noise by 20 implanted or aided D/HH subjects mastering Cued Speech (CS), a system of hand gestures complementing lip movements, was compared with that of 15 typically hearing (TH) controls in three conditions: audio only, audiovisual, and audiovisual + CS. Similar audiovisual scores were obtained at signal-to-noise ratios (SNRs) 11 dB higher in D/HH participants than in TH ones. Adding CS information enabled D/HH participants to reach a mean score of 83% in the audiovisual + CS condition at a mean SNR of 0 dB, similar to the usual audio score for TH participants at this SNR. This confirms that the combination of lipreading and the Cued Speech system remains extremely important for persons with hearing loss, particularly in adverse hearing conditions.
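The SNR values reported above follow the standard decibel definition, SNR_dB = 10 * log10(P_speech / P_noise); at 0 dB, speech and noise have equal power. As a purely illustrative aid (not the stimulus-generation code used in the study), the short Python/numpy sketch below shows how a noise signal can be scaled so that a speech-plus-noise mixture reaches a chosen SNR; the function name mix_at_snr and the synthetic signals are hypothetical.

import numpy as np

def mix_at_snr(speech, noise, target_snr_db):
    # Scale `noise` so that speech + scaled noise reaches the requested SNR,
    # using SNR_dB = 10 * log10(P_speech / P_noise) with P the mean power.
    # Generic illustration only; not the study's stimulus-generation code.
    noise = noise[:len(speech)]                    # match lengths
    p_speech = np.mean(speech ** 2)                # speech power
    p_noise = np.mean(noise ** 2)                  # noise power
    gain = np.sqrt(p_speech / (p_noise * 10 ** (target_snr_db / 10)))
    return speech + gain * noise

# Example: a synthetic tone standing in for speech, mixed with white noise
# at 0 dB SNR, i.e. equal speech and noise power.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000, endpoint=False))
noise = rng.standard_normal(16000)
mix = mix_at_snr(speech, noise, target_snr_db=0.0)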
Cite
ISO 690
BAYARD, Clémence, Laura MACHART, Antje STRAUSS, Silvain GERBER, Vincent AUBANEL, Jean-Luc SCHWARTZ, 2019. Cued Speech Enhances Speech-in-Noise Perception. In: Journal of Deaf Studies and Deaf Education. Oxford University Press (OUP). 2019, 24(3), pp. 223-233. ISSN 1081-4159. eISSN 1465-7325. Available under: doi: 10.1093/deafed/enz003
BibTeX
@article{Bayard2019Speec-48816,
  author  = {Bayard, Clémence and Machart, Laura and Strauß, Antje and Gerber, Silvain and Aubanel, Vincent and Schwartz, Jean-Luc},
  title   = {Cued Speech Enhances Speech-in-Noise Perception},
  journal = {Journal of Deaf Studies and Deaf Education},
  year    = {2019},
  volume  = {24},
  number  = {3},
  pages   = {223--233},
  issn    = {1081-4159},
  doi     = {10.1093/deafed/enz003}
}