Towards developing robust multimodal databases for emotion analysis
Abstract
Understanding emotions can make the difference between success and failure in communication. Several systems have been developed in the field of Affective Computing to understand emotions, and recently these systems have focused on multimodal emotion recognition. The basis of each of these systems is an emotion database. Even though much attention has been paid to capturing spontaneous emotion expressions, building an emotion database involves several challenges that are commonly neglected: recording quality, ground truth, multi-device recording, data labeling, and context. In this paper we present a new spontaneous emotion database with human-computer and human-to-human interactions. This database comprises eight different synchronized signals across four interaction tasks. Strategies for dealing with the challenges of emotion database construction are explained in detail.
Cite
ISO 690
QUIRÓS-RAMÍREZ, M. Alejandra, Senya POLIKOVSKY, Yoshinari KAMEDA, Takehisa ONISAWA, 2012. Towards developing robust multimodal databases for emotion analysis. 6th International Conference on Soft Computing and Intelligent Systems, and 13th International Symposium on Advanced Intelligence Systems : SCIS/ISIS 2012. Kobe, Japan, 20 Nov 2012 - 24 Nov 2012. In: The 6th International Conference on Soft Computing and Intelligent Systems, and The 13th International Symposium on Advanced Intelligence Systems. Piscataway, NJ: IEEE, 2012, pp. 589-594. ISBN 978-1-4673-2742-8. Available under: doi: 10.1109/SCIS-ISIS.2012.6505247
BibTeX
@inproceedings{QuirosRamirez2012-11Towar-44482,
  year      = {2012},
  doi       = {10.1109/SCIS-ISIS.2012.6505247},
  title     = {Towards developing robust multimodal databases for emotion analysis},
  isbn      = {978-1-4673-2742-8},
  publisher = {IEEE},
  address   = {Piscataway, NJ},
  booktitle = {The 6th International Conference on Soft Computing and Intelligent Systems, and The 13th International Symposium on Advanced Intelligence Systems},
  pages     = {589--594},
  author    = {Quirós-Ramírez, M. Alejandra and Polikovsky, Senya and Kameda, Yoshinari and Onisawa, Takehisa}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/44482">
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dcterms:issued>2012-11</dcterms:issued>
    <dc:contributor>Quirós-Ramírez, M. Alejandra</dc:contributor>
    <dc:language>eng</dc:language>
    <dc:contributor>Polikovsky, Senya</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/44482"/>
    <dcterms:title>Towards developing robust multimodal databases for emotion analysis</dcterms:title>
    <dc:creator>Quirós-Ramírez, M. Alejandra</dc:creator>
    <dc:creator>Onisawa, Takehisa</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2019-01-09T11:23:39Z</dc:date>
    <dc:creator>Kameda, Yoshinari</dc:creator>
    <dc:contributor>Onisawa, Takehisa</dc:contributor>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2019-01-09T11:23:39Z</dcterms:available>
    <dcterms:abstract xml:lang="eng">Understanding emotions can make the difference between success and failure in communication. Several systems have been developed in the field of Affective Computing to understand emotions, and recently these systems have focused on multimodal emotion recognition. The basis of each of these systems is an emotion database. Even though much attention has been paid to capturing spontaneous emotion expressions, building an emotion database involves several challenges that are commonly neglected: recording quality, ground truth, multi-device recording, data labeling, and context.
In this paper we present a new spontaneous emotion database with human-computer and human-to-human interactions. This database comprises eight different synchronized signals across four interaction tasks. Strategies for dealing with the challenges of emotion database construction are explained in detail.</dcterms:abstract>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:creator>Polikovsky, Senya</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Kameda, Yoshinari</dc:contributor>
  </rdf:Description>
</rdf:RDF>
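The RDF record above uses the standard Dublin Core namespaces, so its fields can be read back programmatically. A minimal sketch, using only Python's standard library `xml.etree.ElementTree` on a shortened copy of the record (the `RDF_SNIPPET` string below is an abbreviated excerpt for illustration, not the full record):

```python
# Extract Dublin Core fields from the record's RDF/XML using only the
# standard library. The namespace URIs match those declared in the
# rdf:RDF element above.
import xml.etree.ElementTree as ET

RDF_SNIPPET = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:dcterms="http://purl.org/dc/terms/">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/44482">
    <dcterms:title>Towards developing robust multimodal databases for emotion analysis</dcterms:title>
    <dcterms:issued>2012-11</dcterms:issued>
    <dc:creator>Quir\u00f3s-Ram\u00edrez, M. Alejandra</dc:creator>
    <dc:creator>Polikovsky, Senya</dc:creator>
    <dc:creator>Kameda, Yoshinari</dc:creator>
    <dc:creator>Onisawa, Takehisa</dc:creator>
  </rdf:Description>
</rdf:RDF>"""

# Prefix-to-URI map used by find()/findall() to resolve qualified names.
NS = {
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

root = ET.fromstring(RDF_SNIPPET)
desc = root[0]  # the single rdf:Description element
title = desc.find("dcterms:title", NS).text
issued = desc.find("dcterms:issued", NS).text
creators = [c.text for c in desc.findall("dc:creator", NS)]

print(title)     # the dcterms:title literal
print(issued)    # "2012-11"
print(creators)  # four dc:creator literals, in document order
```

Note that `dc:creator` occurs multiple times, so `findall` is needed to collect all authors; a single `find` would silently return only the first one.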