Publication:

A survey of multimodal sentiment analysis

Files

No files are associated with this document.

Date

2017

Authors

Soleymani, Mohammad
Garcia, David
Jou, Brendan
Schuller, Björn
Chang, Shih-Fu
Pantic, Maja

Editors

Contact

Journal ISSN

Electronic ISSN

ISBN

Bibliographic data

Publisher

Series

Edition

URI (citable link)
ArXiv-ID

International patent number

Research funding information

Project

Open access publication
Core Facility of the University of Konstanz

Embargoed until

Title in another language

Publication type
Journal article
Publication status
Published

Published in

Image and Vision Computing. Elsevier. 2017, 65, pp. 3-14. ISSN 0262-8856. eISSN 1872-8138. Available under: doi: 10.1016/j.imavis.2017.08.003

Abstract

Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiments over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis relies on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through the affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual's sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, and human–machine and human–human interactions. Challenges and opportunities of this emerging field are also discussed, leading to our thesis that multimodal sentiment analysis holds significant untapped potential.
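
Illustration (not from the survey itself): one simple way to realize the combination of textual, facial, and vocal cues described in the abstract is late fusion, i.e. scoring each modality separately and then taking a weighted average. The Python sketch below is a minimal, hypothetical example; the weights, the toy lexicon, and all function names are assumptions for illustration only, not the method of the surveyed work.

# Minimal late-fusion sketch for multimodal sentiment analysis (illustrative only).
# Assumes each modality (text, face, voice) has already been scored on a common
# polarity scale in [-1.0, 1.0]; the weights below are hypothetical.

from typing import Dict

# Hypothetical per-modality weights (e.g. tuned on a validation set).
MODALITY_WEIGHTS: Dict[str, float] = {"text": 0.5, "face": 0.3, "voice": 0.2}


def lexicon_text_score(tokens, lexicon) -> float:
    """Toy dictionary-based text polarity: average lexicon score of known tokens."""
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0


def fuse(scores: Dict[str, float]) -> float:
    """Late fusion: weighted average of the available per-modality polarity scores."""
    total_weight = sum(MODALITY_WEIGHTS[m] for m in scores if m in MODALITY_WEIGHTS)
    if total_weight == 0.0:
        return 0.0
    weighted = sum(MODALITY_WEIGHTS[m] * s for m, s in scores.items() if m in MODALITY_WEIGHTS)
    return weighted / total_weight


if __name__ == "__main__":
    lexicon = {"great": 0.8, "terrible": -0.9, "fine": 0.2}
    text_score = lexicon_text_score("the movie was great".split(), lexicon)
    # Facial and vocal scores would come from emotion-recognition models in practice.
    print(fuse({"text": text_score, "face": 0.4, "voice": -0.1}))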

Abstract in another language

Subject (DDC)
320 Political science

Keywords

Sentiment, Affect, Sentiment analysis, Human behavior analysis, Computer vision, Affective computing

Conference

Review

Research project

Organisational units

Journal issue

Related datasets in KOPS

Cite

ISO 690
SOLEYMANI, Mohammad, David GARCIA, Brendan JOU, Björn SCHULLER, Shih-Fu CHANG, Maja PANTIC, 2017. A survey of multimodal sentiment analysis. In: Image and Vision Computing. Elsevier. 2017, 65, pp. 3-14. ISSN 0262-8856. eISSN 1872-8138. Available under: doi: 10.1016/j.imavis.2017.08.003
BibTex
@article{Soleymani2017surve-59852,
  year={2017},
  doi={10.1016/j.imavis.2017.08.003},
  title={A survey of multimodal sentiment analysis},
  volume={65},
  issn={0262-8856},
  journal={Image and Vision Computing},
  pages={3--14},
  author={Soleymani, Mohammad and Garcia, David and Jou, Brendan and Schuller, Björn and Chang, Shih-Fu and Pantic, Maja}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/59852">
    <dcterms:abstract xml:lang="eng">Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual's sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human–machine and human–human interactions. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.</dcterms:abstract>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-01-20T10:08:41Z</dc:date>
    <dc:creator>Jou, Brendan</dc:creator>
    <dc:contributor>Jou, Brendan</dc:contributor>
    <dc:creator>Chang, Shih-Fu</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/42"/>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-01-20T10:08:41Z</dcterms:available>
    <dcterms:title>A survey of multimodal sentiment analysis</dcterms:title>
    <dc:contributor>Soleymani, Mohammad</dc:contributor>
    <dcterms:issued>2017</dcterms:issued>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:language>eng</dc:language>
    <dc:creator>Pantic, Maja</dc:creator>
    <dc:contributor>Garcia, David</dc:contributor>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/42"/>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dc:creator>Garcia, David</dc:creator>
    <dc:contributor>Schuller, Björn</dc:contributor>
    <dc:creator>Schuller, Björn</dc:creator>
    <dc:creator>Soleymani, Mohammad</dc:creator>
    <dc:contributor>Chang, Shih-Fu</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/59852"/>
    <dc:contributor>Pantic, Maja</dc:contributor>
    <dc:rights>terms-of-use</dc:rights>
  </rdf:Description>
</rdf:RDF>

Internal note


Contact
URL of the original publication

Date the URL was checked

Date of the dissertation examination

Funding type

Comment on the publication

Alliance license
Corresponding authors at the University of Konstanz present
International co-authors
University bibliography
No
Peer-reviewed
Unknown