Publication: TranSalNet: Towards perceptually relevant visual saliency prediction
Files
Lou_2-5kwhrr209v4z8.pdf
Date
2022
Authors
Lou, Jianxun; Lin, Hanhe; Marshall, David; Saupe, Dietmar; Liu, Hantao
Editors
Journal ISSN
0925-2312
Electronic ISSN
1872-8286
ISBN
Bibliographic data
Publisher
Elsevier
Series
Edition
URI (citable link)
https://kops.uni-konstanz.de/handle/123456789/57968
DOI (citable link)
10.1016/j.neucom.2022.04.080
arXiv ID
International patent number
License link
http://creativecommons.org/licenses/by/4.0/ (Attribution 4.0 International)
Research funding information
Project
Open Access publication
Yes
Core Facility of the University of Konstanz
Title in another language
Publication type
Journal article
Publication status
Published
Published in
Neurocomputing. Elsevier. 2022, 494, pp. 455-467
Abstract
Convolutional neural networks (CNNs) have significantly advanced computational modelling for saliency prediction. However, accurately simulating the mechanisms of visual attention in the human cortex remains an academic challenge. It is critical to integrate properties of human vision into the design of CNN architectures in order to achieve perceptually more relevant saliency prediction. Due to their inherent inductive biases, CNN architectures lack sufficient capacity for long-range contextual encoding. This hinders CNN-based saliency models from capturing properties that emulate the viewing behaviour of humans. Transformers have shown great potential in encoding long-range information by leveraging the self-attention mechanism. In this paper, we propose a novel saliency model that integrates transformer components into CNNs to capture long-range contextual visual information. Experimental results show that the transformers add value to saliency prediction, enhancing the perceptual relevance of its performance. Our proposed saliency model using transformers has achieved superior results on public benchmarks and competitions for saliency prediction models.
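
To make the architectural idea in the abstract concrete, the following is a minimal, hypothetical PyTorch-style sketch of a hybrid CNN-transformer saliency model: a CNN backbone extracts local feature maps, a transformer encoder adds long-range self-attention context over the flattened feature grid, and a small decoder upsamples to a single-channel saliency map. All module names, layer sizes, and the ResNet-50 backbone choice are illustrative assumptions for exposition, not the authors' actual TranSalNet implementation.

# Hypothetical sketch of a CNN + transformer saliency model
# (illustrative only; not the authors' TranSalNet code).
import torch
import torch.nn as nn
from torchvision import models


class HybridSaliencyNet(nn.Module):
    def __init__(self, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        # CNN backbone (illustrative choice: ResNet-50 up to its last conv stage).
        backbone = models.resnet50(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, H/32, W/32)
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)        # channel reduction

        # Transformer encoder over the flattened feature grid.
        # Positional encodings are omitted for brevity; a real model would add them.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

        # Decoder: upsample back to input resolution, one saliency channel.
        self.decoder = nn.Sequential(
            nn.Conv2d(d_model, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 1, kernel_size=1),
            nn.Sigmoid(),  # saliency values in [0, 1]
        )

    def forward(self, x):
        f = self.proj(self.cnn(x))             # (B, C, h, w) local CNN features
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, h*w, C) token sequence
        tokens = self.transformer(tokens)      # global self-attention context
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(f)                 # (B, 1, H, W) saliency map


if __name__ == "__main__":
    model = HybridSaliencyNet()
    out = model(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 1, 256, 256])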
Abstract in another language
Subject (DDC)
Keywords
Conference
Review
Cite
ISO 690
LOU, Jianxun, Hanhe LIN, David MARSHALL, Dietmar SAUPE, Hantao LIU, 2022. TranSalNet : Towards perceptually relevant visual saliency prediction. In: Neurocomputing. Elsevier. 2022, 494, pp. 455-467. ISSN 0925-2312. eISSN 1872-8286. Available under: doi: 10.1016/j.neucom.2022.04.080
BibTeX
@article{Lou2022TranS-57968,
  author  = {Lou, Jianxun and Lin, Hanhe and Marshall, David and Saupe, Dietmar and Liu, Hantao},
  title   = {TranSalNet : Towards perceptually relevant visual saliency prediction},
  journal = {Neurocomputing},
  year    = {2022},
  volume  = {494},
  pages   = {455--467},
  issn    = {0925-2312},
  doi     = {10.1016/j.neucom.2022.04.080}
}
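
The bibliographic metadata above can also be retrieved programmatically via the DOI. Below is a small sketch using the public Crossref REST API; the endpoint and JSON field names are Crossref's, not part of this record, and the `requests` library is assumed to be installed.

# Fetch bibliographic metadata for this record from the Crossref REST API.
import requests

DOI = "10.1016/j.neucom.2022.04.080"
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=10)
resp.raise_for_status()
work = resp.json()["message"]

print(work["title"][0])                                # article title
print(", ".join(a["family"] for a in work["author"]))  # author surnames
print(work["container-title"][0], work.get("volume"))  # journal and volume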
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/57968">
    <dcterms:title>TranSalNet : Towards perceptually relevant visual saliency prediction</dcterms:title>
    <dc:creator>Lou, Jianxun</dc:creator>
    <dc:creator>Lin, Hanhe</dc:creator>
    <dc:creator>Marshall, David</dc:creator>
    <dc:creator>Saupe, Dietmar</dc:creator>
    <dc:creator>Liu, Hantao</dc:creator>
    <dc:contributor>Lou, Jianxun</dc:contributor>
    <dc:contributor>Lin, Hanhe</dc:contributor>
    <dc:contributor>Marshall, David</dc:contributor>
    <dc:contributor>Saupe, Dietmar</dc:contributor>
    <dc:contributor>Liu, Hantao</dc:contributor>
    <dc:language>eng</dc:language>
    <dcterms:issued>2022</dcterms:issued>
    <dcterms:abstract xml:lang="eng">Convolutional neural networks (CNNs) have significantly advanced computational modelling for saliency prediction. However, accurately simulating the mechanisms of visual attention in the human cortex remains an academic challenge. It is critical to integrate properties of human vision into the design of CNN architectures, leading to perceptually more relevant saliency prediction. Due to the inherent inductive biases of CNN architectures, there is a lack of sufficient long-range contextual encoding capacity. This hinders CNN-based saliency models from capturing properties that emulate viewing behaviour of humans. Transformers have shown great potential in encoding long-range information by leveraging the self-attention mechanism. In this paper, we propose a novel saliency model that integrates transformer components to CNNs to capture the long-range contextual visual information. Experimental results show that the transformers provide added value to saliency prediction, enhancing its perceptual relevance in the performance. Our proposed saliency model using transformers has achieved superior results on public benchmarks and competitions for saliency prediction models.</dcterms:abstract>
    <dc:rights>Attribution 4.0 International</dc:rights>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by/4.0/"/>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-07-08T07:23:56Z</dcterms:available>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-07-08T07:23:56Z</dc:date>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/57968"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/57968/1/Lou_2-5kwhrr209v4z8.pdf"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/57968/1/Lou_2-5kwhrr209v4z8.pdf"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
  </rdf:Description>
</rdf:RDF>
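
The RDF/XML export above can be consumed with standard RDF tooling. Below is a minimal sketch using the `rdflib` Python library (assumed to be installed); `record.rdf` is a hypothetical local copy of the export.

# Parse the RDF/XML export with rdflib and read a few Dublin Core fields.
from rdflib import Graph, URIRef
from rdflib.namespace import DC, DCTERMS

g = Graph()
g.parse("record.rdf", format="xml")  # local copy of the export above

subject = URIRef("https://kops.uni-konstanz.de/server/rdf/resource/123456789/57968")
print(g.value(subject, DCTERMS.title))   # article title
print(g.value(subject, DCTERMS.issued))  # year of publication
for creator in g.objects(subject, DC.creator):
    print(creator)                       # one line per author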