TranSalNet: Towards perceptually relevant visual saliency prediction

Date
2022
Authors
Lou, Jianxun
Lin, Hanhe
Marshall, David
Saupe, Dietmar
Liu, Hantao
Publication type
Journal article
Publication status
Published
Published in
Neurocomputing 494 (2022), pp. 455-467. Elsevier. ISSN 0925-2312, eISSN 1872-8286
Abstract
Convolutional neural networks (CNNs) have significantly advanced computational modelling for saliency prediction. However, accurately simulating the mechanisms of visual attention in the human cortex remains an academic challenge. It is critical to integrate properties of human vision into the design of CNN architectures, leading to perceptually more relevant saliency prediction. Owing to their inherent inductive biases, CNN architectures lack sufficient capacity for long-range contextual encoding, which hinders CNN-based saliency models from capturing properties that emulate human viewing behaviour. Transformers have shown great potential for encoding long-range information by leveraging the self-attention mechanism. In this paper, we propose a novel saliency model that integrates transformer components into CNNs to capture long-range contextual visual information. Experimental results show that the transformers add value to saliency prediction, enhancing its perceptual relevance. Our proposed saliency model achieves superior results on public benchmarks and competitions for saliency prediction models.
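To make the architectural idea in the abstract concrete, below is a minimal PyTorch sketch of a hybrid CNN-transformer saliency predictor: a convolutional encoder extracts local features, a transformer encoder applies self-attention across all spatial positions to inject long-range context, and a small decoder projects the result to a saliency map. This is not the authors' TranSalNet implementation (which is not part of this record); the HybridSaliencyNet name, the toy backbone, and all dimensions are illustrative assumptions, and positional encodings are omitted for brevity.

# Minimal sketch of a hybrid CNN-transformer saliency predictor.
# NOT the authors' TranSalNet code; backbone, sizes, and names are
# illustrative assumptions. Positional encodings are omitted for brevity;
# a real model would add them before the transformer.
import torch
import torch.nn as nn

class HybridSaliencyNet(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        # Toy CNN encoder: downsamples the image and extracts local features.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: self-attention over all spatial positions
        # supplies the long-range context that plain convolutions lack.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Decoder: 1x1 conv projects contextualised features to one channel.
        self.decode = nn.Conv2d(d_model, 1, 1)

    def forward(self, x):
        feats = self.cnn(x)                        # (B, C, H', W')
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C)
        tokens = self.transformer(tokens)          # global self-attention
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        sal = torch.sigmoid(self.decode(feats))    # per-pixel saliency
        # Upsample back to the input resolution.
        return nn.functional.interpolate(sal, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)

# Example: predict saliency maps for a batch of two 288x384 RGB images.
model = HybridSaliencyNet()
saliency = model(torch.randn(2, 3, 288, 384))
print(saliency.shape)  # torch.Size([2, 1, 288, 384])

The key step in this sketch is the token reshaping: the CNN feature map is flattened into a sequence so that every spatial location can attend to every other, which is precisely the long-range contextual encoding the abstract argues plain convolutions lack.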
Subject (DDC)
004 Computer Science
Cite This
ISO 690
LOU, Jianxun, Hanhe LIN, David MARSHALL, Dietmar SAUPE, Hantao LIU, 2022. TranSalNet: Towards perceptually relevant visual saliency prediction. In: Neurocomputing. Elsevier. 494, pp. 455-467. ISSN 0925-2312. eISSN 1872-8286. Available under: doi: 10.1016/j.neucom.2022.04.080
BibTex
@article{Lou2022TranS-57968,
  year={2022},
  doi={10.1016/j.neucom.2022.04.080},
  title={TranSalNet: Towards perceptually relevant visual saliency prediction},
  volume={494},
  issn={0925-2312},
  journal={Neurocomputing},
  pages={455--467},
  author={Lou, Jianxun and Lin, Hanhe and Marshall, David and Saupe, Dietmar and Liu, Hantao}
}
Bibliography of Konstanz
Yes
Refereed
Yes