TranSalNet : Towards perceptually relevant visual saliency prediction

Cite This

Files in this item

Checksum: MD5:760234815a808929888b97c8677c106c

LOU, Jianxun, Hanhe LIN, David MARSHALL, Dietmar SAUPE, Hantao LIU, 2022. TranSalNet : Towards perceptually relevant visual saliency prediction. In: Neurocomputing. Elsevier. 494, pp. 455-467. ISSN 0925-2312. eISSN 1872-8286. Available under: doi: 10.1016/j.neucom.2022.04.080

@article{Lou2022TranS-57968,
  title   = {TranSalNet : Towards perceptually relevant visual saliency prediction},
  author  = {Lou, Jianxun and Lin, Hanhe and Marshall, David and Saupe, Dietmar and Liu, Hantao},
  journal = {Neurocomputing},
  year    = {2022},
  volume  = {494},
  pages   = {455--467},
  issn    = {0925-2312},
  doi     = {10.1016/j.neucom.2022.04.080}
}

Abstract

Convolutional neural networks (CNNs) have significantly advanced computational modelling for saliency prediction. However, accurately simulating the mechanisms of visual attention in the human cortex remains an academic challenge. Integrating properties of human vision into the design of CNN architectures is critical for perceptually more relevant saliency prediction. Due to their inherent inductive biases, CNN architectures lack sufficient long-range contextual encoding capacity, which hinders CNN-based saliency models from capturing properties that emulate human viewing behaviour. Transformers have shown great potential in encoding long-range information by leveraging the self-attention mechanism. In this paper, we propose a novel saliency model that integrates transformer components into CNNs to capture long-range contextual visual information. Experimental results show that the transformers add value to saliency prediction, enhancing its perceptual relevance. Our proposed saliency model using transformers has achieved superior results on public benchmarks and competitions for saliency prediction models.
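The abstract's core argument is that self-attention gives every feature-map position a global receptive field in a single step, whereas a convolution only mixes a local neighbourhood. A minimal NumPy sketch of scaled dot-product self-attention over flattened spatial positions illustrates this mechanism; the array sizes and weight matrices here are illustrative assumptions, not taken from the TranSalNet implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, Wq, Wk, Wv):
    """Scaled dot-product self-attention over spatial positions.

    feats: (N, d) array, where N = H*W flattened feature-map positions.
    Every output position attends to every input position, so the
    receptive field is global (long-range), unlike a convolution.
    """
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    d_k = Wk.shape[1]
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (N, N) attention weights
    return attn @ V                          # (N, d_v) context-enriched features

# Toy example: an 8x8 feature map with 16 channels (hypothetical sizes)
rng = np.random.default_rng(0)
N, d = 8 * 8, 16
feats = rng.standard_normal((N, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(feats, Wq, Wk, Wv)
print(out.shape)  # (64, 16)
```

In a hybrid CNN–transformer design of the kind the abstract describes, a block like this would sit on top of CNN feature maps, letting distant image regions inform each other before the saliency map is decoded.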

Lou_2-5kwhrr209v4z8.pdf

Except where otherwise noted, this item's license is described as Attribution 4.0 International.
