Publication:

CUDAS : Distortion-Aware Saliency Benchmark

Files

Zhao_2-zfiyul4ms70x2.pdf (Size: 4.83 MB, Downloads: 33)

Date

2023

Authors

Zhao, Xin
Lou, Jianxun
Wu, Xinbo
Wu, Yingying
Lévêque, Lucie
Liu, Xiaochang
Guo, Pengfei
Qin, Yipeng
Lin, Hanhe
Saupe, Dietmar
Liu, Hantao

Editors

Contact

Journal ISSN

Electronic ISSN

ISBN

Bibliographic data

Publisher

Series

Edition

ArXiv-ID

International patent number

Link to license

Research funding information

Project

Open Access publication
Open Access Gold
Core Facility of the Universität Konstanz

Embargoed until

Title in another language

Publication type
Journal article
Publication status
Published

Published in

IEEE Access. IEEE. 2023, 11, pp. 58025-58036. eISSN 2169-3536. Available under: doi: 10.1109/access.2023.3283344

Abstract

Visual saliency prediction remains an academic challenge due to the diversity and complexity of natural scenes as well as the scarcity of eye movement data on where people look in images. In many practical applications, digital images are inevitably subject to distortions, such as those caused by acquisition, editing, compression or transmission. A great deal of attention has been paid to predicting the saliency of distortion-free pristine images, but little attention has been given to understanding the impact of visual distortions on saliency prediction. In this paper, we first present the CUDAS database - a new distortion-aware saliency benchmark, where eye-tracking data was collected for 60 pristine images and their corresponding 540 distorted formats. We then conduct a statistical evaluation to reveal the behaviour of state-of-the-art saliency prediction models on distorted images and provide insights on building an effective model for distortion-aware saliency prediction. The new database is made publicly available to the research community.
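
As a rough illustration of the kind of statistical evaluation mentioned above, the sketch below scores a model's saliency maps for a pristine image and for a distorted counterpart against eye-tracking ground truth, using two metrics that are common in saliency benchmarking: the linear correlation coefficient (CC) and the Kullback-Leibler divergence (KLD). This is a minimal sketch under stated assumptions; the arrays, function names and choice of metrics are illustrative and do not reproduce the paper's exact evaluation protocol.

import numpy as np

def normalize_map(s):
    """Turn a saliency map into a probability distribution (non-negative, sums to 1)."""
    s = np.asarray(s, dtype=np.float64)
    s = s - s.min()
    total = s.sum()
    return s / total if total > 0 else np.full_like(s, 1.0 / s.size)

def cc(pred, gt):
    """Pearson linear correlation coefficient between two saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((p * g).mean())

def kld(pred, gt, eps=1e-12):
    """KL divergence of the ground-truth distribution from the predicted one."""
    p = normalize_map(pred)
    g = normalize_map(gt)
    return float(np.sum(g * np.log(eps + g / (p + eps))))

if __name__ == "__main__":
    # Placeholder arrays standing in for real saliency maps; in practice these would be
    # the model's prediction for a pristine image, its prediction for one of the image's
    # distorted versions, and the fixation-density map obtained from eye-tracking data.
    rng = np.random.default_rng(0)
    gt = rng.random((60, 80))
    pred_pristine = gt + 0.1 * rng.random((60, 80))
    pred_distorted = gt + 0.5 * rng.random((60, 80))

    print("CC  pristine  vs GT:", round(cc(pred_pristine, gt), 3))
    print("CC  distorted vs GT:", round(cc(pred_distorted, gt), 3))
    print("KLD pristine  vs GT:", round(kld(pred_pristine, gt), 3))
    print("KLD distorted vs GT:", round(kld(pred_distorted, gt), 3))

Comparing the scores obtained on pristine and distorted inputs in this way is one simple route to the question the benchmark raises, namely how much a model's agreement with human fixations degrades under distortion.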

Abstract in another language

Subject (DDC)
004 Computer science

Keywords

Distortion, Databases, Graphics processing units, Visualization, Benchmark testing, Gaze tracking, Computational modeling

Conference

Review

Research project

Organisational units

Journal issue

Related datasets in KOPS

Cite

ISO 690
ZHAO, Xin, Jianxun LOU, Xinbo WU, Yingying WU, Lucie LÉVÊQUE, Xiaochang LIU, Pengfei GUO, Yipeng QIN, Hanhe LIN, Dietmar SAUPE, Hantao LIU, 2023. CUDAS : Distortion-Aware Saliency Benchmark. In: IEEE Access. IEEE. 2023, 11, pp. 58025-58036. eISSN 2169-3536. Available under: doi: 10.1109/access.2023.3283344
BibTeX
@article{Zhao2023CUDAS-67086,
  year={2023},
  doi={10.1109/access.2023.3283344},
  title={CUDAS : Distortion-Aware Saliency Benchmark},
  volume={11},
  journal={IEEE Access},
  pages={58025--58036},
  author={Zhao, Xin and Lou, Jianxun and Wu, Xinbo and Wu, Yingying and Lévêque, Lucie and Liu, Xiaochang and Guo, Pengfei and Qin, Yipeng and Lin, Hanhe and Saupe, Dietmar and Liu, Hantao}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/67086">
    <dc:creator>Lévêque, Lucie</dc:creator>
    <dc:contributor>Wu, Xinbo</dc:contributor>
    <dc:contributor>Saupe, Dietmar</dc:contributor>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-06-12T07:09:54Z</dcterms:available>
    <dc:creator>Zhao, Xin</dc:creator>
    <dc:contributor>Lou, Jianxun</dc:contributor>
    <dc:contributor>Liu, Xiaochang</dc:contributor>
    <dc:creator>Guo, Pengfei</dc:creator>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/67086/1/Zhao_2-zfiyul4ms70x2.pdf"/>
    <dc:creator>Wu, Xinbo</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-06-12T07:09:54Z</dc:date>
    <dc:creator>Lou, Jianxun</dc:creator>
    <dc:contributor>Qin, Yipeng</dc:contributor>
    <dc:creator>Wu, Yingying</dc:creator>
    <dcterms:title>CUDAS : Distortion-Aware Saliency Benchmark</dcterms:title>
    <dc:contributor>Lin, Hanhe</dc:contributor>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by-nc-nd/4.0/"/>
    <dc:contributor>Liu, Hantao</dc:contributor>
    <dc:contributor>Guo, Pengfei</dc:contributor>
    <dc:creator>Lin, Hanhe</dc:creator>
    <dc:creator>Liu, Xiaochang</dc:creator>
    <dc:rights>Attribution-NonCommercial-NoDerivatives 4.0 International</dc:rights>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/67086/1/Zhao_2-zfiyul4ms70x2.pdf"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:creator>Liu, Hantao</dc:creator>
    <dc:contributor>Wu, Yingying</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:issued>2023</dcterms:issued>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/67086"/>
    <dc:creator>Qin, Yipeng</dc:creator>
    <dc:contributor>Zhao, Xin</dc:contributor>
    <dcterms:abstract>Visual saliency prediction remains an academic challenge due to the diversity and complexity of natural scenes as well as the scarcity of eye movement data on where people look in images. In many practical applications, digital images are inevitably subject to distortions, such as those caused by acquisition, editing, compression or transmission. A great deal of attention has been paid to predicting the saliency of distortion-free pristine images, but little attention has been given to understanding the impact of visual distortions on saliency prediction. In this paper, we first present the CUDAS database - a new distortion-aware saliency benchmark, where eye-tracking data was collected for 60 pristine images and their corresponding 540 distorted formats. We then conduct a statistical evaluation to reveal the behaviour of state-of-the-art saliency prediction models on distorted images and provide insights on building an effective model for distortion-aware saliency prediction. The new database is made publicly available to the research community.</dcterms:abstract>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:creator>Saupe, Dietmar</dc:creator>
    <dc:language>eng</dc:language>
    <dc:contributor>Lévêque, Lucie</dc:contributor>
  </rdf:Description>
</rdf:RDF>

Internal note


Contact
URL of the original publication

URL check date

Date of the dissertation examination

Type of funding

Comment on the publication

Alliance license
Corresponding authors from the Universität Konstanz present
International co-authors
University bibliography
Yes
Peer reviewed
Yes