Publication: Deep Learning VS. Traditional Algorithms for Saliency Prediction of Distorted Images
Abstract
Saliency has been widely studied in relation to image quality assessment (IQA). The optimal use of saliency in IQA metrics, however, is nontrivial and largely depends on whether saliency can be accurately predicted for images containing various distortions. Although tremendous progress has been made in saliency modelling, very little is known about whether and to what extent state-of-the-art methods are beneficial for saliency prediction of distorted images. In this paper, we analyse the ability of deep learning versus traditional algorithms in predicting saliency, based on an IQA-aware saliency benchmark, the SIQ288 database. Building off the variations in model performance, we make recommendations for model selections for IQA applications.
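For context on how a benchmark comparison of saliency models is typically scored, the sketch below illustrates two standard saliency evaluation measures, the linear correlation coefficient (CC) and the histogram-intersection similarity (SIM), computed between a predicted saliency map and a ground-truth fixation density map. The NumPy implementation, function names, and map shapes are illustrative assumptions and do not reproduce the paper's exact evaluation protocol.

import numpy as np

def _as_distribution(sal_map: np.ndarray) -> np.ndarray:
    """Shift to non-negative values and normalize so the map sums to 1."""
    s = sal_map.astype(np.float64)
    s -= s.min()
    return s / (s.sum() + 1e-12)

def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson linear correlation coefficient between two saliency maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((p * g).mean())

def sim(pred: np.ndarray, gt: np.ndarray) -> float:
    """Histogram intersection (SIM) of the distribution-normalized maps."""
    return float(np.minimum(_as_distribution(pred), _as_distribution(gt)).sum())

# Toy usage: random maps stand in for a model prediction and a fixation density map.
rng = np.random.default_rng(0)
pred_map = rng.random((480, 640))
gt_map = rng.random((480, 640))
print(f"CC  = {cc(pred_map, gt_map):.3f}")
print(f"SIM = {sim(pred_map, gt_map):.3f}")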
Cite
ISO 690
ZHAO, Xin, Hanhe LIN, Pengfei GUO, Dietmar SAUPE, Hantao LIU, 2020. Deep Learning VS. Traditional Algorithms for Saliency Prediction of Distorted Images. 2020 IEEE International Conference on Image Processing (ICIP). Abu Dhabi, United Arab Emirates, 25 Oct. 2020 - 28 Oct. 2020. In: 2020 IEEE International Conference on Image Processing (ICIP). Piscataway, NJ: IEEE, 2020, pp. 156-160. ISBN 978-1-72816-395-6. Available under: doi: 10.1109/ICIP40778.2020.9191203
BibTeX
@inproceedings{Zhao2020Learn-54030,
  author    = {Zhao, Xin and Lin, Hanhe and Guo, Pengfei and Saupe, Dietmar and Liu, Hantao},
  title     = {Deep Learning VS. Traditional Algorithms for Saliency Prediction of Distorted Images},
  booktitle = {2020 IEEE International Conference on Image Processing (ICIP)},
  year      = {2020},
  pages     = {156--160},
  publisher = {IEEE},
  address   = {Piscataway, NJ},
  isbn      = {978-1-72816-395-6},
  doi       = {10.1109/ICIP40778.2020.9191203}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/54030">
    <dcterms:issued>2020</dcterms:issued>
    <dc:creator>Lin, Hanhe</dc:creator>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dc:creator>Guo, Pengfei</dc:creator>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:language>eng</dc:language>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:rights>terms-of-use</dc:rights>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/54030/1/Zhao_2-1h5s3np44liel6.pdf"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Zhao, Xin</dc:contributor>
    <dc:contributor>Guo, Pengfei</dc:contributor>
    <dc:contributor>Saupe, Dietmar</dc:contributor>
    <dc:creator>Liu, Hantao</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-06-18T12:33:45Z</dc:date>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-06-18T12:33:45Z</dcterms:available>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/54030"/>
    <dc:contributor>Lin, Hanhe</dc:contributor>
    <dc:creator>Zhao, Xin</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/54030/1/Zhao_2-1h5s3np44liel6.pdf"/>
    <dcterms:title>Deep Learning VS. Traditional Algorithms for Saliency Prediction of Distorted Images</dcterms:title>
    <dc:contributor>Liu, Hantao</dc:contributor>
    <dcterms:abstract xml:lang="eng">Saliency has been widely studied in relation to image quality assessment (IQA). The optimal use of saliency in IQA metrics, however, is nontrivial and largely depends on whether saliency can be accurately predicted for images containing various distortions. Although tremendous progress has been made in saliency modelling, very little is known about whether and to what extent state-of-the-art methods are beneficial for saliency prediction of distorted images. In this paper, we analyse the ability of deep learning versus traditional algorithms in predicting saliency, based on an IQA-aware saliency benchmark, the SIQ288 database. Building off the variations in model performance, we make recommendations for model selections for IQA applications.</dcterms:abstract>
    <dc:creator>Saupe, Dietmar</dc:creator>
  </rdf:Description>
</rdf:RDF>
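The RDF record above can also be consumed programmatically. Below is a minimal sketch using the Python rdflib library (an assumed dependency, not part of the repository record) to load the record and extract the title, creators, and issue year; the file name record.rdf is a placeholder for a local copy of the RDF/XML shown above.

from rdflib import Graph
from rdflib.namespace import DC, DCTERMS

# Parse a local copy of the RDF/XML record ("record.rdf" is a placeholder name).
g = Graph()
g.parse("record.rdf", format="xml")

record = next(g.subjects(DCTERMS.title, None))  # the single rdf:Description node
title = g.value(record, DCTERMS.title)
issued = g.value(record, DCTERMS.issued)
creators = sorted(str(c) for c in g.objects(record, DC.creator))

print(f"{title} ({issued})")
print("Creators:", "; ".join(creators))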