Publication:

DeepRN: A Content Preserving Deep Architecture for Blind Image Quality Assessment

Files

No files are associated with this document.

Date

2018

Authors

Varga, Domonkos
Saupe, Dietmar
Sziranyi, Tamas

Publication type
Contribution to conference proceedings
Publication status
Published

Published in

2018 IEEE International Conference on Multimedia and Expo (ICME). Piscataway, New Jersey, USA: IEEE, 2018. ISBN 978-1-5386-1737-3. Available under: doi: 10.1109/ICME.2018.8486528

Abstract

This paper presents a blind image quality assessment (BIQA) method based on deep learning with convolutional neural networks (CNN). Our method is trained on full, arbitrarily sized images rather than on small image patches or resized input images, as is usually done in CNNs for image classification and quality assessment. The resolution independence is achieved by pyramid pooling. This work is the first to apply a fine-tuned residual deep learning network (ResNet-101) to BIQA. The training is carried out on a new and very large labeled dataset of 10,073 images (KonIQ-10k) that contains quality rating histograms in addition to the mean opinion scores (MOS). In contrast to previous methods, we do not train to approximate the MOS directly, but rather use the distributions of scores. Experiments were carried out on three benchmark image quality databases. The results show clear improvements in the accuracy of the estimated MOS values compared to current state-of-the-art algorithms. We also report on the quality of the estimation of the score distributions.
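
A minimal sketch of the approach described in the abstract, for illustration only and not the authors' published code: a PyTorch model that replaces the final pooling and classification layers of a pretrained ResNet-101 with spatial pyramid pooling, so that arbitrarily sized images map to a fixed-length feature vector, and that predicts a rating histogram rather than a single score. The pooling levels, the hidden-layer size, and the 5-bin rating scale are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet101

class SpatialPyramidPooling(nn.Module):
    # Pools an (N, C, H, W) feature map into a fixed-length vector,
    # independent of H and W, by concatenating adaptive average-pooling grids.
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        pooled = [F.adaptive_avg_pool2d(x, l).flatten(1) for l in self.levels]
        return torch.cat(pooled, dim=1)        # (N, C * sum(l*l))

class BIQANet(nn.Module):
    def __init__(self, num_bins=5, levels=(1, 2, 4)):
        super().__init__()
        backbone = resnet101(weights="IMAGENET1K_V1")                    # pretrained backbone
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # drop avgpool + fc
        self.spp = SpatialPyramidPooling(levels)
        feat_dim = 2048 * sum(l * l for l in levels)                     # 2048 channels from ResNet-101
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 1024),                                   # hidden size is an assumption
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_bins),
        )

    def forward(self, x):
        h = self.features(x)                   # (N, 2048, H/32, W/32), any input size
        h = self.spp(h)                        # fixed length regardless of resolution
        return F.softmax(self.head(h), dim=1)  # predicted distribution over rating bins

# Usage: feed the full-resolution image (no resizing, no patches); the estimated MOS
# is the expectation of the predicted histogram over the rating scale 1..5.
model = BIQANet().eval()
img = torch.randn(1, 3, 768, 1024)
with torch.no_grad():
    p = model(img)                             # shape (1, 5)
mos_hat = (p * torch.arange(1, 6, dtype=p.dtype)).sum(dim=1)

A training loop would compare the predicted histogram against the empirical rating distribution from KonIQ-10k, for example with a cross-entropy or earth mover's distance loss; the specific loss here is an assumption, since the abstract only states that score distributions, not MOS values, are the training target.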

Subject area (DDC)
004 Computer Science

Keywords

Blind image quality assessment, deep learning, CNN, spatial pyramid pooling

Conference

2018 IEEE International Conference on Multimedia and Expo (ICME), July 23-27, 2018, San Diego, California, USA

Cite

ISO 690
VARGA, Domonkos, Dietmar SAUPE, Tamas SZIRANYI, 2018. Deeprn : A Content Preserving Deep Architecture for Blind Image Quality Assessment. 2018 IEEE International Conference on Multimedia and Expo (ICME). San Diego, California, USA, July 23-27, 2018. In: 2018 IEEE International Conference on Multimedia and Expo (ICME). Piscataway, New Jersey, USA: IEEE, 2018. ISBN 978-1-5386-1737-3. Available under: doi: 10.1109/ICME.2018.8486528
BibTex
@inproceedings{Varga2018Deepr-44634,
  year={2018},
  doi={10.1109/ICME.2018.8486528},
  title={Deeprn : A Content Preserving Deep Architecture for Blind Image Quality Assessment},
  isbn={978-1-5386-1737-3},
  publisher={IEEE},
  address={Piscataway, New Jersey, USA},
  booktitle={2018 IEEE International Conference on Multimedia and Expo (ICME)},
  author={Varga, Domonkos and Saupe, Dietmar and Sziranyi, Tamas}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/44634">
    <dc:contributor>Varga, Domonkos</dc:contributor>
    <dc:creator>Sziranyi, Tamas</dc:creator>
    <dcterms:title>Deeprn : A Content Preserving Deep Architecture for Blind Image Quality Assessment</dcterms:title>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:creator>Saupe, Dietmar</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2019-01-21T11:00:57Z</dc:date>
    <dc:contributor>Sziranyi, Tamas</dc:contributor>
    <dcterms:issued>2018</dcterms:issued>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/44634"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:abstract xml:lang="eng">This paper presents a blind image quality assessment (BIQA) method based on deep learning with convolutional neural networks (CNN). Our method is trained on full and arbitrarily sized images rather than small image patches or resized input images as usually done in CNNs for image classification and quality assessment. The resolution independence is achieved by pyramid pooling. This work is the first that applies a fine-tuned residual deep learning network (ResNet-101) to BIQA. The training is carried out on a new and very large, labeled dataset of 10,073 images (KonIQ-10k) that contains quality rating histograms besides the mean opinion scores (MOS). In contrast to previous methods we do not train to approximate the MOS directly, but rather use the distributions of scores. Experiments were carried out on three benchmark image quality databases. The results showed clear improvements of the accuracy of the estimated MOS values, compared to current state-of-the-art algorithms. We also report on the quality of the estimation of the score distributions.</dcterms:abstract>
    <dc:language>eng</dc:language>
    <dc:creator>Varga, Domonkos</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2019-01-21T11:00:57Z</dcterms:available>
    <dc:contributor>Saupe, Dietmar</dc:contributor>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
  </rdf:Description>
</rdf:RDF>

University bibliography
Yes