Publication:

Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model


Files

There are no files associated with this document.

Date

2024

Authors

Su, Shaolin
Lin, Hanhe
Hosu, Vlad
Wiedemann, Oliver
Sun, Jinqiu
Zhu, Yu
Liu, Hantao
Zhang, Yanning
Saupe, Dietmar

Editors

Contact

Journal ISSN
1520-9210
Electronic ISSN
1941-0077
ISBN

Bibliographic data

Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Series

Edition statement

URI (citable link)
https://kops.uni-konstanz.de/handle/123456789/69906
ArXiv ID

International patent number

Research funding information

Deutsche Forschungsgemeinschaft (DFG): 251654672 – TRR 161

Project

Open Access publication
Core Facility of the University of Konstanz

Embargoed until

Title in another language

Publication type
Journal article
Publication status
Published

Published in

IEEE Transactions on Multimedia. Institute of Electrical and Electronics Engineers (IEEE). 2024, 26, pp. 2671-2685. ISSN 1520-9210. eISSN 1941-0077. Available under: doi: 10.1109/tmm.2023.3301276

Abstract

An accurate computational model for image quality assessment (IQA) benefits many vision applications, such as image filtering, image processing, and image generation. Although the study of face images is an important subfield in computer vision research, the lack of face IQA data and models limits the precision of current IQA metrics on face image processing tasks such as face superresolution, face enhancement, and face editing. To narrow this gap, in this article, we first introduce the largest annotated IQA database developed to date, which contains 20,000 human faces – an order of magnitude larger than all existing rated datasets of faces – of diverse individuals in highly varied circumstances. Based on the database, we further propose a novel deep learning model to accurately predict face image quality, which, for the first time, explores the use of generative priors for IQA. By taking advantage of rich statistics encoded in well pretrained off-the-shelf generative models, we obtain generative prior information and use it as latent references to facilitate blind IQA. The experimental results demonstrate both the value of the proposed dataset for face IQA and the superior performance of the proposed model.
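
To make the general idea described above concrete, the following is a minimal, illustrative PyTorch sketch of generative-prior-based blind IQA: a frozen, pretrained generative model produces a latent reference for a face image, and a learned regressor compares features of the input and of that reference to predict a quality score. The class name, architecture, and dimensions here are assumptions for illustration only; they do not reproduce the model proposed in the paper.

# Illustrative sketch (not the paper's actual architecture): blind face IQA that
# compares features of the distorted input with features of a "generative prior"
# reconstruction produced by a frozen, pretrained generator. All module names
# and shapes are assumptions.

import torch
import torch.nn as nn


class GenerativePriorIQA(nn.Module):
    def __init__(self, prior_model: nn.Module, feat_dim: int = 128):
        super().__init__()
        self.prior_model = prior_model            # frozen off-the-shelf generative model
        for p in self.prior_model.parameters():
            p.requires_grad_(False)
        # Shared CNN encoder applied to both the input and its prior reconstruction.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Regressor maps [input features, reference features, their difference] to a score.
        self.regressor = nn.Sequential(
            nn.Linear(3 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            reference = self.prior_model(x)       # latent reference from the generative prior
        f_x = self.encoder(x)
        f_ref = self.encoder(reference)
        return self.regressor(torch.cat([f_x, f_ref, f_x - f_ref], dim=1)).squeeze(1)


if __name__ == "__main__":
    # Stand-in for a pretrained face generator/restorer; the identity mapping is
    # used here only so the sketch runs end to end.
    prior = nn.Identity()
    model = GenerativePriorIQA(prior)
    scores = model(torch.rand(2, 3, 128, 128))    # one predicted quality value per image
    print(scores.shape)                           # torch.Size([2])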

Abstract in another language

Subject (DDC)
004 Computer science

Keywords

Image quality assessment, face quality, subjective study, GAN, generative priors

Conference


Research project

Organizational units

Journal issue

Linked datasets

Cite

ISO 690
SU, Shaolin, Hanhe LIN, Vlad HOSU, Oliver WIEDEMANN, Jinqiu SUN, Yu ZHU, Hantao LIU, Yanning ZHANG, Dietmar SAUPE, 2024. Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model. In: IEEE Transactions on Multimedia. Institute of Electrical and Electronics Engineers (IEEE). 2024, 26, pp. 2671-2685. ISSN 1520-9210. eISSN 1941-0077. Available under: doi: 10.1109/tmm.2023.3301276
BibTex
@article{Su2024Going-69906,
  year={2024},
  doi={10.1109/tmm.2023.3301276},
  title={Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model},
  volume={26},
  issn={1520-9210},
  journal={IEEE Transactions on Multimedia},
  pages={2671--2685},
  author={Su, Shaolin and Lin, Hanhe and Hosu, Vlad and Wiedemann, Oliver and Sun, Jinqiu and Zhu, Yu and Liu, Hantao and Zhang, Yanning and Saupe, Dietmar}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/69906">
    <dc:creator>Sun, Jinqiu</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:creator>Zhang, Yanning</dc:creator>
    <dc:contributor>Wiedemann, Oliver</dc:contributor>
    <dc:contributor>Sun, Jinqiu</dc:contributor>
    <dc:creator>Hosu, Vlad</dc:creator>
    <dc:creator>Liu, Hantao</dc:creator>
    <dc:contributor>Saupe, Dietmar</dc:contributor>
    <dc:contributor>Zhang, Yanning</dc:contributor>
    <dc:creator>Saupe, Dietmar</dc:creator>
    <dc:creator>Su, Shaolin</dc:creator>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:creator>Wiedemann, Oliver</dc:creator>
    <dcterms:abstract>An accurate computational model for image quality assessment (IQA) benefits many vision applications, such as image filtering, image processing, and image generation. Although the study of face images is an important subfield in computer vision research, the lack of face IQA data and models limits the precision of current IQA metrics on face image processing tasks such as face superresolution, face enhancement, and face editing. To narrow this gap, in this article, we first introduce the largest annotated IQA database developed to date, which contains 20,000 human faces – an order of magnitude larger than all existing rated datasets of faces – of diverse individuals in highly varied circumstances. Based on the database, we further propose a novel deep learning model to accurately predict face image quality, which, for the first time, explores the use of generative priors for IQA. By taking advantage of rich statistics encoded in well pretrained off-the-shelf generative models, we obtain generative prior information and use it as latent references to facilitate blind IQA. The experimental results demonstrate both the value of the proposed dataset for face IQA and the superior performance of the proposed model.</dcterms:abstract>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-05-03T06:45:37Z</dc:date>
    <dc:contributor>Liu, Hantao</dc:contributor>
    <dc:contributor>Lin, Hanhe</dc:contributor>
    <dc:contributor>Hosu, Vlad</dc:contributor>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:contributor>Zhu, Yu</dc:contributor>
    <dc:contributor>Su, Shaolin</dc:contributor>
    <dc:creator>Lin, Hanhe</dc:creator>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/69906"/>
    <dc:creator>Zhu, Yu</dc:creator>
    <dcterms:issued>2024</dcterms:issued>
    <dc:language>eng</dc:language>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-05-03T06:45:37Z</dcterms:available>
    <dcterms:title>Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model</dcterms:title>
  </rdf:Description>
</rdf:RDF>
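
The RDF/XML above is the machine-readable export of this record. As a hedged illustration only (rdflib, the abridged excerpt, and the variable names are assumptions, not part of the record), such an export can be parsed as follows:

# Illustrative sketch: parse an abridged copy of the RDF/XML export above with
# rdflib and read out the title, year, and creators.
from rdflib import Graph
from rdflib.namespace import DC, DCTERMS

# Abridged excerpt of the record's RDF export; the full document above works the same way.
record_rdf = """<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/69906">
    <dcterms:title>Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model</dcterms:title>
    <dcterms:issued>2024</dcterms:issued>
    <dc:creator>Su, Shaolin</dc:creator>
    <dc:creator>Saupe, Dietmar</dc:creator>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
g.parse(data=record_rdf, format="xml")

title = next(g.objects(None, DCTERMS.title), None)
year = next(g.objects(None, DCTERMS.issued), None)
creators = sorted(str(o) for o in g.objects(None, DC.creator))

print(title)     # Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model
print(year)      # 2024
print(creators)  # the creators kept in this excerpt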


Contact
URL of the original publication

Date the URL was checked

Date of the doctoral examination

Type of funding

Comment on the publication

Alliance license
Corresponding authors at the University of Konstanz present
International co-authors
University bibliography
Yes
Peer reviewed
No