Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model
| dc.contributor.author | Su, Shaolin | |
| dc.contributor.author | Lin, Hanhe | |
| dc.contributor.author | Hosu, Vlad | |
| dc.contributor.author | Wiedemann, Oliver | |
| dc.contributor.author | Sun, Jinqiu | |
| dc.contributor.author | Zhu, Yu | |
| dc.contributor.author | Liu, Hantao | |
| dc.contributor.author | Zhang, Yanning | |
| dc.contributor.author | Saupe, Dietmar | |
| dc.date.accessioned | 2024-05-03T06:45:37Z | |
| dc.date.available | 2024-05-03T06:45:37Z | |
| dc.date.issued | 2024 | |
| dc.description.abstract | An accurate computational model for image quality assessment (IQA) benefits many vision applications, such as image filtering, image processing, and image generation. Although the study of face images is an important subfield in computer vision research, the lack of face IQA data and models limits the precision of current IQA metrics on face image processing tasks such as face superresolution, face enhancement, and face editing. To narrow this gap, in this article, we first introduce the largest annotated IQA database developed to date, which contains 20,000 human faces – an order of magnitude larger than all existing rated datasets of faces – of diverse individuals in highly varied circumstances. Based on the database, we further propose a novel deep learning model to accurately predict face image quality, which, for the first time, explores the use of generative priors for IQA. By taking advantage of rich statistics encoded in well pretrained off-the-shelf generative models, we obtain generative prior information and use it as latent references to facilitate blind IQA. The experimental results demonstrate both the value of the proposed dataset for face IQA and the superior performance of the proposed model. | |
| dc.description.version | published | deu |
| dc.identifier.doi | 10.1109/tmm.2023.3301276 | |
| dc.identifier.uri | https://kops.uni-konstanz.de/handle/123456789/69906 | |
| dc.language.iso | eng | |
| dc.subject | Image quality assessment | |
| dc.subject | face quality | |
| dc.subject | subjective study | |
| dc.subject | GAN | |
| dc.subject | generative priors | |
| dc.subject.ddc | 004 | |
| dc.title | Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model | eng |
| dc.type | JOURNAL_ARTICLE | |
| dspace.entity.type | Publication | |
| kops.citation.bibtex | @article{Su2024Going-69906,
title={Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model},
year={2024},
doi={10.1109/tmm.2023.3301276},
volume={26},
issn={1520-9210},
journal={IEEE Transactions on Multimedia},
pages={2671--2685},
author={Su, Shaolin and Lin, Hanhe and Hosu, Vlad and Wiedemann, Oliver and Sun, Jinqiu and Zhu, Yu and Liu, Hantao and Zhang, Yanning and Saupe, Dietmar}
} | |
| kops.citation.iso690 | SU, Shaolin, Hanhe LIN, Vlad HOSU, Oliver WIEDEMANN, Jinqiu SUN, Yu ZHU, Hantao LIU, Yanning ZHANG, Dietmar SAUPE, 2024. Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model. In: IEEE Transactions on Multimedia. Institute of Electrical and Electronics Engineers (IEEE). 2024, 26, S. 2671-2685. ISSN 1520-9210. eISSN 1941-0077. Verfügbar unter: doi: 10.1109/tmm.2023.3301276 | deu |
| kops.citation.iso690 | SU, Shaolin, Hanhe LIN, Vlad HOSU, Oliver WIEDEMANN, Jinqiu SUN, Yu ZHU, Hantao LIU, Yanning ZHANG, Dietmar SAUPE, 2024. Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model. In: IEEE Transactions on Multimedia. Institute of Electrical and Electronics Engineers (IEEE). 2024, 26, pp. 2671-2685. ISSN 1520-9210. eISSN 1941-0077. Available under: doi: 10.1109/tmm.2023.3301276 | eng |
| kops.citation.rdf | <rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:bibo="http://purl.org/ontology/bibo/"
xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:void="http://rdfs.org/ns/void#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#" >
<rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/69906">
<dc:creator>Sun, Jinqiu</dc:creator>
<dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
<dc:creator>Zhang, Yanning</dc:creator>
<dc:contributor>Wiedemann, Oliver</dc:contributor>
<dc:contributor>Sun, Jinqiu</dc:contributor>
<dc:creator>Hosu, Vlad</dc:creator>
<dc:creator>Liu, Hantao</dc:creator>
<dc:contributor>Saupe, Dietmar</dc:contributor>
<dc:contributor>Zhang, Yanning</dc:contributor>
<dc:creator>Saupe, Dietmar</dc:creator>
<dc:creator>Su, Shaolin</dc:creator>
<dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
<dc:creator>Wiedemann, Oliver</dc:creator>
<dcterms:abstract>An accurate computational model for image quality assessment (IQA) benefits many vision applications, such as image filtering, image processing, and image generation. Although the study of face images is an important subfield in computer vision research, the lack of face IQA data and models limits the precision of current IQA metrics on face image processing tasks such as face superresolution, face enhancement, and face editing. To narrow this gap, in this article, we first introduce the largest annotated IQA database developed to date, which contains 20,000 human faces – an order of magnitude larger than all existing rated datasets of faces – of diverse individuals in highly varied circumstances. Based on the database, we further propose a novel deep learning model to accurately predict face image quality, which, for the first time, explores the use of generative priors for IQA. By taking advantage of rich statistics encoded in well pretrained off-the-shelf generative models, we obtain generative prior information and use it as latent references to facilitate blind IQA. The experimental results demonstrate both the value of the proposed dataset for face IQA and the superior performance of the proposed model.</dcterms:abstract>
<dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-05-03T06:45:37Z</dc:date>
<dc:contributor>Liu, Hantao</dc:contributor>
<dc:contributor>Lin, Hanhe</dc:contributor>
<dc:contributor>Hosu, Vlad</dc:contributor>
<dc:contributor>Zhu, Yu</dc:contributor>
<dc:contributor>Su, Shaolin</dc:contributor>
<dc:creator>Lin, Hanhe</dc:creator>
<bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/69906"/>
<dc:creator>Zhu, Yu</dc:creator>
<dcterms:issued>2024</dcterms:issued>
<dc:language>eng</dc:language>
<dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-05-03T06:45:37Z</dcterms:available>
<dcterms:title>Going the Extra Mile in Face Image Quality Assessment : A Novel Database and Model</dcterms:title>
</rdf:Description>
</rdf:RDF> | |
| kops.description.funding | {"first":"dfg","second":"251654672 – TRR 161"} | |
| kops.flag.isPeerReviewed | false | |
| kops.flag.knbibliography | true | |
| kops.sourcefield | IEEE Transactions on Multimedia. Institute of Electrical and Electronics Engineers (IEEE). 2024, <b>26</b>, S. 2671-2685. ISSN 1520-9210. eISSN 1941-0077. Verfügbar unter: doi: 10.1109/tmm.2023.3301276 | deu |
| kops.sourcefield.plain | IEEE Transactions on Multimedia. Institute of Electrical and Electronics Engineers (IEEE). 2024, 26, S. 2671-2685. ISSN 1520-9210. eISSN 1941-0077. Verfügbar unter: doi: 10.1109/tmm.2023.3301276 | deu |
| kops.sourcefield.plain | IEEE Transactions on Multimedia. Institute of Electrical and Electronics Engineers (IEEE). 2024, 26, pp. 2671-2685. ISSN 1520-9210. eISSN 1941-0077. Available under: doi: 10.1109/tmm.2023.3301276 | eng |
| relation.isAuthorOfPublication | c8c5d383-2277-4596-bcdc-bc759079a116 | |
| relation.isAuthorOfPublication | 72057485-5f84-41aa-b6cb-8d616362e6a8 | |
| relation.isAuthorOfPublication | 46e43f0d-5589-4060-b110-18519cbf61e0 | |
| relation.isAuthorOfPublication | c39b7364-a777-46ff-bf56-4e613f766410 | |
| relation.isAuthorOfPublication | fffb576d-6ec6-4221-8401-77f1d117a9b9 | |
| relation.isAuthorOfPublication.latestForDiscovery | c8c5d383-2277-4596-bcdc-bc759079a116 | |
| source.bibliographicInfo.fromPage | 2671 | |
| source.bibliographicInfo.toPage | 2685 | |
| source.bibliographicInfo.volume | 26 | |
| source.identifier.eissn | 1941-0077 | |
| source.identifier.issn | 1520-9210 | |
| source.periodicalTitle | IEEE Transactions on Multimedia | |
| source.publisher | Institute of Electrical and Electronics Engineers (IEEE) |