Large-scale crowdsourced subjective assessment of picturewise just noticeable difference
Date
2022
Journal ISSN
1051-8215
Electronic ISSN
1558-2205
Bibliographical data
Publisher
IEEE
URI (citable link)
https://kops.uni-konstanz.de/handle/123456789/57160
DOI (citable link)
https://doi.org/10.1109/TCSVT.2022.3163860
Publication type
Journal article
Publication status
Published
Published in
IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 9 (2022), pp. 5859-5873. IEEE. ISSN 1051-8215; eISSN 1558-2205
Abstract
The picturewise just noticeable difference (PJND) for a given image, compression scheme, and subject is the smallest distortion level that the subject can perceive when the image is compressed with this compression scheme. The PJND can be used to determine the compression level at which a given proportion of the population does not notice any distortion in the compressed image. To obtain accurate and diverse results, the PJND must be determined for a large number of subjects and images. This is particularly important when experimental PJND data are used to train deep learning models that can predict a probability distribution model of the PJND for a new image. To date, such subjective studies have been carried out in laboratory environments. However, the number of participants and images in all existing PJND studies is very small because of the challenges involved in setting up laboratory experiments. To address this limitation, we develop a framework to conduct PJND assessments via crowdsourcing. We use a new technique based on slider adjustment and a flicker test to determine the PJND. A pilot study demonstrated that our technique could decrease the study duration by 50% and double the perceptual sensitivity compared to the standard binary search approach that successively compares a test image side by side with its reference image. Our framework includes a robust and systematic scheme to ensure the reliability of the crowdsourced results. Using 1,008 source images and distorted versions obtained with JPEG and BPG compression, we apply our crowdsourcing framework to build the largest PJND dataset, KonJND-1k (Konstanz just noticeable difference 1k dataset). A total of 503 workers participated in the study, yielding 61,030 PJND samples that resulted in an average of 42 samples per source image. The KonJND-1k dataset is available at http://database.mmsp-kn.de/konjnd-1k-database.html.
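To make the population-level use of PJND samples concrete, the following is a minimal, illustrative sketch (Python, not taken from the paper) of how the compression level at which a given proportion of the population notices no distortion could be estimated from crowdsourced PJND samples for a single source image. The function name, the quantile-based estimator, and the example numbers are assumptions for illustration only.

# Illustrative sketch (not the authors' method): estimate the distortion
# level at which a given proportion of the population does not notice any
# distortion, from per-subject PJND samples for one source image.
import numpy as np

def satisfied_proportion_level(pjnd_samples, proportion=0.75):
    # Each PJND value is the smallest distortion level a subject can perceive
    # (larger value = more distortion). A subject does not notice distortion
    # at level d if d < PJND, so the fraction of subjects not noticing level d
    # is P(PJND > d). The largest level that satisfies at least `proportion`
    # of subjects is therefore (just below) the (1 - proportion)-quantile of
    # the empirical PJND distribution.
    samples = np.asarray(pjnd_samples, dtype=float)
    return np.quantile(samples, 1.0 - proportion)

# Hypothetical usage with 42 simulated PJND samples (arbitrary numbers):
rng = np.random.default_rng(0)
samples = rng.normal(loc=60.0, scale=10.0, size=42)
print(satisfied_proportion_level(samples, proportion=0.75))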
Subject (DDC)
004 Computer Science
Cite This
ISO 690
LIN, Hanhe, Guangan CHEN, Mohsen JENADELEH, Vlad HOSU, Ulf-Dietrich REIPS, Raouf HAMZAOUI, Dietmar SAUPE, 2022. Large-scale crowdsourced subjective assessment of picturewise just noticeable difference. In: IEEE Transactions on Circuits and Systems for Video Technology. IEEE. 32(9), pp. 5859-5873. ISSN 1051-8215. eISSN 1558-2205. Available under: doi: 10.1109/TCSVT.2022.3163860
BibTeX
@article{Lin2022Large-57160,
  author  = {Lin, Hanhe and Chen, Guangan and Jenadeleh, Mohsen and Hosu, Vlad and Reips, Ulf-Dietrich and Hamzaoui, Raouf and Saupe, Dietmar},
  title   = {Large-scale crowdsourced subjective assessment of picturewise just noticeable difference},
  journal = {IEEE Transactions on Circuits and Systems for Video Technology},
  year    = {2022},
  volume  = {32},
  number  = {9},
  pages   = {5859--5873},
  issn    = {1051-8215},
  doi     = {10.1109/TCSVT.2022.3163860}
}
Corresponding authors at the University of Konstanz
Yes
Bibliography of Konstanz
Yes
Refereed
Yes