Large-scale crowdsourced subjective assessment of picturewise just noticeable difference

dc.contributor.author: Lin, Hanhe
dc.contributor.author: Chen, Guangan
dc.contributor.author: Jenadeleh, Mohsen
dc.contributor.author: Hosu, Vlad
dc.contributor.author: Reips, Ulf-Dietrich
dc.contributor.author: Hamzaoui, Raouf
dc.contributor.author: Saupe, Dietmar
dc.date.accessioned: 2022-04-05T08:13:03Z
dc.date.available: 2022-04-05T08:13:03Z
dc.date.issued: 2022
dc.description.abstract: The picturewise just noticeable difference (PJND) for a given image, compression scheme, and subject is the smallest distortion level that the subject can perceive when the image is compressed with this compression scheme. The PJND can be used to determine the compression level at which a given proportion of the population does not notice any distortion in the compressed image. To obtain accurate and diverse results, the PJND must be determined for a large number of subjects and images. This is particularly important when experimental PJND data are used to train deep learning models that can predict a probability distribution model of the PJND for a new image. To date, such subjective studies have been carried out in laboratory environments. However, the number of participants and images in all existing PJND studies is very small because of the challenges involved in setting up laboratory experiments. To address this limitation, we develop a framework to conduct PJND assessments via crowdsourcing. We use a new technique based on slider adjustment and a flicker test to determine the PJND. A pilot study demonstrated that our technique could decrease the study duration by 50% and double the perceptual sensitivity compared to the standard binary search approach that successively compares a test image side by side with its reference image. Our framework includes a robust and systematic scheme to ensure the reliability of the crowdsourced results. Using 1,008 source images and distorted versions obtained with JPEG and BPG compression, we apply our crowdsourcing framework to build the largest PJND dataset, KonJND-1k (Konstanz just noticeable difference 1k dataset). A total of 503 workers participated in the study, yielding 61,030 PJND samples that resulted in an average of 42 samples per source image. The KonJND-1k dataset is available at http://database.mmsp-kn.de/konjnd-1k-database.html.
dc.description.version: published
dc.identifier.doi: 10.1109/TCSVT.2022.3163860
dc.identifier.ppn: 1845444213
dc.identifier.uri: https://kops.uni-konstanz.de/handle/123456789/57160
dc.language.iso: eng
dc.rights: terms-of-use
dc.rights.uri: https://rightsstatements.org/page/InC/1.0/
dc.subject.ddc: 004
dc.title: Large-scale crowdsourced subjective assessment of picturewise just noticeable difference
dc.type: JOURNAL_ARTICLE
dspace.entity.type: Publication
kops.citation.bibtex
@article{Lin2022Large-57160,
  year={2022},
  doi={10.1109/TCSVT.2022.3163860},
  title={Large-scale crowdsourced subjective assessment of picturewise just noticeable difference},
  number={9},
  volume={32},
  issn={1051-8215},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  pages={5859--5873},
  author={Lin, Hanhe and Chen, Guangan and Jenadeleh, Mohsen and Hosu, Vlad and Reips, Ulf-Dietrich and Hamzaoui, Raouf and Saupe, Dietmar}
}
kops.citation.iso690: LIN, Hanhe, Guangan CHEN, Mohsen JENADELEH, Vlad HOSU, Ulf-Dietrich REIPS, Raouf HAMZAOUI, Dietmar SAUPE, 2022. Large-scale crowdsourced subjective assessment of picturewise just noticeable difference. In: IEEE Transactions on Circuits and Systems for Video Technology. IEEE. 2022, 32(9), pp. 5859-5873. ISSN 1051-8215. eISSN 1558-2205. Available under: doi: 10.1109/TCSVT.2022.3163860
kops.citation.rdf
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/57160">
    <dc:creator>Reips, Ulf-Dietrich</dc:creator>
    <dc:creator>Hosu, Vlad</dc:creator>
    <dc:contributor>Reips, Ulf-Dietrich</dc:contributor>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-04-05T08:13:03Z</dc:date>
    <dc:contributor>Chen, Guangan</dc:contributor>
    <dc:contributor>Hosu, Vlad</dc:contributor>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/57160/1/Lin_2-54g16qw29m6b7.pdf"/>
    <dc:rights>terms-of-use</dc:rights>
    <dc:contributor>Hamzaoui, Raouf</dc:contributor>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/57160/1/Lin_2-54g16qw29m6b7.pdf"/>
    <dc:creator>Saupe, Dietmar</dc:creator>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dc:creator>Hamzaoui, Raouf</dc:creator>
    <dc:language>eng</dc:language>
    <dc:creator>Chen, Guangan</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:issued>2022</dcterms:issued>
    <dc:contributor>Saupe, Dietmar</dc:contributor>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-04-05T08:13:03Z</dcterms:available>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/57160"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dc:contributor>Jenadeleh, Mohsen</dc:contributor>
    <dc:creator>Lin, Hanhe</dc:creator>
    <dc:creator>Jenadeleh, Mohsen</dc:creator>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dcterms:abstract xml:lang="eng">The picturewise just noticeable difference (PJND) for a given image, compression scheme, and subject is the smallest distortion level that the subject can perceive when the image is compressed with this compression scheme. The PJND can be used to determine the compression level at which a given proportion of the population does not notice any distortion in the compressed image. To obtain accurate and diverse results, the PJND must be determined for a large number of subjects and images. This is particularly important when experimental PJND data are used to train deep learning models that can predict a probability distribution model of the PJND for a new image. To date, such subjective studies have been carried out in laboratory environments. However, the number of participants and images in all existing PJND studies is very small because of the challenges involved in setting up laboratory experiments. To address this limitation, we develop a framework to conduct PJND assessments via crowdsourcing. We use a new technique based on slider adjustment and a flicker test to determine the PJND. A pilot study demonstrated that our technique could decrease the study duration by 50% and double the perceptual sensitivity compared to the standard binary search approach that successively compares a test image side by side with its reference image. Our framework includes a robust and systematic scheme to ensure the reliability of the crowdsourced results. Using 1,008 source images and distorted versions obtained with JPEG and BPG compression, we apply our crowdsourcing framework to build the largest PJND dataset, KonJND-1k (Konstanz just noticeable difference 1k dataset). A total of 503 workers participated in the study, yielding 61,030 PJND samples that resulted in an average of 42 samples per source image. The KonJND-1k dataset is available at http://database.mmsp-kn.de/konjnd-1k-database.html.</dcterms:abstract>
    <dcterms:title>Large-scale crowdsourced subjective assessment of picturewise just noticeable difference</dcterms:title>
    <dc:contributor>Lin, Hanhe</dc:contributor>
  </rdf:Description>
</rdf:RDF>
kops.description.openAccess: open access (green)
kops.flag.isPeerReviewed: true
kops.flag.knbibliography: true
kops.identifier.nbn: urn:nbn:de:bsz:352-2-54g16qw29m6b7
kops.sourcefield: IEEE Transactions on Circuits and Systems for Video Technology. IEEE. 2022, 32(9), pp. 5859-5873. ISSN 1051-8215. eISSN 1558-2205. Available under: doi: 10.1109/TCSVT.2022.3163860
relation.isAuthorOfPublication: 72057485-5f84-41aa-b6cb-8d616362e6a8
relation.isAuthorOfPublication: 801d74f1-2a42-4fbd-a9a7-d5a8ef2d3d09
relation.isAuthorOfPublication: 6a68664e-96d9-4ff1-847a-536c58f26500
relation.isAuthorOfPublication: 46e43f0d-5589-4060-b110-18519cbf61e0
relation.isAuthorOfPublication: 10de7423-bec5-4bea-99c1-dff3e543da0b
relation.isAuthorOfPublication: b66a7558-a3f1-485a-8955-faf356061805
relation.isAuthorOfPublication: fffb576d-6ec6-4221-8401-77f1d117a9b9
relation.isAuthorOfPublication.latestForDiscovery: 72057485-5f84-41aa-b6cb-8d616362e6a8
source.bibliographicInfo.fromPage: 5859
source.bibliographicInfo.issue: 9
source.bibliographicInfo.toPage: 5873
source.bibliographicInfo.volume: 32
source.identifier.eissn: 1558-2205
source.identifier.issn: 1051-8215
source.periodicalTitle: IEEE Transactions on Circuits and Systems for Video Technology
source.publisher: IEEE

Files

Original bundle

Name: Lin_2-54g16qw29m6b7.pdf
Size: 3.96 MB
Format: Adobe Portable Document Format
Downloads: 208