Publication: The Konstanz natural video database (KoNViD-1k)
Abstract
Subjective video quality assessment (VQA) strongly depends on semantics, context, and the types of visual distortions. Currently, all existing VQA databases include only a small number of video sequences with artificial distortions. The development and evaluation of objective quality assessment methods would benefit from having larger datasets of real-world video sequences with corresponding subjective mean opinion scores (MOS), in particular for deep learning purposes. In addition, the training and validation of any VQA method intended to be ‘general purpose’ requires a large dataset of video sequences that are representative of the whole spectrum of available video content and all types of distortions. We report our work on KoNViD-1k, a subjectively annotated VQA database consisting of 1,200 public-domain video sequences, fairly sampled from a large public video dataset, YFCC100m. We present the challenges and choices we have made in creating such a database aimed at ‘in the wild’ authentic distortions, depicting a wide variety of content.
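As a brief note on the MOS values mentioned in the abstract: a mean opinion score is the arithmetic mean of the individual subjective ratings collected for a video, typically reported together with a confidence interval. The following Python sketch illustrates that computation only; the rating values are hypothetical and the function names are not part of KoNViD-1k itself.

    import statistics
    import math

    def mean_opinion_score(ratings):
        """Arithmetic mean of individual opinion scores (e.g. on a 1-5 ACR scale)."""
        return statistics.mean(ratings)

    def mos_confidence_interval(ratings, z=1.96):
        """Approximate 95% confidence interval of the MOS, assuming normality."""
        mos = statistics.mean(ratings)
        half_width = z * statistics.stdev(ratings) / math.sqrt(len(ratings))
        return mos - half_width, mos + half_width

    # Hypothetical ratings for one video from several subjects
    ratings = [4, 5, 3, 4, 4, 5, 3, 4]
    print(mean_opinion_score(ratings))        # 4.0
    print(mos_confidence_interval(ratings))   # roughly (3.48, 4.52)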
Cite
ISO 690
HOSU, Vlad, Franz HAHN, Mohsen JENADELEH, Hanhe LIN, Hui MEN, Tamas SZIRANYI, Shujun LI, Dietmar SAUPE, 2017. The Konstanz natural video database (KoNViD-1k). International Conference on Quality of Multimedia Experience (QoMEX 2017). Erfurt, 31 May 2017 - 2 June 2017. In: 2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX). Piscataway, NJ: IEEE, 2017. ISBN 978-1-5386-4024-1. Available under: doi: 10.1109/QoMEX.2017.7965673
BibTeX
@inproceedings{Hosu2017Konst-39103,
  author    = {Hosu, Vlad and Hahn, Franz and Jenadeleh, Mohsen and Lin, Hanhe and Men, Hui and Sziranyi, Tamas and Li, Shujun and Saupe, Dietmar},
  title     = {The Konstanz natural video database (KoNViD-1k)},
  booktitle = {2017 Ninth International Conference on Quality of Multimedia Experience (QoMEX)},
  year      = {2017},
  publisher = {IEEE},
  address   = {Piscataway, NJ},
  isbn      = {978-1-5386-4024-1},
  doi       = {10.1109/QoMEX.2017.7965673},
  url       = {https://www.uni-konstanz.de/mmsp/pubsys/publishedFiles/HoHaJe17.pdf}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/39103">
    <dcterms:title>The Konstanz natural video database (KoNViD-1k)</dcterms:title>
    <dcterms:abstract xml:lang="eng">Subjective video quality assessment (VQA) strongly depends on semantics, context, and the types of visual distortions. Currently, all existing VQA databases include only a small number of video sequences with artificial distortions. The development and evaluation of objective quality assessment methods would benefit from having larger datasets of real-world video sequences with corresponding subjective mean opinion scores (MOS), in particular for deep learning purposes. In addition, the training and validation of any VQA method intended to be ‘general purpose’ requires a large dataset of video sequences that are representative of the whole spectrum of available video content and all types of distortions. We report our work on KoNViD-1k, a subjectively annotated VQA database consisting of 1,200 public-domain video sequences, fairly sampled from a large public video dataset, YFCC100m. We present the challenges and choices we have made in creating such a database aimed at ‘in the wild’ authentic distortions, depicting a wide variety of content.</dcterms:abstract>
    <dcterms:issued>2017</dcterms:issued>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2017-06-01T09:41:07Z</dcterms:available>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/39103/1/Hosu_2-e7mh9z8d8u09.pdf"/>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dc:rights>terms-of-use</dc:rights>
    <dc:language>eng</dc:language>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2017-06-01T09:41:07Z</dc:date>
    <dc:creator>Hosu, Vlad</dc:creator>
    <dc:creator>Hahn, Franz</dc:creator>
    <dc:creator>Jenadeleh, Mohsen</dc:creator>
    <dc:creator>Lin, Hanhe</dc:creator>
    <dc:creator>Men, Hui</dc:creator>
    <dc:creator>Sziranyi, Tamas</dc:creator>
    <dc:creator>Li, Shujun</dc:creator>
    <dc:creator>Saupe, Dietmar</dc:creator>
    <dc:contributor>Hosu, Vlad</dc:contributor>
    <dc:contributor>Hahn, Franz</dc:contributor>
    <dc:contributor>Jenadeleh, Mohsen</dc:contributor>
    <dc:contributor>Lin, Hanhe</dc:contributor>
    <dc:contributor>Men, Hui</dc:contributor>
    <dc:contributor>Sziranyi, Tamas</dc:contributor>
    <dc:contributor>Li, Shujun</dc:contributor>
    <dc:contributor>Saupe, Dietmar</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/39103"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/39103/1/Hosu_2-e7mh9z8d8u09.pdf"/>
  </rdf:Description>
</rdf:RDF>
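To work with the RDF record above programmatically, one option is the Python rdflib library; the sketch below is a minimal example under that assumption (the file name record.rdf is hypothetical; any RDF/XML parser would work equally well). It parses the description and prints the title, publication year, and creators.

    from rdflib import Graph, Namespace

    DC = Namespace("http://purl.org/dc/elements/1.1/")
    DCTERMS = Namespace("http://purl.org/dc/terms/")

    # Hypothetical file name: save the RDF/XML record shown above as record.rdf
    g = Graph()
    g.parse("record.rdf", format="xml")

    for title in g.objects(None, DCTERMS.title):
        print("Title:", title)
    for year in g.objects(None, DCTERMS.issued):
        print("Year:", year)
    for creator in sorted(g.objects(None, DC.creator)):
        print("Creator:", creator)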