Blind Image and Video Quality Assessment

Cite

Files for this resource

Checksum: MD5:1374572c6cea2b7b061e005ac91aa029

JENADELEH, Mohsen, 2018. Blind Image and Video Quality Assessment [Dissertation]. Konstanz: University of Konstanz

@phdthesis{Jenadeleh2018Blind-44102,
  title={Blind Image and Video Quality Assessment},
  year={2018},
  author={Jenadeleh, Mohsen},
  address={Konstanz},
  school={Universität Konstanz}
}

<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:bibo="http://purl.org/ontology/bibo/" xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:void="http://rdfs.org/ns/void#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > <rdf:Description rdf:about="https://kops.uni-konstanz.de/rdf/resource/123456789/44102"> <dcterms:title>Blind Image and Video Quality Assessment</dcterms:title> <dc:creator>Jenadeleh, Mohsen</dc:creator> <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2018-12-03T11:47:27Z</dc:date> <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2018-12-03T11:47:27Z</dcterms:available> <dcterms:abstract xml:lang="eng">The popularity and affordability of handheld imaging devices, especially smartphones, along with the rapid development of social media such as Facebook, Flickr, and YouTube, have made videos and images a popular and integral part of everyday communication. With the development of image and video transmission systems and the advancement of consumer video technologies, it is becoming increasingly important to improve visual quality in order to meet the quality expectations of end users. This thesis focuses on designing algorithms that accurately predict the perceptual and technical quality of images or videos, as well as on constructing authentic video quality databases. Image quality assessment (IQA) can be classified based on the amount of information available to the algorithm. This thesis focuses on blind or no-reference image quality assessment (BIQA), where only the input (possibly distorted) image is available to the algorithm. 
BIQA is further classified into two main groups based on the need for subjective mean opinion scores (MOS): opinion-aware methods, which require the corresponding MOS for each input image during their training phase, and fully blind IQA methods, which have no access to any subjective scores. In Chapters 3 and 4, we propose two opinion-aware image quality methods. The first is based on Wakeby modeling of natural scene statistics (NSS); the second incorporates aesthetics and content information as well as NSS features in order to predict human judgments of image quality more accurately. The development of modern imaging technology in smartphones allows a variety of image and video applications, such as iris recognition systems, to be integrated into mobile devices. Ensuring the quality of iris images acquired in visible light poses many challenges to iris recognition in uncontrolled environments. In Chapter 5, we propose a real-time, general-purpose, and fully blind image quality metric for filtering out poor-quality iris images, improving the recognition performance of iris recognition systems and reducing the false rejection rate. Training machine learning methods for video quality assessment (VQA) requires a wide range of video sequences with diverse semantic contexts, visual appearance, and types and combinations of quality distortions. Existing VQA databases are mostly benchmarks meant for training restricted quality models; they contain few original videos, artificially distorted without concern for the dataset's ecological validity. Chapter 6 discusses the results of our joint work with the Multimedia Signal Processing (MMSP) group at the University of Konstanz. We present the challenges and choices we have made in creating VQA databases with authentic "in the wild" distortions, depicting a wide variety of content. 
Due to the large number of videos, we crowdsourced the subjective scores using the widely used Figure Eight platform.</dcterms:abstract> <dc:rights>Attribution-ShareAlike 4.0 International</dc:rights> <dc:language>eng</dc:language> <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/44102"/> <dc:contributor>Jenadeleh, Mohsen</dc:contributor> <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/rdf/resource/123456789/36"/> <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/rdf/resource/123456789/36"/> <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/44102/11/Jenadeleh_2-g5si24h73rb40.pdf"/> <foaf:homepage rdf:resource="http://localhost:8080/jspui"/> <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/> <dcterms:issued>2018</dcterms:issued> <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by-sa/4.0/"/> <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/44102/11/Jenadeleh_2-g5si24h73rb40.pdf"/> </rdf:Description> </rdf:RDF>
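The abstract mentions NSS-based blind IQA, in which the statistics of a (possibly distorted) image are compared against the regularities of natural scenes; the thesis models these statistics with the Wakeby distribution. As an illustrative sketch only (not the thesis's method, and SciPy offers no Wakeby fit), the snippet below computes mean-subtracted contrast-normalized (MSCN) coefficients, the standard NSS preprocessing step whose empirical distribution such models are fitted to:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7 / 6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    Each pixel is locally normalized by a Gaussian-weighted mean and
    standard deviation; for natural images the resulting coefficients
    follow a characteristic, roughly symmetric distribution that
    distortions measurably perturb.
    """
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                    # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu # local variance
    sd = np.sqrt(np.maximum(var, 0.0))                    # local std. dev.
    return (image - mu) / (sd + c)                        # c avoids div-by-zero
```

A blind IQA method would then fit a parametric distribution (e.g. Wakeby in the thesis's Chapter 3) to these coefficients and feed the fitted parameters to a regressor trained on MOS values.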

File downloads since 03.12.2018

Jenadeleh_2-g5si24h73rb40.pdf 115

This document appears in:

Unless otherwise indicated, this work is licensed under Attribution-ShareAlike 4.0 International.
