Publication:

Blind Image and Video Quality Assessment


Files

Jenadeleh_2-g5si24h73rb40.pdf
Size: 11.35 MB, Downloads: 892

Date

2018

Editors

Contact

Journal ISSN

Electronic ISSN

ISBN

Bibliographic data

Publisher

Series

Edition

DOI (citable link)
ArXiv ID

International patent number

Link to license

Research funding information

Project

Quantitative Methods for Visual Computing
Open Access publication
Open Access Green
Core Facility of the University of Konstanz

Embargoed until

Title in another language

Publication type
Dissertation
Publication status
Published

Published in

Abstract

The popularity and affordability of handheld imaging devices, especially smartphones, along with the rapid growth of social media platforms such as Facebook, Flickr, and YouTube, have made videos and images a popular and integral part of everyday communication. With the development of image and video transmission systems and the advancement of consumer video technologies, it is becoming increasingly important to improve visual quality in order to meet the quality expectations of end users. This thesis focuses on designing algorithms that accurately predict the perceptual and technical quality of images and videos, as well as on constructing authentic video quality databases. Image quality assessment (IQA) can be classified by the amount of information available to the algorithm. This thesis focuses on blind or no-reference image quality assessment (BIQA), where only the input (possibly distorted) image is available to the algorithm. BIQA is further divided into two main groups based on the need for subjective mean opinion scores (MOS): opinion-aware methods, which require the corresponding MOS for each input image during their training phase, and fully blind methods, which have no access to any subjective scores. In Chapters 3 and 4, we propose two opinion-aware image quality methods. The first is based on Wakeby modeling of natural scene statistics (NSS), and the second incorporates aesthetics and content information alongside NSS features to predict human judgments of image quality more accurately. The development of modern imaging technology in smartphones allows a variety of image and video applications, such as iris recognition systems, to be integrated into mobile devices. Ensuring the quality of iris images acquired in visible light poses many challenges for iris recognition in uncontrolled environments.
In Chapter 5, we propose a real-time, general-purpose, and fully blind image quality metric for filtering out poor-quality iris images, improving the recognition performance of iris recognition systems and reducing the false rejection rate. Training machine learning methods for video quality assessment (VQA) requires a wide range of video sequences with diverse semantic contexts, visual appearances, and types and combinations of quality distortions. Existing VQA databases are mostly benchmarks meant for training restricted quality models; they contain few original content videos, artificially distorted without concern for the dataset's ecological validity. Chapter 6 discusses the results of our joint work within the Multimedia Signal Processing (MMSP) group at the University of Konstanz. We present the challenges and choices we faced in creating VQA databases with "in the wild" authentic distortions, depicting a wide variety of content. Owing to the large number of videos, we crowdsourced the subjective scores using the widely used Figure Eight platform.
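To illustrate the kind of NSS preprocessing that opinion-aware BIQA methods of this family build on, the sketch below computes mean-subtracted contrast-normalized (MSCN) coefficients and fits a parametric distribution to them. This is a minimal, hypothetical sketch, not the thesis's code: SciPy offers no Wakeby fitter, so a generalized Gaussian (`scipy.stats.gennorm`) stands in for the Wakeby model proposed in Chapter 3, and the function name `mscn_coefficients` is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import gennorm

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a grayscale image."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                  # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu
    sd = np.sqrt(np.maximum(var, 0.0))                  # local standard deviation
    return (image - mu) / (sd + c)                      # c avoids division by zero

rng = np.random.default_rng(0)
img = rng.random((64, 64))                              # stand-in for a real image
coeffs = mscn_coefficients(img)

# Fit a parametric model to the empirical MSCN distribution; the fitted
# shape parameters then serve as NSS features for a quality regressor
# (the thesis uses a Wakeby distribution in place of gennorm here).
beta, loc, scale = gennorm.fit(coeffs.ravel())
```

For natural images, the MSCN histogram is approximately Gaussian, and distortions change its shape in characteristic ways, which is what makes the fitted parameters useful quality features.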

Abstract in another language

Subject area (DDC)
004 Computer Science

Keywords

Perceptual image quality, image aesthetics, video quality database, authentic distortions

Conference

Review

Research project

Organisational units

Journal issue

Related datasets in KOPS

Cite

ISO 690
JENADELEH, Mohsen, 2018. Blind Image and Video Quality Assessment [Dissertation]. Konstanz: University of Konstanz
BibTex
@phdthesis{Jenadeleh2018Blind-44102,
  year={2018},
  title={Blind Image and Video Quality Assessment},
  author={Jenadeleh, Mohsen},
  address={Konstanz},
  school={Universität Konstanz}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/44102">
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/44102/11/Jenadeleh_2-g5si24h73rb40.pdf"/>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by-sa/4.0/"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/44102/11/Jenadeleh_2-g5si24h73rb40.pdf"/>
    <dcterms:title>Blind Image and Video Quality Assessment</dcterms:title>
    <dcterms:abstract xml:lang="eng">The popularity and affordability of handheld imaging devices, especially smartphones, along with the rapid growth of social media platforms such as Facebook, Flickr, and YouTube, have made videos and images a popular and integral part of everyday communication. With the development of image and video transmission systems and the advancement of consumer video technologies, it is becoming increasingly important to improve visual quality in order to meet the quality expectations of end users. This thesis focuses on designing algorithms that accurately predict the perceptual and technical quality of images and videos, as well as on constructing authentic video quality databases. Image quality assessment (IQA) can be classified by the amount of information available to the algorithm. This thesis focuses on blind or no-reference image quality assessment (BIQA), where only the input (possibly distorted) image is available to the algorithm. BIQA is further divided into two main groups based on the need for subjective mean opinion scores (MOS): opinion-aware methods, which require the corresponding MOS for each input image during their training phase, and fully blind methods, which have no access to any subjective scores. In Chapters 3 and 4, we propose two opinion-aware image quality methods. The first is based on Wakeby modeling of natural scene statistics (NSS), and the second incorporates aesthetics and content information alongside NSS features to predict human judgments of image quality more accurately. The development of modern imaging technology in smartphones allows a variety of image and video applications, such as iris recognition systems, to be integrated into mobile devices. Ensuring the quality of iris images acquired in visible light poses many challenges for iris recognition in uncontrolled environments.
In Chapter 5, we propose a real-time, general-purpose, and fully blind image quality metric for filtering out poor-quality iris images, improving the recognition performance of iris recognition systems and reducing the false rejection rate. Training machine learning methods for video quality assessment (VQA) requires a wide range of video sequences with diverse semantic contexts, visual appearances, and types and combinations of quality distortions. Existing VQA databases are mostly benchmarks meant for training restricted quality models; they contain few original content videos, artificially distorted without concern for the dataset's ecological validity. Chapter 6 discusses the results of our joint work within the Multimedia Signal Processing (MMSP) group at the University of Konstanz. We present the challenges and choices we faced in creating VQA databases with "in the wild" authentic distortions, depicting a wide variety of content. Owing to the large number of videos, we crowdsourced the subjective scores using the widely used Figure Eight platform.</dcterms:abstract>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/44102"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2018-12-03T11:47:27Z</dc:date>
    <dcterms:issued>2018</dcterms:issued>
    <dc:contributor>Jenadeleh, Mohsen</dc:contributor>
    <dc:creator>Jenadeleh, Mohsen</dc:creator>
    <dc:language>eng</dc:language>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2018-12-03T11:47:27Z</dcterms:available>
    <dc:rights>Attribution-ShareAlike 4.0 International</dc:rights>
  </rdf:Description>
</rdf:RDF>

Internal note


Contact
URL of the original publication

URL check date

Date of the doctoral examination

October 24, 2018
Thesis note
Konstanz, Univ., Diss., 2018
Type of funding

Comment on the publication

Alliance license
Corresponding authors at the University of Konstanz
International co-authors
University bibliography
Peer-reviewed