JPEG AIC-3 Dataset : Towards Defining the High Quality to Nearly Visually Lossless Quality Range
2023-06-20, Testolina, Michela, Hosu, Vlad, Jenadeleh, Mohsen, Lazzarotto, Davi, Saupe, Dietmar, Ebrahimi, Touradj
Visual data play a crucial role in modern society, and the rate at which images and videos are acquired, stored, and exchanged every day is rapidly increasing. Image compression is the key technology that enables storing and sharing of visual content in an efficient and cost-effective manner, by removing redundant and irrelevant information. On the other hand, image compression often introduces undesirable artifacts that reduce the perceived quality of the media. Subjective image quality assessment experiments allow for the collection of information on the visual quality of the media as perceived by human observers, thereby quantifying the impact of such distortions. Nevertheless, the most commonly used subjective image quality assessment methodologies were designed to evaluate compressed images with visible distortions, and are therefore neither accurate nor reliable when evaluating images of higher visual quality. In this paper, we present a dataset of compressed images with quality levels that range from high to nearly visually lossless, with associated quality scores in JND units. The images were subjectively evaluated by expert human observers, and the results were used to define the range from high to nearly visually lossless quality. The dataset is made publicly available to researchers, providing a valuable resource for the development of novel subjective quality assessment methodologies or compression methods that are more effective in this quality range.
KonIQ++ : Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects
2021, Su, Shaolin, Hosu, Vlad, Lin, Hanhe, Zhang, Yanning, Saupe, Dietmar
Although image quality assessment (IQA) in-the-wild has been researched in computer vision, it is still challenging to precisely estimate perceptual image quality in the presence of real-world complex and composite distortions. In order to improve machine learning solutions for IQA, we consider side information denoting the presence of distortions in addition to the basic quality ratings in IQA datasets. Specifically, we extend one of the largest in-the-wild IQA databases, KonIQ-10k, to KonIQ++, by collecting distortion annotations for each image, aiming to improve quality prediction together with distortion identification. We further explore the interactions between image quality and distortion by proposing a novel IQA model, which jointly predicts image quality and distortion by recurrently refining task-specific features in a multi-stage fusion framework. Our dataset KonIQ++, along with the model, boosts IQA performance and generalization ability, demonstrating its potential for solving the challenging authentic IQA task. The proposed model can also accurately predict distinct image defects, suggesting its application in image processing tasks such as image colorization and deblurring.
Foveated Video Coding for Real-Time Streaming Applications
2020, Wiedemann, Oliver, Hosu, Vlad, Lin, Hanhe, Saupe, Dietmar
Video streaming under real-time constraints is an increasingly widespread application. Many recent video encoders are unsuitable for this scenario due to theoretical limitations or runtime requirements. In this paper, we present a framework for the perceptual evaluation of foveated video coding schemes. Foveation describes the process of adapting a visual stimulus according to the acuity of the human eye. In contrast to traditional region-of-interest coding, where certain areas are statically encoded at a higher quality, we utilize feedback from an eye-tracker to spatially steer the bit allocation scheme in real-time. We evaluate the performance of an H.264-based foveated coding scheme in a lab environment by comparing the bitrates at the point of just noticeable distortion (JND). Furthermore, we identify perceptually optimal codec parameterizations. In our trials, we achieve average bitrate savings of 63.24% at the JND in comparison to the unfoveated baseline.
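The gaze-steered bit allocation described above can be illustrated with a small sketch: given the current gaze position from the eye-tracker, peripheral macroblocks get a coarser quantizer than foveal ones. The linear falloff, the foveal radius, and the parameter names are illustrative assumptions, not the paper's actual codec configuration.

```python
import math

def qp_offset_map(width_mb, height_mb, gaze_mb, base_qp=24,
                  max_offset=12, fovea_radius_mb=4.0):
    """Compute a per-macroblock QP map: blocks near the gaze point
    keep the base QP, peripheral blocks get coarser quantization."""
    gx, gy = gaze_mb
    qmap = []
    for y in range(height_mb):
        row = []
        for x in range(width_mb):
            dist = math.hypot(x - gx, y - gy)
            # Linear falloff outside the foveal radius, clamped at max_offset.
            extra = min(max_offset, max(0.0, dist - fovea_radius_mb))
            row.append(base_qp + round(extra))
        qmap.append(row)
    return qmap

# 1280x720 video = 80x45 macroblocks; gaze near the frame center.
qmap = qp_offset_map(width_mb=80, height_mb=45, gaze_mb=(40, 22))
```

In a real-time pipeline such a map would be recomputed per frame from fresh eye-tracker samples and passed to the encoder's rate-control interface.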
ATQAM/MAST'20 : Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends
2020, Guha, Tanaya, Hosu, Vlad, Saupe, Dietmar, Goldlücke, Bastian, Kumar, Naveen, Lin, Weisi, Martinez, Victor, Somandepalli, Krishna, Narayanan, Shrikanth, Cheng, Wen-Huang, McLaughlin, Kree
The Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends (ATQAM/MAST) aims to bring together researchers and professionals working in fields ranging from computer vision, multimedia computing, and multimodal signal processing to psychology and social sciences. It is divided into two tracks: ATQAM and MAST. ATQAM track: Visual quality assessment techniques can be divided into image and video technical quality assessment (IQA and VQA, or broadly TQA) and aesthetics quality assessment (AQA). While TQA is a long-standing field, having its roots in media compression, AQA is relatively young. Both have received increased attention with developments in deep learning. The topics have mostly been studied separately, even though they deal with similar aspects of the underlying subjective experience of media. The aim is to bring together individuals in the two fields of TQA and AQA for the sharing of ideas and discussions on current trends, developments, issues, and future directions. MAST track: The research area of media content analytics has traditionally referred to applications involving inference of higher-level semantics from multimedia content. However, multimedia is typically created for human consumption, and we believe it is necessary to adopt a human-centered approach to this analysis, which would not only enable a better understanding of how viewers engage with content but also how they impact each other in the process.
Relaxed forced choice improves performance of visual quality assessment methods
2023-06, Jenadeleh, Mohsen, Zagermann, Johannes, Reiterer, Harald, Reips, Ulf-Dietrich, Hamzaoui, Raouf, Saupe, Dietmar
In image quality assessment, a collective visual quality score for an image or video is obtained from the individual ratings of many subjects. One commonly used format for these experiments is the two-alternative forced choice method. Two stimuli with the same content but differing visual quality are presented sequentially or side-by-side. Subjects are asked to select the one of better quality, and when uncertain, they are required to guess. The relaxed alternative forced choice format aims to reduce the cognitive load and the noise in the responses due to guessing by providing a third response option, namely, "not sure". This work presents a large and comprehensive crowdsourcing experiment to compare these two response formats: the one with the "not sure" option and the one without it. To provide unambiguous ground truth for quality evaluation, subjects were shown pairs of images with differing numbers of dots and asked each time to choose the one with more dots. Our crowdsourcing study involved 254 participants and was conducted using a within-subject design. Each participant was asked to respond to 40 pair comparisons with and without the "not sure" response option and completed a questionnaire to evaluate their cognitive load for each testing condition. The experimental results show that the inclusion of the "not sure" response option in the forced choice method reduced mental load and led to models with better data fit and correspondence to ground truth. We also tested for the equivalence of the models and found that they were different. The dataset is available at http://database.mmsp-kn.de/cogvqa-database.html.
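Before a model can be fit to relaxed forced-choice data, the three response types must be aggregated per stimulus pair. A minimal sketch, assuming the common half-vote convention for "not sure" answers (an illustrative choice, not necessarily the model used in the paper):

```python
def tally_relaxed_choices(responses):
    """Aggregate relaxed forced-choice responses for one stimulus pair.

    responses: list of 'A', 'B', or 'unsure'. Under the half-vote
    convention, each 'unsure' answer counts as half a preference
    for each side, preserving the total number of trials.
    """
    wins_a = sum(1.0 for r in responses if r == 'A')
    wins_b = sum(1.0 for r in responses if r == 'B')
    unsure = sum(1.0 for r in responses if r == 'unsure')
    return wins_a + unsure / 2.0, wins_b + unsure / 2.0

# Five subjects judged one pair: two 'unsure' answers get split evenly.
a, b = tally_relaxed_choices(['A', 'A', 'unsure', 'B', 'unsure'])
```

The resulting fractional win counts can then feed a Thurstonian or Bradley-Terry scale reconstruction just like ordinary 2AFC counts.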
Subjective Assessment of Global Picture-Wise Just Noticeable Difference
2020-07, Lin, Hanhe, Jenadeleh, Mohsen, Chen, Guangan, Reips, Ulf-Dietrich, Hamzaoui, Raouf, Saupe, Dietmar
The picture-wise just noticeable difference (PJND) for a given image and a compression scheme is a statistical quantity giving the smallest distortion that a subject can perceive when the image is compressed with the compression scheme. The PJND is determined with subjective assessment tests for a sample of subjects. We introduce and apply two methods of adjustment where the subject interactively selects the distortion level at the PJND using either a slider or keystrokes. We compare the results and times required to those of the adaptive binary-search approach, in which image pairs with distortions that bracket the PJND are displayed and the difference in distortion levels is reduced until the PJND is identified. For the three methods, two images are compared using the flicker test, in which the displayed images alternate at a frequency of 8 Hz. Unlike previous work, our goal is a global one, determining the PJND not only for the original pristine image but also for a sequence of compressed versions. Results for the MCL-JCI dataset show that the PJND measurements based on adjustment are comparable with those of the traditional approach using binary search, yet significantly faster. Moreover, we conducted a crowdsourcing study with side-by-side comparisons and forced choice, which suggests that the flicker test is more sensitive than a side-by-side comparison.
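The adaptive binary-search procedure can be sketched in a few lines: the interval of candidate distortion levels bracketing the PJND is repeatedly halved until one level remains. The `can_see_difference` callback stands in for a flicker-test trial and is a hypothetical stand-in here; the sketch assumes visibility is monotone in the distortion level (once a subject sees the difference, all higher levels are also visible).

```python
def pjnd_binary_search(can_see_difference, n_levels):
    """Binary search for the smallest distortion level a subject can
    perceive, out of levels 1..n_levels (assumed monotone visibility)."""
    lo, hi = 1, n_levels
    while lo < hi:
        mid = (lo + hi) // 2
        if can_see_difference(mid):
            hi = mid          # difference visible: PJND is at mid or below
        else:
            lo = mid + 1      # invisible: PJND must be above mid
    return lo

# Simulated observer who first notices distortion at level 37 of 100.
level = pjnd_binary_search(lambda q: q >= 37, 100)
```

With 100 distortion levels this needs at most seven flicker-test trials per subject, which is the efficiency the adjustment methods in the paper are compared against.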
Visual Quality Assessment for Interpolated Slow-motion Videos based on a Novel Database
2020, Men, Hui, Hosu, Vlad, Lin, Hanhe, Bruhn, Andres, Saupe, Dietmar
Professional video editing tools can generate slow-motion video by interpolating frames from video recorded at a standard frame rate. The perceptual quality of such interpolated slow-motion videos therefore strongly depends on the underlying interpolation techniques. We built a novel benchmark database that is specifically tailored for interpolated slow-motion videos (KoSMo-1k). It consists of 1,350 interpolated video sequences, from 30 different content sources, along with subjective quality ratings obtained from up to ten comparisons per video pair. Moreover, we evaluated the performance of twelve existing full-reference (FR) image/video quality assessment (I/VQA) methods on the benchmark. In this way, we are able to show that specifically tailored quality assessment methods for interpolated slow-motion videos are needed, since the evaluated methods - despite their good performance on real-time video databases - do not give satisfying results when it comes to frame interpolation.
Crowdsourced Quality Assessment of Enhanced Underwater Images : a Pilot Study
2022, Lin, Hanhe, Men, Hui, Yan, Yijun, Ren, Jinchang, Saupe, Dietmar
Underwater image enhancement (UIE) is essential for a high-quality underwater optical imaging system. While a number of UIE algorithms have been proposed in recent years, there has been little study of image quality assessment (IQA) for enhanced underwater images. In this paper, we conduct the first crowdsourced subjective IQA study on enhanced underwater images. We chose ten state-of-the-art UIE algorithms and applied them to yield enhanced images from an underwater image benchmark. Their latent quality scales were reconstructed from pairwise comparisons. We demonstrate that the existing IQA metrics are not suitable for assessing the perceived quality of enhanced underwater images. In addition, the overall performance of the ten UIE algorithms on the benchmark is ranked using the newly proposed simulated pairwise comparison of the methods.
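Reconstructing latent quality scales from pairwise comparisons is typically done with a Thurstonian or Bradley-Terry model. A minimal sketch of the standard Bradley-Terry MM update, shown here as a generic illustration rather than the paper's exact reconstruction procedure:

```python
import math

def bradley_terry(wins, n_iter=200):
    """Reconstruct latent quality scores from a pairwise win-count
    matrix wins[i][j] = number of times item i was preferred over
    item j, using the standard MM update for the Bradley-Terry model."""
    n = len(wins)
    p = [1.0] * n  # worth parameters, initialized uniformly
    for _ in range(n_iter):
        new_p = []
        for i in range(n):
            num = sum(wins[i][j] for j in range(n) if j != i)
            den = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            new_p.append(num / den if den > 0 else p[i])
        s = sum(new_p)
        p = [v * n / s for v in new_p]  # normalize to fix the scale
    return [math.log(v) for v in p]    # log-worths: only differences matter

# Three methods; method 0 is preferred most often, method 2 least.
scores = bradley_terry([[0, 8, 9],
                        [2, 0, 7],
                        [1, 3, 0]])
```

The returned log-worths form an interval-like latent scale; the simulated pair comparison mentioned in the abstract can feed synthetic win counts through the same machinery.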
From Technical to Aesthetics Quality Assessment and Beyond : Challenges and Potential
2020, Hosu, Vlad, Saupe, Dietmar, Goldlücke, Bastian, Lin, Weisi, Cheng, Wen-Huang, See, John, Wong, Lai-Kuan
Every day 1.8+ billion images are being uploaded to Facebook, Instagram, Flickr, Snapchat, and WhatsApp. The exponential growth of visual media has made quality assessment increasingly important for various applications, from image acquisition, synthesis, restoration, and enhancement, to image search and retrieval, storage, and recognition. There have been two related but different classes of visual quality assessment techniques: image quality assessment (IQA) and image aesthetics assessment (IAA). As perceptual assessment tasks, subjective IQA and IAA share some common underlying factors that affect user judgments. Moreover, they are similar in methodology (especially NR-IQA in-the-wild and IAA). However, the emphasis for each is different: IQA focuses on low-level defects, e.g. processing artefacts, noise, and blur, while IAA puts more emphasis on abstract and higher-level concepts that capture the subjective aesthetics experience, e.g. established photographic rules encompassing lighting, composition, and colors, and personalized factors such as personality, cultural background, age, and emotion. IQA has been studied extensively over the last decades [3, 14, 22]. There are three main types of IQA methods: full-reference (FR), reduced-reference (RR), and no-reference (NR). Among these, NR-IQA is the most challenging, as it does not depend on reference images or impose strict assumptions on the distortion types and levels. NR-IQA techniques can be further divided into those that predict the global image score [1, 2, 10, 17, 26] and patch-based IQA [23, 25], to name a few of the more recent approaches.
Deep Learning VS. Traditional Algorithms for Saliency Prediction of Distorted Images
2020, Zhao, Xin, Lin, Hanhe, Guo, Pengfei, Saupe, Dietmar, Liu, Hantao
Saliency has been widely studied in relation to image quality assessment (IQA). The optimal use of saliency in IQA metrics, however, is nontrivial and largely depends on whether saliency can be accurately predicted for images containing various distortions. Although tremendous progress has been made in saliency modelling, very little is known about whether and to what extent state-of-the-art methods are beneficial for saliency prediction of distorted images. In this paper, we analyse the ability of deep learning versus traditional algorithms to predict saliency, based on an IQA-aware saliency benchmark, the SIQ288 database. Building on the observed variations in model performance, we make recommendations for model selection for IQA applications.