Saupe, Dietmar

Publication Search Results

Now showing 1 - 10 of 15

JPEG AIC-3 Dataset: Towards Defining the High Quality to Nearly Visually Lossless Quality Range

2023-06-20, Testolina, Michela, Hosu, Vlad, Jenadeleh, Mohsen, Lazzarotto, Davi, Saupe, Dietmar, Ebrahimi, Touradj

Visual data play a crucial role in modern society, and the rate at which images and videos are acquired, stored, and exchanged every day is rapidly increasing. Image compression is the key technology that enables storing and sharing visual content in an efficient and cost-effective manner by removing redundant and irrelevant information. On the other hand, image compression often introduces undesirable artifacts that reduce the perceived quality of the media. Subjective image quality assessment experiments allow for the collection of information on the visual quality of the media as perceived by human observers, thereby quantifying the impact of such distortions. Nevertheless, the most commonly used subjective image quality assessment methodologies were designed to evaluate compressed images with visible distortions, and are therefore neither accurate nor reliable when evaluating images of higher visual quality. In this paper, we present a dataset of compressed images with quality levels that range from high to nearly visually lossless, with associated quality scores in JND units. The images were subjectively evaluated by expert human observers, and the results were used to define the range from high to nearly visually lossless quality. The dataset is made publicly available to researchers, providing a valuable resource for the development of novel subjective quality assessment methodologies or compression methods that are more effective in this quality range.


Foveated Video Coding for Real-Time Streaming Applications

2020, Wiedemann, Oliver, Hosu, Vlad, Lin, Hanhe, Saupe, Dietmar

Video streaming under real-time constraints is an increasingly widespread application. Many recent video encoders are unsuitable for this scenario due to theoretical limitations or runtime requirements. In this paper, we present a framework for the perceptual evaluation of foveated video coding schemes. Foveation describes the process of adapting a visual stimulus according to the acuity of the human eye. In contrast to traditional region-of-interest coding, where certain areas are statically encoded at a higher quality, we utilize feedback from an eye-tracker to spatially steer the bit allocation scheme in real time. We evaluate the performance of an H.264-based foveated coding scheme in a lab environment by comparing the bitrates at the point of just noticeable distortion (JND). Furthermore, we identify perceptually optimal codec parameterizations. In our trials, we achieve average bitrate savings of 63.24% at the JND in comparison to the unfoveated baseline.
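
To make the gaze-driven bit allocation concrete, here is a minimal sketch, not the authors' encoder integration: an eccentricity-based quality map in which macroblocks near the gaze point keep full quality while the quantization offset grows with distance. The Gaussian acuity falloff, block size, and parameter values are illustrative assumptions.

```python
import numpy as np

def foveation_qp_offsets(width, height, gaze_x, gaze_y,
                         block=16, max_offset=10, sigma_px=200.0):
    """Illustrative per-macroblock QP offset map for foveated encoding.

    Blocks near the gaze point get offset 0 (full quality); offsets grow
    toward max_offset with distance, following a Gaussian acuity falloff.
    The falloff model and all parameters are assumptions, not the paper's.
    """
    bw, bh = width // block, height // block
    ys, xs = np.mgrid[0:bh, 0:bw]
    # Distance of each block center from the gaze point, in pixels.
    cx = xs * block + block / 2.0
    cy = ys * block + block / 2.0
    dist = np.hypot(cx - gaze_x, cy - gaze_y)
    acuity = np.exp(-0.5 * (dist / sigma_px) ** 2)  # 1 at gaze, -> 0 in periphery
    return np.round((1.0 - acuity) * max_offset).astype(int)

# Example: 1080p frame, gaze at the center.
offsets = foveation_qp_offsets(1920, 1080, 960, 540)
print(offsets.shape)                  # (67, 120), offsets in [0, 10]
```

In a real-time loop, the eye-tracker would update (gaze_x, gaze_y) every frame and the offset map would be fed to the encoder's per-macroblock quantization interface.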


Effective Aesthetics Prediction With Multi-Level Spatially Pooled Features

2019-06, Hosu, Vlad, Goldlücke, Bastian, Saupe, Dietmar

We propose an effective deep learning approach to aesthetics quality assessment that relies on a new type of pre-trained features, and apply it to the AVA dataset, currently the largest aesthetics database. While previous approaches miss some of the information in the original images by taking small crops, down-scaling, or warping the originals during training, we propose the first method that efficiently supports full-resolution images as input and can be trained on variable input sizes. This allows us to significantly improve upon the state of the art, increasing the Spearman rank-order correlation coefficient (SRCC) with ground-truth mean opinion scores (MOS) from the best previously reported value of 0.612 to 0.756. To achieve this performance, we extract multi-level spatially pooled (MLSP) features from all convolutional blocks of a pre-trained InceptionResNet-v2 network and train a custom shallow Convolutional Neural Network (CNN) architecture on these new features.
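
A minimal sketch of the MLSP feature-extraction idea using Keras' pre-trained InceptionResNetV2: globally pool intermediate activations and concatenate them into one fixed-length vector per image, regardless of input resolution. The particular set of tapped layer names is an assumption here; the paper pools the outputs of all convolutional blocks.

```python
import numpy as np
from tensorflow.keras import Model
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.applications.inception_resnet_v2 import preprocess_input
from tensorflow.keras.layers import GlobalAveragePooling2D, Concatenate

# Fully convolutional base, so arbitrary input resolutions are accepted.
base = InceptionResNetV2(weights='imagenet', include_top=False,
                         input_shape=(None, None, 3))

# Tap a few intermediate blocks plus the final conv output. The layer
# selection is illustrative; the paper uses all convolutional blocks.
tap_names = ['mixed_5b', 'mixed_6a', 'mixed_7a', 'conv_7b']
pooled = [GlobalAveragePooling2D()(base.get_layer(n).output) for n in tap_names]
mlsp = Model(inputs=base.input, outputs=Concatenate()(pooled))

img = np.random.rand(1, 512, 384, 3).astype('float32')  # any resolution
features = mlsp.predict(preprocess_input(img * 255.0))
print(features.shape)  # one fixed-length feature vector per image
```

A shallow regression head trained on such vectors is what the paper evaluates against ground-truth MOS via rank correlation.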


Expertise screening in crowdsourcing image quality

2018, Hosu, Vlad, Lin, Hanhe, Saupe, Dietmar

We propose a screening approach to find reliable and effectively expert crowd workers in image quality assessment (IQA). Our method measures the users' ability to identify image degradations by using test questions, together with several relaxed reliability checks. We conduct multiple experiments, obtaining reproducible results with high agreement between the expertise-screened crowd and the freelance experts (0.95 Spearman rank-order correlation, SROCC), with one restriction on the image type. Our contributions include a reliability screening method for uninformative users, a new type of test question that relies on our proposed database of pristine and artificially distorted images, a group agreement extrapolation method, and an analysis of the crowdsourcing experiments.
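
For reference, the agreement figure above is a Spearman rank-order correlation between the two groups' scores, which can be computed directly with scipy; the numbers below are toy values, not the study's data.

```python
from scipy.stats import spearmanr

# Toy mean opinion scores for the same images from two rater groups
# (illustrative values only).
crowd_mos  = [4.1, 2.3, 3.8, 1.9, 4.6, 3.2]
expert_mos = [4.3, 2.1, 3.5, 2.2, 4.8, 3.0]

srocc, pvalue = spearmanr(crowd_mos, expert_mos)
print(f"SROCC = {srocc:.3f} (p = {pvalue:.4f})")
```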


KonIQ++: Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects

2021, Su, Shaolin, Hosu, Vlad, Lin, Hanhe, Zhang, Yanning, Saupe, Dietmar

Although image quality assessment (IQA) in the wild has been researched in computer vision, it is still challenging to precisely estimate perceptual image quality in the presence of real-world complex and composite distortions. In order to improve machine learning solutions for IQA, we consider side information denoting the presence of distortions in addition to the basic quality ratings in IQA datasets. Specifically, we extend one of the largest in-the-wild IQA databases, KonIQ-10k, to KonIQ++ by collecting distortion annotations for each image, aiming to improve quality prediction together with distortion identification. We further explore the interactions between image quality and distortion by proposing a novel IQA model that jointly predicts image quality and distortion by recurrently refining task-specific features in a multi-stage fusion framework. Our dataset KonIQ++, along with the model, boosts IQA performance and generalization ability, demonstrating its potential for solving the challenging task of authentic IQA. The proposed model can also accurately predict distinct image defects, suggesting its applicability to image processing tasks such as image colorization and deblurring.
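
A minimal sketch of the joint-prediction idea: shared features feed both a quality-regression head and a multi-label distortion head trained with a weighted joint loss. This is not the paper's multi-stage fusion architecture; the feature dimension, layer widths, defect count, and loss weights are assumptions.

```python
from tensorflow.keras import Model, layers

NUM_DISTORTIONS = 5  # assumed number of annotated defect types

# Precomputed backbone features stand in for the image encoder here.
features = layers.Input(shape=(2048,), name='backbone_features')
shared = layers.Dense(512, activation='relu')(features)

quality = layers.Dense(1, name='quality')(shared)            # MOS regression
defects = layers.Dense(NUM_DISTORTIONS, activation='sigmoid',
                       name='defects')(shared)               # multi-label defects

model = Model(features, [quality, defects])
model.compile(optimizer='adam',
              loss={'quality': 'mse', 'defects': 'binary_crossentropy'},
              loss_weights={'quality': 1.0, 'defects': 0.5})
model.summary()
```

The shared representation is what lets the distortion annotations act as side information for the quality prediction.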


Visual Quality Assessment for Interpolated Slow-Motion Videos Based on a Novel Database

2020, Men, Hui, Hosu, Vlad, Lin, Hanhe, Bruhn, Andres, Saupe, Dietmar

Professional video editing tools can generate slow-motion video by interpolating frames from video recorded at a standard frame rate. The perceptual quality of such interpolated slow-motion videos therefore depends strongly on the underlying interpolation techniques. We built a novel benchmark database specifically tailored for interpolated slow-motion videos (KoSMo-1k). It consists of 1,350 interpolated video sequences from 30 different content sources, along with subjective quality ratings derived from up to ten comparisons per video pair. Moreover, we evaluated the performance of twelve existing full-reference (FR) image/video quality assessment (I/VQA) methods on the benchmark. In this way, we are able to show that quality assessment methods specifically tailored to interpolated slow-motion videos are needed, since the evaluated methods, despite their good performance on real-time video databases, do not give satisfactory results for frame interpolation.


SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning

2019, Fan, Chunling, Lin, Hanhe, Hosu, Vlad, Zhang, Yun, Jiang, Qingshan, Hamzaoui, Raouf, Saupe, Dietmar

The Satisfied User Ratio (SUR) curve for a lossy image compression scheme, e.g., JPEG, characterizes the probability distribution of the Just Noticeable Difference (JND) level, the smallest distortion level that can be perceived by a subject. We propose the first deep learning approach to predict such SUR curves. Instead of directly regressing the SUR curve itself for a given reference image, our model is trained on pairs of images, original and compressed. Relying on a Siamese Convolutional Neural Network (CNN), feature pooling, a fully connected regression head, and transfer learning, we achieved good prediction performance. Experiments on the MCL-JCI dataset showed a mean Bhattacharyya distance between the predicted and the original JND distributions of only 0.072.
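
The reported error metric has a compact closed form. Here is a minimal numpy implementation of the Bhattacharyya distance between two discretized JND distributions; the histograms are toy values, not the experiment's data.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient, 1 = identical
    return -np.log(bc)

# Toy JND histograms over compression levels (illustrative only).
predicted = [0.05, 0.20, 0.40, 0.25, 0.10]
observed  = [0.08, 0.22, 0.35, 0.25, 0.10]
print(round(bhattacharyya_distance(predicted, observed), 4))
```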


From Technical to Aesthetics Quality Assessment and Beyond: Challenges and Potential

2020, Hosu, Vlad, Saupe, Dietmar, Goldlücke, Bastian, Lin, Weisi, Cheng, Wen-Huang, See, John, Wong, Lai-Kuan

Every day, more than 1.8 billion images are uploaded to Facebook, Instagram, Flickr, Snapchat, and WhatsApp [6]. The exponential growth of visual media has made quality assessment increasingly important for various applications, from image acquisition, synthesis, restoration, and enhancement to image search and retrieval, storage, and recognition. There have been two related but different classes of visual quality assessment techniques: image quality assessment (IQA) and image aesthetics assessment (IAA). As perceptual assessment tasks, subjective IQA and IAA share some common underlying factors that affect user judgments. Moreover, they are similar in methodology (especially NR-IQA in the wild and IAA). However, the emphasis of each is different: IQA focuses on low-level defects, e.g., processing artefacts, noise, and blur, while IAA puts more emphasis on abstract and higher-level concepts that capture the subjective aesthetic experience, e.g., established photographic rules encompassing lighting, composition, and colors, and personalized factors such as personality, cultural background, age, and emotion. IQA has been studied extensively over the last decades [3, 14, 22]. There are three main types of IQA methods: full-reference (FR), reduced-reference (RR), and no-reference (NR). Among these, NR-IQA is the most challenging, as it does not depend on reference images or impose strict assumptions on the distortion types and levels. NR-IQA techniques can be further divided into those that predict a global image score [1, 2, 10, 17, 26] and patch-based IQA [23, 25], to name a few of the more recent approaches.


ATQAM/MAST'20: Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends

2020, Guha, Tanaya, Hosu, Vlad, Saupe, Dietmar, Goldlücke, Bastian, Kumar, Naveen, Lin, Weisi, Martinez, Victor, Somandepalli, Krishna, Narayanan, Shrikanth, Cheng, Wen-Huang, McLaughlin, Kree

The Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends (ATQAM/MAST) aims to bring together researchers and professionals working in fields ranging from computer vision, multimedia computing, and multimodal signal processing to psychology and the social sciences. It is divided into two tracks: ATQAM and MAST. ATQAM track: Visual quality assessment techniques can be divided into image and video technical quality assessment (IQA and VQA, or broadly TQA) and aesthetics quality assessment (AQA). While TQA is a long-standing field with its roots in media compression, AQA is relatively young. Both have received increased attention with developments in deep learning. The topics have mostly been studied separately, even though they deal with similar aspects of the underlying subjective experience of media. The aim is to bring together individuals in the two fields of TQA and AQA to share ideas and discuss current trends, developments, issues, and future directions. MAST track: The term media content analytics has traditionally referred to applications involving the inference of higher-level semantics from multimedia content. However, multimedia is typically created for human consumption, and we believe it is necessary to adopt a human-centered approach to this analysis, which would enable a better understanding not only of how viewers engage with content but also of how they influence each other in the process.


Visual Quality Assessment for Motion Compensated Frame Interpolation

2019, Men, Hui, Lin, Hanhe, Hosu, Vlad, Maurer, Daniel, Bruhn, Andres, Saupe, Dietmar

Current benchmarks for optical flow algorithms evaluate the estimation quality by comparing the predicted flow field with the ground truth, and may additionally compare interpolated frames, based on these predictions, with the correct frames from the actual image sequences. For the latter comparisons, objective measures such as the mean square error are applied. However, for applications like image interpolation, the user's expected quality of experience cannot be fully deduced from such simple quality measures. Therefore, we conducted a subjective quality assessment study by crowdsourcing for the interpolated images provided in one of the optical flow benchmarks, the Middlebury benchmark. We used paired comparisons with forced choice and reconstructed absolute quality scale values according to Thurstone's model using the classical least squares method. The results give rise to a re-ranking of the 141 participating algorithms, mostly based on optical flow estimation, with respect to the visual quality of their interpolated frames. Our re-ranking result demonstrates the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks.
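
A compact sketch of the reconstruction step under Thurstone's Case V assumptions: pairwise win proportions are mapped to z-scores with the inverse normal CDF, and scale values are recovered by classical least squares with a sum-to-zero constraint. The toy count matrix and the clipping of extreme proportions are illustrative choices, not the study's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(wins):
    """Least-squares scale values from a pairwise-comparison count matrix.

    wins[i, j] = number of times stimulus i was preferred over j.
    Returns scores centered at zero (Thurstone Case V, unit sigma).
    """
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    pairs, z = [], []
    for i in range(n):
        for j in range(i + 1, n):
            total = wins[i, j] + wins[j, i]
            if total == 0:
                continue
            # Clip proportions away from 0/1 so the probit stays finite.
            p = np.clip(wins[i, j] / total, 0.01, 0.99)
            pairs.append((i, j))
            z.append(norm.ppf(p))
    # Each row encodes s_i - s_j = z_ij; last row enforces sum(s) = 0.
    A = np.zeros((len(z) + 1, n))
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = 1.0, -1.0
    A[-1, :] = 1.0
    b = np.append(z, 0.0)
    scores, *_ = np.linalg.lstsq(A, b, rcond=None)
    return scores

# Toy 3-stimulus example (counts are illustrative, not the study's data).
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]])
print(np.round(thurstone_case_v(wins), 3))
```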