Saupe, Dietmar

Last name: Saupe
First name: Dietmar

Publication search results

Showing 1 - 10 of 48

Crowdsourced Estimation of Collective Just Noticeable Difference for Compressed Video with Flicker Test and QUEST+

2023-09-14, Jenadeleh, Mohsen, Hamzaoui, Raouf, Reips, Ulf-Dietrich, Saupe, Dietmar

The concept of video-wise just noticeable difference (JND) was recently proposed to determine the lowest bitrate at which a source video can be compressed without perceptible quality loss with a given probability. This bitrate is usually obtained from an estimate of the satisfied user ratio (SUR) at each bitrate or, equivalently, at each encoding quality parameter. The SUR is the probability that the distortion corresponding to this bitrate is not noticeable. Commonly, the SUR is computed experimentally by estimating the subjective JND threshold of each subject using binary search, fitting a distribution model to the collected data, and creating the complementary cumulative distribution function of the distribution. The subjective tests consist of paired comparisons between the source video and compressed versions. However, we show that this approach typically over- or underestimates the SUR. To address this shortcoming, we directly estimate the SUR function by considering the entire population as a collective observer. Our method randomly chooses the subject for each paired comparison and uses a state-of-the-art Bayesian adaptive psychometric method (QUEST+) to select the compressed video in the paired comparison. Our simulations show that this collective method yields more accurate SUR results with fewer comparisons. We also provide a subjective experiment to assess the JND and SUR for compressed video. In the paired comparisons, we apply a flicker test that compares a video that interleaves the source video and its compressed version with the source video. Analysis of the subjective data revealed that the flicker test provides on average higher sensitivity and precision in the assessment of the JND threshold than the usual test that compares compressed versions with the source video. Using crowdsourcing and the proposed approach, we build a JND dataset for 45 source video sequences that are encoded with both advanced video coding (AVC) and versatile video coding (VVC) at all available quantization parameters. Our dataset is available at http://database.mmsp-kn.de/flickervidset-database.html.
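
As a rough illustration of the conventional SUR estimate described in the abstract (not the paper's QUEST+ pipeline), the sketch below fits a normal distribution to hypothetical per-subject JND thresholds and takes its complementary CDF; all threshold values are made up.

```python
# Conventional SUR estimate: fit a distribution to per-subject JND thresholds
# and take the complementary CDF. Threshold values below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical per-subject JND thresholds, expressed as the encoding quality
# parameter (e.g. QP) at which distortion first becomes visible to that subject.
jnd_thresholds = np.array([30, 32, 33, 35, 35, 36, 38, 40, 41, 44])

# Fit a normal distribution to the thresholds (a common model choice).
mu, sigma = stats.norm.fit(jnd_thresholds)

# SUR(q) = P(JND threshold > q): the fraction of viewers for whom quality
# parameter q is still below their personal JND, i.e. not noticeable.
qp_grid = np.arange(25, 50)
sur = stats.norm.sf(qp_grid, loc=mu, scale=sigma)  # complementary CDF

for qp, s in zip(qp_grid, sur):
    print(f"QP {qp}: SUR = {s:.2f}")
```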


Effective Aesthetics Prediction With Multi-Level Spatially Pooled Features

2019-06, Hosu, Vlad, Goldlücke, Bastian, Saupe, Dietmar

We propose an effective deep learning approach to aesthetics quality assessment that relies on a new type of pre-trained features, and apply it to the AVA data set, currently the largest aesthetics database. While previous approaches miss some of the information in the original images, due to taking small crops, down-scaling or warping the originals during training, we propose the first method that efficiently supports full-resolution images as input and can be trained on variable input sizes. This allows us to significantly improve upon the state of the art, increasing the Spearman rank-order correlation coefficient (SRCC) of ground-truth mean opinion scores (MOS) from the previously best reported 0.612 to 0.756. To achieve this performance, we extract multi-level spatially pooled (MLSP) features from all convolutional blocks of a pre-trained InceptionResNet-v2 network, and train a custom shallow Convolutional Neural Network (CNN) architecture on these new features.
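
A minimal sketch of the multi-level spatially pooled (MLSP) feature idea follows: globally pool the activations of several convolutional stages of a pretrained backbone and concatenate them into one fixed-size vector. The paper uses InceptionResNet-v2; ResNet-50 serves here only as a readily available stand-in, so the feature dimensions differ from those in the paper.

```python
import torch
import torchvision.models as models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def mlsp_features(image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W), any resolution; returns one pooled feature vector."""
    x = backbone.conv1(image)
    x = backbone.bn1(x)
    x = backbone.relu(x)
    x = backbone.maxpool(x)
    pooled = []
    for stage in (backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4):
        x = stage(x)
        # Global average pooling removes the spatial dimensions, so inputs of
        # different sizes all map to vectors of the same length.
        pooled.append(x.mean(dim=(2, 3)))
    return torch.cat(pooled, dim=1)  # (1, 256 + 512 + 1024 + 2048) = (1, 3840)

with torch.no_grad():
    feats = mlsp_features(torch.rand(1, 3, 384, 512))
    print(feats.shape)  # torch.Size([1, 3840])
```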


Visual Feedback for Pacing Strategies in Road Cycling

2018, Artiga Gonzalez, Alexander, Wolf, Stefan, Bertschinger, Raphael, Saupe, Dietmar

The right choice of a pacing strategy for a time trial race is important and often difficult to establish. Methods are now available to generate pacing strategies that are optimal, though only in a mathematical sense, and until now they have been tested in practice only under laboratory conditions [1]. Pacing strategies are generally based on two mathematical models: (1) a model describing the relation between power output and speed [2], and (2) a model describing the fatigue of the rider as a function of power output [3]. The quality and validity of these pacing strategies rely on the accuracy of the predictions made by those models.
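
For illustration, one common form of model (1), the steady-state relation between power output and speed combining aerodynamic drag, rolling resistance, and gravity, is sketched below; the parameter values are typical textbook numbers, not those used in the paper.

```python
import math

def required_power(speed_mps: float, grade: float,
                   mass_kg: float = 80.0,      # rider + bike
                   cda_m2: float = 0.32,       # effective frontal area times drag coefficient
                   crr: float = 0.004,         # rolling resistance coefficient
                   rho: float = 1.2,           # air density, kg/m^3
                   g: float = 9.81) -> float:
    """Mechanical power (W) needed to hold a given speed on a given grade."""
    theta = math.atan(grade)
    f_aero = 0.5 * rho * cda_m2 * speed_mps ** 2
    f_roll = crr * mass_kg * g * math.cos(theta)
    f_grav = mass_kg * g * math.sin(theta)
    return (f_aero + f_roll + f_grav) * speed_mps

print(required_power(10.0, 0.0))   # about 223 W on the flat at 10 m/s
print(required_power(5.0, 0.08))   # about 352 W at 5 m/s up an 8 % grade
```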


Expertise screening in crowdsourcing image quality

2018, Hosu, Vlad, Lin, Hanhe, Saupe, Dietmar

We propose a screening approach to find reliable and effectively expert crowd workers in image quality assessment (IQA). Our method measures the users' ability to identify image degradations by using test questions, together with several relaxed reliability checks. We conduct multiple experiments, obtaining reproducible results with a high agreement of 0.95 Spearman rank-order correlation (SROCC) between the expertise-screened crowd and the freelance experts, with one restriction on the image type. Our contributions include a reliability screening method for uninformative users, a new type of test questions that rely on our proposed database of pristine and artificially distorted images, a group agreement extrapolation method, and an analysis of the crowdsourcing experiments.
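
A minimal sketch of the screening idea: keep only workers who answer enough test questions correctly, then measure the agreement between the screened crowd's mean opinion scores and expert scores with SROCC. The accuracy threshold and all scores below are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def screen_workers(test_correct: dict, n_questions: int, min_accuracy: float = 0.8):
    """test_correct maps worker id -> number of correctly answered test questions."""
    return {w for w, c in test_correct.items() if c / n_questions >= min_accuracy}

workers = {"w1": 9, "w2": 5, "w3": 8}
print(screen_workers(workers, n_questions=10))  # keeps w1 and w3

# Hypothetical per-image mean opinion scores from the screened crowd and experts.
crowd_mos  = np.array([3.1, 4.2, 2.0, 4.8, 3.6, 1.9])
expert_mos = np.array([3.0, 4.5, 2.2, 4.9, 3.4, 2.1])

rho, _ = spearmanr(crowd_mos, expert_mos)
print(f"SROCC between screened crowd and experts: {rho:.2f}")
```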


KonIQ++: Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects

2021, Su, Shaolin, Hosu, Vlad, Lin, Hanhe, Zhang, Yanning, Saupe, Dietmar

Although image quality assessment (IQA) in-the-wild has been researched in computer vision, it is still challenging to precisely estimate perceptual image quality in the presence of real-world complex and composite distortions. In order to improve machine learning solutions for IQA, we consider side information denoting the presence of distortions besides the basic quality ratings in IQA datasets. Specifically, we extend one of the largest in-the-wild IQA databases, KonIQ-10k, to KonIQ++, by collecting distortion annotations for each image, aiming to improve quality prediction together with distortion identification. We further explore the interactions between image quality and distortion by proposing a novel IQA model, which jointly predicts image quality and distortion by recurrently refining task-specific features in a multi-stage fusion framework. Our dataset KonIQ++, along with the model, boosts IQA performance and generalization ability, demonstrating its potential for solving the challenging authentic IQA task. The proposed model can also accurately predict distinct image defects, suggesting its application in image processing tasks such as image colorization and deblurring.
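
The sketch below illustrates joint quality and defect prediction with a simple multi-task head on shared image features; it is not the paper's multi-stage fusion architecture, and the feature dimension and number of defect classes are placeholders.

```python
import torch
import torch.nn as nn

class JointQualityDefectHead(nn.Module):
    def __init__(self, feat_dim: int = 2048, n_defects: int = 5):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU())
        self.quality = nn.Linear(512, 1)          # MOS regression
        self.defects = nn.Linear(512, n_defects)  # per-defect presence logits

    def forward(self, features: torch.Tensor):
        h = self.shared(features)
        return self.quality(h).squeeze(-1), self.defects(h)

head = JointQualityDefectHead()
feats = torch.randn(4, 2048)                      # backbone features for 4 images
mos_pred, defect_logits = head(feats)
# Joint training loss: quality regression plus multi-label defect classification.
loss = nn.functional.mse_loss(mos_pred, torch.rand(4)) \
     + nn.functional.binary_cross_entropy_with_logits(
           defect_logits, torch.randint(0, 2, (4, 5)).float())
print(loss.item())
```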


Visual Quality Assessment for Motion Compensated Frame Interpolation

2019, Men, Hui, Lin, Hanhe, Hosu, Vlad, Maurer, Daniel, Bruhn, Andres, Saupe, Dietmar

Current benchmarks for optical flow algorithms evaluate the estimation quality by comparing their predicted flow field with the ground truth, and additionally may compare interpolated frames, based on these predictions, with the correct frames from the actual image sequences. For the latter comparisons, objective measures such as mean square errors are applied. However, for applications like image interpolation, the expected user's quality of experience cannot be fully deduced from such simple quality measures. Therefore, we conducted a subjective quality assessment study by crowdsourcing for the interpolated images provided in one of the optical flow benchmarks, the Middlebury benchmark. We used paired comparisons with forced choice and reconstructed absolute quality scale values according to Thurstone's model using the classical least squares method. The results give rise to a re-ranking of 141 participating algorithms w.r.t. visual quality of interpolated frames mostly based on optical flow estimation. Our re-ranking result shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks.
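
A small sketch of Thurstone Case V scale reconstruction by least squares, the method named above, follows; the 3 x 3 win-count matrix is made up, with wins[i, j] counting how often item i was preferred over item j in a forced-choice comparison.

```python
import numpy as np
from scipy.stats import norm

wins = np.array([[0, 18, 25],
                 [12, 0, 20],
                 [5, 10, 0]], dtype=float)
n = wins.shape[0]
trials = wins + wins.T
p = np.clip(wins / np.where(trials > 0, trials, 1), 0.01, 0.99)  # avoid infinities
z = norm.ppf(p)  # probit-transformed preference probabilities

# Least squares for scale values s with s_i - s_j ~ z_ij, anchored at s_0 = 0.
rows, rhs = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            r = np.zeros(n)
            r[i], r[j] = 1.0, -1.0
            rows.append(r)
            rhs.append(z[i, j])
A = np.vstack(rows + [np.eye(n)[0]])   # extra row pins s_0 = 0
b = np.array(rhs + [0.0])
scale, *_ = np.linalg.lstsq(A, b, rcond=None)
print(scale)  # higher value = higher perceived quality
```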


Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment

2018, Men, Hui, Lin, Hanhe, Saupe, Dietmar

One of the main challenges in no-reference video quality assessment is temporal variation within a video. Existing methods were typically designed and tested on videos with artificial distortions, without considering spatial and temporal variations simultaneously. We propose a no-reference spatiotemporal feature combination model that extracts spatiotemporal information from a video, and test it on a database with authentic distortions. Compared with other methods, our model gives satisfactory performance for assessing the quality of natural videos.
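
As a toy illustration of combining spatial and temporal information (not the paper's actual features), the sketch below summarizes a crude per-frame sharpness measure and frame-difference statistics into one small feature vector.

```python
import numpy as np
from scipy.ndimage import laplace

def spatiotemporal_features(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) grayscale video; returns a small feature vector."""
    per_frame = np.array([np.mean(laplace(f.astype(float)) ** 2) for f in frames])
    frame_diff = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    return np.array([
        per_frame.mean(),  per_frame.std(),   # spatial content and its spread
        frame_diff.mean(), frame_diff.std(),  # amount and variability of motion
    ])

video = np.random.randint(0, 256, size=(30, 120, 160), dtype=np.uint8)
print(spatiotemporal_features(video))
```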


KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment

2020-01-24, Hosu, Vlad, Lin, Hanhe, Sziranyi, Tamas, Saupe, Dietmar

Deep learning methods for image quality assessment (IQA) are limited due to the small size of existing datasets. Extensive datasets require substantial resources both for generating publishable content and for annotating it accurately. We present a systematic and scalable approach to creating KonIQ-10k, the largest IQA dataset to date, consisting of 10,073 quality-scored images. It is the first in-the-wild database aiming for ecological validity with respect to the authenticity of distortions, the diversity of content, and quality-related indicators. Through the use of crowdsourcing, we obtained 1.2 million reliable quality ratings from 1,459 crowd workers, paving the way for more general IQA models. We propose a novel deep learning model (KonCept512) that shows excellent generalization beyond the test set (0.921 SROCC) to the current state-of-the-art database LIVE-in-the-Wild (0.825 SROCC). The model derives its core performance from the InceptionResNet architecture, being trained at a higher resolution than previous models (512 × 384). Correlation analysis shows that KonCept512 performs similarly to having 9 subjective scores for each test image.
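
The following sketch mimics the kind of correlation analysis mentioned in the last sentence: it checks how well the mean of k randomly chosen ratings per image agrees (in SROCC) with the full mean opinion score. The ratings are synthetic; the real analysis uses the KonIQ-10k data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_raters = 200, 30
true_quality = rng.uniform(1, 5, n_images)
ratings = true_quality[:, None] + rng.normal(0, 0.8, (n_images, n_raters))
full_mos = ratings.mean(axis=1)

for k in (1, 3, 9, 15):
    idx = rng.choice(n_raters, size=k, replace=False)
    sub_mos = ratings[:, idx].mean(axis=1)
    rho, _ = spearmanr(sub_mos, full_mos)
    print(f"{k:2d} ratings per image: SROCC vs full MOS = {rho:.3f}")
# A model whose test-set SROCC matches the k = 9 row performs like 9 subjective scores.
```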


SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning

2019, Fan, Chunling, Lin, Hanhe, Hosu, Vlad, Zhang, Yun, Jiang, Qingshan, Hamzaoui, Raouf, Saupe, Dietmar

The Satisfied User Ratio (SUR) curve for a lossy image compression scheme, e.g., JPEG, characterizes the probability distribution of the Just Noticeable Difference (JND) level, the smallest distortion level that can be perceived by a subject. We propose the first deep learning approach to predict such SUR curves. Instead of the direct approach of regressing the SUR curve itself for a given reference image, our model is trained on pairs of images, original and compressed. Relying on a Siamese Convolutional Neural Network (CNN), feature pooling, a fully connected regression head, and transfer learning, we achieve good prediction performance. Experiments on the MCL-JCI dataset showed a mean Bhattacharyya distance between the predicted and the original JND distributions of only 0.072.
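
For reference, the evaluation metric mentioned above can be computed as in the sketch below, which measures the Bhattacharyya distance between a predicted and a ground-truth JND distribution represented as discrete histograms; the histogram values are made up.

```python
import numpy as np

def bhattacharyya_distance(p: np.ndarray, q: np.ndarray) -> float:
    # Normalize both histograms, then D_B = -ln(sum_i sqrt(p_i * q_i)).
    p = p / p.sum()
    q = q / q.sum()
    return float(-np.log(np.sum(np.sqrt(p * q))))

ground_truth_jnd = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])
predicted_jnd    = np.array([0.08, 0.17, 0.27, 0.28, 0.14, 0.06])
print(bhattacharyya_distance(ground_truth_jnd, predicted_jnd))  # small = similar
```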


How to Accurately Determine the Position on a Known Course in Road Cycling

2018, Wolf, Stefan, Dobiasch, Martin, Artiga Gonzalez, Alexander, Saupe, Dietmar

With modern cycling computers it is possible to provide cyclists with complex feedback during rides. If the feedback is course-dependent, it is necessary to know the rider's current position on the course. Different approaches to estimate the position on the course from common GPS and speed sensors were compared: the direct distance measure derived from the number of rotations of the wheel, GPS coordinates projected onto the course trajectory, and a Kalman filter incorporating speed as well as GPS measurements. To quantify the accuracy of the different methods, an experiment was conducted on a race track where a fixed point on the course was tagged during the ride. The Kalman filter approach was able to overcome certain shortcomings of the other two approaches and achieved a mean error of −0.13 m and a root mean square error of 0.97 m.
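
A minimal one-dimensional Kalman filter in the spirit of the third approach is sketched below: the state is distance along the course and speed, predicted with a constant-velocity model and corrected by the GPS position projected onto the course and the wheel-speed measurement. All noise parameters are illustrative, not those from the paper.

```python
import numpy as np

dt = 1.0                                   # sensor update interval (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0],                  # GPS projected onto course observes distance
              [0.0, 1.0]])                 # wheel speed sensor observes speed
Q = np.diag([0.05, 0.5])                   # process noise
R = np.diag([4.0, 0.1])                    # measurement noise (GPS much noisier)

x = np.array([0.0, 10.0])                  # initial distance (m) and speed (m/s)
P = np.eye(2)

def kalman_step(x, P, z):
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the measurement z = [GPS distance along course, wheel speed].
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One simulated update: GPS says 11.5 m along the course, wheel speed 10.2 m/s.
x, P = kalman_step(x, P, np.array([11.5, 10.2]))
print(x)
```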