Saupe, Dietmar
Publication search results
KonIQ++: Boosting No-Reference Image Quality Assessment in the Wild by Jointly Predicting Image Quality and Defects
2021, Su, Shaolin, Hosu, Vlad, Lin, Hanhe, Zhang, Yanning, Saupe, Dietmar
Although image quality assessment (IQA) in-the-wild has been researched in computer vision, it is still challenging to precisely estimate perceptual image quality in the presence of real-world complex and composite distortions. In order to improve machine learning solutions for IQA, we consider side information denoting the presence of distortions besides the basic quality ratings in IQA datasets. Specifically, we extend one of the largest in-the-wild IQA databases, KonIQ-10k, to KonIQ++, by collecting distortion annotations for each image, aiming to improve quality prediction together with distortion identification. We further explore the interactions between image quality and distortion by proposing a novel IQA model, which jointly predicts image quality and distortion by recurrently refining task-specific features in a multi-stage fusion framework. Our dataset KonIQ++, along with the model, boosts IQA performance and generalization ability, demonstrating its potential for solving the challenging authentic IQA task. The proposed model can also accurately predict distinct image defects, suggesting its application in image processing tasks such as image colorization and deblurring.
Visual Quality Assessment for Motion Compensated Frame Interpolation
2019, Men, Hui, Lin, Hanhe, Hosu, Vlad, Maurer, Daniel, Bruhn, Andres, Saupe, Dietmar
Current benchmarks for optical flow algorithms evaluate estimation quality by comparing the predicted flow field with the ground truth, and may additionally compare interpolated frames, based on these predictions, with the correct frames from the actual image sequences. For the latter comparison, objective measures such as the mean square error are applied. However, for applications like image interpolation, the expected user's quality of experience cannot be fully deduced from such simple quality measures. Therefore, we conducted a subjective quality assessment study by crowdsourcing for the interpolated images provided in one of the optical flow benchmarks, the Middlebury benchmark. We used paired comparisons with forced choice and reconstructed absolute quality scale values according to Thurstone's model using the classical least squares method. The results yield a re-ranking of the 141 participating algorithms with respect to the visual quality of their interpolated frames, most of which are based on optical flow estimation. This re-ranking shows the necessity of visual quality assessment as an additional evaluation metric for optical flow and frame interpolation benchmarks.
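The scale reconstruction step mentioned in this abstract can be sketched in a few lines of Python. Below is a minimal Thurstone Case V least-squares solver; the function name and the clipping threshold for empirical proportions are illustrative assumptions, not details taken from the study:

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Reconstruct scale values from a paired-comparison win matrix.

    wins[i][j] = number of times option i was preferred over option j.
    Returns one zero-mean scale value per option, using the classical
    least-squares solution s_i = mean_j z_ij with z_ij = Phi^{-1}(p_ij).
    """
    n = len(wins)
    inv = NormalDist().inv_cdf
    z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # Clip empirical proportions away from 0 and 1 so the
            # probit transform stays finite (threshold is an assumption).
            p = min(max(wins[i][j] / total, 1e-3), 1 - 1e-3)
            z[i][j] = inv(p)
    # Under the zero-sum constraint, the least-squares scale value of
    # option i is simply the row mean of its z-scores.
    return [sum(row) / n for row in z]
```

Because fitting s_i − s_j to the probit-transformed preference proportions under a zero-sum constraint reduces to row means of the z-score matrix, no explicit linear solver is needed.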
Visual Feedback for Pacing Strategies in Road Cycling
2018, Artiga Gonzalez, Alexander, Wolf, Stefan, Bertschinger, Raphael, Saupe, Dietmar
The right choice of pacing strategy for a time-trial race is important and often difficult to establish. Methods are now available to generate pacing strategies that are optimal, albeit only in a mathematical sense. So far, they have been tested in practice only under laboratory conditions [1]. Pacing strategies are generally based on two mathematical models: (1) a model of the relation between power output and speed [2], and (2) a model of the rider's fatigue as a function of power output [3]. The quality and validity of these pacing strategies rely on the accuracy of the predictions made by those models.
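A model of type (1), relating power output and speed, is commonly written as the product of the resistive forces (gravity on a grade, rolling resistance, air drag) and the speed. The sketch below illustrates that general form only; the function name, parameter names, and default values are assumptions for illustration and are not taken from the cited models:

```python
import math

def required_power(v, mass=75.0, grade=0.0, cda=0.32, crr=0.004,
                   rho=1.2, g=9.81):
    """Mechanical power (W) needed to hold speed v (m/s) in still air.

    grade is the slope as a ratio (rise/run); cda is the effective
    frontal area times drag coefficient (m^2); crr is the rolling
    resistance coefficient; rho is air density (kg/m^3).
    """
    angle = math.atan(grade)
    f_gravity = mass * g * math.sin(angle)        # slope resistance
    f_rolling = mass * g * math.cos(angle) * crr  # rolling resistance
    f_air = 0.5 * rho * cda * v ** 2              # aerodynamic drag
    return (f_gravity + f_rolling + f_air) * v
```

Inverting such a relation (power to speed) is what allows an optimizer to translate a candidate power profile into predicted ride time along the course.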
Spatiotemporal Feature Combination Model for No-Reference Video Quality Assessment
2018, Men, Hui, Lin, Hanhe, Saupe, Dietmar
One of the main challenges in no-reference video quality assessment is temporal variation within a video. Methods have typically been designed and tested on videos with artificial distortions, without considering spatial and temporal variations simultaneously. We propose a no-reference spatiotemporal feature combination model that extracts spatiotemporal information from a video, and tested it on a database with authentic distortions. Compared with other methods, our model achieved satisfactory performance in assessing the quality of natural videos.
Effective Aesthetics Prediction With Multi-Level Spatially Pooled Features
2019-06, Hosu, Vlad, Goldlücke, Bastian, Saupe, Dietmar
We propose an effective deep learning approach to aesthetics quality assessment that relies on a new type of pre-trained features, and apply it to the AVA dataset, currently the largest aesthetics database. While previous approaches miss some of the information in the original images by taking small crops, down-scaling, or warping the originals during training, we propose the first method that efficiently supports full-resolution images as input and can be trained on variable input sizes. This allows us to improve significantly upon the state of the art, increasing the Spearman rank-order correlation coefficient (SRCC) with the ground-truth mean opinion scores (MOS) from the best previously reported value of 0.612 to 0.756. To achieve this performance, we extract multi-level spatially pooled (MLSP) features from all convolutional blocks of a pre-trained InceptionResNet-v2 network and train a custom shallow convolutional neural network (CNN) architecture on these new features.
Expertise screening in crowdsourcing image quality
2018, Hosu, Vlad, Lin, Hanhe, Saupe, Dietmar
We propose a screening approach to find reliable and effectively expert crowd workers in image quality assessment (IQA). Our method measures the users' ability to identify image degradations by using test questions, together with several relaxed reliability checks. We conducted multiple experiments, obtaining reproducible results with a high agreement between the expertise-screened crowd and freelance experts of 0.95 Spearman rank-order correlation (SROCC), with one restriction on the image type. Our contributions include a reliability screening method for filtering out uninformative users, a new type of test question that relies on our proposed database of pristine and artificially distorted images, a group agreement extrapolation method, and an analysis of the crowdsourcing experiments.
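The agreement figure quoted above is a Spearman rank-order correlation. For reference, a self-contained version (Pearson correlation of average-ranked data, the standard tie-handling convention; the function name is ours) looks like this:

```python
def srocc(x, y):
    """Spearman rank-order correlation between two score lists.

    Ranks both lists (ties receive the average rank of their block),
    then computes the Pearson correlation of the ranks.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tied block
            for k in order[i:j + 1]:
                r[k] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks enter the computation, SROCC measures monotonic agreement between two raters or methods, regardless of how their raw score scales differ.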
DeepRN: A Content-Preserving Deep Architecture for Blind Image Quality Assessment
2018, Varga, Domonkos, Saupe, Dietmar, Sziranyi, Tamas
This paper presents a blind image quality assessment (BIQA) method based on deep learning with convolutional neural networks (CNN). Our method is trained on full, arbitrarily sized images rather than on small image patches or resized inputs, as is usually done in CNNs for image classification and quality assessment. The resolution independence is achieved by pyramid pooling. This work is the first to apply a fine-tuned residual deep learning network (ResNet-101) to BIQA. The training is carried out on a new and very large labeled dataset of 10,073 images (KonIQ-10k) that contains quality rating histograms in addition to the mean opinion scores (MOS). In contrast to previous methods, we do not train to approximate the MOS directly, but rather use the distributions of scores. Experiments were carried out on three benchmark image quality databases. The results showed clear improvements in the accuracy of the estimated MOS values compared to current state-of-the-art algorithms. We also report on the quality of the estimation of the score distributions.
SUR-Net: Predicting the Satisfied User Ratio Curve for Image Compression with Deep Learning
2019, Fan, Chunling, Lin, Hanhe, Hosu, Vlad, Zhang, Yun, Jiang, Qingshan, Hamzaoui, Raouf, Saupe, Dietmar
The Satisfied User Ratio (SUR) curve for a lossy image compression scheme, e.g., JPEG, characterizes the probability distribution of the Just Noticeable Difference (JND) level, the smallest distortion level that can be perceived by a subject. We propose the first deep learning approach to predict such SUR curves. Instead of directly regressing the SUR curve itself for a given reference image, our model is trained on pairs of images, original and compressed. Relying on a Siamese convolutional neural network (CNN), feature pooling, a fully connected regression head, and transfer learning, we achieved good prediction performance. Experiments on the MCL-JCI dataset showed a mean Bhattacharyya distance between the predicted and the original JND distributions of only 0.072.
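The evaluation metric quoted at the end is the Bhattacharyya distance between two discrete distributions. A minimal sketch (function name ours; inputs are histograms over the same bins, normalized internally) is:

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions.

    Both inputs are normalized to sum to 1 first. Identical
    distributions give a distance of 0; less overlap gives a
    larger distance.
    """
    sp, sq = sum(p), sum(q)
    # Bhattacharyya coefficient: overlap of the two distributions.
    bc = sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))
    return -math.log(min(bc, 1.0))  # clamp rounding noise above 1
```

A mean distance of 0.072, as reported, therefore corresponds to predicted and measured JND distributions whose overlap coefficient is close to 1.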
Disregarding the Big Picture: Towards Local Image Quality Assessment
2018, Wiedemann, Oliver, Hosu, Vlad, Lin, Hanhe, Saupe, Dietmar
Image quality has been studied almost exclusively as a global image property. It is common practice for IQA databases and metrics to quantify this abstract concept with a single number per image. We propose an approach to blind IQA based on a convolutional neural network (patchnet) that was trained on a novel set of 32,000 individually annotated patches of 64×64 pixels. We use this model to generate spatially localized quality maps of images taken from KonIQ-10k, a large and diverse in-the-wild database of authentically distorted images. We show that our local quality indicator correlates well with global MOS, going beyond the predictive ability of quality-related attributes such as sharpness. Simple averaging of patchnet predictions already outperforms classical approaches to global MOS prediction that were trained on global image features. We additionally experiment with a generic second-stage aggregation CNN to estimate mean opinion scores. This latter model performs comparably to the state of the art, with a PLCC of 0.81 on KonIQ-10k.
How to Accurately Determine the Position on a Known Course in Road Cycling
2018, Wolf, Stefan, Dobiasch, Martin, Artiga Gonzalez, Alexander, Saupe, Dietmar
With modern cycling computers, it is possible to provide cyclists with complex feedback during rides. If the feedback is course-dependent, it is necessary to know the rider's current position on the course. Different approaches to estimating the position on the course from common GPS and speed sensors were compared: the direct distance measure derived from the number of wheel rotations, GPS coordinates projected onto the course trajectory, and a Kalman filter incorporating both speed and GPS measurements. To quantify the accuracy of the different methods, an experiment was conducted on a race track where a fixed point on the course was tagged during the ride. The Kalman filter approach was able to overcome certain shortcomings of the other two approaches and achieved a mean error of −0.13 m and a root mean square error of 0.97 m.
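The third approach can be illustrated with a 1-D constant-velocity Kalman filter over the state [position along the course, speed], fusing the projected GPS distance with the wheel-derived speed. The class name and noise variances below are illustrative assumptions, not values from the experiment:

```python
class CoursePositionKF:
    """1-D constant-velocity Kalman filter for position on a known course.

    State: [position along course (m), speed (m/s)]. Fuses a GPS fix
    projected onto the course trajectory with a wheel-speed reading.
    Noise variances are illustrative defaults, not calibrated values.
    """

    def __init__(self, pos0=0.0, speed0=0.0,
                 gps_var=25.0, speed_var=0.04, accel_var=0.5):
        self.x = [pos0, speed0]
        self.P = [[100.0, 0.0], [0.0, 1.0]]  # initial state covariance
        self.R = (gps_var, speed_var)        # measurement noise (diagonal)
        self.q = accel_var                   # white-acceleration process noise

    def step(self, dt, gps_pos, wheel_speed):
        # Predict: x' = F x, P' = F P F^T + Q, with F = [[1, dt], [0, 1]].
        s, v = self.x
        self.x = [s + v * dt, v]
        (p00, p01), (p10, p11) = self.P
        q = self.q
        self.P = [
            [p00 + dt * (p01 + p10) + dt * dt * p11 + q * dt ** 4 / 4,
             p01 + dt * p11 + q * dt ** 3 / 2],
            [p10 + dt * p11 + q * dt ** 3 / 2,
             p11 + q * dt * dt],
        ]
        # Update with H = I (each sensor observes one state component):
        # K = P (P + R)^{-1}, x += K y, P = (I - K) P.
        a = self.P[0][0] + self.R[0]
        b = self.P[0][1]
        c = self.P[1][0]
        d = self.P[1][1] + self.R[1]
        det = a * d - b * c
        sinv = [[d / det, -b / det], [-c / det, a / det]]
        K = [[sum(self.P[i][k] * sinv[k][j] for k in range(2))
              for j in range(2)] for i in range(2)]
        y = [gps_pos - self.x[0], wheel_speed - self.x[1]]
        self.x = [self.x[i] + K[i][0] * y[0] + K[i][1] * y[1]
                  for i in range(2)]
        ik = [[(1.0 if i == j else 0.0) - K[i][j] for j in range(2)]
              for i in range(2)]
        self.P = [[sum(ik[i][k] * self.P[k][j] for k in range(2))
                   for j in range(2)] for i in range(2)]
        return self.x[0]
```

The filter smooths the noisy GPS position while the speed measurement keeps the estimate tracking between fixes, which is how such an approach can avoid the drift of pure wheel-rotation counting and the jitter of raw GPS projection.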