CUDAS: Distortion-Aware Saliency Benchmark
2023, Xin Zhao, Jianxun Lou, Xinbo Wu, Yingying Wu, Lucie Lévêque, Xiaochang Liu, Pengfei Guo, Yipeng Qin, Hanhe Lin, Dietmar Saupe, Hantao Liu
Visual saliency prediction remains an academic challenge due to the diversity and complexity of natural scenes, as well as the scarcity of eye movement data recording where people look in images. In many practical applications, digital images are inevitably subject to distortions, such as those introduced during acquisition, editing, compression, or transmission. While a great deal of attention has been paid to predicting the saliency of distortion-free, pristine images, little attention has been given to understanding the impact of visual distortions on saliency prediction. In this paper, we first present the CUDAS database, a new distortion-aware saliency benchmark in which eye-tracking data were collected for 60 pristine images and their 540 corresponding distorted versions. We then conduct a statistical evaluation to reveal the behaviour of state-of-the-art saliency prediction models on distorted images and provide insights on building an effective model for distortion-aware saliency prediction. The new database is made publicly available to the research community.