Publication

Expanding Synthetic Real-World Degradations for Blind Video Super Resolution

Mehran Jeelani; Sadbhawna; Noshaba Cheema; Klaus Illgner-Fehns; Philipp Slusallek; Sunil Jaiswal
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 8th New Trends in Image Restoration and Enhancement Workshop (NTIRE 2023), held at CVPR 2023, June 18, 2023, Vancouver, BC, Canada, pp. 1199-1208. IEEE Xplore, June 2023.

Abstract

Video super-resolution (VSR) techniques, especially deep-learning-based algorithms, have improved drastically over the last few years and shown impressive performance on synthetic data. However, their performance on real-world video data suffers because of the complexity of real-world degradations and misaligned video frames. Since obtaining a synthetic dataset consisting of low-resolution (LR) and high-resolution (HR) frames is easier than obtaining real-world LR and HR pairs, in this paper we propose synthesizing real-world degradations on synthetic training datasets. The proposed synthetic real-world degradations (SRWD) include a combination of blur, noise, downsampling, pixel binning, and image and video compression artifacts. We then propose a random-shuffling-based strategy to simulate these degradations on the training datasets and train a single end-to-end deep neural network (DNN) on the resulting, more diverse set of realistically degraded training data. Our quantitative and qualitative comparative analysis shows that the proposed training strategy with diverse realistic degradations improves performance by 7.1% in terms of NRQM compared to RealBasicVSR and by 3.34% compared to BSRGAN on the VideoLQ dataset. We also introduce a new dataset of high-resolution real-world videos that can serve as a common ground for benchmarking.
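The random-shuffling strategy can be pictured as drawing the degradation operations in a random order for each training clip. Below is a minimal, hypothetical Python sketch of such a pipeline; the function names, kernel sizes, and parameter ranges are our own illustrative assumptions, not the paper's settings, and video-codec artifacts (e.g., H.264 re-encoding) are omitted for brevity.

```python
# Illustrative sketch only, not the authors' released code. All parameter
# ranges below are assumptions chosen for demonstration.
import cv2
import numpy as np

def apply_blur(img, rng):
    # Gaussian blur with a randomly drawn odd kernel size and sigma.
    k = int(rng.choice([3, 5, 7]))
    return cv2.GaussianBlur(img, (k, k), sigmaX=float(rng.uniform(0.2, 3.0)))

def apply_noise(img, rng):
    # Additive Gaussian noise with a randomly drawn standard deviation.
    sigma = rng.uniform(1.0, 25.0)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def apply_downsample(img, rng):
    # Downscale by a random factor with a randomly chosen interpolation kernel.
    scale = rng.uniform(1.5, 4.0)
    h, w = img.shape[:2]
    interp = int(rng.choice([cv2.INTER_AREA, cv2.INTER_LINEAR, cv2.INTER_CUBIC]))
    return cv2.resize(img, (max(1, int(w / scale)), max(1, int(h / scale))),
                      interpolation=interp)

def apply_binning(img, rng, factor=2):
    # Pixel binning: average non-overlapping factor x factor blocks.
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].astype(np.float32).reshape(
        h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

def apply_jpeg(img, rng):
    # Image compression artifacts via a JPEG encode/decode round trip.
    quality = int(rng.integers(30, 95))
    _, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def degrade_clip(frames, seed=0):
    """Apply the degradations in one randomly shuffled order to every
    frame of a clip, so the clip shares a single degradation pipeline."""
    rng = np.random.default_rng(seed)
    ops = [apply_blur, apply_noise, apply_downsample, apply_binning, apply_jpeg]
    rng.shuffle(ops)  # random shuffling of the degradation order
    out = []
    for frame in frames:  # frame: HxWx3 uint8 (BGR)
        for op in ops:
            frame = op(frame, rng)
        out.append(frame)
    return out
```

In practice one would typically also freeze each operation's sampled parameters across the frames of a clip so the degradation is temporally consistent, and add the video compression stage the abstract describes.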
