Abstract

Single image super-resolution (SISR), which reconstructs a high-resolution (HR) image from its low-resolution (LR) counterpart, is a widely studied problem in multimedia applications and computer vision. Despite the remarkable progress brought by recent deep neural networks, SISR remains challenging and poorly scalable on real-world data due to its ill-posed nature: the degradations applied to the input LR images are typically complex and even unknown (i.e. the degradations in the test data can be unseen or differ from those in the training set). Two branches of SISR methods have emerged to address this: blind super-resolution (blind-SR), which reconstructs SR images under unknown degradations, and arbitrary-scale super-resolution (ASSR), which improves scalability by learning to handle arbitrary up-sampling ratios. In this paper, we propose a holistic framework that takes both tasks into consideration (accordingly named arbitrary-scale blind-SR) with two main designs: 1) learning dual degradation representations, where implicit and explicit representations of the degradation are sequentially extracted from the input LR image; and 2) modeling both the upsampling (i.e. LR→HR) and downsampling (i.e. HR→LR) processes at the same time, where they utilize the implicit and explicit degradation representations respectively, in order to enable a cycle-consistency objective and further improve training. We conduct extensive experiments on various datasets, where the results verify the effectiveness of our framework in handling complex degradations as well as its superiority over several state-of-the-art baselines.
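To illustrate the cycle-consistency objective described above, here is a minimal NumPy sketch. The `upsample` and `downsample` functions are toy stand-ins for the learned LR→HR and HR→LR networks (which in the actual framework are conditioned on the implicit and explicit degradation representations, omitted here); the loss simply asks that downsampling the super-resolved output reproduces the original LR input.

```python
import numpy as np

def upsample(lr, scale):
    # Toy stand-in for the learned LR->HR network:
    # nearest-neighbour upsampling by an integer scale factor.
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

def downsample(hr, scale):
    # Toy stand-in for the learned HR->LR network:
    # average-pooling by the same scale factor.
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def cycle_consistency_loss(lr, scale):
    # Cycle consistency: downsample(upsample(LR)) should match LR (L1 loss).
    sr = upsample(lr, scale)
    lr_reconstructed = downsample(sr, scale)
    return np.abs(lr_reconstructed - lr).mean()

lr = np.random.rand(8, 8)
loss = cycle_consistency_loss(lr, scale=3)  # scalar >= 0
```

In training, this loss would be added to the standard SR reconstruction loss and backpropagated through both networks; with the toy operators above the cycle closes exactly, so the loss is zero.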

Method

Result

Qualitative comparison ↓

Qualitative results with various continuous upsampling ↓

Citation

    @inproceedings{weng2024wacv,
      title = {Best of Both Worlds: Learning Arbitrary-scale Blind Super-Resolution via Dual Degradation Representations and Cycle-Consistency},
      author = {Shao-Yu Weng and Hsuan Yuan and Yu-Syuan Xu and Ching-Chun Huang and Wei-Chen Chiu},
      booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
      year = {2024}
    }