OALib Journal期刊
ISSN: 2333-9721
Real-World Image Super-Resolution Method Based on Domain Adaptation

DOI: 10.12677/CSA.2023.136126, PP. 1279-1288

Keywords: Deep Learning, Domain Adaptation, Frequency Separation, Image Super-Resolution


Abstract:

Image super-resolution is widely used in fields such as medical imaging and remote sensing, yet most existing methods train their models under a fixed degradation assumption, so the resulting super-resolution networks reconstruct real-scene images poorly. To address this, this study proposes a real-world image super-resolution method based on domain adaptation, organized in two stages. In the first stage, a degradation network synthesizes low-resolution images from high-resolution ones, and a domain adaptation loss drives the synthesized low-resolution images toward the distribution of real-world low-resolution images. In the second stage, the high-resolution/low-resolution image pairs synthesized in the first stage are used to train a reconstruction network in a supervised manner; frequency separation extracts the high-frequency content of the images, which is fed to a discriminator network, and the resulting frequency-domain adversarial loss pushes the reconstruction network to recover more image detail and suppress artifacts, so that the trained super-resolution network generalizes well to real-scene images. Training and validation on a subset of the RealSR dataset show that the proposed method outperforms existing methods in both quantitative metrics and qualitative results. The domain-adaptation approach therefore effectively narrows the domain-distribution gap between synthetic and real-scene images, allowing the super-resolution network to adapt to the diverse distributions of real-scene images and substantially improving reconstruction quality.
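The frequency-separation step mentioned above can be illustrated with a minimal sketch: a low-pass filter (here a simple box blur over a 1-D signal, standing in for an image row) splits the signal into a low-frequency part and a high-frequency residual; the residual carries the edges and texture that the abstract says are fed to the discriminator. The function names and the kernel radius are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of frequency separation via a low-pass filter.
# Names (box_blur_1d, frequency_separation) and radius=1 are assumptions
# for illustration; the paper's actual filter and sizes may differ.

def box_blur_1d(signal, radius=1):
    """Low-pass filter: mean over a sliding window (edges clamped)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def frequency_separation(signal, radius=1):
    """Split a signal into low- and high-frequency components.

    The high-frequency residual (edges, texture) is the part that would
    be passed to the discriminator; low + high reconstructs the input.
    """
    low = box_blur_1d(signal, radius)
    high = [s - l for s, l in zip(signal, low)]
    return low, high

row = [10, 10, 10, 50, 50, 50, 10, 10]   # toy "image row" with one edge
low, high = frequency_separation(row)
recon = [l + h for l, h in zip(low, high)]
```

Note that the split is (numerically) lossless, and the high-frequency part is near zero in flat regions while peaking at the edge, which is why an adversarial loss applied only to this component can sharpen detail without disturbing smooth areas.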


