Official implementation of the paper 'Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution'


DASR

Paper

Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution
Jie Liang, Hui Zeng, and Lei Zhang.
In arXiv preprint.

Abstract

Efficient and effective real-world image super-resolution (Real-ISR) is a challenging task due to the unknown complex degradation of real-world images and the limited computation resources in practical applications. Recent research on Real-ISR has achieved significant progress by modeling the image degradation space; however, these methods largely rely on heavy backbone networks and they are inflexible to handle images of different degradation levels. In this paper, we propose an efficient and effective degradation-adaptive super-resolution (DASR) network, whose parameters are adaptively specified by estimating the degradation of each input image. Specifically, a tiny regression network is employed to predict the degradation parameters of the input image, while several convolutional experts with the same topology are jointly optimized to specify the network parameters via a non-linear mixture of experts. The joint optimization of multiple experts and the degradation-adaptive pipeline significantly extend the model capacity to handle degradations of various levels, while the inference remains efficient since only one adaptively specified network is used for super-resolving the input image. Our extensive experiments demonstrate that the proposed DASR is not only much more effective than existing methods on handling real-world images with different degradation levels but also efficient for easy deployment.
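To make the degradation-adaptive mechanism concrete, here is a minimal PyTorch sketch of the idea: a tiny predictor maps each input image to mixing coefficients, which blend several same-topology convolutional experts into one adaptively specified convolution. All names and sizes are illustrative assumptions, not the repository's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv(nn.Module):
    # E convolutional experts with the same topology; their weights are blended
    # per image into a single convolution, so inference runs only one network.
    def __init__(self, num_experts, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(0.02 * torch.randn(num_experts, out_ch, in_ch, k, k))
        self.bias = nn.Parameter(torch.zeros(num_experts, out_ch))

    def forward(self, x, coeff):
        # coeff: (num_experts,) mixing weights predicted for this image
        w = (coeff.view(-1, 1, 1, 1, 1) * self.weight).sum(dim=0)
        b = (coeff.view(-1, 1) * self.bias).sum(dim=0)
        return F.conv2d(x, w, b, padding=self.k // 2)

class TinyPredictor(nn.Module):
    # stand-in for the tiny regression network producing expert-mixing weights
    def __init__(self, num_experts, in_ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_experts))

    def forward(self, x):
        return torch.softmax(self.body(x), dim=1)

x = torch.randn(1, 3, 64, 64)               # one low-resolution input
coeff = TinyPredictor(num_experts=5)(x)[0]  # per-image mixing coefficients
y = AdaptiveConv(5, 3, 32)(x, coeff)        # adaptively specified convolution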

Overall pipeline of the DASR:

[illustration]

For more details, please refer to our paper.

Getting started

  • Clone this repo.
git clone https://github.com/csjliang/DASR
cd DASR
  • Install dependencies (Python 3 + NVIDIA GPU + CUDA; Anaconda is recommended).
pip install -r requirements.txt
  • Prepare the training and testing datasets by following these instructions.
  • Prepare the pre-trained models by following these instructions.

Training

First, check and adapt the yml file options/train/DASR/train_DASR.yml, then

  • Single GPU:
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python dasr/train.py -opt options/train/DASR/train_DASR.yml --auto_resume
  • Distributed Training:
PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port=4335 dasr/train.py -opt options/train/DASR/train_DASR.yml --launcher pytorch --auto_resume

Training files (logs, models, training states, and visualizations) will be saved in the directory ./experiments/{name}.

Testing

First, check and adapt the yml file options/test/DASR/test_DASR.yml, then run:

PYTHONPATH="./:${PYTHONPATH}" CUDA_VISIBLE_DEVICES=0 python basicsr/test.py -opt options/test/DASR/test_DASR.yml

Evaluation files (logs and visualizations) will be saved in the directory ./results/{name}.

License

This project is released under the Apache 2.0 license.

Citation

@article{jie2022DASR,
  title={Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution},
  author={Liang, Jie and Zeng, Hui and Zhang, Lei},
  journal={arXiv preprint arXiv:2203.14216},
  year={2022}
}

Acknowledgement

This project is built on the excellent BasicSR project.

Contact

Should you have any questions, please contact me via [email protected].

Comments
  • error when loading pretrained models


    I downloaded the pretrained models as you said, and the file names are "net_g.pth" and "net_p.pth". However, when I tried to load "net_g.pth" using train_DASR.yml, it showed the error below.

    Traceback (most recent call last):
      File "./dasr/train.py", line 15, in <module>
        train_pipeline(root_path)
      File "/nas/workspace/anse/code/pytorch/SR/DASR/basicsr/train.py", line 128, in train_pipeline
        model = build_model(opt)
      File "/nas/workspace/anse/code/pytorch/SR/DASR/basicsr/models/__init__.py", line 27, in build_model
        model = MODEL_REGISTRY.get(opt['model_type'])(opt)
      File "/nas/workspace/anse/code/pytorch/SR/DASR/dasr/models/DASR_model.py", line 20, in __init__
        super(DASRModel, self).__init__(opt)
      File "/nas/workspace/anse/code/pytorch/SR/DASR/basicsr/models/srgan_dynamic_model.py", line 41, in __init__
        self.load_network_init_alldynamic(self.net_g, load_path, self.opt['num_networks'], self.opt['path'].get('strict_load_g', True), load_key)
      File "/nas/workspace/anse/code/pytorch/SR/DASR/basicsr/models/base_model.py", line 372, in load_network_init_alldynamic
        load_net = load_net[param_key]
    KeyError: 'params'

    I think the pretrained model weights (a dictionary-like object?) have no key 'params'. [screenshot] So I added the key 'params', and then the code shows another error. [screenshots]

    Could you tell me what the problem is?
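    A hedged workaround for this symptom, assuming the downloaded checkpoint is a bare state_dict: re-save it wrapped under the 'params' key that base_model.py indexes. The file names are examples only.

    import torch

    ckpt = torch.load('net_g.pth', map_location='cpu')   # example path
    if 'params' not in ckpt:                             # bare state_dict
        # write a re-keyed copy and point train_DASR.yml at it
        torch.save({'params': ckpt}, 'net_g_rekeyed.pth')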

    opened by anse3832 11
  • TypeError: tuple indices must be integers or slices, not str


    File "C:\DASR\basicsr\test.py", line 45, in <module> test_pipeline(root_path) File "C:\DASR\basicsr\test.py", line 19, in test_pipeline make_exp_dirs(opt) File "C:\Python39\lib\site-packages\basicsr\utils\dist_util.py", line 80, in wrapper return func(*args, **kwargs) File "C:\Python39\lib\site-packages\basicsr\utils\misc.py", line 40, in make_exp_dirs path_opt = opt['path'].copy() TypeError: tuple indices must be integers or slices, not str

    opened by AIisCool 7
  • why the training does not converge


    I used the train_DASR.yml as provided and changed only two things: 1. the training samples are DIV2K; 2. pretrain_network_g is none, so training starts from random initialization. Then I found that all of the losses are NaN. Should I train it using the pretrained model?
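    While debugging divergence like this, a generic guard (not part of DASR) can skip optimizer steps whenever the loss goes non-finite, which helps locate where the NaNs first appear:

    import torch

    def safe_step(loss, optimizer):
        # skip the parameter update if the loss has become NaN/Inf
        if not torch.isfinite(loss):
            optimizer.zero_grad()
            return False
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return True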

    opened by Lvhhhh 4
  • Questions about pretrained MSRResNet


    Thanks for sharing the code! I carefully studied it but didn't find the pretrained MSRResNet model (not the trained DASR model). Could you provide a link for it? I'm also very interested in the training yml of MSRResNet; it would be great if you could upload it!

    Some minor questions:

    1. I applied a similar idea of a degradation sub-space and predictor in my SR model, but found it really hard to train a good predictor: the average L1 regression loss stays around 0.25 (which I think means the predictor only outputs a random embedding) and stops decreasing. I wonder whether you have met a similar problem.
    2. I found a "cycle_opt" loss in the train_DASR yml that is actually unused in training. Does it have any special meaning?

    Thanks again for your work.

    opened by orchidmalevolence 2
  • How to train a model to retain more texture details?


    I'm currently trying to train your model, but I found that on scenes with leaves, lawns, sand grains, etc., the model reconstructs these textures badly. How can I adjust the training loss so that the model handles these scenes, or do I need to add more such scenes to the dataset?

    opened by kelisiya 1
  • suggestion for fixing the code to use multiple GPUs


    When I tried to use multiple GPUs, the code showed an error (unfortunately, I didn't save the error message; it was related to a dimension mismatch).

    So I fixed the code in DASR/dasr/models/DASR_model.py as below (multiplying by self.opt['num_gpu']), and it works well. [screenshots]

    Please check if my correction is adequate. Thanks!

    opened by anse3832 1
  • Which degradation space subset does the pretrained model correspond to?


    Hi and thanks for sharing your interesting research! My question is related to the pretrained model:

    • Does the pretrained model correspond to a single degradation space: S_1, S_2, or S_3?
    • Or does it correspond to training with the parameters in the train_DASR.yml file, i.e., all three degradation spaces with the given probabilities:
    degree_list: ['weak_degrade_one_stage', 'standard_degrade_one_stage', 'severe_degrade_two_stage']
    degree_prob: [0.3, 0.3, 0.4]
    

    Would it be possible to share (if you have trained them and if it is possible) pretrained models for the degradation spaces separately, i.e., one model for weak_degrade_one_stage, one for standard_degrade_one_stage, and one for severe_degrade_two_stage? Thanks!
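    For reference, one plausible reading of the two keys quoted above is as sampling weights over degradation pipelines; a minimal sketch of that reading (not the repository's exact code):

    import random

    degree_list = ['weak_degrade_one_stage', 'standard_degrade_one_stage',
                   'severe_degrade_two_stage']
    degree_prob = [0.3, 0.3, 0.4]

    # draw one degradation pipeline per batch according to degree_prob
    degree = random.choices(degree_list, weights=degree_prob, k=1)[0]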

    opened by g-moschetti 1
  • Excellent work, but friendly advice


    Excellent work, but the idea of meta-learning-based degradation adaptation has been explored in the following TIP paper. Would it be better to add this reference?

    @article{yin2022conditional,
      title={Conditional Hyper-Network for Blind Super-Resolution with Multiple Degradations},
      author={Yin, Guanghao and Wang, Wei and Yuan, Zehuan and Ji, Wei and Yu, Dongdong and Sun, Shouqian and Chua, Tat-Seng and Wang, Changhu},
      journal={IEEE Transactions on Image Processing},
      year={2022},
      publisher={IEEE}
    }

    opened by guanghaoyin 0
  • New Super-Resolution Benchmarks


    Hello,

    MSU Graphics & Media Lab Video Group has recently launched two new Super-Resolution Benchmarks.

    If you are interested in participating, you can add your algorithm by following the submission steps:

    We would be grateful for your feedback on our work!

    opened by EvgeneyBogatyrev 0
  • degradation params


    I have two questions.

    1. The sinc kernel_size may be negative when the prob is larger than final_sinc_prob? https://github.com/csjliang/DASR/blob/ff2e1ec02c767b75d09b5d60f85c5cbd4115d058/dasr/models/DASR_model.py#L106
    2. Why are the previous degradation params overwritten? That is to say, the sinc degradation_params[:, 9:10] is overwritten by the second blur prob. https://github.com/csjliang/DASR/blob/ff2e1ec02c767b75d09b5d60f85c5cbd4115d058/dasr/models/DASR_model.py#L161
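    Regarding the first question, a guarded sampling pattern in the Real-ESRGAN-style pipeline only draws a kernel size when the sinc branch is actually taken; the probability, size range, and pad value below are assumptions, not the repository's settings:

    import math, random
    import numpy as np
    from basicsr.data.degradations import circular_lowpass_kernel

    final_sinc_prob = 0.8                             # assumed probability
    kernel_range = [2 * v + 1 for v in range(3, 11)]  # odd sizes 7..21 (assumed)
    if np.random.uniform() < final_sinc_prob:
        kernel_size = random.choice(kernel_range)
        omega_c = np.random.uniform(math.pi / 3, math.pi)
        sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21)
    else:
        sinc_kernel = np.zeros((21, 21), dtype=np.float32)
        sinc_kernel[10, 10] = 1.0                     # identity (pulse) kernel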
    opened by jiamingNo1 1
  • pretrained weights for 2X model


    The shared link https://drive.google.com/drive/folders/18TuFlx5Fp9W9dDHQ-LyNFae5vakpjGq- contains weights for the 4X model. Can I get access to the 2X model weights?

    opened by prasannakdev0 0
  • question about "User-Interactive Super-resolution"

    In your paper you mention user-interactive super-resolution. How can I manually increase and decrease the scale of the blur kernel, or manually increase and decrease the level of noise?
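    One hedged reading of that feature: since the generator is conditioned on the predicted degradation vector, scaling individual entries of that vector before it reaches the network would steer deblurring or denoising strength. The dimension and indices below are hypothetical placeholders:

    import torch

    predictor = lambda x: torch.rand(1, 33)  # stand-in for the tiny regression network
    lr_image = torch.randn(1, 3, 64, 64)     # stand-in low-resolution input

    v = predictor(lr_image)                  # predicted degradation parameters
    v_deblur = v.clone()
    v_deblur[:, 0:1] *= 1.5                  # hypothetical blur-scale entry: stronger deblur
    v_denoise = v.clone()
    v_denoise[:, 9:10] *= 0.5                # hypothetical noise-level entry: weaker denoise
    # feeding v_deblur / v_denoise to the generator in place of v would act as
    # the user-interactive control described in the paper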

    opened by zack1943 1