PyTorch code for "EBSR: Feature Enhanced Burst Super-Resolution with Deformable Alignment", CVPRW 2021, 1st place in the NTIRE 2021 Burst Super-Resolution Challenge (real data track).

Algolzw, updated 🕥 2022-07-10 07:12:03

EBSR: Feature Enhanced Burst Super-Resolution With Deformable Alignment (CVPRW 2021)

Updates

  • 2022.04.22 🎉🎉🎉 We won 1st place in the NTIRE 2022 BurstSR Challenge again [Paper][Code].
  • 2022.01.22 We updated the code to support real-track testing and provided the model weights here.
  • 2021 We now support single-GPU training and provide the pretrained model here.

This repository is an official PyTorch implementation of the paper "EBSR: Feature Enhanced Burst Super-Resolution With Deformable Alignment" from CVPRW 2021, which won 1st place in the NTIRE 2021 Burst Super-Resolution Challenge real track (2nd place in the synthetic track).


Dependencies

  • OS: Ubuntu 18.04
  • Python: 3.7
  • NVIDIA:
      • CUDA: 10.1
      • cuDNN: 7.6.1
  • Other requirements: see requirements.txt

Quick Start

1. Create a conda virtual environment and activate it:

```
conda create -n pytorch_1.6 python=3.7
source activate pytorch_1.6
```

2. Install PyTorch and torchvision following the official instructions:

```
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch
```

3. Install build requirements:

```
pip3 install -r requirements.txt
```

4. (Optional) Install apex to use DistributedDataParallel, following the NVIDIA apex instructions. Clone the apex repository, then:

```
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```

5. Install DCN:

```
cd DCNv2-pytorch_1.6
python3 setup.py build develop  # build
# run the examples and check
```



Training

Modify the root path of the training dataset, the model save path, etc. Note that the number of GPUs should be more than 1:

```
python --n_GPUs 4 --lr 0.0002 --decay 200-400 --save ebsr --model EBSR --fp16 --lrcn --non_local --n_feats 128 --n_resblocks 8 --n_resgroups 5 --batch_size 16 --burst_size 14 --patch_size 256 --scale 4 --loss 1*L1
```
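For reference, the `--loss 1*L1` flag appears to follow the EDSR-style convention of `weight*name` terms joined by `+`. A minimal sketch of how such a spec could be parsed (illustrative only, not the repository's actual parser):

```python
def parse_loss_spec(spec):
    """Parse an EDSR-style loss spec like '1*L1+0.1*VGG' into
    (weight, name) pairs. Illustrative sketch only."""
    terms = []
    for term in spec.split('+'):
        weight, name = term.split('*')
        terms.append((float(weight), name))
    return terms

print(parse_loss_spec('1*L1'))          # [(1.0, 'L1')]
print(parse_loss_spec('1*L1+0.1*VGG'))  # [(1.0, 'L1'), (0.1, 'VGG')]
```

With `1*L1`, the total loss is simply the L1 loss with weight 1; additional weighted terms could be combined the same way.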



Test

Modify the path of the test dataset and the path of the trained model:

```
python --root /data/dataset/ntire21/burstsr/synthetic/syn_burst_val --model EBSR --lrcn --non_local --n_feats 128 --n_resblocks 8 --n_resgroups 5 --burst_size 14 --scale 4 --pre_train ./checkpoints/EBSRbest_epoch.pth
```

Or test on the validation dataset:

```
python --n_GPUs 1 --test_only --model EBSR --lrcn --non_local --n_feats 128 --n_resblocks 8 --n_resgroups 5 --burst_size 14 --scale 4 --pre_train ./checkpoints/EBSRbest_epoch.pth
```

Real track evaluation

You may need to download the pretrained PWC-Net model to the pwcnet directory (here).

```
python --n_GPUs 1 --model EBSR --lrcn --non_local --n_feats 128 --n_resblocks 8 --n_resgroups 5 --burst_size 14 --scale 4 --pre_train ./checkpoints/BBSR_realbest_epoch.pth --root burstsr_validation_dataset...
```



Citation

If EBSR helps your research or work, please consider citing it. The following is a BibTeX reference:

```
@InProceedings{Luo_2021_CVPR,
    author    = {Luo, Ziwei and Yu, Lei and Mo, Xuan and Li, Youwei and Jia, Lanpeng and Fan, Haoqiang and Sun, Jian and Liu, Shuaicheng},
    title     = {EBSR: Feature Enhanced Burst Super-Resolution With Deformable Alignment},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {471-478}
}
```


email: [[email protected], [email protected]]


Issues

DCNv2 + CUDA 11

opened on 2023-02-27 09:01:40 by nonick2k23

Is there a way to update your DCNv2 code to support CUDA 11? CUDA 10 is very old and newer GPUs don't work with it.

Since your DCNv2 code is modified, only you can update it...

How is the fine tuning of real data done?

opened on 2022-05-19 08:52:08 by sfxz035

Hi, I have some questions about the fine-tuning strategy. In Table 1, fine-tuning on the real data gives a higher PSNR. I tried this and it turned out worse, so I hope to get your answers to some questions: What is the specific fine-tuning strategy? What is the initial learning rate? Is there learning rate decay? Are certain weights frozen (learning disabled)? Or is there some other strategy to focus on?

What is the use of 'flatten_raw_image_batch' ?

opened on 2022-04-12 05:59:01 by wgg1999

What is the use of `burst = flatten_raw_image_batch(burst)` before `sr = self.model(burst, 0)`? If the flatten step is removed, will it influence the result? And after the flatten step, the W and H of the original burst are changed; when are they changed back before the output is used to compute the loss against the ground truth (which has the original W and H)?
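For context: in burst SR pipelines, "flattening" a raw image usually means packing each 2×2 Bayer block (RGGB) into 4 channels, which halves H and W. The sketch below illustrates that assumption on a single 2D mosaic; it is not the repository's actual `flatten_raw_image_batch`, which operates on batched tensors:

```python
def flatten_raw_image(raw):
    """Pack a 2D Bayer mosaic (H x W, nested lists) into 4 half-resolution
    channels [R, G1, G2, B], each (H/2) x (W/2). Illustrative sketch only."""
    h, w = len(raw), len(raw[0])
    channels = [[], [], [], []]
    for i in range(0, h, 2):
        rows = [[], [], [], []]
        for j in range(0, w, 2):
            rows[0].append(raw[i][j])          # R
            rows[1].append(raw[i][j + 1])      # G1
            rows[2].append(raw[i + 1][j])      # G2
            rows[3].append(raw[i + 1][j + 1])  # B
        for c in range(4):
            channels[c].append(rows[c])
    return channels

# A 2x2 mosaic becomes four 1x1 channels:
print(flatten_raw_image([[0, 1],
                         [2, 3]]))  # [[[0]], [[1]], [[2]], [[3]]]
```

Under this assumption the spatial size is presumably restored by the network's upsampling (the ×4 scale being measured against the full-resolution mosaic), so the SR output already matches the ground truth's W and H by the time the loss is computed.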

What is the burst_size that can be supported by 1 gpu ?

opened on 2022-03-27 08:27:43 by pokaaa

I can only set burst_size=4. If burst_size is set to 8 or 16, it runs out of memory. Is that because the model is too big?
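Activation memory in the alignment and fusion stages grows roughly linearly with `burst_size`, so running out of memory at burst_size 8 or 16 on a single GPU is expected rather than a bug. A back-of-the-envelope estimate (the feature-map sizes here are illustrative assumptions, not measurements of EBSR):

```python
def feature_mem_mib(burst_size, n_feats=128, h=64, w=64, bytes_per_val=4):
    """Rough fp32 memory for one level of per-frame feature maps:
    burst_size x n_feats x H x W values. Illustrative estimate only."""
    return burst_size * n_feats * h * w * bytes_per_val / 2**20

for b in (4, 8, 14):
    print(b, feature_mem_mib(b), "MiB")  # scales linearly with burst_size
```

In practice, `--fp16` (used in the authors' training command) roughly halves this, and a smaller `--patch_size` shrinks H and W, so both help more than shrinking burst_size alone.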
