This repository is an official PyTorch implementation of the paper "EBSR: Feature Enhanced Burst Super-Resolution With Deformable Alignment" from CVPRW 2021, winner of the NTIRE 2021 Burst Super-Resolution Challenge real track (2nd place in the synthetic track).
1. Create a conda virtual environment and activate it:

```shell
conda create -n pytorch_1.6 python=3.7
source activate pytorch_1.6
```
2. Install PyTorch and torchvision following the official instructions:

```shell
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch
```
3. Install build requirements:

```shell
pip3 install -r requirements.txt
```
4. (Optional) Install NVIDIA apex to use DistributedDataParallel, following the official apex instructions:

```shell
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
5. Install DCN:

```shell
cd DCNv2-pytorch_1.6
python3 setup.py build develop  # build
python3 test.py                 # run examples and check
```
Train the model:

```shell
python main.py --n_GPUs 4 --lr 0.0002 --decay 200-400 --save ebsr --model EBSR --fp16 --lrcn --non_local --n_feats 128 --n_resblocks 8 --n_resgroups 5 --batch_size 16 --burst_size 14 --patch_size 256 --scale 4 --loss 1*L1
```
Test on the synthetic validation set:

```shell
python test.py --root /data/dataset/ntire21/burstsr/synthetic/syn_burst_val --model EBSR --lrcn --non_local --n_feats 128 --n_resblocks 8 --n_resgroups 5 --burst_size 14 --scale 4 --pre_train ./checkpoints/EBSRbest_epoch.pth
```

or test on the validation dataset with `main.py`:

```shell
python main.py --n_GPUs 1 --test_only --model EBSR --lrcn --non_local --n_feats 128 --n_resblocks 8 --n_resgroups 5 --burst_size 14 --scale 4 --pre_train ./checkpoints/EBSRbest_epoch.pth
```
You may need to download the pretrained PWC-Net model into the `pwcnet` directory (here).
Test on the real-track validation dataset:

```shell
python test_real.py --n_GPUs 1 --model EBSR --lrcn --non_local --n_feats 128 --n_resblocks 8 --n_resgroups 5 --burst_size 14 --scale 4 --pre_train ./checkpoints/BBSR_realbest_epoch.pth --root burstsr_validation_dataset...
```
If EBSR helps your research or work, please consider citing it. The following is a BibTeX reference:
```
@InProceedings{Luo_2021_CVPR,
    author    = {Luo, Ziwei and Yu, Lei and Mo, Xuan and Li, Youwei and Jia, Lanpeng and Fan, Haoqiang and Sun, Jian and Liu, Shuaicheng},
    title     = {EBSR: Feature Enhanced Burst Super-Resolution With Deformable Alignment},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {471-478}
}
```
email: [[email protected], [email protected]]
Is there a way to update your DCNv2 code to support CUDA 11? CUDA 10 is very old and newer GPUs don't work with it.
Since your DCNv2 code is modified, only you can update it...
Hi, I have some questions about the fine-tuning strategy. In Table 1, fine-tuning on the real data gives a higher PSNR. I tried this and it turned out worse, so I hope to get answers to a few questions. What is the specific fine-tuning strategy? What is the initial learning rate? Is there a learning rate decay? Are certain weights frozen during fine-tuning? Or is there some other strategy to pay attention to?
What is the use of `burst = flatten_raw_image_batch(burst)` before `sr = self.model(burst, 0)`? If the flatten step is removed, will it influence the result? Also, after the flatten step the W and H of the original burst are changed; when are they changed back before the output is used to compute the loss against the ground truth (which has the original W and H)?
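I have not verified this repository's exact implementation, but a common "flatten" for mosaiced RAW bursts packs each 2x2 Bayer block into 4 channels, which halves H and W while preserving all pixels; the network's pixel-shuffle upsampling then produces an output at the full ground-truth resolution, so nothing needs to be "changed back" explicitly. A hypothetical sketch of such a packing:

```python
import torch

def flatten_raw_image_batch(burst):
    """Pack each 2x2 Bayer block into 4 channels.
    Hypothetical sketch, not necessarily identical to the repo's version.
    burst: (B, T, 1, H, W) mosaiced RAW -> (B, T, 4, H/2, W/2)."""
    return torch.stack([
        burst[:, :, 0, 0::2, 0::2],  # e.g. R
        burst[:, :, 0, 0::2, 1::2],  # e.g. G1
        burst[:, :, 0, 1::2, 0::2],  # e.g. G2
        burst[:, :, 0, 1::2, 1::2],  # e.g. B
    ], dim=2)

burst = torch.randn(2, 14, 1, 64, 64)
packed = flatten_raw_image_batch(burst)
print(packed.shape)  # torch.Size([2, 14, 4, 32, 32])
```

Under this assumption, removing the flatten would change the input layout the model was trained on (1 channel at full size instead of 4 channels at half size), so it would not be a harmless change.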
I can only set `burst_size=4`. If `burst_size` is set to 8 or 16, it runs out of memory. Is that because the model is too big?
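One likely factor, independent of any answer from the authors: activation memory grows roughly linearly with `burst_size`, because every frame in the burst passes through the feature-extraction and alignment branches and its activations are kept for backpropagation. A back-of-envelope estimate with assumed, illustrative numbers (the layer count is a guess, not measured from this model):

```python
# Rough activation-memory estimate; all numbers are illustrative assumptions.
bytes_per_elem = 2   # fp16 (--fp16)
n_feats = 128        # --n_feats
h = w = 256          # --patch_size
layers = 50          # assumed count of per-frame feature maps kept for backprop

def per_frame_mb():
    return n_feats * h * w * bytes_per_elem * layers / 2**20

for burst_size in (4, 8, 14):
    gb = per_frame_mb() * burst_size / 1024
    print(f"burst_size={burst_size:2d} -> ~{gb:.1f} GB of activations")
```

Even with these crude assumptions, the per-frame cost is on the order of hundreds of MB, so going from 4 to 14 frames multiplies it accordingly; reducing `patch_size` or `batch_size` lowers it proportionally.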