SimDeblur
SimDeblur (Simple Deblurring) is an open-source unified training and testing framework for image and video deblurring based on PyTorch. It supports most deep-learning-based state-of-the-art deblurring algorithms and provides an easy way to implement your own image or video deblurring and restoration algorithms.
The toolbox decomposes the deblurring framework into different components and one can easily construct a customized restoration framework by combining different modules.
The toolbox contains most deep-learning-based state-of-the-art deblurring algorithms, including MSCNN, SRN, DeblurGAN, EDVR, etc.
SimDeblur supports distributed data-parallel training.
[2022/12/11] SimDeblur supports the NAFNet (ckpt) model for image deblurring.
[2022/11/12] SimDeblur supports the MIMOUnet model.
[2022/3/8] We further provide an image deblurring inference script; please refer to the Usage section for details.
[2022/2/18] We add the PVDNet model for video deblurring. Note that it requires the pretrained BIMNet for motion estimation, so please modify the CKPT path of BIMNet in the source code.
[2022/1/21] We add the Restormer model. Note that it only works on PyTorch 1.8+.
[2022/1/20] We transfer some checkpoints from the open-sourced repos into the SimDeblur framework! You can find them here.
[2022/1/1] Support real-world video deblurring dataset: BSD.
[2021/3/31] Support DVD, GoPro and REDS video deblurring datasets.
[2021/3/21] First release.
We will gradually release the checkpoints of each model in checkpoints.md.
Single Image Deblurring
Video Deblurring
Benchmarks
Python 3 (Conda is recommended)
PyTorch 1.5+ (with GPU; note that some methods require a higher version)
CUDA 10.1+ with NVCC (for code compilation in some models)
Clone the repository or download the zip file:
```bash
git clone https://github.com/ljzycmd/SimDeblur.git
```

```bash
# create a pytorch env
conda create -n simdeblur python=3.7
conda activate simdeblur

# install the packages
cd SimDeblur
bash Install.sh  # some problems may occur due to wrong NVCC configurations when compiling the CUDA code
```
You can open the Colab Notebook to learn about basic usage and see the deblurring performance.
The design of SimDeblur consists of FOUR main parts as follows:

| Dataset | Model | Scheduler | Engine |
|:-------:|:------:|:---------:|:------:|
| Dataset-specific classes | The backbone, losses, and meta_archs. The backbone is the main network, and the meta_arch is a class for model training | Optimizer and LR scheduler | Trainer and some hook functions during model training |
Note that the dataset, model, and scheduler can be constructed from a config (an `EasyDict`) with the corresponding `build_{dataset, backbone, meta_arch, scheduler, optimizer, etc.}` functions, as sketched below. The `Trainer` class automatically constructs all required elements for model training in a general way. This means that if you want to do some specific model training, you may modify the training logic in the corresponding `meta_arch` class.
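For reference, here is a minimal sketch of this config-driven build pattern. The nested key layout (`cfg.model`, `cfg.dataset.train`) is an assumption for illustration; check the YAML files under `./configs` for the actual structure.

```python
# Minimal sketch of the config-driven build pattern.
# The key layout below (cfg.model, cfg.dataset.train) is an assumption,
# not the verified structure of the shipped YAML configs.
from simdeblur.config import build_config
from simdeblur.dataset import build_dataset
from simdeblur.model import build_backbone

cfg = build_config("./configs/dbn/dbn_dvd.yaml")   # parse the YAML into an EasyDict

model = build_backbone(cfg.model)                   # assumed: backbone config at cfg.model
train_dataset = build_dataset(cfg.dataset.train)    # assumed: dataset config at cfg.dataset.train
```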
We provide an image deblurring inference script; you can run it to deblur a blurry image as follows:
```bash
python inference_image.py CONFIG_PATH CKPT_PATH --img=BLUR_IMAGE_PATH --save_path=DEBLURRED_OUT_PATH
```
The deblurred latent image will be stored in `./inference_resutls` by default.
You can construct a simple training process using the default `Trainer` as follows (refer to `train.py` for more details):
```python
from easydict import EasyDict as edict

from simdeblur.config import build_config, merge_args
from simdeblur.engine.parse_arguments import parse_arguments
from simdeblur.engine.trainer import Trainer

args = parse_arguments()

cfg = build_config(args.config_file)
cfg = merge_args(cfg, args)
cfg.args = edict(vars(args))

trainer = Trainer(cfg)
trainer.train()
```
Start training with a single GPU:

```bash
CUDA_VISIBLE_DEVICES=0 bash ./tools/train.sh ./configs/dbn/dbn_dvd.yaml 1
```

or with multiple GPUs:

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 bash ./tools/train.sh ./configs/dbn/dbn_dvd.yaml 4
```
For testing, SimDeblur currently supports only single-GPU testing and validation:

```bash
CUDA_VISIBLE_DEVICES=0 python test.py ./configs/dbn/dbn_dvd.yaml PATH_TO_CKPT
```
SimDeblur also lets you build specific modules individually, including the dataset, model, loss, etc.
Build a dataset:
```python
from easydict import EasyDict as edict

from simdeblur.dataset import build_dataset

dataset_cfg = edict({
    "name": "DVD",
    "mode": "train",
    "sampling": "n_c",
    "overlapping": True,
    "interval": 1,
    "root_gt": "./dataset/DVD/quantitative_datasets",
    "num_frames": 5,
    "augmentation": {
        "RandomCrop": {"size": [256, 256]},
        "RandomHorizontalFlip": {"p": 0.5},
        "RandomVerticalFlip": {"p": 0.5},
        "RandomRotation90": {"p": 0.5},
    }
})

dataset = build_dataset(dataset_cfg)

print(dataset[0])
```
Build a model:
```python
import torch
from easydict import EasyDict as edict

from simdeblur.model import build_backbone

model_cfg = edict({
    "name": "DBN",
    "num_frames": 5,
    "in_channels": 3,
    "inner_channels": 64
})

model = build_backbone(model_cfg)

x = torch.randn(1, 5, 3, 256, 256)  # (B, N, C, H, W) input clip with 5 frames
out = model(x)
```
Build a loss:
```python
import torch
from easydict import EasyDict as edict

from simdeblur.model import build_loss

criterion_cfg = edict({
    "name": "MSELoss",
})

criterion = build_loss(criterion_cfg)

x = torch.randn(2, 3, 256, 256)
y = torch.randn(2, 3, 256, 256)

print(criterion(x, y))
```
The optimizer and LR scheduler can also be created with the `build_optimizer` and `build_lr_scheduler` functions in `simdeblur.scheduler`, etc.
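As a rough illustration, building them mirrors the other `build_*` calls. The config keys and argument order below are assumptions, not the verified SimDeblur API; please check `simdeblur/scheduler` for the actual interfaces.

```python
# Hedged sketch: the config keys and call signatures below are assumptions.
from easydict import EasyDict as edict

from simdeblur.model import build_backbone
from simdeblur.scheduler import build_optimizer, build_lr_scheduler

model = build_backbone(edict({
    "name": "DBN", "num_frames": 5, "in_channels": 3, "inner_channels": 64
}))

optimizer_cfg = edict({"name": "Adam", "lr": 1e-4})                     # assumed keys
lr_scheduler_cfg = edict({"name": "CosineAnnealingLR", "T_max": 500})   # assumed keys

optimizer = build_optimizer(optimizer_cfg, model)                # assumed signature
lr_scheduler = build_lr_scheduler(lr_scheduler_cfg, optimizer)   # assumed signature
```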
SimDeblur supports the most popular image and video deblurring datasets, including GOPRO, DVD, REDS, BSD. We design different data reading strategies that can meet the input requirements of different image and video deblurring models.
You can click here for more information about the design of the dataset.
To start, note that you should change the dataset paths in the related config files (see the sketch below).
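For example, a minimal sketch of overriding the dataset root from Python before training; the nested key name `cfg.dataset.train.root_gt` is an assumption, and the authoritative place to change the path is the YAML config itself.

```python
# Hedged sketch: override the dataset path before training. The nested key
# cfg.dataset.train.root_gt is an assumption; check the YAML files under
# ./configs for the actual layout used by each dataset.
from simdeblur.config import build_config

cfg = build_config("./configs/dbn/dbn_dvd.yaml")
cfg.dataset.train.root_gt = "/path/to/DVD/quantitative_datasets"  # your local copy
```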
The design spirit of SimDeblur comes mostly from Detectron2 [1]; we are grateful for this amazing open-source toolbox. We also thank the paper and code collections in the Awesome-Deblurring repository [2].
[1] facebookresearch. detectron2. https://github.com/facebookresearch/detectron2
[2] subeeshvasu. Awesome-Deblurring. https://github.com/subeeshvasu/Awesome-Deblurring
If SimDeblur helps your research or work, please consider citing SimDeblur.
```bibtex
@misc{cao2021simdeblur,
  author = {Mingdeng Cao},
  title = {SimDeblur: A Simple Framework for Image and Video Deblurring},
  howpublished = {\url{https://github.com/ljzycmd/SimDeblur}},
  year = {2021}
}
```
Last, if you have any questions about SimDeblur, please feel free to open a new issue or contact me at mingdengcao [AT] gmail.com, and I will try to solve your problem. Meanwhile, any contribution to this repo is highly welcome. Let's make SimDeblur more powerful!
Hello there,
I was able to follow the example you posted in the Colab Notebook and successfully performed deblurring with the DBN model on the test images locally on my PC via Jupyter Notebook with CUDA enabled in PyTorch. So the example in the Colab Notebook using DBN is working well.
Next, I tried to load a different model (i.e. DBLRNet) to compare the results, with the code snippet below.
```python
...
model = build_backbone(model_cfg)

ckpt = torch.load("./demo/dblrnet_dvd.pth")
model_ckpt = ckpt["model"]
model_ckpt = {k[7:]: v for k, v in model_ckpt.items()}  # strip the "module." prefix
model.load_state_dict(model_ckpt)
model = model.to(device)
...
```
I then get the following Python error.
```
RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_3448/3417428220.py in <module>

~/anaconda3/envs/simdeblur/lib/python3.7/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
   1666         if len(error_msgs) > 0:
   1667             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1668                 self.__class__.__name__, "\n\t".join(error_msgs)))
   1669         return _IncompatibleKeys(missing_keys, unexpected_keys)
   1670

RuntimeError: Error(s) in loading state_dict for DBN:
    Missing key(s) in state_dict: "F0.0.weight", "F0.0.bias", "F0.1.weight", "F0.1.bias", "F0.1.running_mean", "F0.1.running_var", "D1.0.weight", "D1.0.bias", "D1.1.weight", "D1.1.bias", "D1.1.running_mean", "D1.1.running_var", "F1_1.0.weight", "F1_1.0.bias", "F1_1.1.weight", "F1_1.1.bias", "F1_1.1.running_mean", "F1_1.1.running_var",
    ...
```
I have also attempted to run the inference_image.py script to do the same thing with the following command on Linux.

```bash
python inference_image.py ./configs/dblrnet/dblrnet_dvd.yaml ./demo/dblrnet_dvd.pth --img=./datasets/input/00000.jpg
```
This resulted in the error below.
```
Using checkpoint loaded from ./demo/dblrnet_dvd.pth for testing.
Traceback (most recent call last):
  File "inference_image.py", line 81, in <module>
    inference()
  File "inference_image.py", line 70, in inference
    outputs = arch.postprocess(arch.model(arch.preprocess(input_image)))
  File "/home/emui/anaconda3/envs/simdeblur/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/emui/sandbox-git/SimDeblur/simdeblur/model/backbone/dblrnet/dblrnet.py", line 52, in forward
    l2 = self.L_in(x)
  File "/home/emui/anaconda3/envs/simdeblur/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/emui/anaconda3/envs/simdeblur/lib/python3.7/site-packages/torch/nn/modules/container.py", line 204, in forward
    input = module(input)
  File "/home/emui/anaconda3/envs/simdeblur/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/emui/anaconda3/envs/simdeblur/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 613, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/emui/anaconda3/envs/simdeblur/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 609, in _conv_forward
    input, weight, bias, self.stride, self.padding, self.dilation, self.groups
RuntimeError: Calculated padded input size per channel: (1 x 722 x 1282). Kernel size: (3 x 3 x 3). Kernel size can't be greater than actual input size
```
Do you know what could be causing the issue here? It would be really nice if you could provide step-by-step examples of how to use the scripts to deblur images/videos, along with test inputs and expected outputs, so that we know we have properly set up SimDeblur on our local machines.
P.S. It would also be useful to add instructions to the README.md on how to install all the required dependencies for SimDeblur, e.g. the Python packages and the CUDA AI libraries and toolkit on Linux. Thanks again for the great work. :+1:
Hello, thanks for your contributions. Have you trained the models with your own scripts, or did you just copy or write the code for each method without training it yourself? If not, how do you verify the correctness of your tool? Thanks.
Hi, great repository, just loved it. I am trying to execute inference_image.py with the dbn architecture as the backbone but I'm getting stuck. Executing the script produces the following:
```
Cannot inport EDVR modules!!!
Cannot import STFAN modules!!!
Using checkpoint loaded from ./checkpoints/dbn_ckpt.pth for testing.
Traceback (most recent call last):
  File "inference_image.py", line 81, in <module>
```
I tried printing the dimensions of the image and it came out to be torch.Size([1, 1, 3, 385, 1504]), and self.num_frames is 5 in my case. I don't know how to resolve the issue. Please help out.
What settings do you recommend for a steady camera (like on tripods) in order to deblur moving objects?
Are there plans to add a pre-trained model for ESTRNN? The ones provided on the official ESTRNN GitHub produce artifacts in my tests so far. I would be interested to know if there are plans to add models trained on REDS or GoPro.
Thank you so much for sharing this! Forgive me, because I'm not super technical, but I was hoping you could walk me through how to use the Colab notebook to test my own image sequence. When I "run all" in Colab it shows a single output comparison jpeg. Does it save the entire deblurred image sequence anywhere? All I can see are the input images when I search the folder structure. How would I deblur an entire image sequence and save the resulting images? Thanks in advance!