Contextual Loss (CX) and Contextual Bilateral Loss (CoBi).

S-aiueo32, updated 2022-11-22 04:47:49

Contextual Loss

PyTorch implementation of Contextual Loss (CX) and Contextual Bilateral Loss (CoBi).

Introduction

There are many image transformation tasks whose spatially aligned data is hard to capture in the wild. Pixel-to-pixel or global loss functions cannot be applied directly to such unaligned data. CX is a loss function designed to overcome this problem. Its key idea is to interpret images as sets of feature points that have no spatial coordinates. If you want to know more about CX, please refer to the original paper, the original repository, and the examples in the ./doc directory.
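The core computation can be sketched in a few lines of PyTorch: compute all-pairs cosine distances between the two feature sets, convert relative distances into affinities with an exponential kernel, and take each feature's best match. This is a simplified illustration of the cosine variant, not the package's exact code; see the paper and the `functional` module for the real definitions.

```python
import torch

def contextual_loss_sketch(x, y, band_width=0.1, eps=1e-5):
    """Minimal CX sketch: treat each image as a bag of feature vectors."""
    n, c, h, w = x.shape
    x = x.reshape(n, c, h * w)  # (N, C, H*W)
    y = y.reshape(n, c, h * w)

    # Cosine distance between every pair of feature points (centered on y's mean).
    y_mu = y.mean(dim=2, keepdim=True)
    x_norm = torch.nn.functional.normalize(x - y_mu, dim=1)
    y_norm = torch.nn.functional.normalize(y - y_mu, dim=1)
    dist = 1.0 - torch.bmm(x_norm.transpose(1, 2), y_norm)  # (N, H*W, H*W)

    # Relative distances, then an exponential affinity normalized per row.
    dist_rel = dist / (dist.min(dim=2, keepdim=True)[0] + eps)
    w_aff = torch.exp((1.0 - dist_rel) / band_width)
    cx = w_aff / w_aff.sum(dim=2, keepdim=True)

    # For each target feature, keep its best-matching source feature.
    cx_max = cx.max(dim=1)[0].mean(dim=1)
    return torch.mean(-torch.log(cx_max + eps))

img1 = torch.rand(1, 3, 8, 8)
img2 = torch.rand(1, 3, 8, 8)
loss = contextual_loss_sketch(img1, img2)
```

Because the loss is built from pairwise similarities only, it is invariant to where a feature appears in the image, which is exactly why it tolerates unaligned data.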

Requirements

  • Python 3.7+
  • torch & torchvision

Installation

pip install git+https://github.com/S-aiueo32/contextual_loss_pytorch.git

Usage

You can use it like other PyTorch APIs:

```python
import torch

import contextual_loss as cl
import contextual_loss.functional as F

# input features
img1 = torch.rand(1, 3, 96, 96)
img2 = torch.rand(1, 3, 96, 96)

# contextual loss
criterion = cl.ContextualLoss()
loss = criterion(img1, img2)

# functional call
loss = F.contextual_loss(img1, img2, band_width=0.1, loss_type='cosine')

# comparing with VGG features
# if use_vgg is set, a VGG model is created inside the criterion
criterion = cl.ContextualLoss(use_vgg=True, vgg_layer='relu5_4')
loss = criterion(img1, img2)
```
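CoBi extends CX by mixing the feature-space distance with a spatial distance between pixel positions, so that matched features are also encouraged to come from nearby locations. A self-contained sketch of that combination is below; the names `cx_feat`, `cx_sp`, and `weight_sp` mirror a traceback from this repository's issues, but the code itself is an illustrative assumption, not the package's exact implementation.

```python
import torch

def combine_with_spatial(cx_feat, cx_sp, weight_sp=0.1):
    """Blend a feature distance map with a spatial one (CoBi-style mix).
    cx_feat / cx_sp: (N, H*W, H*W) distance matrices."""
    return (1.0 - weight_sp) * cx_feat + weight_sp * cx_sp

def spatial_distance(h, w):
    """Pairwise L2 distance between pixel grid coordinates, scaled to [0, 1]."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    grid = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()  # (H*W, 2)
    d = torch.cdist(grid, grid)                                      # (H*W, H*W)
    return (d / d.max()).unsqueeze(0)                                # (1, H*W, H*W)

cx_feat = torch.rand(1, 16, 16)  # stand-in feature distances for a 4x4 image
cx_sp = spatial_distance(4, 4)
cx = combine_with_spatial(cx_feat, cx_sp)
```

Note that both terms are full `(H*W) x (H*W)` matrices, which is why CoBi is noticeably more memory-hungry than a pixel-wise loss.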

Reference

Papers

  1. Mechrez, Roey, Itamar Talmi, and Lihi Zelnik-Manor. "The contextual loss for image transformation with non-aligned data." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
  2. Mechrez, Roey, et al. "Maintaining natural image statistics with the contextual loss." Asian Conference on Computer Vision. Springer, Cham, 2018.

Implementations

Thanks to the owners of the following awesome implementations:

  • Original repository: https://github.com/roimehrez/contextualLoss
  • Simple PyTorch implementation: https://gist.github.com/yunjey/3105146c736f9c1055463c33b4c989da
  • CoBi: https://github.com/ceciliavision/zoom-learn-zoom

Issues

⬆️ Bump pillow from 6.2.1 to 9.3.0

opened on 2022-11-22 04:47:46 by dependabot[bot]

Bumps pillow from 6.2.1 to 9.3.0.

Release notes

Sourced from pillow's releases.

9.3.0

https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

Changes

... (truncated)

Changelog

Sourced from pillow's changelog.

9.3.0 (2022-10-29)

  • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

  • Initialize libtiff buffer when saving #6699 [radarhere]

  • Inline fname2char to fix memory leak #6329 [nulano]

  • Fix memory leaks related to text features #6330 [nulano]

  • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

  • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

  • Fixed set_variation_by_name offset #6445 [radarhere]

  • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

  • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

  • Added ExifTags enums #6630 [radarhere]

  • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

  • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

  • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

  • Added GPS TIFF tag info #6661 [radarhere]

  • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

  • Do not attempt normalization if mode is already normal #6644 [radarhere]

... (truncated)

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
  • `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
  • `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
  • `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/S-aiueo32/contextual_loss_pytorch/network/alerts).

⬆️ Bump py from 1.8.0 to 1.10.0

opened on 2022-10-18 19:27:49 by dependabot[bot]

Bumps py from 1.8.0 to 1.10.0.

Changelog

Sourced from py's changelog.

1.10.0 (2020-12-12)

  • Fix a regular expression DoS vulnerability in the py.path.svnwc SVN blame functionality (CVE-2020-29651)
  • Update vendored apipkg: 1.4 => 1.5
  • Update vendored iniconfig: 1.0.0 => 1.1.1

1.9.0 (2020-06-24)

  • Add type annotation stubs for the following modules:

    • py.error
    • py.iniconfig
    • py.path (not including SVN paths)
    • py.io
    • py.xml

    There are no plans to type other modules at this time.

    The type annotations are provided in external .pyi files, not inline in the code, and may therefore contain small errors or omissions. If you use py in conjunction with a type checker, and encounter any type errors you believe should be accepted, please report it in an issue.

1.8.2 (2020-06-15)

  • On Windows, py.path.locals which differ only in case now have the same Python hash value. Previously, such paths were considered equal but had different hashes, which is not allowed and breaks the assumptions made by dicts, sets and other users of hashes.

1.8.1 (2019-12-27)

  • Handle FileNotFoundError when trying to import pathlib in path.common on Python 3.4 (#207).

  • py.path.local.samefile now works correctly in Python 3 on Windows when dealing with symlinks.

Commits
  • e5ff378 Update CHANGELOG for 1.10.0
  • 94cf44f Update vendored libs
  • 5e8ded5 testing: comment out an assert which fails on Python 3.9 for now
  • afdffcc Rename HOWTORELEASE.rst to RELEASING.rst
  • 2de53a6 Merge pull request #266 from nicoddemus/gh-actions
  • fa1b32e Merge pull request #264 from hugovk/patch-2
  • 887d6b8 Skip test_samefile_symlink on pypy3 on Windows
  • e94e670 Fix test_comments() in test_source
  • fef9a32 Adapt test
  • 4a694b0 Add GitHub Actions badge to README
  • Additional commits viewable in compare view



NaN when using cl.ContextualBilateralLoss(use_vgg=False, loss_type='cosine').cuda()

opened on 2022-05-30 07:20:12 by laulampaul

When I use this code:

```python
def calculate_cobiloss(img, gt):
    bb = img.shape[0]
    loss = 0.
    cobiloss = cl.ContextualBilateralLoss(use_vgg=False, loss_type='cosine').cuda()
    for i in range(bb):
        imgpatches = sample_patches(img[i], 10, 5)
        gtpatches = sample_patches(gt[i], 10, 5)
        c, patch_size, patch_size, n_patches = imgpatches.shape
        imgpatches = imgpatches.reshape(1, c * patch_size * patch_size, n_patches, 1)
        gtpatches = gtpatches.reshape(1, c * patch_size * patch_size, n_patches, 1)
        # pdb.set_trace()
        loss = loss + cobiloss(imgpatches, gtpatches)
    return loss / bb
```

After some iterations, I hit the NaN problem. How can I debug it? Thanks.
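One generic way to track a NaN down (a sketch, not specific to this package): enable PyTorch's anomaly detection, which reports the backward op that produced the NaN, and fail fast on non-finite inputs before they reach the loss. `assert_finite` below is a hypothetical helper, not part of this library.

```python
import torch

# Reports which autograd op produced a NaN during the backward pass.
torch.autograd.set_detect_anomaly(True)

def assert_finite(name, t):
    # Hypothetical helper: stop immediately instead of letting NaN
    # propagate silently through the training loop.
    if not torch.isfinite(t).all():
        raise ValueError(f"{name} contains NaN or Inf")

imgpatches = torch.rand(1, 300, 25, 1)  # stand-in for sample_patches output
assert_finite("imgpatches", imgpatches)
```

A common culprit with `loss_type='cosine'` is a zero-norm (constant) patch, which makes the cosine normalization divide by zero; filtering out constant patches or adding a small epsilon before normalizing may help.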

⬆️ Bump ipython from 7.10.0 to 7.16.3

opened on 2022-01-21 20:14:32 by dependabot[bot]

Bumps ipython from 7.10.0 to 7.16.3.


Cobi loss out of CUDA memory

opened on 2021-03-06 10:50:33 by harryin212

Hi, when I try to test the CoBi loss on my SRCNN model, I found it ran out of memory. My image size is 128x128 and batch size is 1, tested on a GTX 1080 GPU. Can you tell me how to avoid the OOM? Here is my error:

```
Traceback (most recent call last):
  File "D:\SRCNN_Pytorch_1.0-master_new1\train.py", line 88, in <module>
    loss = criterion(preds, labels)
  File "C:\Users\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\anaconda3\envs\pytorch\lib\site-packages\contextual_loss\modules\contextual_bilateral.py", line 69, in forward
    return F.contextual_bilateral_loss(x, y, self.band_width)
  File "C:\Users\anaconda3\envs\pytorch\lib\site-packages\contextual_loss\functional.py", line 108, in contextual_bilateral_loss
    cx_combine = (1. - weight_sp) * cx_feat + weight_sp * cx_sp
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.01 GiB already allocated; 50.02 MiB free; 6.06 GiB reserved in total by PyTorch)
```

CUDA out of memory

opened on 2020-09-13 07:56:30 by RuyuXu2019

How can I solve this? Does it really need so much memory?

```
Traceback (most recent call last):
  File "main.py", line 33, in <module>
    main()
  File "main.py", line 27, in main
    t.train()
  File "/home/anaconda3/workFile/DCTtoSpatial/YCbCrCxLoss/YCbCrCxLoss/src/trainer.py", line 58, in train
    loss_reconstruct = self.loss(sr, hr)
  File "/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/anaconda3/workFile/DCTtoSpatial/YCbCrCxLoss/YCbCrCxLoss/src/loss/__init__.py", line 78, in forward
    loss = l['function']
  File "/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/anaconda3/workFile/DCTtoSpatial/YCbCrCxLoss/YCbCrCxLoss/src/loss/modules/contextual.py", line 66, in forward
    return F.contextual_loss(x, y, self.band_width)
  File "/home/anaconda3/workFile/DCTtoSpatial/YCbCrCxLoss/YCbCrCxLoss/src/loss/functional.py", line 43, in contextual_loss
    dist_raw = compute_cosine_distance(x, y)
  File "/home/anaconda3/workFile/DCTtoSpatial/YCbCrCxLoss/YCbCrCxLoss/src/loss/functional.py", line 150, in compute_cosine_distance
    dist = 1 - cosine_sim
  File "/home/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 325, in __rsub__
    return _C._VariableFunctions.rsub(self, other)
RuntimeError: CUDA out of memory. Tried to allocate 5.06 GiB (GPU 1; 10.76 GiB total capacity; 5.43 GiB already allocated; 4.33 GiB free; 138.94 MiB cached)
```
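The OOM reports in both issues are expected behavior rather than bugs: CX and CoBi build an all-pairs distance matrix over feature points, so memory grows with the fourth power of the image side. A quick back-of-the-envelope estimate (float32, one matrix only, ignoring intermediates and VGG activations):

```python
def dist_matrix_bytes(h, w, batch=1, dtype_bytes=4):
    """Size of one (H*W) x (H*W) pairwise distance tensor in bytes."""
    n_points = h * w
    return batch * n_points * n_points * dtype_bytes

# A single 128x128 input needs 1 GiB for one distance matrix alone,
# and CoBi keeps several such tensors (cx_feat, cx_sp, cx_combine) alive.
print(dist_matrix_bytes(128, 128) / 2**30)   # 1.0 GiB
print(dist_matrix_bytes(64, 64) / 2**20)     # 64.0 MiB
```

The usual workaround is to compute the loss on random crops (e.g. 64x64) or on downsampled feature maps, which cuts the matrix size by a factor of 16 per halving of the side length.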