
Cobi loss out of CUDA memory #8

Open
harryin212 opened this issue Mar 6, 2021 · 1 comment

@harryin212

Hi, when I try to use the CoBi loss with my SRCNN model, it runs out of CUDA memory. My image size is 128×128 with a batch size of 1, running on a GTX 1080 GPU. Can you tell me how to avoid the OOM error? Here's the traceback:

Traceback (most recent call last):
  File "D:\SRCNN_Pytorch_1.0-master_new1\train.py", line 88, in <module>
    loss = criterion(preds, labels)
  File "C:\Users\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\anaconda3\envs\pytorch\lib\site-packages\contextual_loss\modules\contextual_bilateral.py", line 69, in forward
    return F.contextual_bilateral_loss(x, y, self.band_width)
  File "C:\Users\anaconda3\envs\pytorch\lib\site-packages\contextual_loss\functional.py", line 108, in contextual_bilateral_loss
    cx_combine = (1. - weight_sp) * cx_feat + weight_sp * cx_sp
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.01 GiB already allocated; 50.02 MiB free; 6.06 GiB reserved in total by PyTorch)
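A back-of-the-envelope estimate (this is a sketch; it assumes the loss builds a dense float32 similarity matrix over all spatial positions, which the reported allocation size suggests) matches the 1024 MiB allocation in the traceback:

```python
# Hypothetical sketch: if the loss builds an (H*W) x (H*W) float32
# similarity matrix per image, its size for a 128x128 input is:
hw = 128 * 128                   # 16384 spatial positions
matrix_bytes = hw * hw * 4       # float32 = 4 bytes per element
print(matrix_bytes / 2**20)      # 1024.0 MiB, the exact size PyTorch tried to allocate
```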
@Kelsey2018

It is because tensors that do not need gradient calculation are still tracked for the backward pass.

The following code works:

# in contextual_loss/functional.py, contextual_bilateral_loss()
with torch.no_grad():
    # combine feature and spatial similarities without recording them
    # in the autograd graph
    cx_combine = (1. - weight_sp) * cx_feat + weight_sp * cx_sp
    k_max_NC, _ = torch.max(cx_combine, dim=2, keepdim=True)
    cx = k_max_NC.mean(dim=1)
    cx_loss = torch.mean(-torch.log(cx + 1e-5))

    return cx_loss
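Note that disabling gradients for the loss also stops it from training the network. An alternative way to reduce peak memory while keeping gradients is to evaluate the loss on smaller patches and average, since the similarity matrices shrink quadratically with patch size. This is an untested sketch; `chunked_loss` and `patch` are names introduced here for illustration, and `criterion` is assumed to be your CoBi loss module:

```python
import torch

def chunked_loss(criterion, preds, labels, patch=64):
    # Split the images into non-overlapping patch x patch tiles and
    # average the per-tile losses. Each tile's similarity matrix is
    # (patch^2)^2 elements instead of (H*W)^2, cutting peak memory.
    losses = []
    for i in range(0, preds.size(2), patch):
        for j in range(0, preds.size(3), patch):
            p = preds[:, :, i:i + patch, j:j + patch]
            l = labels[:, :, i:i + patch, j:j + patch]
            losses.append(criterion(p, l))
    return torch.stack(losses).mean()
```

For a 128×128 input with `patch=64`, each of the four tiles needs a 4096×4096 matrix (64 MiB in float32) instead of one 16384×16384 matrix (1024 MiB). Averaging patch losses is an approximation, since matches across patch boundaries are lost.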
