Hello there,
I am trying to fine-tune the model you shared on a new dataset, but I can't get it running due to the following error:
```
lib/python2.7/site-packages/torch/nn/functional.py:1749: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
Traceback (most recent call last):
  File "train.py", line 168, in <module>
    main()
  File "train.py", line 95, in main
    train(net, optimizer)
  File "train.py", line 136, in train
    loss.backward()
  File "/home/eloyroura/anaconda3/envs/BDRAR/lib/python2.7/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/eloyroura/anaconda3/envs/BDRAR/lib/python2.7/site-packages/torch/autograd/__init__.py", line 89, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```
Any hint as to what I am doing wrong?
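From what I understand, this error means that a tensor autograd saved for the backward pass was later overwritten by an in-place operation. As a sanity check I reproduced the same message with a toy snippet; this is purely illustrative and not taken from BDRAR's code:

```python
# Minimal, hypothetical example of the failure mode -- not BDRAR code.
import torch

x = torch.randn(4, 3, requires_grad=True)
y = torch.sigmoid(x)   # autograd saves the output y to compute sigmoid's gradient
y += 1.0               # in-place add overwrites the saved tensor
loss = y.sum()
loss.backward()        # RuntimeError: one of the variables needed for gradient
                       # computation has been modified by an inplace operation
```

The usual suspects seem to be `+=` / `*=` on tensors that are still needed by the graph, or modules built with `inplace=True` (e.g. `nn.ReLU(inplace=True)`); switching them to out-of-place versions removes the error in the toy case. If my PyTorch version supports it, I can also try wrapping the training step in `with torch.autograd.detect_anomaly():` to locate the exact operation, but I still don't see what would differ between your setup and mine.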