Getting cls_logits NaN or Inf during training #1684
Hi @AceMcAwesome77, the error message you're encountering, "cls_logits is NaN or Inf.", is telling you that at some point during training the cls_logits tensor (the output of the classification head) contained a Not a Number (NaN) or Infinity (Inf) value. That usually points to numerical instability somewhere in the forward or backward pass, such as exploding gradients or a learning rate that is too high. Hope it helps, thanks.
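To localize where the NaN first appears, PyTorch's built-in anomaly detection is a useful debugging aid. Below is a minimal, self-contained sketch; the model, optimizer, loss, and data are stand-ins for the real detector and loader, not code from this repo:

```python
import torch
import torch.nn as nn

# Enable anomaly detection: backward() will raise at the first op that
# produces NaN/Inf and print a traceback pointing at the responsible
# forward op. It slows training noticeably, so use it only to debug.
torch.autograd.set_detect_anomaly(True)

model = nn.Linear(8, 2)  # stand-in for the real detection network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    x = torch.randn(4, 8)                # stand-in input batch
    y = torch.randint(0, 2, (4,))        # stand-in targets
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()  # raises with a pinpointed traceback if NaN/Inf appears
    optimizer.step()
```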
I am training this RetinaNet 3D detection model with mostly the same parameters as the example in this repo, except with batch_size = 1 in the config, because many image volumes are smaller than the training patch size. During training, I get this error at random, several epochs in:
```
Traceback of TorchScript, original code (most recent call last):
  File "/home/mycomputer/.local/lib/python3.10/site-packages/monai/apps/detection/networks/retinanet_network.py", line 130, in forward
    if torch.isnan(cls_logits).any() or torch.isinf(cls_logits).any():
        if torch.is_grad_enabled():
            raise ValueError("cls_logits is NaN or Inf.")
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        else:
            warnings.warn("cls_logits is NaN or Inf.")
builtins.ValueError: cls_logits is NaN or Inf.
```
On the last few training attempts, this failed at epoch 6 on the first two runs and at epoch 12 on the third. So it can make it through all the training data without failing on any particular case. Does anyone know what could be causing this? If it's exploding gradients, is there something built into MONAI to clip them and prevent the training from crashing? Thanks!
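As for built-in clipping: gradient clipping is available in core PyTorch via torch.nn.utils.clip_grad_norm_, applied between backward() and the optimizer step; whether the repo's example training script wires it in by default is something to verify. A minimal sketch under the same stand-in assumptions as above (the max_norm value of 1.0 is an arbitrary illustration, not a recommended setting):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)  # stand-in for the real detection network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Rescale all gradients so their global L2 norm is at most max_norm.
    # Returns the pre-clip norm, which is worth logging to spot spikes.
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```

Logging the returned grad_norm is informative either way: a norm that jumps by orders of magnitude a few steps before the crash is strong evidence of exploding gradients, while a norm that stays flat right up to the failure suggests the instability comes from a particular input or loss edge case rather than gradient growth.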