I've noticed that in training some tensors are of the float16 datatype, whereas in validation I only see float32. Is that in line with what you see? Is this intentional? I haven't found the part of the code that causes the float16 conversion; if there is such a conversion, could you please point me to where it is in the code?
Hi @sanjayss34, we are using PyTorch's automatic mixed precision (AMP) mode, which was introduced in PyTorch 1.6 and originates from NVIDIA's APEX project. I recommend reading up on it here: https://pytorch.org/docs/stable/amp.html
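
For reference, here is a minimal sketch of how AMP typically shows up in a training loop (the model, data, and hyperparameters below are hypothetical, not taken from this repository). It illustrates why you see float16 during training but float32 during validation: ops run in half precision only inside the `autocast` context, which is usually applied to the forward pass of the training loop and not to evaluation.

```python
import torch
from torch.cuda.amp import autocast, GradScaler

# Hypothetical model and data, for illustration only (requires a CUDA device).
model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = GradScaler()

inputs = torch.randn(32, 128, device="cuda")
targets = torch.randint(0, 10, (32,), device="cuda")

# Training: inside autocast, eligible ops run in float16.
model.train()
optimizer.zero_grad()
with autocast():
    outputs = model(inputs)          # outputs.dtype is torch.float16 here
    loss = loss_fn(outputs, targets)
scaler.scale(loss).backward()        # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)
scaler.update()

# Validation: outside autocast, everything stays in float32.
model.eval()
with torch.no_grad():
    outputs = model(inputs)          # outputs.dtype is torch.float32 here
```

So if the validation path does not wrap its forward pass in `autocast`, no conversion happens there, and all tensors remain float32.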