Gradients in optimizer's param_groups are undefined when using mps device #1206
Hi @sebffischer, the problem seems to be that you change the device after passing the parameters to the optimizer. When you move the parameters to another device, a copy of the tensors is apparently created inside the optimizer (cuda has the same problem, I tested it) and the computation graph is severed. In PyTorch this works, so it is most likely a bug. For now, change the device before passing the parameters to the optimizer:
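For comparison, here is a minimal PyTorch sketch of why the order does not matter there: `nn.Module$to()`-style conversion in PyTorch rewrites each parameter's data *in place*, so the optimizer keeps pointing at the same `Parameter` objects, whereas calling `.to()` on a bare tensor returns a new tensor. (The dtype conversion below is a stand-in for a device move like `.to("mps")`, so the sketch also runs on CPU-only machines.)

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# nn.Module.to() converts parameters in place: the Parameter objects
# tracked by the optimizer are still the same objects after the move.
model.to(torch.float64)  # stand-in for .to("cuda") / .to("mps")
print(opt.param_groups[0]["params"][0] is model.weight)  # True

# A bare tensor's .to() returns a *new* tensor when anything changes;
# this is the behavior that would sever the optimizer's link.
t = torch.zeros(3)
print(t.to(torch.float64) is t)  # False
```

If the R `$to()` method behaves like the bare-tensor case rather than the in-place module case, that would explain the severed graph.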
Thanks for looking into this! But tensors have reference semantics, so I think calling `$to()` should not create a copy.
Yes, exactly! This is why I think that a copy is unintentionally created (by the `$to()` method). Check:
The tensors are on different devices, so they are actually different objects...
Created on 2024-11-05 with reprex v2.1.1
session info: