First of all, thank you for this amazing library!
For my own research, I want to test implicit integrators combined with the adjoint sensitivity method to train a neural network. While the implicit Euler method provided by the library can be used to integrate ODEs (when no gradients are required), it is incompatible with the adjoint sensitivity method, which requires gradient information. A code snippet to reproduce the error is provided below.
import torch
import torch.nn as nn
import torch.utils.data as data
import pytorch_lightning as pl
from torchdyn.core import NeuralODE
from torchdyn.datasets import *
from torchdyn import *

device = torch.device("cpu")  # all of this works on GPU as well :)

# Toy dataset and integration span, following the torchdyn quickstart
# (assumed here; the original snippet left X, yn and t_span undefined).
d = ToyDataset()
X, yn = d.generate(n_samples=512, noise=1e-1, dataset_type='moons')
t_span = torch.linspace(0, 1, 5)

X_train = torch.Tensor(X).to(device)
y_train = torch.LongTensor(yn.long()).to(device)
train = data.TensorDataset(X_train, y_train)
trainloader = data.DataLoader(train, batch_size=len(X), shuffle=True)

class Learner(pl.LightningModule):
    def __init__(self, t_span: torch.Tensor, model: nn.Module):
        super().__init__()
        self.model, self.t_span = model, t_span

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        t_eval, y_hat = self.model(x, self.t_span)
        y_hat = y_hat[-1]  # select last point of solution trajectory
        loss = nn.CrossEntropyLoss()(y_hat, y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters(), lr=0.01)

    def train_dataloader(self):
        return trainloader

# vector field of the neural ODE
f = nn.Sequential(
    nn.Linear(2, 16),
    nn.Tanh(),
    nn.Linear(16, 2)
)

model = NeuralODE(f, sensitivity='adjoint', solver='ieuler').to(device)
learn = Learner(t_span, model)
trainer = pl.Trainer(min_epochs=200, max_epochs=300)
trainer.fit(learn)
Executing the above code snippet raises the following RuntimeError:
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
Changing the sensitivity from adjoint to autograd prevents the RuntimeError, but the loss does not decrease during training. I also tried modifying the implicit Euler method by changing retain_graph from False to True, but that did not solve the issue either. I suspect this has something to do with the LBFGS optimizer the library uses to find roots for the implicit integrator, but I don't know how to fix it.
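For context, a single implicit Euler step amounts to a root-finding problem in the next state: find z such that z = x + h * f(t + h, z). Below is a rough, hypothetical sketch of how such an inner LBFGS solve is often structured; it is not torchdyn's actual implementation, and the name implicit_euler_step is made up for illustration.

import torch

def implicit_euler_step(f, x, t, h, iters=20):
    # Solve the residual z - x - h * f(t + h, z) = 0 for the next state z
    # by minimizing its squared norm with an inner LBFGS loop.
    z = x.clone().detach().requires_grad_(True)
    opt = torch.optim.LBFGS([z], max_iter=iters)

    def closure():
        opt.zero_grad()
        residual = (z - x - h * f(t + h, z)).pow(2).sum()
        residual.backward()  # inner backward; if f has trainable parameters, it touches them too
        return residual

    opt.step(closure)
    # The converged state is returned detached, so a later loss.backward()
    # has no path from the loss to f's parameters through this step.
    return z.detach()

# usage: a simple linear vector field, purely for illustration
f_demo = lambda t, z: -z
z_next = implicit_euler_step(f_demo, torch.randn(8, 2), t=0.0, h=0.1)

If the step is realized along these lines, the converged state carries no usable autograd history with respect to f's parameters, which would at least be consistent with the flat loss observed under autograd.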
Any help on this issue is much appreciated! Thanks in advance!
Providing an update here based on our private chat, in case someone else is interested: this should be done using the implicit function theorem (IFT) at the fixed point, implementing a custom backward for the implicit step.
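To make the suggestion above concrete, here is a minimal, hypothetical sketch of an IFT-based backward for one implicit Euler step z* = x + h * f(t + h, z*). The class name, the Picard inner solve, the iteration counts, and the assumption that h * df/dz is a contraction (so the Neumann iteration for the adjoint converges) are all illustrative choices, not torchdyn's API.

import torch
import torch.nn as nn

class ImplicitEulerStepIFT(torch.autograd.Function):
    # One implicit Euler step z* = x + h * f(t + h, z*) with an IFT-based backward.

    @staticmethod
    def forward(ctx, x, t, h, f, *params):
        with torch.no_grad():
            # Any root finder works here (Picard, Newton, LBFGS, ...);
            # its internal computation graph is never kept.
            z = x.clone()
            for _ in range(100):
                z = x + h * f(t + h, z)
        ctx.f, ctx.t, ctx.h = f, t, h
        ctx.save_for_backward(x, z, *params)
        return z

    @staticmethod
    def backward(ctx, grad_z):
        x, z, *params = ctx.saved_tensors
        f, t, h = ctx.f, ctx.t, ctx.h
        x = x.detach().requires_grad_(True)
        z = z.detach().requires_grad_(True)
        with torch.enable_grad():
            # Rebuild the step map g(z) = x + h * f(t + h, z) once, at the fixed point only.
            g = x + h * f(t + h, z)
        # Solve u = grad_z + u * dg/dz by Neumann iteration; this converges
        # when h * df/dz is a contraction (small enough step size).
        u = grad_z
        for _ in range(50):
            u_next = grad_z + torch.autograd.grad(g, z, u, retain_graph=True)[0]
            if (u_next - u).norm() < 1e-6:
                u = u_next
                break
            u = u_next
        # Vector-Jacobian products of g give gradients w.r.t. x and f's parameters.
        grad_x, *grad_params = torch.autograd.grad(g, (x, *params), u, allow_unused=True)
        # One gradient per forward input: x, t, h, f, *params
        return (grad_x, None, None, None, *grad_params)

# usage sketch: a tiny parametrized vector field
vf = nn.Linear(2, 2)
vfield = lambda t, z: vf(z)
x0 = torch.randn(8, 2, requires_grad=True)
z1 = ImplicitEulerStepIFT.apply(x0, 0.0, 0.05, vfield, *vf.parameters())
z1.sum().backward()  # gradients now flow to x0 and vf's parameters via the IFT backward

Plugging something like this into the implicit solvers would let sensitivity='adjoint' (or plain autograd) see a single differentiable step without ever backpropagating through the inner root-finding iterations.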