Can no longer pass extra variables from loss function to callback #835
@ChrisRackauckas, I wouldn't think so, but could this be related to the new DiffEqBase issue: SciML/DiffEqBase.jl#1088?
Hey, the support for it has been removed; can you point to where in the docs it says this is still supported? I thought I had removed it, but I might have missed some places. In the example above, `extra` is not used at all; can you say what you use the extra return for otherwise?
@Vaibhavdixit02 Thanks for the reply. I use the extra variable(s) to send the current batch of training data to the callback function for plotting. It is very useful to me, and I'm not sure how I would plot the current solution against the training data if I could not send data from the loss function. Can you tell me where in the code base it has been removed? Perhaps I can edit my dev version to put it back in. Or is there some other way to pass data from the loss function to the callback?

More specifically, I use a dataloader to send an index of data rows to the loss function. In the loss function, I create data matrices based on those indexes. I would need either the row indexes or the constructed data in order to plot the current solution against the training data in the callback function. Is there some other way to accomplish this? I imagine that many users would want to pass data from the loss function to the callback, for the very reason specified in the documentation. Here is what the documentation says:
With the new changes, perhaps the recommended way to pass data from the loss function to the callback is via a global variable. That seems to work fine. Is there a better way to accomplish this?
Yeah, you'd have to use global variables one way or another for this now. There are a few options; for example, you could access the index by checking the `state` passed to the callback.
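A minimal sketch of that kind of workaround, assuming Zygote as the AD backend; the `LAST_BATCH` name and the `ignore_derivatives` guard are illustrative additions, and `smodel` refers to the model wrapper from the example later in this thread:

```julia
using ChainRulesCore: ignore_derivatives

# Module-level container the loss writes to and the callback reads from.
const LAST_BATCH = Ref{Any}(nothing)

function loss(ps, batch)
    # Stash the current batch for the callback; wrapped so AD ignores the mutation.
    ignore_derivatives() do
        LAST_BATCH[] = batch
    end
    ypred = [smodel([batch[1][i]], ps)[1] for i in eachindex(batch[1])]
    return sum(abs2, ypred .- batch[2])
end

function callback(state, l)
    batch = LAST_BATCH[]
    # plot the current predictions against `batch` here
    return false
end
```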
Also, the data handling has changed, so make sure to look at the new tutorial on that: https://docs.sciml.ai/Optimization/stable/tutorials/minibatch/. Basically, instead of passing your dataloader to the `solve` call, you now pass it to the `OptimizationProblem`, as in the sketch below.
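For reference, a minimal self-contained sketch of that pattern (the scalar toy model is purely illustrative; the interface usage mirrors the full example later in this thread):

```julia
using Optimization, OptimizationOptimisers, MLUtils, Zygote

x = rand(25)
y = sin.(x)
data = MLUtils.DataLoader((x, y), batchsize = 5)

# Toy loss over a single parameter, just to show the data flow.
loss(p, batch) = sum(abs2, p[1] .* batch[1] .- batch[2])

callback(state, l) = (println("iter = ", state.iter, ", loss = ", l); false)

optf = OptimizationFunction(loss, AutoZygote())
# The DataLoader is passed to the problem itself rather than to `solve`;
# the number of passes over the data is controlled by the `epochs` keyword.
prob = OptimizationProblem(optf, [0.0], data)
res = Optimization.solve(prob, Optimisers.Adam(), callback = callback, epochs = 2)
```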
@Vaibhavdixit02, the minibatch documentation imports ncycle but then never uses it. Does ncycle need to be called, or is ncycle imported for some other reason?
No, it's not needed; it should be removed from there.
I think the iteration count reported in `state.iter` does not advance the way I would expect; it repeats values (compare it with my own `Iter` counter in the output below). The output is:
Iteration= 1, 1, Loss= 2.197e+00
Iteration= 2, 2, Loss= 9.193e-01
Iteration= 2, 3, Loss= 9.193e-01
Iteration= 1, 4, Loss= 2.091e+00
Iteration= 2, 5, Loss= 8.688e-01
Iteration= 2, 6, Loss= 8.688e-01
The code is:

module Test

using Optimization, OptimizationOptimisers, DiffEqFlux.Lux, Zygote, MLUtils, Random,
      ComponentArrays
using Format

x = rand(25)
y = sin.(x)
data = MLUtils.DataLoader((x, y), batchsize = 5)

# Define the neural network
model = Chain(Dense(1, 32, tanh), Dense(32, 1))
ps, st = Lux.setup(Random.default_rng(), model)
ps_ca = ComponentArray(ps)
smodel = StatefulLuxLayer{true}(model, nothing, st)

Iter = 0

function callback(state, l)
    global Iter
    Iter += 1
    printfmtln("Iteration= {}, {}, Loss= {:.3e}", state.iter, Iter, l)
    return false
end

function loss(ps, data)
    ypred = [smodel([data[1][i]], ps)[1] for i in eachindex(data[1])]
    return sum(abs2, ypred .- data[2])
end

optf = OptimizationFunction(loss, AutoZygote())
prob = OptimizationProblem(optf, ps_ca, data)
res = Optimization.solve(prob, Optimisers.Adam(), callback = callback, epochs = 2)

end # module
@Vaibhavdixit02 The docs below need to be updated too: https://github.com/SciML/SciMLBase.jl/blob/master/src/scimlfunctions.jl#L1813-L1815. The docs for the callback API also need to be updated.
The minibatch tutorial is still misleading. The function
Yeah, sorry about the mess; #838 fixes all of these things. @John-Boik, the iteration count was corrected but not yet released; you should have it available in the next release (in a couple of hours).
Hey there, the doc is still misleading on this topic; this line should be corrected: https://github.com/SciML/SciMLBase.jl/blob/0f8ec1f501e916e5f8c39d94ddc02757d574cc9c/src/solve.jl#L55
Thanks for catching that, @vboussange; it should be fixed by SciML/SciMLBase.jl#908.
May I ask what the motivation was behind this major breaking change?
AD doesn't handle a variable number of return values well (and even a fixed number of multiple returns is a problem). This hindered our ability to do things like primal and gradient evaluation together, etc.
Ok, thanks for the answer. That's a shame, because we lose quite a bit of flexibility here!
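As a concrete illustration of that constraint (a minimal sketch, assuming Zygote): a loss that returns extra values alongside the scalar objective cannot be differentiated directly, so the extras end up discarded or recomputed.

```julia
using Zygote

# A loss that also returns an extra value alongside the scalar objective.
loss_with_extras(p) = (sum(abs2, p), 2 .* p)

# Zygote.gradient requires a scalar return, so differentiating this directly errors:
# Zygote.gradient(loss_with_extras, [1.0, 2.0])

# The usual workaround discards the extra return inside the differentiated closure,
# which is exactly the kind of special-casing the interface change avoids:
g, = Zygote.gradient(p -> first(loss_with_extras(p)), [1.0, 2.0])
```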
On a new install of Julia, my previously working neural SDE code no longer allows extra variables to be passed from the loss function to the callback function. The documentation says that this should still be possible. The following is test code from the optimization test library. It runs if unmodified, but fails if I try to pass extra variables from loss function to callback. Is there a new way to accomplish the passing of extra variables?
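A minimal sketch of the kind of modification that fails (the extra `ypred` return value and the extra callback argument are illustrative; `smodel` and `data` follow the test code shown earlier in the thread):

```julia
# Modified loss: return the predictions alongside the scalar loss ...
function loss(ps, data)
    ypred = [smodel([data[1][i]], ps)[1] for i in eachindex(data[1])]
    return sum(abs2, ypred .- data[2]), ypred
end

# ... and expect them as an extra argument in the callback, e.g. for plotting.
function callback(state, l, ypred)
    # plot `ypred` against the current batch here
    return false
end
```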
Error & Stacktrace ⚠️
With the modified code, the error is:
Environment (please complete the following information):
using Pkg; Pkg.status()
using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)
Version: