Describe the bug 🐞
The docstring for `OptimizationState` states that it contains the current objective and current gradient, but it does not specify whether these are those of the `OptimizationFunction` or, in the case of `MaxSense`, of the sign-flipped version. NLopt/Optimization seem to use the former, while Optim uses the latter.

Expected behavior
It would be helpful to have a consistent convention for these signs, since callbacks may rely on these state values and their signs.
Minimal Reproducible Example 👇
This example assumes that `grad` and `objective` should store the gradient and value of the function wrapped by `OptimizationFunction`, and tests for this. It fails only for Optim optimizers with `sense = MaxSense`.
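The original reproducer is not included above; the following is a minimal sketch of the kind of check described, assuming the current Optimization.jl callback signature `callback(state, loss)` and the documented `OptimizationState` fields `u`, `objective`, and `grad`. The objective function, parameters, solver choice, and tolerances are illustrative, not taken from the original report.

```julia
using Optimization, OptimizationOptimJL, ForwardDiff

# Objective to be maximized; the problem is declared with sense = MaxSense,
# so the wrapper minimizes the sign-flipped objective internally.
f(u, p) = -(u[1] - p[1])^2 - (u[2] - p[2])^2
u0 = [0.0, 0.0]
p = [1.0, 2.0]

optf = OptimizationFunction(f, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, u0, p; sense = Optimization.MaxSense)

# Callback that checks whether state.objective / state.grad report the
# user-supplied f or its sign-flipped internal counterpart.
function check_state(state, loss)
    expected_obj = f(state.u, p)
    if !isapprox(state.objective, expected_obj; atol = 1e-8)
        @warn "state.objective does not match f(state.u, p)" state.objective expected_obj
    end
    if state.grad !== nothing
        expected_grad = ForwardDiff.gradient(u -> f(u, p), state.u)
        if !isapprox(state.grad, expected_grad; atol = 1e-8)
            @warn "state.grad does not match the gradient of f at state.u" state.grad expected_grad
        end
    end
    return false  # never halt the optimization
end

sol = solve(prob, LBFGS(); callback = check_state)
```

Under the convention the report describes for NLopt/Optimization, the callback above should stay silent; with an Optim solver and `MaxSense`, it would warn because the state carries the sign-flipped values.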
Error & Stacktrace ⚠️
Environment (please complete the following information):
- `using Pkg; Pkg.status()`
- `using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)`
- `versioninfo()`