For many optimization problems, the only things that ever change are constant inputs and parameters. Everything else (constraints, goals, etc.) stays the same. We could gain a significant speedup for certain large problems (or low-latency problems) if we could reuse the discretized problem as, e.g., an MXFunction, and then call it with the new set of constant inputs and parameters.
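To illustrate the build-once/solve-many pattern (this is not CasADi's actual API — the names are made up and it uses plain numpy), here is a minimal sketch: everything that depends only on the problem structure is computed up front, and each subsequent call with new parameter values only pays for a matrix-vector product.

```python
import numpy as np

def build_solver(A):
    # "Discretize" once: for the min-norm problem min 0.5 x'x s.t. A x = b,
    # the solution is x = A' (A A')^{-1} b. Everything except b is constant,
    # so we precompute the constant factor a single time.
    M = A.T @ np.linalg.inv(A @ A.T)  # cached: depends only on the structure
    def solve(b):
        # Per-call work is just a mat-vec with the new "parameter" b.
        return M @ b
    return solve

solver = build_solver(np.array([[1.0, 1.0]]))
x1 = solver(np.array([2.0]))  # min-norm solution of x0 + x1 = 2 -> [1, 1]
x2 = solver(np.array([4.0]))  # re-solve with a new right-hand side -> [2, 2]
```

The same idea should apply to the collocated problem: the expensive symbolic/discretization work happens once in `build_solver`, and only the cheap evaluation happens per call.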
I am not sure whether this is at all possible, desirable, or worth the effort relative to the time saved by not re-running the collocation.
For large (linear) problems we might even want to cache part of the Jacobian, but I'm not sure whether it makes sense to express it as a function of the constant inputs/parameters.
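As a sketch of the Jacobian-caching idea (again with hypothetical names, in plain numpy): for linear constraints g(x; p) = A x - b(p), the Jacobian dg/dx = A is independent of both x and the parameters, so it can be assembled once and handed back on every solver iteration.

```python
import numpy as np

def make_constraint_jacobian(A):
    # For linear constraints g(x; p) = A @ x - b(p), the Jacobian dg/dx = A
    # depends on neither x nor the parameters p, so assemble it once and
    # return the cached copy on every subsequent call.
    cached = {"jac": None}
    def jacobian(x, p):
        if cached["jac"] is None:
            cached["jac"] = A.copy()  # expensive assembly would happen here
        return cached["jac"]
    return jacobian

A = np.array([[1.0, 2.0], [0.0, 1.0]])
jac = make_constraint_jacobian(A)
J1 = jac(np.zeros(2), p=3.0)
J2 = jac(np.ones(2), p=7.0)  # same cached object: no re-assembly
```

Whether this pays off depends on how expensive Jacobian assembly is relative to the rest of the solve; any nonlinear blocks of the Jacobian would still need to be re-evaluated per call.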
In GitLab by @vreeken on Aug 17, 2019, 18:29