# Working with Multi-Dimensional Variables and the `ndarray` Argument to Query Values (#213)
Hi there. I don't fully understand what is being asked for here exactly. Also, you might look into using the `ndarray` argument of `value`.
This might just be an interactive-discovery or documentation-enhancement suggestion. Taking the documentation's hovercraft example: if the whole model construction and optimization is done inside a function scope, it becomes less clear how to access the optimized variable values contained in the optimized model:

```julia
using InfiniteOpt, Ipopt
xw = [1 4 6 1; 1 3 0 1] # positions
tw = [0, 25, 50, 60]; # times
function infopt()
    m = InfiniteModel(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0))
    @infinite_parameter(m, t in [0, 60], num_supports = 61)
    @variables(m, begin
        # state variables
        x[1:2], Infinite(t)
        v[1:2], Infinite(t)
        # control variables
        u[1:2], Infinite(t), (start = 0)
    end)
    @objective(m, Min, ∫(u[1]^2 + u[2]^2, t))
    @constraint(m, [i = 1:2], v[i](0) == 0)
    @constraint(m, [i = 1:2], ∂(x[i], t) == v[i])
    @constraint(m, [i = 1:2], ∂(v[i], t) == u[i])
    @constraint(m, [i = 1:2, j = eachindex(tw)], x[i](tw[j]) == xw[i, j])
    optimize!(m)
    return m
end
model = infopt()
```

Exploring the object with tab completion in the REPL helps to discover what is available. In the case of multidimensional variables, it would be useful to get the components directly as an array:

```julia
x_array = hcat(value.(model[:x])...) |> permutedims
```

So the question is whether adding a query like `value(model, x; ndarray = true)` would make sense.
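In the meantime, something close to that signature can be emulated on the user side. Below is a minimal sketch of a hypothetical helper (`queryvalues` is not part of InfiniteOpt's API), assuming the standard `model[:name]` symbol lookup shown above:

```julia
using InfiniteOpt

# Hypothetical helper (not part of InfiniteOpt's API): look up a registered
# variable container by its symbol and broadcast `value` over its components,
# forwarding keyword arguments such as `ndarray`.
queryvalues(m::InfiniteModel, name::Symbol; kwargs...) = value.(m[name]; kwargs...)

# Usage with the solved hovercraft model returned by infopt():
# x_vals = queryvalues(model, :x)   # Vector of per-component value vectors
```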
Thanks for the further explanation. I think this corresponds to a needed documentation improvement, since this is not how the `value` queries and the `ndarray` argument work. Below I provide some background and then explain why this is the case. This is all more fully explained in the variable, transcription, and results sections of the User Guide.

### Variable Creation
When you create variables with `@variable`, they are built, added to the model, and registered in its `obj_dict`. All of these variables are then used in the infinite model to represent the problem in its infinite (i.e., continuous) form. For example, the following are equivalent:

```julia
using InfiniteOpt
m = InfiniteModel()
@infinite_parameter(m, t in [0, 10])
@variable(m, x[1:2], Infinite(t))
```

and

```julia
using InfiniteOpt
m = InfiniteModel()
@infinite_parameter(m, t in [0, 10])
x = [add_variable(m, build_variable(error, VariableInfo(false, NaN, false, NaN, false, NaN, false, NaN, false, false), Infinite(t)), name = "x[$i]") for i in 1:2]
m.obj_dict[:x] = x
```

See https://pulsipher.github.io/InfiniteOpt.jl/stable/guide/variable/#Variable-Definition-Methodology for more info.

### Building the Optimizer Model

This is explained with examples in https://pulsipher.github.io/InfiniteOpt.jl/stable/guide/transcribe/.

### Infinite Variables, their Infinite Parameters, and their Supports

```julia
using InfiniteOpt
model = InfiniteModel()
@infinite_parameter(model, t in [0, 1], supports = [0, 0.5, 1])
@infinite_parameter(model, x in [-1, 1], supports = [-1, 1])
@variable(model, y, Infinite(t, x))
@variable(model, q, Infinite(t))
build_optimizer_model!(model)
```

Let's look up what discretized JuMP variables were created for each infinite variable and how they correspond to the supports:

```julia
julia> transcription_variable(y) # gives a vector of JuMP variables
6-element Vector{VariableRef}:
y(support: 1)
y(support: 2)
y(support: 3)
y(support: 4)
y(support: 5)
y(support: 6)
julia> supports(y) # lookup the support of each discretized variable in the form `(t, x)`
6-element Vector{Tuple}:
(0.0, -1.0)
(0.5, -1.0)
(1.0, -1.0)
(0.0, 1.0)
(0.5, 1.0)
(1.0, 1.0)
```

Hence, we create a discrete variable of `y` for each unique combination of the supports of `t` and `x`. Setting `ndarray = true` rearranges these into an array with one axis per infinite parameter:

```julia
julia> transcription_variable(y, ndarray = true)
3×2 Matrix{VariableRef}:
y(support: 1) y(support: 4)
y(support: 2) y(support: 5)
y(support: 3) y(support: 6)
julia> supports(y, ndarray = true)
3×2 Matrix{Tuple}:
(0.0, -1.0) (0.0, 1.0)
(0.5, -1.0) (0.5, 1.0)
(1.0, -1.0) (1.0, 1.0)
julia> transcription_variable(q)
3-element Vector{VariableRef}:
q(support: 1)
q(support: 2)
q(support: 3)
julia> transcription_variable(q, ndarray = true) # we get the same thing since it only depends on 1 infinite parameter
3-element Vector{VariableRef}:
q(support: 1)
q(support: 2)
q(support: 3)
```

Hence, the first dimension of the array corresponds to the first infinite parameter `t` and the second dimension to the second infinite parameter `x`.

### Querying Results

Once the model is solved, calling:

```julia
value(y, ndarray = true)
```

is equivalent to:

```julia
value(transcription_variable(y, ndarray = true))
```

All queries are intended to be carried out scalarwise over variable collections, following JuMP's syntax. Hence, for collections of multivariate variables we should use vectorized queries like `value.(y)`:

```julia
using InfiniteOpt, Ipopt
model = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(model, t in [0, 1], supports = [0, 0.5, 1])
@infinite_parameter(model, x in [-1, 1], supports = [-1, 1])
@variable(model, y[1:2] >= 0, Infinite(t, x))
@objective(model, Min, integral(integral(y[1] + y[2], t), x))
optimize!(model)
```

We should query scalarwise over `y`:

```julia
julia> value.(y) # each element is a vector of values corresponding to the raw discrete variables
2-element Vector{Vector{Float64}}:
[2.3614781834601273e-11, -4.9881927448616094e-9, 2.3614781834601273e-11, 2.3614781834601273e-11, -4.9881927448616094e-9, 2.3614781834601273e-11]
[2.3614781834601273e-11, -4.9881927448616094e-9, 2.3614781834601273e-11, 2.3614781834601273e-11, -4.9881927448616094e-9, 2.3614781834601273e-11]
julia> supports.(y)
2-element Vector{Vector{Tuple}}:
[(0.0, -1.0), (0.5, -1.0), (1.0, -1.0), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
[(0.0, -1.0), (0.5, -1.0), (1.0, -1.0), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
julia> value.(y, ndarray = true)
2-element Vector{Matrix{Float64}}:
[2.3614781834601273e-11 2.3614781834601273e-11; -4.9881927448616094e-9 -4.9881927448616094e-9; 2.3614781834601273e-11 2.3614781834601273e-11]
[2.3614781834601273e-11 2.3614781834601273e-11; -4.9881927448616094e-9 -4.9881927448616094e-9; 2.3614781834601273e-11 2.3614781834601273e-11]
julia> supports.(y, ndarray = true)
2-element Vector{Matrix{Tuple}}:
[(0.0, -1.0) (0.0, 1.0); (0.5, -1.0) (0.5, 1.0); (1.0, -1.0) (1.0, 1.0)]
[(0.0, -1.0) (0.0, 1.0); (0.5, -1.0) (0.5, 1.0); (1.0, -1.0) (1.0, 1.0)]
```

Hence, each element of the broadcasted result corresponds to one component of `y` and contains that component's `ndarray` of values. If a single combined array is desired, we can reshape:

```julia
julia> myreshape(a) = reshape(reduce(hcat, a), :, size(first(a))...)
julia> myreshape(value.(y, ndarray = true))
2×3×2 Array{Float64, 3}:
[:, :, 1] =
2.36148e-11 2.36148e-11 -4.98819e-9
-4.98819e-9 2.36148e-11 2.36148e-11
[:, :, 2] =
2.36148e-11 2.36148e-11 -4.98819e-9
-4.98819e-9  2.36148e-11  2.36148e-11
```

How we go about doing this reshape depends on the variable container type and on how many infinite parameters we have. Since it is difficult to make an efficient catch-all method for arbitrary infinite-dimensional optimization problems, we leave that to the user.
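To make the axis ordering concrete, the per-parameter support vectors can serve as axis labels for the `ndarray` results. A small sketch, assuming the solved two-parameter model above (the variable names are just illustrative):

```julia
# Sketch (assumes the solved model above): each axis of an ndarray result
# follows one infinite parameter, in declaration order, so the parameters'
# support vectors label the axes directly.
t_supps = supports(t)                  # [0.0, 0.5, 1.0] -> labels axis 1
x_supps = supports(x)                  # [-1.0, 1.0]     -> labels axis 2
y1_vals = value(y[1], ndarray = true)  # 3×2 Matrix: y1_vals[i, j] is y[1] at (t_supps[i], x_supps[j])
```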
I see, thank you for the detailed explanation!
Sure thing. Are there any suggestions you would have on how we can improve the documentation to make this clearer?
I think the documentation is very complete, but an explicit FAQ section with guides to common patterns would probably help users get domain-specific tasks done without having to process all the internal details. Something short and succinct like this, perhaps.
Thank you for the suggestion, I have created #217 to track the progress of adding an FAQ section.
This issue was moved to a discussion.
Original issue description:

In the documentation it is shown how to use `value(x; ndarray = true)` to retrieve a nice array with the results of a variable, but this does not work if that variable is retrieved from the optimized model. This is useful when the optimization is modularized so that only the optimized model object is passed around. Could this be extended to work for something like `value(model, x; ndarray = true)`? Currently I access the results with the dictionary interface, e.g. `x_array = hcat(value.(model[:x])...) |> permutedims`, to get an array with the horizontal axis as the time.
This is biased towards dynamic optimization; I am not sure about other classes of problems.
The error message of `value(model[:x])` could suggest this new access, apart from suggesting JuMP's broadcast syntax.
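For reference, a minimal sketch of the broadcast workaround that the current error message already points to, assuming the hovercraft model from the discussion above has been solved:

```julia
# `model[:x]` returns the Vector container of infinite variables, so the
# scalar call `value(model[:x])` has no matching method. Broadcasting
# queries each component instead, per JuMP's vectorized-query convention:
x_vals = value.(model[:x])              # Vector of per-component value vectors
x_array = permutedims(hcat(x_vals...))  # rows = components, columns = time supports
```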