
Working with Multi-Dimensional Variables and the ndarray Argument to Query Values #213

Closed
IlyaOrson opened this issue Feb 1, 2022 · 7 comments

IlyaOrson commented Feb 1, 2022

The documentation shows how to use value(x; ndarray=true) to retrieve a nicely shaped array of a variable's results, but this does not work when the variable is retrieved from the optimized model. That would be useful when the optimization is modularized so that only the optimized model object is passed around. Could this be extended to support something like value(model, x; ndarray=true)?

Currently I access the results through the dictionary interface to get an array whose horizontal axis is time:

hcat(value.(model[:x])...) |> permutedims

This is biased towards dynamic optimization; I am not sure about other classes of problems.

The error message of value(model[:x]) could suggest this new access pattern in addition to suggesting JuMP's broadcast syntax.

@IlyaOrson added the enhancement (New feature or request) label Feb 1, 2022
@pulsipher (Collaborator) commented:

Hi there. I don't fully understand exactly what is being asked for here. The ndarray argument is useful for infinite variables with multiple infinite parameter dependencies, but dynamic optimization variables depend on only one infinite parameter (time), so a vector of values will be returned regardless of what is specified for ndarray. Could you please provide a complete minimal working example that demonstrates what works now and shows the syntax you would like to have?

Also, you might look into using the ndarray argument directly with the transcription_variable function (link) which will give you an array of the underlying JuMP variables stored in the optimizer (transcription) model. Then you can do any normal JuMP query that you like. Looking at the docs for TranscriptionOpt.make_ndarray (link) might also be helpful since that is the underlying function for enabling the ndarray argument across all the different query functions.


IlyaOrson commented Feb 2, 2022

This might just be an interactive discovery or documentation enhancement suggestion. Taking the documentation's hovercraft example, if the whole model construction and optimization is done inside a function scope, it becomes less clear how to access the optimized variable values contained in the optimized model:

using InfiniteOpt, Ipopt

xw = [1 4 6 1; 1 3 0 1] # positions
tw = [0, 25, 50, 60];    # times

function infopt()
    m = InfiniteModel(optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0))
    @infinite_parameter(m, t in [0, 60], num_supports = 61)
    @variables(m, begin
        # state variables
        x[1:2], Infinite(t)
        v[1:2], Infinite(t)
        # control variables
        u[1:2], Infinite(t), (start = 0)
    end)
    @objective(m, Min, ∫(u[1]^2 + u[2]^2, t))
    @constraint(m, [i = 1:2], v[i](0) == 0)
    @constraint(m, [i = 1:2], ∂(x[i], t) == v[i])
    @constraint(m, [i = 1:2], ∂(v[i], t) == u[i])
    @constraint(m, [i = 1:2, j = eachindex(tw)], x[i](tw[j]) == xw[i, j])
    optimize!(m)
    return m
end

model = infopt()

Exploring the object with tab completion in the REPL (model.<tab>) suggests that the variables are stored in model.obj_dict, so something like value.(model[:x]) works like the documentation's value.(x) examples.

In the case of multidimensional variables, it would be useful to get the components directly as an array:

x_array = hcat(value.(model[:x])...) |> permutedims

So the question is whether adding a query like value(model, x; ndarray=true), equivalent to the last command, would be useful, or even something like values(model; ndarray=true) that returns a dictionary of arrays for all variables.


pulsipher commented Feb 3, 2022

Thanks for the further explanation. I think this corresponds to a needed documentation improvement, since this is not how the ndarray argument is intended to be used. In short, ndarray exists to help with queries on an individual (scalar) infinite variable that depends on multiple infinite parameters (as occurs in PDE-constrained optimization). It is not for querying multidimensional infinite variables. Following the JuMP convention, all variables should be queried scalarwise, using vectorized function calls where appropriate. This means that ndarray will not be helpful for problems with only one infinite parameter (e.g., dynamic optimization problems).

Below I provide some background and then explain why this is the case. This is all more fully explained in the variable, transcription, and results sections of the User Guide.

Variable Creation
First, it is probably helpful to review how variables are added/stored in InfiniteModels. When we add variables via @variable, a few things occur:

  • An appropriate data object for each scalar variable is created and stored in the InfiniteModel (for instance InfiniteVariables are stored in model.infinite_vars)
  • A variable reference (GeneralVariableRef) is created for each scalar variable that points to where it is stored in the model
  • A variable reference or a container (e.g., a vector) of variable references is/are returned as appropriate
  • A Julia variable is created to store the variable reference or container of references (does not occur for anonymous variables)
  • The Julia variable is stored in the object dictionary (model.obj_dict) to enable the model[:my_var] lookup syntax (does not occur for anonymous variables)

All of these variables are then used in the infinite model to represent the problem in its infinite (i.e., continuous) form.

For example, the following macro-based definition:

using InfiniteOpt
m = InfiniteModel()
@infinite_parameter(m, t in [0, 10])
@variable(m, x[1:2], Infinite(t))

is equivalent to the lower-level:

using InfiniteOpt
m = InfiniteModel()
@infinite_parameter(m, t in [0, 10])
x = [add_variable(m, build_variable(error, VariableInfo(false, NaN, false, NaN, false, NaN, false, NaN, false, false), Infinite(t)), name = "x[$i]") for i in 1:2]
m.obj_dict[:x] = x

See https://pulsipher.github.io/InfiniteOpt.jl/stable/guide/variable/#Variable-Definition-Methodology for more info.

Building the Optimizer Model
When we are ready to solve a model, we first need to transform it into a finite JuMP model that the optimizer can handle. By default, we build a TranscriptionModel when optimize! is invoked (this is done via build_optimizer_model!). This entails creating JuMP variables that discretize ("transcribe") each infinite variable in accordance with the support values of the infinite parameters the infinite variable depends on. The mapping between the InfiniteModel variables and JuMP.Model variables is stored in the TranscriptionModel (hence it is a JuMP model + variable mappings). This model is stored in model.optimizer_model.

This is explained with examples in https://pulsipher.github.io/InfiniteOpt.jl/stable/guide/transcribe/
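As a rough sketch of inspecting that stored model (this assumes the accessor name optimizer_model alongside the build_optimizer_model! function named above; check the linked guide for the exact API of your version):

```julia
using InfiniteOpt

m = InfiniteModel()
@infinite_parameter(m, t in [0, 1], supports = [0, 0.5, 1])
@variable(m, q, Infinite(t))

# Build the finite transcription without solving
build_optimizer_model!(m)

# Retrieve the underlying JuMP model that the optimizer actually sees;
# q is transcribed into one JuMP variable per support of t
jump_model = optimizer_model(m)
```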

Infinite Variables, their Infinite Parameters, and their Supports
Now that we know infinite variables are created and that they are used to create a discretized JuMP model (a TranscriptionModel), let's consider the implications of infinite parameters and their supports. Consider the following simple model:

using InfiniteOpt
model = InfiniteModel()
@infinite_parameter(model, t in [0, 1], supports = [0, 0.5, 1])
@infinite_parameter(model, x in [-1, 1], supports = [-1, 1])
@variable(model, y, Infinite(t, x))
@variable(model, q, Infinite(t))
build_optimizer_model!(model)

Let's lookup what discretized JuMP variables were created for each infinite variable and how they correspond to the supports:

julia> transcription_variable(y) # gives a vector of JuMP variables
6-element Vector{VariableRef}:
 y(support: 1)
 y(support: 2)
 y(support: 3)
 y(support: 4)
 y(support: 5)
 y(support: 6)

julia> supports(y) # lookup the support of each discretized variable in the form `(t, x)`
6-element Vector{Tuple}:
 (0.0, -1.0)
 (0.5, -1.0)
 (1.0, -1.0)
 (0.0, 1.0)
 (0.5, 1.0)
 (1.0, 1.0)

Hence, we create a discretized variable of y at each unique combination of support points. However, this may not be a convenient format for accessing the variables at particular times t or positions x. To alleviate this, we provide the ndarray keyword:

julia> transcription_variable(y, ndarray = true)
3×2 Matrix{VariableRef}:
 y(support: 1)  y(support: 4)
 y(support: 2)  y(support: 5)
 y(support: 3)  y(support: 6)

julia> supports(y, ndarray = true)
3×2 Matrix{Tuple}:
 (0.0, -1.0)  (0.0, 1.0)
 (0.5, -1.0)  (0.5, 1.0)
 (1.0, -1.0)  (1.0, 1.0)

julia> transcription_variable(q)
3-element Vector{VariableRef}:
 q(support: 1)
 q(support: 2)
 q(support: 3)

julia> transcription_variable(q, ndarray = true) # we get the same thing since it only depends on 1 infinite parameter
3-element Vector{VariableRef}:
 q(support: 1)
 q(support: 2)
 q(support: 3)

Hence, the first dimension of the array corresponds to the first infinite parameter t and the second to the second infinite parameter x.
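Concretely, for the matrices above, entry [i, j] pairs the i-th support of t with the j-th support of x. A small sketch against the supports matrix shown above:

```julia
S = supports(y, ndarray = true)  # 3×2 Matrix{Tuple} from the model above
S[2, 1]  # (0.5, -1.0): second support of t, first support of x
S[3, 2]  # (1.0, 1.0): last support of t, second support of x
```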

Querying Results
All the result queries like value simply forward to the discretized JuMP variables. Hence, the following are equivalent:

value(y, ndarray = true)
value(transcription_variable(y, ndarray = true))

All queries are intended to be carried out scalarwise over variable collections following JuMP's syntax. Hence, for collections of multivariate variables we should use vectorized queries like value.(my_vars). For example, consider the simple multivariate example:

using InfiniteOpt, Ipopt
model = InfiniteModel(Ipopt.Optimizer)
@infinite_parameter(model, t in [0, 1], supports = [0, 0.5, 1])
@infinite_parameter(model, x in [-1, 1], supports = [-1, 1])
@variable(model, y[1:2] >= 0, Infinite(t, x))
@objective(model, Min, integral(integral(y[1] + y[2], t), x))
optimize!(model)

We should query scalarwise over y and see how it changes with and without ndarray:

julia> value.(y) # each element is a vector of values corresponding to the raw discrete variables
2-element Vector{Vector{Float64}}:
 [2.3614781834601273e-11, -4.9881927448616094e-9, 2.3614781834601273e-11, 2.3614781834601273e-11, -4.9881927448616094e-9, 2.3614781834601273e-11]
 [2.3614781834601273e-11, -4.9881927448616094e-9, 2.3614781834601273e-11, 2.3614781834601273e-11, -4.9881927448616094e-9, 2.3614781834601273e-11]

julia> supports.(y)
2-element Vector{Vector{Tuple}}:
 [(0.0, -1.0), (0.5, -1.0), (1.0, -1.0), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]
 [(0.0, -1.0), (0.5, -1.0), (1.0, -1.0), (0.0, 1.0), (0.5, 1.0), (1.0, 1.0)]

julia> value.(y, ndarray = true)
2-element Vector{Matrix{Float64}}:
 [2.3614781834601273e-11 2.3614781834601273e-11; -4.9881927448616094e-9 -4.9881927448616094e-9; 2.3614781834601273e-11 2.3614781834601273e-11]
 [2.3614781834601273e-11 2.3614781834601273e-11; -4.9881927448616094e-9 -4.9881927448616094e-9; 2.3614781834601273e-11 2.3614781834601273e-11]

julia> supports.(y, ndarray = true)
2-element Vector{Matrix{Tuple}}:
 [(0.0, -1.0) (0.0, 1.0); (0.5, -1.0) (0.5, 1.0); (1.0, -1.0) (1.0, 1.0)]
 [(0.0, -1.0) (0.0, 1.0); (0.5, -1.0) (0.5, 1.0); (1.0, -1.0) (1.0, 1.0)]

Hence, each ndarray manipulation is performed scalarwise over each infinite variable. If we want a single multi-dimensional array containing both the variable's container indices and its infinite parameters, then we'll need to reshape:

julia> myreshape(a) = reshape(reduce(hcat, a), :, size(first(a))...)

julia> myreshape(value.(y, ndarray = true))
2×3×2 Array{Float64, 3}:
[:, :, 1] =
  2.36148e-11  2.36148e-11  -4.98819e-9
 -4.98819e-9   2.36148e-11   2.36148e-11

[:, :, 2] =
  2.36148e-11  2.36148e-11  -4.98819e-9
 -4.98819e-9   2.36148e-11   2.36148e-11

How to go about this reshape depends on the variable container type and the number of infinite parameters. Since it is difficult to write an efficient catch-all method for arbitrary infinite-dimensional optimization problems, we leave that to the user.
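For the single-infinite-parameter dynamic case in the original question, no ndarray is needed; the vector-of-vectors from value.(model[:x]) can be stacked directly. A sketch with stand-in data in place of the solver results (reduce(hcat, ...) is the splat-free equivalent of hcat(...)):

```julia
# Stand-in for value.(model[:x]): one vector of values per state component
vals = [[0.0, 1.0, 2.0], [0.0, 0.5, 1.0]]

# 2×3 matrix: rows are state components, columns are time supports
x_array = permutedims(reduce(hcat, vals))
```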

@pulsipher pulsipher changed the title [FEATURE] Access model results directly with value Working with Multi-Dimensional Variables and the ndarray Argument to Query Values Feb 4, 2022
@pulsipher pulsipher added documentation Improvements or additions to documentation question Further information is requested and removed enhancement New feature or request labels Feb 4, 2022
@IlyaOrson (Author) commented:

I see, thank you for the detailed explanation!

@pulsipher (Collaborator) commented:

Sure thing! Are there any suggestions you would have on how we can improve the documentation to make this clearer?


IlyaOrson commented Feb 8, 2022

I think the documentation is very complete, but an explicit FAQ section with guides to common patterns would probably help users get domain-specific tasks done without having to process all the internal details. Something short and succinct like this, perhaps.

@pulsipher (Collaborator) commented:

Thank you for the suggestion, I have created #217 to track the progress of adding an FAQ section.

@infiniteopt infiniteopt locked and limited conversation to collaborators Feb 27, 2022
@pulsipher pulsipher converted this issue into discussion #235 Feb 27, 2022
