From 4569b1dc216a688bbe778edfcac495eb07a41edd Mon Sep 17 00:00:00 2001
From: Vaibhav Dixit
Date: Thu, 28 Jul 2022 16:26:52 +0530
Subject: [PATCH 1/5] Add constraints tutorial

---
 docs/Project.toml                 |  5 ++
 docs/src/tutorials/constraints.md | 90 ++++++++++++++++++++++++++++++-
 2 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/docs/Project.toml b/docs/Project.toml
index e8d3fb23f..5f8507b41 100644
--- a/docs/Project.toml
+++ b/docs/Project.toml
@@ -1,12 +1,17 @@
 [deps]
+AmplNLWriter = "7c4d4715-977e-5154-bfe0-e096adeac482"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 FiniteDiff = "6a86dc24-6348-571c-b903-95158fe2bd41"
 ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
+Ipopt = "b6b21f68-93f8-5de0-b562-5493be1d77c9"
+Ipopt_jll = "9cc047cb-c261-5740-88fc-0cf96f7bdcc7"
 IterTools = "c8e1da08-722c-5040-9ed9-7db0dc04731e"
 ModelingToolkit = "961ee093-0014-501f-94e3-6117800e7a78"
+Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
 OptimizationBBO = "3e6eede4-6085-4f62-9a71-46d9bc1eb92b"
 OptimizationCMAEvolutionStrategy = "bd407f91-200f-4536-9381-e4ba712f53f8"
 OptimizationEvolutionary = "cb963754-43f6-435e-8d4b-99009ff27753"
+OptimizationMOI = "fd9f6733-72f4-499f-8506-86b2bdd0dea1"
 OptimizationNLopt = "4e6fcdb7-1186-4e1f-a706-475e75c168bb"
 OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
 OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
diff --git a/docs/src/tutorials/constraints.md b/docs/src/tutorials/constraints.md
index 705c55b6d..6ef08f95e 100644
--- a/docs/src/tutorials/constraints.md
+++ b/docs/src/tutorials/constraints.md
@@ -1 +1,89 @@
-# [Using Equality and Inequality Constraints](@id constraints)
\ No newline at end of file
+# [Using Equality and Inequality Constraints](@id constraints)
+
+Multiple optimization packages available through MathOptInterface, as well as Optim's `IPNewton` solver, can handle nonlinear constraints.
+Optimization.jl provides a simple interface to define the constraints as a Julia function and then specify the bounds for the output
+in `OptimizationFunction` to indicate if it's an equality or inequality constraint.
+
+Let's define the Rosenbrock function as our objective and consider the inequalities below as our constraints.
+
+$$
+
+x_1^2 + x_2^2 \leq 0.8
+
+-1.0 \leq x_1 * x_2 \leq 2.0
+$$
+
+```@example constraints
+using Optimization, OptimizationMOI, OptimizationOptimJL, ForwardDiff, ModelingToolkit
+
+rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
+x0 = zeros(2)
+_p = [1.0, 1.0]
+```
+
+Next, we define the sum of squares and the product of the optimization variables as our constraint functions.
+
+```@example constraints
+cons(res, x, p) = (res .= [x[1]^2+x[2]^2, x[1]*x[2]])
+```
+
+We'll use the `IPNewton` solver from Optim to solve the problem.
+
+```@example constraints
+optprob = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(), cons = cons)
+prob = OptimizationProblem(optprob, x0, _p, lcons = [-Inf, -1.0], ucons = [0.8, 2.0])
+sol = solve(prob, IPNewton())
+```
+
+Let's check that the constraints are satisfied and the objective is lower than at initial values to be sure.
+
+```@example constraints
+res = zeros(2)
+cons(res, sol.u, _p)
+prob.f(sol.u, _p)
+```
+
+We can also use the Ipopt library with the OptimizationMOI package.
+
+```@example constraints
+sol = solve(prob, Ipopt.Optimizer())
+```
+
+```@example constraints
+res = zeros(2)
+cons(res, sol.u, _p)
+println(res)
+prob.f(sol.u, _p)
+```
+
+We can also use ModelingToolkit as our AD backend and generate symbolic derivatives and expression graph for the objective and constraints.
+
+Let's modify the bounds to use the constraint function as an equality constraint. The constraints now become:
+
+
+$$
+
+x_1^2 + x_2^2 = 1.0
+
+\leq x_1 * x_2 = 0.5
+$$
+
+```@example constraints
+optprob = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit(), cons = cons)
+prob = OptimizationProblem(optprob, x0, _p, lcons = [1.0, 0.5], ucons = [1.0, 0.5])
+```
+
+Below, the AmplNLWriter.jl package is used with the Ipopt library to solve the problem.
+
+```@example constraints
+using AmplNLWriter, Ipopt_jll
+sol = solve(prob, AmplNLWriter.Optimizer(Ipopt_jll.amplexe))
+```
+
+The constraints evaluate to 1.0 and 0.5, respectively, as expected.
+
+```@example constraints
+res = zeros(2)
+cons(res, sol.u, _p)
+println(res)
+```

From 98cf8db45acc0808d5e421aadbd857dac4fbe990 Mon Sep 17 00:00:00 2001
From: Vaibhav Dixit
Date: Thu, 28 Jul 2022 16:47:42 +0530
Subject: [PATCH 2/5] Add to pages

---
 docs/pages.jl | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/pages.jl b/docs/pages.jl
index 1dd6e3ed9..aef6f6bf2 100644
--- a/docs/pages.jl
+++ b/docs/pages.jl
@@ -5,7 +5,8 @@ pages = [
         "tutorials/intro.md",
         "tutorials/rosenbrock.md",
         "tutorials/minibatch.md",
-        "tutorials/symbolic.md"
+        "tutorials/symbolic.md",
+        "tutorials/constraints.md",
     ],

     "API" => [
@@ -30,4 +31,4 @@ pages = [
         "Optimisers.jl" => "optimization_packages/optimisers.md",
         "QuadDIRECT.jl" => "optimization_packages/quaddirect.md"
     ],
-    ]
\ No newline at end of file
+    ]

From d082f57639c7da9b0d9d3c9f1c93be15a67b8cc4 Mon Sep 17 00:00:00 2001
From: Vaibhav Dixit
Date: Thu, 28 Jul 2022 17:00:49 +0530
Subject: [PATCH 3/5] align

---
 docs/src/tutorials/constraints.md | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/docs/src/tutorials/constraints.md b/docs/src/tutorials/constraints.md
index 6ef08f95e..ea8be2714 100644
--- a/docs/src/tutorials/constraints.md
+++ b/docs/src/tutorials/constraints.md
@@ -6,12 +6,14 @@ in `OptimizationFunction` to indicate if it's an equality or inequality constrai
 
 Let's define the Rosenbrock function as our objective and consider the inequalities below as our constraints.
 
-$$
+```math
+\begin{aligned}
 
-x_1^2 + x_2^2 \leq 0.8
+x_1^2 + x_2^2 \leq 0.8 \\
 
 -1.0 \leq x_1 * x_2 \leq 2.0
-$$
+\end{aligned}
+```
 
 ```@example constraints
 using Optimization, OptimizationMOI, OptimizationOptimJL, ForwardDiff, ModelingToolkit
@@ -61,12 +63,14 @@ We can also use ModelingToolkit as our AD backend and generate symbolic derivati
 Let's modify the bounds to use the constraint function as an equality constraint. The constraints now become:
 
 
-$$
+```
+\begin{align}
 
-x_1^2 + x_2^2 = 1.0
+x_1^2 + x_2^2 = 1.0 \\
 
 \leq x_1 * x_2 = 0.5
-$$
+\end{align}
+```
 
 ```@example constraints
 optprob = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit(), cons = cons)

From a9558fdf1a50ddeaadcca4913a5ca19b7efc99cf Mon Sep 17 00:00:00 2001
From: Vaibhav Dixit
Date: Thu, 28 Jul 2022 17:42:10 +0530
Subject: [PATCH 4/5] hide and nothing return for whole block stdout

---
 docs/src/tutorials/constraints.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/src/tutorials/constraints.md b/docs/src/tutorials/constraints.md
index ea8be2714..03c6620a2 100644
--- a/docs/src/tutorials/constraints.md
+++ b/docs/src/tutorials/constraints.md
@@ -43,6 +43,7 @@ Let's check that the constraints are satisfied and the objective is lower than a
 res = zeros(2)
 cons(res, sol.u, _p)
 prob.f(sol.u, _p)
+nothing # hide
 ```
 
 We can also use the Ipopt library with the OptimizationMOI package.
@@ -56,6 +57,7 @@
 res = zeros(2)
 cons(res, sol.u, _p)
 println(res)
 prob.f(sol.u, _p)
+nothing # hide
 ```
@@ -63,7 +65,7 @@ We can also use ModelingToolkit as our AD backend and generate symbolic derivati
 Let's modify the bounds to use the constraint function as an equality constraint. The constraints now become:
 
 
-```
+```math
 \begin{align}
 
 x_1^2 + x_2^2 = 1.0 \\

From f4c150bec0ab8dfe21c3c05b7f3ae8dd60839579 Mon Sep 17 00:00:00 2001
From: Vaibhav Dixit
Date: Thu, 28 Jul 2022 17:59:55 +0530
Subject: [PATCH 5/5] Separate blocks for output visibility

---
 docs/src/tutorials/constraints.md | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/docs/src/tutorials/constraints.md b/docs/src/tutorials/constraints.md
index 03c6620a2..b810b74cc 100644
--- a/docs/src/tutorials/constraints.md
+++ b/docs/src/tutorials/constraints.md
@@ -42,8 +42,11 @@ Let's check that the constraints are satisfied and the objective is lower than a
 ```@example constraints
 res = zeros(2)
 cons(res, sol.u, _p)
+res
+```
+
+```@example constraints
 prob.f(sol.u, _p)
-nothing # hide
 ```
 
 We can also use the Ipopt library with the OptimizationMOI package.
@@ -55,23 +58,24 @@ sol = solve(prob, Ipopt.Optimizer())
 ```@example constraints
 res = zeros(2)
 cons(res, sol.u, _p)
-println(res)
+res
+```
+
+```@example constraints
 prob.f(sol.u, _p)
-nothing # hide
 ```
 
 We can also use ModelingToolkit as our AD backend and generate symbolic derivatives and expression graph for the objective and constraints.
 
 Let's modify the bounds to use the constraint function as an equality constraint. The constraints now become:
 
 
 ```math
-\begin{align}
+\begin{aligned}
 
 x_1^2 + x_2^2 = 1.0 \\
-
-\leq x_1 * x_2 = 0.5
-\end{align}
+x_1 * x_2 = 0.5
+\end{aligned}
 ```
 
 ```@example constraints
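The Julia tutorial added by this patch series can't be executed here, but as a rough cross-check of the inequality-constrained problem from [PATCH 1/5], here is an analogous formulation in Python with SciPy's SLSQP solver. This is purely illustrative and not part of the patch: SciPy, the SLSQP choice, and the starting point are assumptions standing in for the tutorial's `IPNewton` and Ipopt runs.

```python
# Illustrative cross-check (assumption: mirrors, but is not part of, the patch).
# Objective: Rosenbrock with p = [1.0, 1.0], as in the tutorial:
#   f(x) = (p1 - x1)^2 + p2 * (x2 - x1^2)^2
# Constraints mirror the tutorial's bounds:
#   x1^2 + x2^2 <= 0.8   and   -1.0 <= x1 * x2 <= 2.0
import numpy as np
from scipy.optimize import minimize

p = [1.0, 1.0]

def rosenbrock(x):
    return (p[0] - x[0]) ** 2 + p[1] * (x[1] - x[0] ** 2) ** 2

# SLSQP expects inequality constraints in the form g(x) >= 0.
constraints = [
    {"type": "ineq", "fun": lambda x: 0.8 - (x[0] ** 2 + x[1] ** 2)},
    {"type": "ineq", "fun": lambda x: x[0] * x[1] + 1.0},
    {"type": "ineq", "fun": lambda x: 2.0 - x[0] * x[1]},
]

res = minimize(rosenbrock, x0=np.zeros(2), method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)
```

Since the unconstrained minimizer `(1, 1)` has squared norm 2 > 0.8, the norm constraint should be active at the solution, and the objective at the solution should be below its value at the zero starting point.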