Different results when using Jump wrapper and scs_solve #304

Closed
zhenweilin opened this issue Dec 18, 2024 · 2 comments

@zhenweilin

The test code is:

using JuMP, SCS, Random, SparseArrays

function test()
    rng = Random.MersenneTwister(1)
    m = 3
    n = 1
    u = rand(rng, m, n)
    w = rand(rng, m)
    btilde = rand(rng, n)
    ######## generate data ########
    b = zeros(m * n + m + n + 3m)
    b[1:n] .= btilde
    for i in 1:m
        # b[m * n + n + 3(i - 1) + 2] = -1.0
        b[m * n + n + m + 3(i - 1) + 2] = -1.0
    end
    u = vec(u')
    sparse_u = sparse(u)
    nnz_u = nnz(sparse_u)
    println(nnz_u)
    row_indices = zeros(Int, nnz_u)
    for i = 1:nnz_u
        row_indices[i] = (sparse_u.nzind[i] - 1) ÷ n + 1 + n
    end

    c = zeros(m*n + 2*m)
    @views c[m*n+1:2:m*n+2*m-1] .= -w
    row1 = 1:(m * n)
    row1 = row1 .+ n .+ m
    col1 = 1:(m * n)
    val1 = fill(1.0, m * n)
    rows = repeat(1:n, m)
    cols = repeat((0:(m-1)) * n, inner=n) .+ repeat(1:n, m)
    values = ones(n * m)
    rows = vcat([rows, row1]...)
    cols = vcat([cols, col1]...)
    val = vcat([values, val1]...)



    row2 = row_indices
    col2 = sparse_u.nzind
    val2 = -sparse_u.nzval
    row = vcat([rows, row2]...)
    col = vcat([cols, col2]...)
    val = vcat([val, val2]...)

    row3 = (n+1) : (n+m)
    row3 = row3
    col3 = m * n .+ 2 .* (1:m)  .- 1
    val3 = fill(1.0, m)

    row = vcat([row, row3]...)
    col = vcat([col, col3]...)
    val = vcat([val, val3]...)

    row4 = vcat([[3*i + 1, 3i+3] for i in 0:(m-1)]...) .+ (m + n)
    row4 = row4 .+ (n * m)
    col4 = Vector{Integer}(1:2m)
    col4 = col4 .+ (m * n)
    val4 = fill(1.0, 2m)
    row = vcat([row, row4]...)
    col = vcat([col, col4]...)
    val = vcat([val, val4]...)
    A = sparse(row, col, val, m + n + 3m + m *n, m*n + m + m)
    A = SparseMatrixCSC(A)
    mA, nA = size(A)
    ######### solve with scs #########

    model = Model(SCS.Optimizer)
    @variable(model, x[1:nA])
    @objective(model, Min, c' * x)
    @constraint(model, A[1:m+n, :] * x .== b[1:m+n]) # 1:m+n
    @constraint(model, A[m+n+1:m+n+m*n, :] * x .>= b[m+n+1:m+n+m*n]) # m+n+1:m+n+m*n
    @constraint(model, con[i=1:m], [A[m+n+m*n+3(i-1)+1, :]' * x - b[m+n+m*n+3(i-1)+1], A[m+n+m*n+3(i-1)+2, :]' * x - b[m+n+m*n+3(i-1)+2], A[m+n+m*n+3(i-1)+3, :]' * x - b[m+n+m*n+3(i-1)+3]] in MOI.ExponentialCone())
    # @constraint(model, con[i=1:m], [x[m*n + 2*i - 1], 1.0, x[m*n + 2*i]] in MOI.ExponentialCone())
    optimize!(model)
    println("objective_value: ", objective_value(model))
    println("x: ", value.(x))

    println("outer A.nzval: ", A.nzval)
    println("outer A.rowval: ", A.rowval)
    println("outer A.colptr: ", A.colptr)
    sol_res = SCS.scs_solve(
        SCS.DirectSolver,
        mA,
        nA,
        A,
        sparse(zeros(nA, nA)),
        -b,
        c,
        m + n, # z
        m * n, # l
        Float64[], # bu
        Float64[], # bl
        Vector{Integer}([]), # q
        Vector{Integer}([]), # s
        m,
        0,
        Float64[],
        zeros(nA),
        zeros(mA),
        zeros(mA)
    )

end

test()

The results from the JuMP API and from scs_solve are different. What is the reason?

------------------------------------------------------------------
	       SCS v3.2.7 - Splitting Conic Solver
	(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 9, constraints m: 16
cones: 	  z: primal zero / dual free vars: 4
	  l: linear vars: 3
	  e: exp vars: 9, dual exp vars: 0
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
	  alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
	  max_iters: 100000, normalize: 1, rho_x: 1.00e-06
	  acceleration_lookback: 10, acceleration_interval: 10
	  compiled with openmp parallelization enabled
lin-sys:  sparse-direct-amd-qdldl
	  nnz(A): 18, nnz(P): 0
------------------------------------------------------------------
 iter | pri res | dua res |   gap   |   obj   |  scale  | time (s)
------------------------------------------------------------------
     0| 9.54e-01  5.04e-01  6.93e-01 -5.54e-02  1.00e-01  2.35e-03 
    50| 2.15e-04  1.44e-05  7.30e-05 -1.61e-01  1.00e-01  4.52e-03 
------------------------------------------------------------------
status:  solved
timings: total: 4.52e-03s = setup: 1.42e-04s + solve: 4.38e-03s
	 lin-sys: 1.46e-05s, cones: 4.22e-03s, accel: 1.37e-06s
------------------------------------------------------------------
objective = -0.161192
------------------------------------------------------------------
objective_value: -0.16122833996512112
x: [-0.00020622095726905562, 0.9523381209761298, -0.00021521556977977655, -4.870661351048371e-5, 1.7127410306943451, 0.3300014342567794, 3.072837080661251, -6.735386102597836e-5, 1.8484621613426016]
outer A.nzval: [1.0, -0.23603334566204692, 1.0, 1.0, -0.34651701419196046, 1.0, 1.0, -0.3127069683360675, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
outer A.rowval: [1, 2, 5, 1, 3, 6, 1, 4, 7, 2, 8, 10, 3, 11, 13, 4, 14, 16]
outer A.colptr: [1, 4, 7, 10, 12, 13, 15, 16, 18, 19]
------------------------------------------------------------------
	       SCS v3.2.7 - Splitting Conic Solver
	(c) Brendan O'Donoghue, Stanford University, 2012
------------------------------------------------------------------
problem:  variables n: 9, constraints m: 16
cones: 	  z: primal zero / dual free vars: 4
	  l: linear vars: 3
	  e: exp vars: 9, dual exp vars: 0
settings: eps_abs: 1.0e-04, eps_rel: 1.0e-04, eps_infeas: 1.0e-07
	  alpha: 1.50, scale: 1.00e-01, adaptive_scale: 1
	  max_iters: 100000, normalize: 1, rho_x: 1.00e-06
	  acceleration_lookback: 10, acceleration_interval: 10
	  compiled with openmp parallelization enabled
lin-sys:  sparse-direct-amd-qdldl
	  nnz(A): 18, nnz(P): 0
------------------------------------------------------------------
 iter | pri res | dua res |   gap   |   obj   |  scale  | time (s)
------------------------------------------------------------------
     0| 9.58e-01  3.74e-01  6.80e-01  2.30e-01  1.00e-01  1.38e-04 
    50| 2.94e-04  6.04e-05  5.47e-06  1.71e-03  1.00e-01  2.62e-03 
------------------------------------------------------------------
status:  solved
timings: total: 2.62e-03s = setup: 2.42e-05s + solve: 2.60e-03s
	 lin-sys: 1.99e-05s, cones: 2.42e-03s, accel: 1.74e-06s
------------------------------------------------------------------
objective = 0.001712
------------------------------------------------------------------
@odow
Member

odow commented Dec 18, 2024

You actually need to negate the values in A:

sol_res = SCS.scs_solve(
        SCS.DirectSolver,
        mA,
        nA,
        -A,

SCS does that sneakily here:

return -A.nzval, A.rowval, A.colptr

The standard form of scs_solve is:

  minimize        1/2 * x' * P * x + c' * x
  subject to      A * x + s = b
                  s in K

or rewritten slightly:

  minimize        1/2 * x' * P * x + c' * x
  subject to      b - A * x in K
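
As a one-row illustration with made-up numbers (not taken from the model above): to encode a' * x >= β in this form, the stored row of A must be -a and the entry of b must be -β, so that b - A * x = a' * x - β lands in the nonnegative part of K.

# Hypothetical one-row example of the b - A * x in K convention.
a, β = [2.0, 3.0], 1.0             # original inequality: a' * x >= β
A_row, b_entry = -a, -β            # what gets stored for SCS
x = [1.0, 0.5]                     # a feasible point: 2.0 + 1.5 >= 1.0
@assert b_entry - A_row' * x >= 0  # the slack is nonnegative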

So your data uses the opposite sign convention. Just multiply all of your input data by -1 and then you can use A and b directly.
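
For completeness, here is a minimal sketch of the corrected low-level call, assuming mA, nA, A, b, c, m, and n are built exactly as in the snippet at the top of this issue; the only change from the original call is the sign of A. The comments on the trailing arguments are my reading of the SCS cone ordering (exponential cone counts, power-cone parameters, warm starts), so double-check them against the scs_solve docstring.

sol_res = SCS.scs_solve(
    SCS.DirectSolver,
    mA,
    nA,
    -A,                    # negated: the data above encodes A * x - b in K
    sparse(zeros(nA, nA)), # P = 0, no quadratic objective term
    -b,
    c,
    m + n,                 # z: zero-cone (equality) rows
    m * n,                 # l: nonnegative-cone rows
    Float64[],             # bu
    Float64[],             # bl
    Vector{Integer}([]),   # q: second-order cones
    Vector{Integer}([]),   # s: PSD cones
    m,                     # number of primal exponential cones
    0,                     # number of dual exponential cones
    Float64[],             # power-cone parameters
    zeros(nA),             # primal warm start
    zeros(mA),             # dual warm start
    zeros(mA),             # slack warm start
)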

@zhenweilin
Author

Wow. Thank you very much!
