
@ForceBru
Created January 7, 2023 23:19
JuMP can use IPOPT to optimize a function, but OptimizationMOI.jl and NLPModelsIpopt.jl cannot

Given the same optimization problem, the same data, the same initial point, and the same optimizer, JuMP.jl finds the optimum, but OptimizationMOI.jl and NLPModelsIpopt.jl don't.

When optimizing with Ipopt via OptimizationMOI.jl or NLPModelsIpopt.jl, the optimizer evaluates the objective function at an infeasible point, which throws:

DomainError with -2.4941978436429695:
log will only return a complex result if called with a complex argument. Try log(Complex(x)).
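For context: the loss takes log(variance[t]), and the variance recursion variance[t+1] = (1 - dt*a)*variance[t] + dt*data[t]^2 + dt*b can go negative once a > 1/dt, which is exactly what the constraint a - 1/dt <= 0 is supposed to rule out. A minimal sketch of the failure mode (made-up numbers, not from the actual run):

dt, a, b = 0.5, 3.0, 0.0             # infeasible: a > 1/dt, so 1 - dt*a = -0.5
v = 1.0                              # variance[1]
v = (1 - dt*a)*v + dt*0.0^2 + dt*b   # variance[2] = -0.5
log(v)                               # DomainError: log of a negative number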

With the same setup but using JuMP.jl, no errors occur and the optimum is found:

EXIT: Optimal Solution Found.
u_opt = [0.09968760243429328, 6.941809365897269, 6.817463190450249]

Problem statistics from IPOPT

The numeric columns correspond to Optimization, ADNLPModels, and JuMP, respectively.

Number of nonzeros in equality constraint Jacobian...:        0 0 0
Number of nonzeros in inequality constraint Jacobian.:        3 3 2
Number of nonzeros in Lagrangian Hessian.............:        6 6 7

Total number of variables............................:        3 3 3
                     variables with only lower bounds:        2 2 2
                variables with lower and upper bounds:        1 1 1
                     variables with only upper bounds:        0 0 0
Total number of equality constraints.................:        0 0 0
Total number of inequality constraints...............:        1 1 1
        inequality constraints with only lower bounds:        0 0 0
   inequality constraints with lower and upper bounds:        0 0 0
        inequality constraints with only upper bounds:        1 1 1

Observations:

  • JuMP has one fewer nonzero in the inequality constraint Jacobian than the others.
  • JuMP has one more nonzero in the Lagrangian Hessian than the others.

Thus, JuMP constructed a structurally different problem; a possible explanation is sketched below.
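My guess (not verified here): OptimizationMOI and ADNLPModels build derivatives with ForwardDiff and appear to report dense structures, namely 3 Jacobian entries for the dense 1×3 constraint Jacobian and 6 entries for the dense lower triangle of the 3×3 Hessian. JuMP's sparse AD notices that b doesn't enter the constraint (hence 2 Jacobian entries) and is allowed to report duplicate Hessian entries, which IPOPT sums (hence 7). One way to inspect the structure JuMP reports, reusing a model built like in CodeJuMP.model_fit below:

ev = JuMP.NLPEvaluator(model)
MOI.initialize(ev, [:Jac])
MOI.jacobian_structure(ev)  # expected: entries for dt and a only, none for b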

Intermediate objective values from IPOPT

iter  OptimizationMOI NLPModelsIpopt JuMP
   0  1.8621407e+00   1.8621407e+00  1.8621407e+00
   1  1.6724881e+00   1.6724881e+00  1.6724881e+00
   2  1.5562440e+00   1.5562440e+00  1.5562440e+00
   3  1.5191827e+00   1.5191827e+00  1.5191827e+00
   4  1.5107919e+00   1.5107919e+00  1.5107919e+00
   5  1.5046485e+00   1.5046485e+00  1.5046485e+00
   6  1.5073755e+00   1.5073755e+00  1.5073755e+00
   7  1.5089724e+00   1.5089724e+00  1.5089724e+00
   8  1.5062711e+00   1.5062711e+00  1.5062711e+00
   9  1.5072838e+00   1.5072838e+00  1.5072838e+00
  10  ERROR           ERROR          1.5066406e+00 Warning: SOC step rejected due to evaluation error

JuMP's run prints this at the iteration where OptimizationMOI and NLPModelsIpopt error out:

Warning: SOC step rejected due to evaluation error

"Evaluation error", that's very true.

It looks like OptimizationMOI and NLPModelsIpopt let the DomainError propagate instead of telling IPOPT that the evaluation failed, while JuMP did tell IPOPT, so IPOPT rejected the step and got around it!
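One possible workaround on the OptimizationMOI/NLPModelsIpopt side (my sketch, not tested here): make the objective return NaN instead of throwing, e.g. with NaNMath.jl, so IPOPT can treat the bad point as an evaluation error and backtrack, like it does in the JuMP run:

import NaNMath

# Variant of model_loss that returns NaN instead of throwing at infeasible points
function model_loss_nan(u, data)
    variance = model_variance(u, data)
    N = length(data)
    -sum(
        -(log(2π) + NaNMath.log(var) + r^2 / var) / 2  # NaNMath.log(x) is NaN for x < 0
        for (r, var) in zip(data, variance)
    ) / N
end

Full reproduction script (CODE.jl):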

import Pkg
Pkg.activate(temp=true)
@info "Installing Ipopt, Optimization, ADNLPModels and JuMP. Output is suppressed."
Pkg.add([
    Pkg.PackageSpec(name="Ipopt", version="1.1.0"),
    Pkg.PackageSpec(name="Optimization", version="3.10.0"),
    Pkg.PackageSpec(name="OptimizationMOI", version="0.1.5"),
    Pkg.PackageSpec(name="ADNLPModels", version="0.4.0"),
    Pkg.PackageSpec(name="NLPModelsIpopt", version="0.10.0"),
    Pkg.PackageSpec(name="JuMP", version="1.6.0"),
    Pkg.PackageSpec(name="MathOptInterface", version="1.11.2")
], io=devnull)
Pkg.status()
@info "Loading OptimizationMOI code..."
module CodeOptimizationJL

import Optimization, OptimizationMOI
import Ipopt

const AV{T} = AbstractVector{T}

function model_constraints!(out::AV{<:Real}, u::AV{<:Real}, data)
    dt, a, b = u  # Model parameters
    out[1] = a - 1/dt  # Must be <= 0
end

function model_variance(u::AV{T}, data::AV{<:Real}) where T<:Real
    dt, a, b = u  # Model parameters
    variance = zeros(T, length(data))
    variance[1] = one(T)
    for t in 1:(length(data) - 1)
        variance[t+1] = (1 - dt * a) * variance[t] + dt * data[t]^2 + dt * b
    end
    variance
end

function model_loss(u::AV{T}, data::AV{<:Real})::T where T<:Real
    variance = model_variance(u, data)
    N = length(data)
    -sum(
        -(log(2π) + log(var) + r^2 / var) / 2
        for (r, var) in zip(data, variance)
    ) / N
end

function model_fit(u0::AV{T}, data::AV{<:Real}) where T<:Real
    func = Optimization.OptimizationFunction(
        model_loss, Optimization.AutoForwardDiff(),
        cons=model_constraints!
    )
    prob = Optimization.OptimizationProblem(
        func, u0, data,
        # 0 < dt < 1 && 1 < a < Inf && 0 < b < Inf
        lb=T[0.0, 1.0, 0.0], ub=T[1.0, Inf, Inf],
        #    ^dt  ^a   ^b        ^dt  ^a   ^b   <= model parameters
        lcons=T[-Inf], ucons=T[0.0]  # a - 1/dt <= 0
    )
    sol = Optimization.solve(prob, Ipopt.Optimizer())
    sol.u
end

end # module CodeOptimizationJL
@info "Loading JuMP.jl code..."
module CodeJuMP

using JuMP
import MathOptInterface as MOI
import Ipopt

const AV{T} = AbstractVector{T}

function model_variance(u::AV, data::AV{<:Real}, model)
    dt, a, b = u
    variance = Vector{JuMP.NonlinearExpression}(undef, length(data))
    variance[1] = @NLexpression(model, 1.0)
    for t in 1:(length(data) - 1)
        variance[t+1] = @NLexpression(
            model,
            (1 - dt * a) * variance[t] + dt * data[t]^2 + dt * b
        )
    end
    variance
end

model_variance_value(u::AV, data::AV{<:Real}) =
    value.(model_variance(u, data, Model()))

function model_loss!(u::AV, data::AV{<:Real}, model)
    variance = model_variance(u, data, model)
    N = length(data)
    @NLobjective(
        model, Min,
        -sum(
            -(log(2π) + log(var) + r^2 / var) / 2
            for (r, var) in zip(data, variance)
        ) / N
    )
end

function model_loss_value(u::AV{T}, data::AV{<:Real}) where T<:Real
    model = Model()
    @variable(model, dt)
    @variable(model, a)
    @variable(model, b)
    model_loss!([dt, a, b], data, model)
    # https://jump.dev/JuMP.jl/stable/manual/nlp/#Querying-derivatives-from-a-JuMP-model
    # Map the parameter vector u into JuMP's internal variable ordering
    u_jump = zeros(T, 3)
    u_jump[JuMP.index(dt).value] = u[1]
    u_jump[JuMP.index(a).value] = u[2]
    u_jump[JuMP.index(b).value] = u[3]
    # WTF?!
    ev = JuMP.NLPEvaluator(model)
    MOI.initialize(ev, [:Grad])
    MOI.eval_objective(ev, u_jump)
end

function model_fit(u0::AV{T}, data::AV{<:Real}) where T<:Real
    model = JuMP.Model(Ipopt.Optimizer)
    @variable(model, 0 <= dt <= 1, start=u0[1])
    @variable(model, 1 <= a, start=u0[2])
    @variable(model, 0 <= b, start=u0[3])
    @NLconstraint(model, a <= 1 / dt)
    model_loss!([dt, a, b], data, model)
    optimize!(model)
    value.([dt, a, b])
end

end # module CodeJuMP
@info "Loading NLPModelsIpopt.jl code..."
module CodeADNLP

using Statistics: mean
import ADNLPModels, NLPModelsIpopt
import ..CodeOptimizationJL: model_variance, model_loss

const AV{T} = AbstractVector{T}

function model_constraints(u::AV{<:Real}, data)
    dt, a, b = u  # Model parameters
    [a - 1/dt]  # Must be <= 0
end

function model_fit(u0::AV{T}, data::AV{<:Real}) where T<:Real
    problem = ADNLPModels.ADNLPModel(
        u -> model_loss(u, data),                # Objective
        u0, T[0.0, 1.0, 0.0], T[1.0, Inf, Inf],  # Initial value & variable bounds
        u -> model_constraints(u, data),         # Constraints
        T[-Inf], T[0.0]                          # Constraint bounds: a - 1/dt <= 0
    )
    solver = NLPModelsIpopt.IpoptSolver(problem)
    stats = NLPModelsIpopt.solve!(solver, problem)
    stats.solution
end

end # module CodeADNLP
# ========== DRIVER CODE ==========
@info "Defining data"
data = [
2.1217711584057386, -0.28350145551002465, 2.3593492969513004, 0.192856733601849, 0.4566485836385113, 1.332717934013979, -1.286716619379847, 0.9868669960185211, 2.2358674776395224, -2.7933975791568098,
1.2555871497124622, 1.276879759908467, -0.8392016987911409, -1.1580875182201849, 0.33201646080578456, -0.17212553408696898, 1.1275285626369556, 0.23041139849229036, 1.648423577528424, 2.384823597473343,
-0.4005518932539747, -1.117737311211693, -0.9490152960583265, -1.1454539355078672, 1.4158585811404159, -0.18926972177257692, -0.2867541528181491, -1.2077459688543788, -0.6397173049620141, 0.66147783407023,
0.049805188778543466, 0.902540117368457, -0.7018417933284938, 0.47342354473843684, 1.2620345361591596, -1.1483844812087018, -0.06487285080802752, 0.39020117013487715, -0.38454491504165356, 1.5125786171885645,
-0.6751768274451174, 0.490916740658628, 0.012872300530924086, 0.46532447715746716, 0.34734421531357157, 0.3830452463549559, -0.8730874028738718, 0.4333151627834603, -0.40396180775692375, 2.0794821773418497,
-0.5392735774960918, 0.6519326323752113, -1.4844713145398716, 0.3688828625691108, 1.010912990717231, 0.5018274939956874, 0.36656889279915833, -0.11403975693239479, -0.6460314660359935, -0.41997005020823147,
0.9652752515820495, -0.37375868692702047, -0.5780729659197872, 2.642742798278919, 0.5076984117208074, -0.4906395089461916, -1.804352047187329, -0.8596663844837792, -0.7510485548262176, -0.07922589350581195,
1.7201304839487317, 0.9024493222130577, -1.8216089665357902, 1.3929269238775426, -0.08410752079538407, 0.6423068180438288, 0.6615201016351212, 0.18546977816594887, -0.717521690742993, -1.0224309324751113,
1.7748350222721971, 0.1929546575877559, -0.1581871639724676, 0.20198379311238596, -0.6919373947349301, -0.9253274269423383, 0.549366272989534, -1.9302106783541606, 0.7197247279281573, -1.220334158468621,
-0.9187468058921053, -2.1452607604834184, -2.1558650694862687, -0.9387913392336701, -0.676637835687265, -0.16621998352492198, 0.5637177022958897, -0.5258315560278541, 0.8413359958184765, -0.9096866525337141
];
# u0 = [0 < dt < 1, 1 < a < 1/dt, 0 < b < Inf]
u0 = [0.3, 2.3333333333333335, 0.33333333333333337]
@info "Initial point" u0
@info "Model variances must be equal for OptimizationMOI, NLPModelsIpopt and JuMP. Code will crash if not."
@assert CodeOptimizationJL.model_variance(u0, data) ≈ CodeADNLP.model_variance(u0, data)
@assert CodeADNLP.model_variance(u0, data) ≈ CodeJuMP.model_variance_value(u0, data)
@info "Model losses must be equal for OptimizationMOI, NLPModelsIpopt and JuMP. Code will crash if not."
@assert CodeOptimizationJL.model_loss(u0, data) ≈ CodeADNLP.model_loss(u0, data)
@assert CodeADNLP.model_loss(u0, data) ≈ CodeJuMP.model_loss_value(u0, data)
@info "Fitting with OptimizationMOI. This is expected to error out."
try
    CodeOptimizationJL.model_fit(u0, data)
catch e
    @error "OptimizationMOI.jl" e
end
@info "Fitting with NLPModelsIpopt. This is expected to error out."
try
    CodeADNLP.model_fit(u0, data)
catch e
    @error "NLPModelsIpopt.jl" e
end
@info "Fitting with JuMP. This is NOT expected to error out."
u_opt = CodeJuMP.model_fit(u0, data)
@show u_opt
$ julia-1.8 --version
julia version 1.8.3
$ julia-1.8 CODE.jl
Activating new project at `/var/folders/ys/3h0gnqns4b98zb66_vl_m35m0000gn/T/jl_jzpRAG`
[ Info: Installing Ipopt, Optimization, ADNLPModels and JuMP. Output is suppressed.
Status `/private/var/folders/ys/3h0gnqns4b98zb66_vl_m35m0000gn/T/jl_jzpRAG/Project.toml`
[54578032] ADNLPModels v0.4.0
[b6b21f68] Ipopt v1.1.0
[4076af6c] JuMP v1.6.0
[b8f27783] MathOptInterface v1.11.2
[f4238b75] NLPModelsIpopt v0.10.0
[7f7a1694] Optimization v3.10.0
[fd9f6733] OptimizationMOI v0.1.5
[ Info: Loading OptimizationMOI code...
[ Info: Loading JuMP.jl code...
[ Info: Loading NLPModelsIpopt.jl code...
[ Info: Defining data
┌ Info: Initial point
│ u0 =
│ 3-element Vector{Float64}:
│ 0.3
│ 2.3333333333333335
└ 0.33333333333333337
[ Info: Model variances must be equal for OptimizationMOI, NLPModelsIpopt and JuMP. Code will crash if not.
[ Info: Model losses must be equal for OptimizationMOI, NLPModelsIpopt and JuMP. Code will crash if not.
[ Info: Fitting with OptimizationMOI. This is expected to error out.
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit https://github.com/coin-or/Ipopt
******************************************************************************
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.
Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        3
Number of nonzeros in Lagrangian Hessian.............:        6

Total number of variables............................:        3
                     variables with only lower bounds:        2
                variables with lower and upper bounds:        1
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.8621407e+00 0.00e+00 2.10e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.6724881e+00 0.00e+00 5.18e-01 -1.0 1.83e-01 - 8.26e-01 1.00e+00f 1
2 1.5562440e+00 0.00e+00 2.36e-01 -1.7 2.65e-01 - 1.00e+00 1.00e+00f 1
3 1.5191827e+00 0.00e+00 1.61e+00 -1.7 2.12e+00 - 8.10e-01 1.00e+00f 1
4 1.5107919e+00 0.00e+00 5.73e-01 -1.7 1.24e+00 - 1.00e+00 1.00e+00h 1
5 1.5046485e+00 0.00e+00 2.36e-01 -1.7 1.89e+00 - 9.60e-01 1.00e+00h 1
6 1.5073755e+00 0.00e+00 9.81e-02 -1.7 6.68e+01 -4.0 5.60e-02 3.75e-02h 2
7 1.5089724e+00 0.00e+00 1.25e-01 -1.7 1.08e+01 -3.6 1.00e+00 1.00e+00h 1
8 1.5062711e+00 0.00e+00 4.23e-02 -1.7 8.94e+00 - 9.14e-01 1.00e+00h 1
9 1.5072838e+00 3.55e-01 3.44e-02 -1.7 5.79e+01 - 6.48e-01 1.00e+00H 1
┌ Error: OptimizationMOI.jl
│ e =
│ DomainError with -2.4941978436429695:
│ log will only return a complex result if called with a complex argument. Try log(Complex(x)).
└ @ Main CODE.jl:200
[ Info: Fitting with NLPModelsIpopt. This is expected to error out.
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.
Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        3
Number of nonzeros in Lagrangian Hessian.............:        6

Total number of variables............................:        3
                     variables with only lower bounds:        2
                variables with lower and upper bounds:        1
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.8621407e+00 0.00e+00 2.10e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.6724881e+00 0.00e+00 5.18e-01 -1.0 1.83e-01 - 8.26e-01 1.00e+00f 1
2 1.5562440e+00 0.00e+00 2.36e-01 -1.7 2.65e-01 - 1.00e+00 1.00e+00f 1
3 1.5191827e+00 0.00e+00 1.61e+00 -1.7 2.12e+00 - 8.10e-01 1.00e+00f 1
4 1.5107919e+00 0.00e+00 5.73e-01 -1.7 1.24e+00 - 1.00e+00 1.00e+00h 1
5 1.5046485e+00 0.00e+00 2.36e-01 -1.7 1.89e+00 - 9.60e-01 1.00e+00h 1
6 1.5073755e+00 0.00e+00 9.81e-02 -1.7 6.68e+01 -4.0 5.60e-02 3.75e-02h 2
7 1.5089724e+00 0.00e+00 1.25e-01 -1.7 1.08e+01 -3.6 1.00e+00 1.00e+00h 1
8 1.5062711e+00 0.00e+00 4.23e-02 -1.7 8.94e+00 - 9.14e-01 1.00e+00h 1
9 1.5072838e+00 3.55e-01 3.44e-02 -1.7 5.79e+01 - 6.48e-01 1.00e+00H 1
┌ Error: NLPModelsIpopt.jl
│ e =
│ DomainError with -2.4941978436429695:
│ log will only return a complex result if called with a complex argument. Try log(Complex(x)).
└ @ Main CODE.jl:236
[ Info: Fitting with JuMP. This is NOT expected to error out.
This is Ipopt version 3.14.4, running with linear solver MUMPS 5.4.1.
Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        2
Number of nonzeros in Lagrangian Hessian.............:        7

Total number of variables............................:        3
                     variables with only lower bounds:        2
                variables with lower and upper bounds:        1
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.8621407e+00 0.00e+00 2.10e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.6724881e+00 0.00e+00 5.18e-01 -1.0 1.83e-01 - 8.26e-01 1.00e+00f 1
2 1.5562440e+00 0.00e+00 2.36e-01 -1.7 2.65e-01 - 1.00e+00 1.00e+00f 1
3 1.5191827e+00 0.00e+00 1.61e+00 -1.7 2.12e+00 - 8.10e-01 1.00e+00f 1
4 1.5107919e+00 0.00e+00 5.73e-01 -1.7 1.24e+00 - 1.00e+00 1.00e+00h 1
5 1.5046485e+00 0.00e+00 2.36e-01 -1.7 1.89e+00 - 9.60e-01 1.00e+00h 1
6 1.5073755e+00 0.00e+00 9.81e-02 -1.7 6.68e+01 -4.0 5.60e-02 3.75e-02h 2
7 1.5089724e+00 0.00e+00 1.25e-01 -1.7 1.08e+01 -3.6 1.00e+00 1.00e+00h 1
8 1.5062711e+00 0.00e+00 4.23e-02 -1.7 8.94e+00 - 9.14e-01 1.00e+00h 1
9 1.5072838e+00 3.55e-01 3.44e-02 -1.7 5.79e+01 - 6.48e-01 1.00e+00H 1
Warning: SOC step rejected due to evaluation error
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 1.5066406e+00 0.00e+00 1.79e-01 -1.7 5.37e+00 -4.1 1.00e+00 5.00e-01h 2
11 1.5064900e+00 2.69e+01 6.50e-01 -1.7 5.49e+01 - 5.81e-01 1.00e+00H 1
Warning: SOC step rejected due to evaluation error
12 1.5073601e+00 0.00e+00 6.06e-01 -1.7 8.51e+01 -4.5 5.01e-01 1.00e-01h 2
13 1.5063547e+00 0.00e+00 3.07e-01 -1.7 9.19e+01 -5.0 1.00e+00 1.00e+00H 1
14 1.5078132e+00 3.88e+01 6.77e-01 -1.7 1.95e+02 - 1.00e+00 1.00e+00H 1
Warning: SOC step rejected due to evaluation error
15 1.5087427e+00 0.00e+00 1.23e+00 -1.7 1.19e+02 -4.6 4.07e-01 1.00e-01h 2
16 1.5083280e+00 0.00e+00 1.61e+01 -1.7 3.73e+01 - 1.00e+00 1.00e+00H 1
17 1.5070150e+00 0.00e+00 1.09e+01 -1.7 2.01e+01 -5.1 1.00e+00 1.00e+00h 1
18 1.5070482e+00 0.00e+00 1.36e+01 -1.7 1.02e+02 - 1.00e+00 1.00e+00h 1
19 1.5069021e+00 0.00e+00 1.39e-02 -1.7 1.96e+01 -5.5 1.00e+00 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
20 1.5068666e+00 0.00e+00 5.25e-02 -3.8 3.40e+00 - 9.94e-01 1.00e+00h 1
21 1.5067795e+00 0.00e+00 5.62e-02 -3.8 1.18e+02 - 1.00e+00 1.00e+00h 1
22 1.5066492e+00 0.00e+00 2.69e-02 -3.8 1.96e+02 - 1.00e+00 1.00e+00H 1
23 1.5067055e+00 0.00e+00 3.06e-02 -3.8 5.48e-01 -6.0 1.00e+00 1.00e+00h 1
24 1.5066967e+00 0.00e+00 1.23e-03 -3.8 9.78e-01 -6.5 1.00e+00 1.00e+00h 1
25 1.5066938e+00 0.00e+00 7.59e-05 -3.8 2.32e+00 -7.0 1.00e+00 1.00e+00h 1
26 1.5066899e+00 0.00e+00 1.59e-04 -3.8 4.04e+00 -7.4 1.00e+00 1.00e+00h 1
27 1.5066402e+00 0.00e+00 1.18e-02 -5.7 4.19e+01 -7.9 8.72e-01 1.00e+00h 1
28 1.5065732e+00 0.00e+00 1.38e-01 -5.7 2.81e+02 -8.4 1.00e+00 6.01e-01h 1
29 1.5063308e+00 0.00e+00 1.06e-01 -5.7 6.45e+01 - 1.00e+00 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
30 1.5061272e+00 0.00e+00 9.78e-02 -5.7 7.45e+01 - 1.00e+00 4.36e-01h 1
31 1.5062072e+00 0.00e+00 9.82e-02 -5.7 4.71e+01 - 8.79e-01 1.00e+00h 1
32 1.5050722e+00 0.00e+00 6.56e-02 -5.7 7.84e+00 - 1.00e+00 5.92e-01h 1
33 1.5040004e+00 0.00e+00 3.52e-02 -5.7 6.79e+00 - 1.00e+00 1.00e+00h 1
34 1.5031769e+00 0.00e+00 1.56e-02 -5.7 3.65e+00 - 1.00e+00 7.27e-01h 1
35 1.5027488e+00 0.00e+00 1.75e-03 -5.7 2.62e+00 - 1.00e+00 1.00e+00h 1
36 1.5034167e+00 0.00e+00 1.57e-02 -5.7 3.59e-01 - 1.09e-01 1.00e+00h 1
37 1.5027717e+00 0.00e+00 5.48e-03 -5.7 2.36e+00 - 1.00e+00 1.00e+00h 1
38 1.5027049e+00 0.00e+00 2.07e-03 -5.7 8.38e-01 - 8.72e-01 1.00e+00h 1
39 1.5027011e+00 0.00e+00 3.40e-05 -5.7 2.21e-01 - 1.00e+00 1.00e+00h 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
40 1.5027011e+00 0.00e+00 9.26e-07 -5.7 2.67e-02 - 1.00e+00 1.00e+00h 1
41 1.5027011e+00 0.00e+00 7.96e-11 -5.7 2.65e-04 - 1.00e+00 1.00e+00h 1
42 1.5027011e+00 0.00e+00 2.72e-08 -8.6 4.75e-03 - 1.00e+00 1.00e+00h 1
43 1.5027011e+00 0.00e+00 4.51e-14 -8.6 5.21e-06 - 1.00e+00 1.00e+00h 1
Number of Iterations....: 43

                                   (scaled)                 (unscaled)
Objective...............:   1.5027011040525418e+00    1.5027011040525418e+00
Dual infeasibility......:   4.5145186498206844e-14    4.5145186498206844e-14
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5059035566699300e-09    2.5059035566699300e-09
Overall NLP error.......:   2.5059035566699300e-09    2.5059035566699300e-09

Number of objective function evaluations             = 63
Number of objective gradient evaluations             = 44
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 63
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 44
Number of Lagrangian Hessian evaluations             = 43

Total seconds in IPOPT                               = 2.176
EXIT: Optimal Solution Found.
u_opt = [0.09968760243429328, 6.941809365897269, 6.817463190450249]
$