TSSOS aims to provide a user-friendly and efficient tool for solving polynomial optimization problems, based on the structured moment-SOS hierarchy. To use TSSOS in Julia, run

```
pkg> add https://github.com/wangjie212/TSSOS
```
TSSOS has been tested on Ubuntu and Windows.
An unconstrained polynomial optimization problem can be formulated as
$$\inf_{\mathbf{x}\in\mathbb{R}^n}\ f(\mathbf{x}),$$
where $f\in\mathbb{R}[\mathbf{x}]$ is a polynomial in the variables $\mathbf{x}=(x_1,\ldots,x_n)$.
Taking $f=1+x_1^4+x_2^4+x_3^4+x_1x_2x_3+x_2$ as an example, to compute the first TS step of the TSSOS hierarchy, run
```julia
using TSSOS
using DynamicPolynomials
@polyvar x[1:3]
f = 1 + x[1]^4 + x[2]^4 + x[3]^4 + x[1]*x[2]*x[3] + x[2]
opt,sol,data = tssos_first(f, x, TS="MD")
```
By default, the monomial basis computed by the Newton polytope method is used. If one sets newton=false:

```julia
opt,sol,data = tssos_first(f, x, newton=false, TS="MD")
```

then the standard monomial basis will be used.
To compute higher TS steps of the TSSOS hierarchy, repeatedly run

```julia
opt,sol,data = tssos_higher!(data, TS="MD")
```
Options
- nb: specify the first nb variables to be binary variables (i.e., satisfying $x_i^2=1$)
- TS: "block" by default (maximal chordal extension), "signsymmetry" (sign symmetries), "MD" (approximately smallest chordal extension), false (disable term sparsity iterations)
- solution: true (extract optimal solutions), false (do not extract optimal solutions)
Output
- basis: monomial basis
- cl: numbers of blocks
- blocksize: sizes of blocks
- blocks: block structure
- GramMat: Gram matrices (set Gram=true)
- flag: 0 if global optimality is certified; 1 otherwise
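These outputs are carried by the returned data object. A minimal sketch of inspecting them, assuming the names listed above are exposed as fields of data:

```julia
opt,sol,data = tssos_first(f, x, TS="MD", Gram=true)
println(data.cl)        # numbers of blocks
println(data.blocksize) # sizes of blocks
println(data.flag)      # 0 means global optimality is certified
```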
A constrained polynomial optimization problem can be formulated as
$$\inf_{\mathbf{x}\in\mathbf{K}}\ f(\mathbf{x}),$$
where $\mathbf{K}$ is the basic semialgebraic set
$$\mathbf{K}=\{\mathbf{x}\in\mathbb{R}^n \mid g_j(\mathbf{x})\ge0,\ j=1,\ldots,m-\text{numeq},\ g_j(\mathbf{x})=0,\ j=m-\text{numeq}+1,\ldots,m\},$$
for some polynomials $f,g_j\in\mathbb{R}[\mathbf{x}]$.
Taking $f=1+x_1^4+x_2^4+x_3^4+x_1x_2x_3+x_2$, $g=1-x_1^2-2x_2^2$ and $h=x_2^2+x_3^2-1$ as an example, to compute the first TS step of the TSSOS hierarchy, run
```julia
using TSSOS
using DynamicPolynomials
@polyvar x[1:3]
f = 1 + x[1]^4 + x[2]^4 + x[3]^4 + x[1]*x[2]*x[3] + x[2]
g = 1 - x[1]^2 - 2*x[2]^2
h = x[2]^2 + x[3]^2 - 1
pop = [f, g, h]
d = 2 # set the relaxation order
opt,sol,data = tssos_first(pop, x, d, numeq=1, TS="MD")
```
To compute higher TS steps of the TSSOS hierarchy, repeatedly run

```julia
opt,sol,data = tssos_higher!(data, TS="MD")
```
Options
- nb: specify the first nb variables to be binary variables (i.e., satisfying $x_i^2=1$)
- TS: "block" by default (maximal chordal extension), "signsymmetry" (sign symmetries), "MD" (approximately smallest chordal extension), false (disable term sparsity iterations)
- normality: true (impose normality conditions), false
- NormalSparse: true (exploit sparsity when imposing normality conditions), false
- quotient: true (work in the quotient ring by computing a Gröbner basis), false
- solution: true (extract optimal solutions), false
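A usage sketch combining several of the options above with the constrained example (keyword names as listed):

```julia
opt,sol,data = tssos_first(pop, x, d, numeq=1, TS="MD", quotient=true, solution=true)
```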
One can also exploit correlative sparsity and term sparsity simultaneously:
```julia
using TSSOS
using DynamicPolynomials
n = 6
@polyvar x[1:n]
f = 1 + sum(x.^4) + x[1]*x[2]*x[3] + x[3]*x[4]*x[5] + x[3]*x[4]*x[6] + x[3]*x[5]*x[6] + x[4]*x[5]*x[6]
pop = [f, 1 - sum(x[1:3].^2), 1 - sum(x[3:6].^2)]
order = 2 # set the relaxation order
opt,sol,data = cs_tssos_first(pop, x, order, numeq=0, TS="MD") # compute the first TS step of the CS-TSSOS hierarchy
opt,sol,data = cs_tssos_higher!(data, TS="MD") # compute higher TS steps of the CS-TSSOS hierarchy
```
Options
- nb: specify the first nb variables to be binary variables (i.e., satisfying $x_i^2=1$)
- CS: "MF" by default (approximately smallest chordal extension), "NC" (no chordal extension), false (disable correlative sparsity exploitation)
- TS: "block" by default (maximal chordal extension), "signsymmetry" (sign symmetries), "MD" (approximately smallest chordal extension), false (disable term sparsity iterations)
- normality: true (impose normality conditions), false
- NormalSparse: true (exploit sparsity when imposing normality conditions), false
- MomentOne: true (add a first-order moment PSD constraint for each variable clique), false
- solution: true (extract an approximately optimal solution), false
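A usage sketch passing several of these options to the example above (keyword names as listed):

```julia
opt,sol,data = cs_tssos_first(pop, x, order, CS="MF", TS="MD", MomentOne=true, solution=true)
```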
One may set `solver="Mosek"` or `solver="COSMO"` to specify the SDP solver invoked by TSSOS. By default, the solver is Mosek. The parameters of COSMO can be tuned by

```julia
settings = cosmo_para()
settings.eps_abs = 1e-5 # absolute residual tolerance
settings.eps_rel = 1e-5 # relative residual tolerance
settings.max_iter = 1e4 # maximum number of iterations
settings.time_limit = 1e4 # limit on the running time
```

and then running, for instance, `tssos_first(..., cosmo_setting=settings)`.
The parameters of Mosek can be tuned analogously by

```julia
settings = mosek_para()
settings.tol_pfeas = 1e-8 # primal feasibility tolerance
settings.tol_dfeas = 1e-8 # dual feasibility tolerance
settings.tol_relgap = 1e-8 # relative primal-dual gap tolerance
settings.time_limit = 1e4 # limit on the running time
```

and then running, for instance, `tssos_first(..., mosek_setting=settings)`.
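For example, one might tighten Mosek's primal feasibility tolerance for the unconstrained example above (a sketch; keyword names as documented above):

```julia
settings = mosek_para()
settings.tol_pfeas = 1e-10 # tighter primal feasibility tolerance
opt,sol,data = tssos_first(f, x, TS="MD", solver="Mosek", mosek_setting=settings)
```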
Output
- basis: monomial basis
- cl: numbers of blocks
- blocksize: sizes of blocks
- blocks: block structure
- GramMat: Gram matrices (set Gram=true)
- moment: moment matrices (set Mommat=true)
- flag: 0 if global optimality is certified; 1 otherwise
TSSOS also supports exploiting symmetries for polynomial optimization problems (thanks to SymbolicWedderburn.jl).
```julia
using TSSOS
using DynamicPolynomials
using PermutationGroups
@polyvar x[1:3]
f = sum(x) + sum(x.^4)
pop = [f, 1 - sum(x.^2)]
order = 2 # set the relaxation order
G = PermGroup([perm"(1,2,3)", perm"(1,2)"]) # define the permutation group acting on variables
opt,basis,Gram = tssos_symmetry(pop, x, order, G)
```
Options
- numeq: number of equality constraints

Output
- basis: symmetry-adapted basis
- Gram: Gram matrices
Check out example/runopf.jl and example/modelopf.jl for examples arising from optimal power flow.
TSSOS supports more general sum-of-squares optimization (including polynomial optimization as a special case):
$$\inf_{\mathbf{y}\in\mathbb{R}^N}\ \mathbf{c}^{\top}\mathbf{y}\quad\text{s.t.}\quad a_{k0}+y_1a_{k1}+\cdots+y_Na_{kN}\in\mathrm{SOS},\quad k=1,\ldots,m,$$
where $\mathbf{c}\in\mathbb{R}^N$ and $a_{ki}\in\mathbb{R}[\mathbf{x}]$ are given polynomials. Such constraints can be added to a JuMP model via

```julia
model,info = add_psatz!(model, nonneg, vars, ineq_cons, eq_cons, order, TS="block", SO=1, Groebnerbasis=false)
```

where nonneg is a nonnegative polynomial constrained to admit a Putinar-style SOS representation on the semialgebraic set defined by ineq_cons and eq_cons, and SO is the sparse order.
The following is a simple example.
```julia
using JuMP
using MosekTools
using DynamicPolynomials
using MultivariatePolynomials
using TSSOS
@polyvar x[1:3]
f = x[1]^2 + x[1]*x[2] + x[2]^2 + x[2]*x[3] + x[3]^2
d = 2 # set the relaxation order
@polyvar y[1:2]
h = [x[1]^2 + x[2]^2 + y[1]^2 - 1, x[2]^2 + x[3]^2 + y[2]^2 - 1]
model = Model(optimizer_with_attributes(Mosek.Optimizer))
@variable(model, lower)
nonneg = f - lower*sum(x.^2)
model,info = add_psatz!(model, nonneg, [x; y], [], h, d, TS="block", Groebnerbasis=true)
@objective(model, Max, lower)
optimize!(model)
```
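After optimization, the computed bound can be read off with standard JuMP accessors:

```julia
@show termination_status(model)
@show objective_value(model) # the optimal value of lower
```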
Check out example/sosprogram.jl for a more complicated example.
Moment-SOS relaxations provide lower bounds on the optimum of a polynomial optimization problem. On the complementary side, one can compute a locally optimal solution, which provides an upper bound on the optimum. This upper bound is useful for evaluating the quality (tightness) of the lower bounds provided by moment-SOS relaxations. In TSSOS, for a given polynomial optimization problem, a locally optimal solution can be obtained via the nonlinear programming solver Ipopt:
```julia
obj,sol,status = local_solution(data.n, data.m, data.supp, data.coe, numeq=data.numeq, startpoint=rand(data.n))
```
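A sketch of the typical workflow, comparing the lower bound from the relaxation with the local upper bound (reusing the constrained example above):

```julia
opt,sol,data = tssos_first(pop, x, d, numeq=1, TS="MD")
obj,lsol,status = local_solution(data.n, data.m, data.supp, data.coe, numeq=data.numeq, startpoint=rand(data.n))
@show obj - opt # a (near-)zero gap certifies (near-)global optimality
```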
TSSOS also supports solving complex polynomial optimization. See Exploiting Sparsity in Complex Polynomial Optimization for more details.
A general complex polynomial optimization problem can be formulated as
$$\inf_{\mathbf{z}\in\mathbf{K}}\ f(\mathbf{z},\bar{\mathbf{z}}),$$
with
$$\mathbf{K}=\{\mathbf{z}\in\mathbb{C}^n \mid g_j(\mathbf{z},\bar{\mathbf{z}})\ge0,\ j=1,\ldots,m-\text{numeq},\ g_j(\mathbf{z},\bar{\mathbf{z}})=0,\ j=m-\text{numeq}+1,\ldots,m\},$$
where $\bar{\mathbf{z}}$ denotes the conjugate of $\mathbf{z}$, and $f,g_j$ are real-valued polynomials satisfying $\bar{f}=f$ and $\bar{g}_j=g_j$.
In TSSOS, the complex variable $z_i$ is represented by x[i] and its conjugate $\bar{z}_i$ by x[n+i]. Consider the example
$$\inf\ 3-|z_1|^2-0.5\mathbf{i}z_1\bar{z}_2^2+0.5\mathbf{i}\bar{z}_1z_2^2$$
$$\text{s.t.}\quad z_2+\bar{z}_2\ge0,\ |z_1|^2-0.25z_1^2-0.25\bar{z}_1^2=1,\ |z_1|^2+|z_2|^2=3,\ \mathbf{i}z_2-\mathbf{i}\bar{z}_2=0,$$
which is represented in TSSOS as
```julia
using TSSOS
using DynamicPolynomials
n = 2 # set the number of complex variables
@polyvar x[1:2n]
f = 3 - x[1]*x[3] - 0.5im*x[1]*x[4]^2 + 0.5im*x[2]^2*x[3]
g1 = x[2] + x[4]
g2 = x[1]*x[3] - 0.25*x[1]^2 - 0.25*x[3]^2 - 1
g3 = x[1]*x[3] + x[2]*x[4] - 3
g4 = im*x[2] - im*x[4]
pop = [f, g1, g2, g3, g4]
order = 2 # set the relaxation order
opt,sol,data = cs_tssos_first(pop, x, n, order, numeq=3, TS="block")
```
Options
- nb: specify the first nb complex variables to be of unit norm (i.e., satisfying $|z_i|=1$)
- CS: "MF" by default (approximately smallest chordal extension), "NC" (no chordal extension), false (disable correlative sparsity exploitation)
- TS: "block" by default (maximal chordal extension), "MD" (approximately smallest chordal extension), false (disable term sparsity iterations)
- normality: specify the normal order
- NormalSparse: true (exploit sparsity when imposing normality conditions), false
- MomentOne: true (add a first-order moment PSD constraint for each variable clique), false
- ipart: true (use complex moment matrices), false (use real moment matrices)
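For instance, to build the relaxation with real moment matrices (option names as listed above):

```julia
opt,sol,data = cs_tssos_first(pop, x, n, order, numeq=3, TS="block", ipart=false)
```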
The sum-of-rational-functions optimization problem can be formulated as
$$\inf_{\mathbf{x}\in\mathbf{K}}\ \sum_i \frac{p_i(\mathbf{x})}{q_i(\mathbf{x})},$$
where $\mathbf{K}$ is a basic semialgebraic set defined by polynomial inequality and equality constraints, for some polynomials $p_i,q_i\in\mathbb{R}[\mathbf{x}]$ with each $q_i$ positive on $\mathbf{K}$.
Taking $p_1=x^2+y^2-yz$, $p_2=y^2+x^2z$, $p_3=z^2-x+y$, $q_1=1+2x^2+y^2+z^2$, $q_2=1+x^2+2y^2+z^2$, $q_3=1+x^2+y^2+2z^2$, and $\mathbf{K}=\{(x,y,z)\in\mathbb{R}^3 \mid 1-x^2-y^2-z^2\ge0\}$ as an example, run
```julia
using TSSOS
using DynamicPolynomials
@polyvar x y z
p = [x^2 + y^2 - y*z, y^2 + x^2*z, z^2 - x + y] # define the vector of numerators
q = [1 + 2x^2 + y^2 + z^2, 1 + x^2 + 2y^2 + z^2, 1 + x^2 + y^2 + 2z^2] # define the vector of denominators
g = [1 - x^2 - y^2 - z^2]
d = 2 # set the relaxation order
opt = SumOfRatios(p, q, g, [], [x;y;z], d, QUIET=true, SignSymmetry=true) # without correlative sparsity
opt = SparseSumOfRatios(p, q, g, [], [x;y;z], d, QUIET=true, SignSymmetry=true) # with correlative sparsity
```
Options
- SignSymmetry: true (exploit sign symmetries), false
- Groebnerbasis: true (work in the quotient ring by computing a Gröbner basis), false
The polynomial matrix optimization problem aims to minimize the smallest eigenvalue of a polynomial matrix subject to a tuple of polynomial matrix inequalities (PMIs), which can be formulated as
$$\inf_{\mathbf{x}\in\mathbf{K}}\ \lambda_{\min}(F(\mathbf{x})),$$
where
$$\mathbf{K}=\{\mathbf{x}\in\mathbb{R}^n \mid G_j(\mathbf{x})\succeq0,\ j=1,\ldots,m\}$$
for some symmetric polynomial matrices $F$ and $G_j$, and where $\lambda_{\min}(F(\mathbf{x}))$ denotes the smallest eigenvalue of $F$ at the point $\mathbf{x}$.
The following is a simple example.
```julia
using DynamicPolynomials
using TSSOS
@polyvar x[1:5]
F = [x[1]^4 x[1]^2 - x[2]*x[3] x[3]^2 - x[4]*x[5] x[1]*x[4] x[1]*x[5];
 x[1]^2 - x[2]*x[3] x[2]^4 x[2]^2 - x[3]*x[4] x[2]*x[4] x[2]*x[5];
 x[3]^2 - x[4]*x[5] x[2]^2 - x[3]*x[4] x[3]^4 x[4]^2 - x[1]*x[2] x[5]^2 - x[3]*x[5];
 x[1]*x[4] x[2]*x[4] x[4]^2 - x[1]*x[2] x[4]^4 x[4]^2 - x[1]*x[3];
 x[1]*x[5] x[2]*x[5] x[5]^2 - x[3]*x[5] x[4]^2 - x[1]*x[3] x[5]^4]
G = Vector{Matrix{Polynomial{true, Int}}}(undef, 2)
G[1] = [1 - x[1]^2 - x[2]^2 x[2]*x[3]; x[2]*x[3] 1 - x[3]^2]
G[2] = [1 - x[4]^2 x[4]*x[5]; x[4]*x[5] 1 - x[5]^2]
@time opt,data = tssos_first(F, G, x, 3, TS="MD") # compute the first TS step of the TSSOS hierarchy
@time opt,data = tssos_higher!(data, TS="MD") # compute higher TS steps of the TSSOS hierarchy
```
Options
- CS: "MF" by default (approximately smallest chordal extension), "NC" (no chordal extension), false (disable correlative sparsity exploitation)
- TS: "block" by default (maximal chordal extension), "MD" (approximately smallest chordal extension), false (disable term sparsity iterations)
For more examples, please check out example/pmi.jl.
- When possible, explicitly include a sphere/ball constraint (or multi-sphere/multi-ball constraints).
- When the feasible set is unbounded, try the homogenization technique introduced in Homogenization for polynomial optimization with unbounded sets.
- Scale the coefficients of the polynomial optimization problem to $[-1, 1]$.
- Scale the variables so that they take values in $[-1, 1]$ or $[0, 1]$, as sketched below.
- Try to include more (redundant) inequality constraints.
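A minimal sketch of the variable-scaling tip, using polynomial substitution from DynamicPolynomials (the variable range here is made up for illustration):

```julia
using DynamicPolynomials
@polyvar x[1:2]
f = x[1]^4 + 10*x[2]^2
# suppose x[2] is known to range over [0, 10]; substituting x[2] -> 10*x[2]
# yields an equivalent problem whose second variable lives in [0, 1]
fs = subs(f, x[2] => 10*x[2])
```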
Visit NCTSSOS for noncommutative polynomial optimization.
Visit SparseDynamicSystem for sparsity-exploiting analysis of dynamical systems.
Visit SparseJSR for computing joint spectral radii.
Given a measure $\mu$ with moment matrix $M_d(\mu)$, the Christoffel polynomial of order $d$ associated with $\mu$ is
$$q_d^{\mu}(\mathbf{x}) = v_d(\mathbf{x})^{\top} M_d(\mu)^{-1} v_d(\mathbf{x}),$$
where $v_d(\mathbf{x})$ denotes the vector of monomials of degree at most $d$. Then, for any $d$, the sublevel sets of $q_d^{\mu}$ provide approximations of the support of $\mu$. In practice, when solving a particular POP instance via moment-SOS relaxations, we have access to a sequence of pseudo-moments; the Christoffel polynomials built from these pseudo-moments can be used to confine the optimization to a region containing the minimizers and thereby strengthen the bounds.
For more information on how to construct Christoffel polynomials and how to use them to strengthen the bounds from (correlatively-sparse) moment-SOS relaxations, visit CDK_Bound_Strengthening or docs/src/technique.md.
[1] TSSOS: A Moment-SOS hierarchy that exploits term sparsity
[2] Chordal-TSSOS: a moment-SOS hierarchy that exploits term sparsity with chordal extension
[3] CS-TSSOS: Correlative and term sparsity for large-scale polynomial optimization
[4] TSSOS: a Julia library to exploit sparsity for large-scale polynomial optimization
[5] Sparse polynomial optimization: theory and practice
[6] Strengthening Lasserre's Hierarchy in Real and Complex Polynomial Optimization
[7] Exploiting Sign Symmetries in Minimizing Sums of Rational Functions
[8] Leveraging Christoffel-Darboux Kernels to Strengthen Moment-SOS Relaxations