@davidagold
Last active August 29, 2015 14:18
Perform the simplex method on a linear programming problem in canonical form
function simplex() # (A::Array{Float64, 2}, b::Array{Float64, 1}, c::Array{Float64, 1})
    # Initialize the constraint matrix A, constraint vector b, objective function c
    # In the future these will be passed as arguments to simplex()
    A = Float64[4 2 5 15 4; 1 4 5 1 2; 1 0 3 1 11]
    b = Float64[17, 9, 16]
    c = Float64[15, 4, 10, 11, 27]
    println("We first find a basic feasible solution to the given problem (or show that none exists).")
    # Perform Phase I on the given problem
    x, possible = phaseI(A, b)
    possible == false && return "There are no basic feasible solutions."
    println("Ax = ", A*x)
    println()
    println("We now perform Phase II on the basic feasible solution x.")
    println("Step0 x = ", x, " with c(x) = ", (c'x)[1])
    # Iterate Phase II on x until we find an optimal solution or show that none exists
    for i in 1:20
        x, optimal, cost = phaseII(x, A, b, c)
        if optimal == true
            println("An optimal solution is ", x, " with c(x) = ", cost)
            println("Ax = ", A*x)
            break
        elseif cost == -Inf
            println("There is no optimal solution.")
            break
        else
            println("Step", i, " x = ", x, " with c(x) = ", cost)
        end
    end
end
# Simplex method, Phase I: solve the auxiliary problem min 1'z subject to Ax + z = b
function phaseI(A::Array{Float64, 2}, b::Array{Float64, 1})
    n = length(A[1, :])
    m = length(b)
    # Augment A with an identity block and start from the artificial solution (0, b)
    Aprime = hcat(A, eye(m))
    c = vcat(zeros(n), ones(m))
    x = vcat(zeros(n), b)
    println("Step0 x = ", x)
    for i in 1:50
        x, optimal, cost = phaseII(x, Aprime, b, c)
        println("cost = ", cost)
        if optimal == true
            if cost == 0
                println("A basic feasible solution is x = ", x)
                return x[1:n], true
            else
                println("There are no basic feasible solutions.")
                return x[1:n], false
            end
        end
        println("Step", i, " x = ", x)
    end
    error("Phase I did not terminate within 50 iterations.")
end
# Simplex method, Phase II: perform one pivot step from the basic feasible solution x
function phaseII(x::Array{Float64, 1}, A::Array{Float64, 2}, b::Array{Float64, 1}, c::Array{Float64, 1})
    case1 = true
    subcase1 = true
    n = length(x)
    m = length(b)
    s = 0
    xnew = zeros(x)
    # Set the basis set of indices
    basis::Array{Int, 1} = find(x)
    # Set the m x m basis matrix AB
    AB::Array{Float64, 2} = [ A[i, j] for i in 1:m, j in basis ]
    # Set the m-dimensional basis vector xB
    xB::Array{Float64, 1} = [ x[j] for j in basis ]
    # Set the m-dimensional cost vector cB
    cB::Array{Float64, 1} = [ c[j] for j in basis ]
    # Find a possible feasible solution y to the dual problem
    y::Array{Float64, 1} = transpose(inv(AB)) * cB
    # If y^{T}A_{j} <= c_{j} for all j outside the basis, then y is feasible
    # for the dual problem and we are in case 1
    yfeas = [ (transpose(y) * A[1:m, j])[1] - c[j] for j in setdiff(1:n, basis) ]
    sindex = findfirst(x -> x > 0, yfeas)
    sindex > 0 && ((case1 = false); s = setdiff(1:n, basis)[sindex])
    ### Case 1 ###
    # If we are in case 1, then x is an optimal solution to the original LPP
    if case1 == true
        return x, true, (c'x)[1]
    else ### Case 2 ###
        # In case 2, we must check whether we are in subcase 1 or subcase 2
        # Set the m-dimensional vector t
        t = inv(AB) * A[1:m, s]
        # If t[i] <= 0 for all i, then we are in subcase 1
        findfirst(x -> x > 0, t) > 0 && (subcase1 = false)
        ### Subcase 1 ###
        # In subcase 1 the objective is unbounded below, so there is no optimal solution
        subcase1 && return x, false, -Inf
        ### Subcase 2 ###
        # In subcase 2, we find a new basic feasible solution as follows:
        # first find the largest lambda such that xnew(lambda) remains feasible
        lambda = minimum([ x[basis[i]]/t[i] for i in find(x -> x > 0, t) ])
        # Set the new basic feasible solution xnew
        xnew[s] = lambda
        for h in 1:length(basis)
            xnew[basis[h]] = x[basis[h]] - lambda*t[h]
        end
        return xnew, false, (c'xnew)[1]
    end
end
The function simplex() performs the simplex method (http://en.wikipedia.org/wiki/Simplex_algorithm) on a given linear programming problem in canonical form (or canonical problem). For now, the linear programming problem is initialized in the first few lines of the body of simplex().

A canonical problem consists of an m by n matrix A, an m-dimensional vector b, and a linear objective function (or cost function) c: R^{n} -> R, which is often represented as an n-dimensional vector. An n-dimensional vector u with nonnegative entries that satisfies the matrix equation Au = b is called a feasible solution, and a feasible solution that also minimizes the objective function c(x) over all feasible solutions is called an optimal solution. A feasible (optimal) solution u is said to be a basic feasible (optimal) solution if the column vectors of A corresponding to the nonzero entries of u are linearly independent.
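These definitions are easy to check numerically. The sketch below is written for current Julia (unlike the 0.3-era gist code) and uses the small problem hard-coded in simplex(); the helper names and the candidate vector u are illustrative assumptions, not part of the gist:

```julia
using LinearAlgebra

A = Float64[4 2 5 15 4; 1 4 5 1 2; 1 0 3 1 11]
b = Float64[17, 9, 16]

# A feasible solution must satisfy Au = b with u >= 0 (up to a tolerance).
is_feasible(A, b, u; tol=1e-8) = all(u .>= -tol) && maximum(abs.(A*u - b)) <= tol

# It is *basic* if the columns of A at u's nonzero entries are linearly independent.
function is_basic(A, u; tol=1e-8)
    cols = findall(v -> abs(v) > tol, u)
    rank(A[:, cols]) == length(cols)
end

u = [0.0, 0.0, 1.0, 0.0, 1.0]   # hypothetical candidate; it is not feasible here
is_feasible(A, b, u), is_basic(A, u)   # (false, true): columns 3 and 5 are independent
```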

The simplex method has two phases: in Phase I, we find a basic feasible solution to the given canonical problem (or we show that no such solution exists). In Phase II, we take a basic feasible solution to the given canonical problem and either (i) show the input solution is optimal, (ii) show there exists no optimal solution to the given canonical problem, or (iii) produce a new basic feasible solution with cost strictly less than that of the input solution. In the third case, we take the output basic feasible solution and run it through Phase II again. We iterate until we arrive at an optimal solution or show that no optimal solution exists.

The overall structure of the algorithm is interesting. Phase I actually calls Phase II on a modified version of the input linear programming problem, and the first iteration of Phase II takes as input the basic feasible solution found in Phase I.

Note 1: It can be proved that there are only a finite number of basic feasible solutions, so the algorithm is guaranteed to stop... eventually.

Note 2: Technically we must impose a few additional constraints on the input canonical problem for the simplex method to work, but they are generally satisfied in practice. In particular, we require: (i) that m < n; (ii) that A has rank m, which is to say that all of A's row vectors are linearly independent; and (iii) that the vector b cannot be written as a linear combination of fewer than m columns of A.
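The first two conditions can be verified up front. Note that rank(A) = m already forces m <= n, which is why condition (i) asks for strictly more variables than constraints. A minimal check in current Julia, using the problem hard-coded in simplex():

```julia
using LinearAlgebra

A = Float64[4 2 5 15 4; 1 4 5 1 2; 1 0 3 1 11]
m, n = size(A)

# Condition (i): strictly fewer constraints than variables.
m < n          # true: m = 3, n = 5

# Condition (ii): A has full row rank, i.e. its rows are linearly independent.
rank(A) == m   # true for this A
```

Condition (iii), nondegeneracy of b, is harder to test directly; it would require checking every subset of fewer than m columns.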


%%% Project ideas for Recurse Center pairing interview %%%

  1. One can show that any basic feasible solution to a canonical problem that satisfies the three conditions imposed in Note 2 will necessarily have m nonzero entries. Thus, Phase II should both take as input and return as output a vector with precisely m nonzero entries. However, phaseII() will occasionally return a vector that has fewer than m nonzero entries due to floating point approximations. This becomes problematic when iterating phaseII() on its own output vector since the body of phaseII() involves matrix computations that throw an error if the input vector has fewer than m nonzero entries. (In particular, on line 89 taking the inverse of AB will fail due to the specification of AB when the input vector x has fewer than m nonzero entries.) I'd like to implement a robust method for identifying and mitigating such floating point errors so that phaseII() can be guaranteed to return a vector with precisely m nonzero entries.
  2. Create an 'LPP' ('Linear Programming Problem') type and refactor simplex(), phaseI() and phaseII() for clarity.
  3. Optimize code for performance/make more idiomatic. What I have in mind right now is mostly figuring out which of my variables I can declare as constants...
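Regarding the first idea, one common mitigation is to snap near-zero entries to exactly zero after each pivot, so that find(x) recovers precisely the basic indices on the next iteration. A hypothetical helper along those lines (the name clean! and the tolerance are assumptions, not part of the gist):

```julia
# Zero out entries whose magnitude falls below a tolerance, in place.
# This guards against floating-point residue like 1e-16 surviving a pivot
# and being counted as a nonzero (hence basic) entry.
function clean!(x::Vector{Float64}; tol=1e-10)
    for i in eachindex(x)
        if abs(x[i]) < tol
            x[i] = 0.0
        end
    end
    x
end

clean!([1.0, 1e-14, -2.0])   # → [1.0, 0.0, -2.0]
```

phaseII() could call this on xnew before returning. It does not by itself guarantee exactly m nonzero entries, so a rank check on the resulting basis columns would still be prudent.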
