-
combine metaheuristics with ILP solvers:
- usually using large neighborhood search
- metaheuristics:
- exploiting problem information
- can be deceptive
- high-quality solutions
-
LNS: large neighborhood search
- small neighborhoods: improvements are fast to find, but the reachable local optima may be of low quality
- large neighborhoods: local optima are of higher quality, but finding good neighbors is computationally hard
- ILP-LNS (sketch below):
- generate a solution S, destroy part of it, complete the partial solution with an ILP solver, apply an acceptance criterion
- LNS works well when:
- the number of variables grows only linearly (in the instance size)
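A minimal sketch of the ILP-LNS loop described above, in Python; `destroy`, `ilp_repair`, and `evaluate` are hypothetical placeholders for the problem-specific parts, not a particular library API.

```python
def ilp_lns(initial_solution, evaluate, destroy, ilp_repair,
            iterations=100, destroy_rate=0.3):
    """Hedged sketch of ILP-based large neighborhood search.

    evaluate(s)         -> objective value (minimization assumed)
    destroy(s, rate)    -> partial solution with a fraction of variables freed (assumed helper)
    ilp_repair(partial) -> best completion of the partial solution via an ILP solver (assumed helper)
    """
    best = current = initial_solution
    for _ in range(iterations):
        partial = destroy(current, destroy_rate)      # destroy: free part of the solution
        candidate = ilp_repair(partial)               # solve: ILP finds the best completion
        if evaluate(candidate) <= evaluate(current):  # accept: simple improvement criterion
            current = candidate
            if evaluate(candidate) < evaluate(best):
                best = candidate
    return best
```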
-
Focus question:
- What kind of general algorithm can we apply when the above conditions are not fulfilled?
-
CMSA: Construct Merge Solve & Adapt
- doi:10.1016/j.cor.2015.10.014
- method (sketch after this list):
- initialize: set the age of every solution component to 0
- probabilistically generate good solutions
- merge: add the components of these solutions to a sub-instance
- solve the sub-instance with an exact (ILP) solver; increment the age of every component in the sub-instance, resetting to 0 the age of components used in the solver's solution
- adapt: delete useless components, i.e., those that have reached the maximum age
- LNS search space: all solutions that contain the partial solution sp
- CMSA search space: all solutions built only from components in the sub-instance
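A compact sketch of the CMSA loop at the level of the listed steps; the construction heuristic, the exact solver, and the component extraction are assumed helpers, not part of the original paper's code.

```python
def cmsa(construct, components_of, solve_exact, evaluate,
         n_constructions=10, age_max=3, iterations=50):
    """Hedged sketch of the Construct, Merge, Solve & Adapt loop.

    construct()        -> one probabilistically generated solution (assumed helper)
    components_of(s)   -> set of solution components used by s (assumed helper)
    solve_exact(comps) -> best solution restricted to the given components, e.g. via ILP (assumed helper)
    evaluate(s)        -> objective value (maximization assumed)
    """
    ages = {}                                   # sub-instance: component -> age
    best = None
    for _ in range(iterations):
        # Construct & Merge: add components of fresh solutions to the sub-instance
        for _ in range(n_constructions):
            for c in components_of(construct()):
                ages.setdefault(c, 0)
        # Solve: exact solver restricted to the sub-instance
        incumbent = solve_exact(set(ages))
        if best is None or evaluate(incumbent) > evaluate(best):
            best = incumbent
        # Adapt: age every component, reset those used by the incumbent, drop expired ones
        used = components_of(incumbent)
        for c in list(ages):
            ages[c] = 0 if c in used else ages[c] + 1
            if ages[c] >= age_max:
                del ages[c]
    return best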
-
Extension: reducing sub-instances
- the adapt step deletes useless components, i.e., those that have reached the maximum age
- removing all of them might be too radical
- instead, rank the expired components based on LP solution costs and remove only a fraction of them (sketch below)
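A hedged sketch of the modified adapt step; the LP-based values (`lp_values`) and the exact ranking rule are assumptions, and `remove_pct=100` falls back to the plain CMSA behaviour mentioned above.

```python
def adapt_with_ranking(ages, lp_values, age_max, remove_pct=100):
    """Hedged sketch of the ranking-based adapt step.

    ages      : component -> current age (the sub-instance)
    lp_values : component -> value in an LP relaxation of the sub-instance (assumed input)
    remove_pct: percentage of expired components actually removed;
                100 reproduces the original CMSA behaviour.
    """
    # components that reached the maximum age, worst LP value first (ranking rule is an assumption)
    expired = sorted((c for c, a in ages.items() if a >= age_max),
                     key=lambda c: lp_values.get(c, 0.0))
    n_remove = round(len(expired) * remove_pct / 100.0)
    for c in expired[:n_remove]:
        del ages[c]                 # drop the worst-ranked expired components
    for c in expired[n_remove:]:
        ages[c] = 0                 # highly ranked components get another chance
    return ages
```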
-
test: multi-dim knapsack
- items are ranked by a utility value; the greedy construction heuristic follows this ranking (sketch after this list)
- determinism rate: probability of choosing the best-ranked item at each construction step
- standard MDKP problem
- using irace for parameter tuning
- tuned parameter: percentage of old (expired) components to be removed; 100 corresponds to the original CMSA algorithm
- all tuning runs selected values below 100 for this parameter, supporting the usefulness of the extension
- beats CPLEX on MDKP
- seems better suited to small solution spaces
- CMSA vs LNS on MDKP: CMSA is better than LNS on instances with small solutions (few selected items)
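A minimal sketch of a probabilistic greedy constructor with a determinism rate for the MDKP; the concrete utility definition (profit per normalized resource consumption) and the candidate-list size are assumptions, not details from the talk.

```python
import random

def construct_mdkp(profits, weights, capacities, determinism_rate=0.8, cand_size=5):
    """Hedged sketch of a probabilistic greedy constructor for the MDKP.

    profits[i]      : profit of item i
    weights[i][k]   : consumption of resource k by item i
    capacities[k]   : capacity of resource k
    determinism_rate: probability of picking the best-ranked feasible item at each step;
                      otherwise pick uniformly among the top cand_size candidates (assumption).
    """
    n, m = len(profits), len(capacities)
    used = [0.0] * m
    chosen = set()

    def utility(i):
        # profit per unit of normalized total resource consumption (assumed ranking rule)
        load = sum(weights[i][k] / capacities[k] for k in range(m))
        return profits[i] / load if load > 0 else float("inf")

    while True:
        feasible = [i for i in range(n) if i not in chosen and
                    all(used[k] + weights[i][k] <= capacities[k] for k in range(m))]
        if not feasible:
            break
        feasible.sort(key=utility, reverse=True)
        pick = feasible[0] if random.random() < determinism_rate \
            else random.choice(feasible[:cand_size])
        chosen.add(pick)
        for k in range(m):
            used[k] += weights[pick][k]
    return chosen
```

A tuner such as irace (mentioned in the list above) would then set determinism_rate and the removal percentage on training instances.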
-
Neutrality
- in single-objective optimization (SOO), s1 and s2 are neutral iff f(s1) = f(s2)
- Neutrality in evolutionary algorithms… What do we know? 2011
- not much about neutrality in MOO
- try several definitions of neighbor neutrality
- pareto dominance
- epsilon indicator
- hypervolume indicator
- neutrality = "solutions with same or equivalent fitness"
quadrants around the current solution in objective space:
| 1 | 2 |
| 4 | 3 |
-
neighborhood partitioning
- Pareto dominance (PD): quadrant 2 is a clear improvement (dominates), quadrant 4 a clear deterioration (dominated), quadrants 1 & 3 are incomparable and possibly equivalent (sketch after this list)
- epsilon indicator: split quadrants 1 and 3 in half; the forward half counts as equivalent
- or apply a delta around the current solution and take the sub-region within +delta / +/-delta inside quadrants 1 & 3
- hypervolume indicator: use hypervolume to draw boundary lines in quadrants 1 & 3; these can indicate equivalence
- similarly, apply a delta bound around the solution
- or use a normalized hypervolume transformation with a delta
- this creates a sub-partitioning of quadrants 1 & 3 into equivalent/neutral regions, as with epsilon
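A small sketch of the Pareto-dominance partition of a bi-objective neighborhood (minimization assumed); which of the two incomparable half-planes is labelled quadrant 1 vs 3 follows the table above only by assumption, and the epsilon/hypervolume refinements are not shown.

```python
def classify_neighbor(current, neighbor):
    """Hedged sketch: classify a bi-objective neighbor relative to the current
    solution, assuming both objectives are minimized.
    Quadrant 2 = dominating region (clear improvement),
    quadrant 4 = dominated region (clear deterioration),
    quadrants 1 and 3 = incomparable regions, where the epsilon- or
    hypervolume-based neutrality definitions would then be applied."""
    d1 = neighbor[0] - current[0]
    d2 = neighbor[1] - current[1]
    if d1 == 0 and d2 == 0:
        return "identical objective vector (neutral)"
    if d1 <= 0 and d2 <= 0:
        return "quadrant 2: neighbor dominates current"
    if d1 >= 0 and d2 >= 0:
        return "quadrant 4: neighbor is dominated"
    if d1 < 0:
        return "quadrant 1: incomparable (better on f1, worse on f2)"
    return "quadrant 3: incomparable (worse on f1, better on f2)"
```

For example, classify_neighbor((3.0, 5.0), (2.5, 6.0)) lands in an incomparable quadrant, which is exactly where the neutrality definitions differ.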
-
analysis on the flowshop scheduling problem (FSP) and TSP
- analyze neutral neighbors distribution
- compute the neutral ratio: number of neutral neighbors divided by the population size
- TSP generally shows more neutrality than FSP (across definitions)
- 3 objectives vs 2 objs increases neutral ratio
- using an average-value delta with 2 objectives, the epsilon definition yields more neutral neighbors
- most epsilon-neutral neighbors lie in the +/-delta region, not only in +delta
- similar trends in 3 objs
- the hypervolume definition with a delta bound has half as many neutral neighbors as without the delta bound
- many fewer than epsilon
- again, same trends in 3 objs
- varying delta
- linear decrease in neutral neighbors for the epsilon definition
- logarithmic decrease for the hypervolume definition
-
main point: several tunable definitions of neutral neighborhoods
- TSP is more neutral than FSP; this calls for further problem-specific analysis
- neutrality is important tool for understanding MOO fitness
-
Motivation
- the NKP is a sub-problem of the travelling thief problem
- NKP can be solved by (1+1)EA
- the renting rate is an important constant in the problem
-
Problem
- a route over N cities and a set of M items to be transported by a vehicle of capacity W
- maximize benefit of carrying items
- benefit = total profit - renting rate * total travel cost (sketch below)
- the renting rate R is a problem constant and can in principle grow without bound
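A minimal sketch of the benefit objective as stated (total profit minus renting rate times travel cost). The speed model, where velocity drops linearly with load as in travelling-thief-style problems, is an assumption; the notes only give the benefit formula.

```python
def benefit(route, packing, profits, weights, dist, R,
            capacity, v_max=1.0, v_min=0.1):
    """Hedged sketch of the benefit objective.

    route    : list of city indices, visited in order (returning to the start)
    packing  : packing[i] is the set of item indices picked up at route[i]
    profits, weights : per-item profit and weight
    dist[a][b]       : distance between cities a and b
    R        : renting rate
    """
    total_profit, load, time = 0.0, 0.0, 0.0
    for i, city in enumerate(route):
        for item in packing[i]:
            total_profit += profits[item]
            load += weights[item]
        if load > capacity:
            return float("-inf")                  # infeasible packing
        speed = v_max - (load / capacity) * (v_max - v_min)   # assumed speed model
        nxt = route[(i + 1) % len(route)]         # wrap around to the start city
        time += dist[city][nxt] / speed
    return total_profit - R * time
```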
-
Impact of the renting rate
- derive bounds on the renting rate that hold for all items ("general bounds")
- what about item-specific bounds?
- calculate bounds for each item e
- decide whether an item is trivial: the renting rate lies above the maximum at which carrying the item can still add profit
- lambda, the non-trivial item ratio: try to maximize it
-
using EA to maximize non-trivial item ratio
- the EA maximizing lambda can get close to 1: almost no items are trivial for the chosen R
- on harder NKP:
- two cities, one unique item whose selection gives the optimal solution
- all r other items give suboptimal reward, even when combined
- the (1+1) EA fails to select the unique item for various values of r (a minimal (1+1) EA sketch follows after this list)
- find other hard NKP problems:
- compare the MIP optimum with the EA result on instances with random profits and weights
- maximize EA failure
- the analysis of the renting rate shows that some values greatly increase difficulty
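For reference, a generic (1+1) EA over item-selection bit strings with standard bit-flip mutation at rate 1/n; the objective can be any benefit function such as the sketch above. This is the textbook scheme, not the exact setup from the talk.

```python
import random

def one_plus_one_ea(n_items, objective, max_evals=10000):
    """Hedged sketch of a (1+1) EA: flip each bit with probability 1/n,
    keep the offspring if it is at least as good.
    'objective' maps a 0/1 selection vector to a benefit value (assumed callable)."""
    parent = [random.randint(0, 1) for _ in range(n_items)]
    parent_fit = objective(parent)
    for _ in range(max_evals):
        child = [1 - b if random.random() < 1.0 / n_items else b for b in parent]
        child_fit = objective(child)
        if child_fit >= parent_fit:
            parent, parent_fit = child, child_fit
    return parent, parent_fit
```

Comparing its output against the MIP optimum, as in the list above, is what flags an instance as hard for the EA.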
-
Conclusions
- there are hard instances of NKP for (1+1)EA
- the item-specific bounds suggest general renting-rate bounds for which instances become hard
-
Motivation
- big valley hypothesis: good solutions are clustered
- recent studies: multiple funnels
- similar findings exist in continuous optimization
-
Multiple clusters in TSP
- local optima networks (LONs) of fitness landscapes have multiple components
- RQ: are there clusters in fitness landscapes, and are they related to search difficulty?
- method: apply community detection on fully evolved search spaces
-
Fundamentals
- Fitness Landscape Analysis
- construction of meta-heuristics
- triplet of
- search space
- fitness function
- neighborhood structure
- depends on the search operator
- requires distance metric d
- Kauffman NK family of landscapes
- binary search space
- local optima networks with escape edges
- a map of the local optima connected via these edges
- iterated local search (sketch after this list)
- meta-heuristic principle
- intensification by local search (hill climbing in a basin)
- diversification by perturbation
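A self-contained sketch of an NK landscape with lazily drawn contribution tables and a minimal ILS over it; the epistatic-link sampling and the perturbation strength are assumptions, and the LON extraction itself is not shown.

```python
import random

def make_nk_landscape(n, k, seed=0):
    """Hedged sketch of a Kauffman NK landscape: fitness of a bit string is the
    average of n component functions, each depending on bit i and k other
    (randomly chosen) bits, with contributions drawn uniformly at random."""
    rng = random.Random(seed)
    links = [[i] + rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [{} for _ in range(n)]          # lazily filled contribution tables
    def fitness(x):
        total = 0.0
        for i in range(n):
            key = tuple(x[j] for j in links[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / n
    return fitness

def iterated_local_search(n, fitness, restarts=50, perturb_bits=3):
    """Minimal ILS: best-improvement hill climbing until a local optimum
    (intensification), then flip a few bits to escape the basin (diversification)."""
    def hill_climb(x):
        while True:
            best_i, best_f = None, fitness(x)
            for i in range(n):
                x[i] ^= 1
                f = fitness(x)
                if f > best_f:
                    best_i, best_f = i, f
                x[i] ^= 1
            if best_i is None:
                return x
            x[best_i] ^= 1

    current = hill_climb([random.randint(0, 1) for _ in range(n)])
    for _ in range(restarts):
        candidate = current[:]
        for i in random.sample(range(n), perturb_bits):   # perturbation
            candidate[i] ^= 1
        candidate = hill_climb(candidate)                 # local search
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current, fitness(current)
```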
-
Experiment
- generate 300 instances of NK model
- for each instance:
- extract LON with escape edges
- apply community detection to LON
- determine empirical difficulty
- community detection:
- used the Markov cluster algorithm (MCL)
- DOI: 10.1007/BF00202749
-
Results
- quality of community structure
- modularity:
- number of edges falling within groups minus the expected number in an equivalent network with edges placed at random (standard Newman modularity; formula after this list)
- visual inspection of the graphs relates to the empirical success rate
- more clusters are clearly visible, with a large cluster around the global optimum
- as the number of local optima increases, the success rate decreases
- weak correlation between cluster number and success rate
- the success rate is strongly correlated with the size of the cluster containing the global optimum
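For reference, the verbal definition above corresponds to the standard (Newman) modularity; this is the textbook formula, not something spelled out in the talk:

```latex
% A_{ij}: adjacency matrix, k_i: degree of node i, m: number of edges,
% \delta(c_i,c_j) = 1 when nodes i and j are assigned to the same community
Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)
```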
-
Conclusions
- more clusters can indicate search difficulty
- the size of the global-optimum cluster is strongly correlated with the success rate
- inter-cluster connection is sparse