{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# random optimization algos implementations for cs 7641\n",
"\n",
"this notebook provides simple implementations of random restart hill climbing, simulated annealing, genetic algorithms, and mimic, on **bitstrings only**.\n",
"\n",
"Usage:\n",
"\n",
" rhc(bitstring_length, fitness_function)\n",
" sa(bitstring_length, fitness_function)\n",
" ga(bitstring_length, fitness_function)\n",
" mimic(bitstring_length, fitness_function)\n",
" \n",
"Certain hyperparameters for each algorithm are also available. \n",
"\n",
"Requires tensorflow for MIMIC, but the network is simple enough that it should run on CPU just fine, and still way faster than the very slow alternative (https://github.com/mjs2600/mimicry). Please note that the method of modeling the probability distribution is very different from the one presented in the lectures / mimic paper (IMO, this approach is much better, more expandable, and faster).\n",
"\n",
"All algorithms use a \"patience\" parameter to determine when to stop. Since they don't know if the current maximum is the global maximum, and give up if they don't find a better result within patience tries. "
]
},
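{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, `fitness_function` can be any callable that maps a 0/1 sequence to a number to be maximized; a hypothetical `leading_ones` objective might look like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Hypothetical example objective: counts the number of leading 1s in a bitstring.\n",
"# Any callable like this (including plain `sum`) can be passed as `fitness_function`.\n",
"def leading_ones(bits):\n",
"    count = 0\n",
"    for b in bits:\n",
"        if b != 1:\n",
"            break\n",
"        count += 1\n",
"    return count"
]
},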
{
"cell_type": "code",
"execution_count": 28,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import numpy as np\n",
"import random, numpy\n",
"from deap import base, creator, tools, algorithms\n",
"import tensorflow as tf, numpy as np"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def rand_neighbor(bitstring):\n",
" idx = random.randint(0, len(bitstring)-1)\n",
" bitstring = bitstring.copy()\n",
" bitstring[idx] = 1 - bitstring[idx]\n",
" return bitstring"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# implement hc"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def hill_climb(bitstring, evalfn, patience=50):\n",
" \"\"\"Hill climbing applied to `bitstring` using fitness function `evalfn`. \n",
" Gives up trying to climb after `patience` random neighbors fail to produce\n",
" a better maximum.\"\"\"\n",
" no_improvement = 0\n",
" current_fitness = evalfn(bitstring)\n",
" while True:\n",
" candidate = rand_neighbor(bitstring)\n",
" fitness = evalfn(candidate)\n",
" if fitness > current_fitness:\n",
" bitstring = candidate\n",
" current_fitness = fitness\n",
" no_improvement = 0\n",
" else:\n",
" no_improvement += 1\n",
" \n",
" if no_improvement >= patience:\n",
" break\n",
" return bitstring, current_fitness\n",
"\n",
"def rhc(stringlen, evalfn, patience=20):\n",
" \"\"\"Random restart HC applied to bitstrings of length `stringlen` with \n",
" fitness function `evalfn`. Restarts until `patience` consecutive restarts\n",
" fail to produce a better maximum.\"\"\"\n",
" no_improvement = 0\n",
" best_bitstring = np.random.randint(0, 2, [stringlen])\n",
" best_fitness = evalfn(best_bitstring)\n",
" while True:\n",
" bitstring = np.random.randint(0, 2, [stringlen])\n",
" candidate, fitness = hill_climb(bitstring, evalfn)\n",
" if fitness > best_fitness:\n",
" best_bitstring = candidate\n",
" best_fitness = fitness\n",
" no_improvement = 0\n",
" else:\n",
" no_improvement += 1\n",
" \n",
" if no_improvement >= patience:\n",
" break\n",
" return best_bitstring, best_fitness "
]
},
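{
"cell_type": "markdown",
"metadata": {},
"source": [
"`hill_climb` can also be called directly for a single climb from a given start; a hypothetical call (a larger `patience` tries more neighbors before giving up):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hypothetical direct call: one hill climb from a random 20-bit start,\n",
"# trying up to 200 non-improving neighbors before stopping.\n",
"hill_climb(np.random.randint(0, 2, [20]), sum, patience=200)"
]
},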
{
"cell_type": "code",
"execution_count": 33,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"(array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 20)"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rhc(20, sum)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# implement sa"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def sa(stringlen, evalfn, init_temp=1., decay=0.99, patience=1000):\n",
" no_improvement = 0\n",
" best_bitstring = np.random.randint(0, 2, [stringlen])\n",
" best_fitness = evalfn(best_bitstring)\n",
" \n",
" bitstring = best_bitstring.copy()\n",
" current_fitness = best_fitness\n",
" temp = init_temp\n",
" while True:\n",
" candidate = rand_neighbor(bitstring)\n",
" fitness = evalfn(candidate)\n",
" \n",
" if fitness > current_fitness or random.random() < np.exp((fitness - current_fitness) / temp):\n",
" bitstring = candidate\n",
" current_fitness = fitness\n",
" if fitness > best_fitness:\n",
" best_fitness = fitness\n",
" best_bitstring = bitstring\n",
" no_improvement = -1 \n",
" \n",
" temp *= decay\n",
" no_improvement += 1 \n",
" if no_improvement >= patience:\n",
" break\n",
" \n",
" return best_bitstring, best_fitness"
]
},
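{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the acceptance rule above is the standard Metropolis criterion with a geometric cooling schedule: an improvement is always accepted, while a worse candidate is accepted with probability\n",
"\n",
"$$P(\\text{accept}) = \\exp\\left(\\frac{f(\\text{candidate}) - f(\\text{current})}{T}\\right), \\qquad T \\leftarrow \\text{decay} \\cdot T.$$"
]
},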
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"(array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 20)"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sa(20, sum)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# implement ga"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"creator.create(\"FitnessMax\", base.Fitness, weights=(1.0,))\n",
"creator.create(\"Individual\", list, fitness=creator.FitnessMax)"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def ga(stringlen, evalfn, pop=50, cxpb=0.5, mutpb=0.2, patience=10):\n",
" toolbox = base.Toolbox()\n",
" toolbox.register(\"attr_bool\", random.randint, 0, 1)\n",
" toolbox.register(\"individual\", tools.initRepeat, creator.Individual, toolbox.attr_bool, n=stringlen)\n",
" toolbox.register(\"population\", tools.initRepeat, list, toolbox.individual)\n",
" \n",
" def evalReturningTuple(x):\n",
" return (evalfn(x),)\n",
" \n",
" toolbox.register(\"evaluate\", evalReturningTuple)\n",
" toolbox.register(\"mate\", tools.cxTwoPoint)\n",
" toolbox.register(\"mutate\", tools.mutFlipBit, indpb=0.10)\n",
" toolbox.register(\"select\", tools.selTournament, tournsize=3)\n",
" \n",
" def run_5_gens(pop, hof):\n",
" if pop is None:\n",
" pop = toolbox.population(n=50)\n",
" if hof is None:\n",
" hof = tools.HallOfFame(1)\n",
" stats = tools.Statistics(lambda ind: ind.fitness.values)\n",
" stats.register(\"avg\", numpy.mean)\n",
" stats.register(\"min\", numpy.min)\n",
" stats.register(\"max\", numpy.max)\n",
"\n",
" pop, logbook = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=10, stats=stats, halloffame=hof, verbose=False)\n",
"\n",
" return pop, hof\n",
" \n",
" no_improvement = 0\n",
" best_fitness = float('-inf')\n",
" pop, hof = None, None\n",
" while True:\n",
" pop, hof = run_5_gens(pop, hof)\n",
" fitness = hof[0].fitness.values[0]\n",
" if fitness > best_fitness:\n",
" best_fitness = fitness\n",
" no_improvement = 0\n",
" else:\n",
" no_improvement += 1\n",
" \n",
" if no_improvement >= patience:\n",
" break\n",
" return hof[0], hof[0].fitness.values[0]"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 20.0)"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ga(20, sum)"
]
},
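{
"cell_type": "markdown",
"metadata": {},
"source": [
"The GA's hyperparameters can be overridden the same way; a hypothetical call using the `leading_ones` objective from earlier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hypothetical call: larger population, different crossover/mutation rates, more patience.\n",
"ga(40, leading_ones, pop=100, cxpb=0.6, mutpb=0.3, patience=20)"
]
},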
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# implement mimic"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def mimic(stringlen, evalfn, latentlen = 100, num_samples = 100, samples_to_keep = 50, patience=20):\n",
" batchsize = tf.placeholder(tf.int32)\n",
" r = tf.random_uniform([batchsize, latentlen])\n",
" logits = tf.layers.dense(r, stringlen)\n",
" labels = tf.placeholder(tf.float32, shape=[None, stringlen])\n",
" loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)\n",
" generated_samples = tf.floor(tf.nn.sigmoid(logits) + tf.random_uniform([batchsize, stringlen]))\n",
" train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)\n",
" sess = tf.Session()\n",
" sess.run(tf.global_variables_initializer())\n",
" \n",
" best_fitness = float('-inf')\n",
" bitstring = None\n",
" no_improvement = 0\n",
" while True: \n",
" samples = sess.run(generated_samples, feed_dict={batchsize: num_samples})\n",
" fitnesses = np.array(list(map(evalfn, samples)))\n",
" sorted_fitnesses = np.argsort(fitnesses)\n",
" kept_samples = samples[sorted_fitnesses][-samples_to_keep:]\n",
" fitness = evalfn(kept_samples[-1])\n",
" \n",
" if fitness > best_fitness:\n",
" bitstring = kept_samples[-1]\n",
" best_fitness = fitness\n",
" no_improvement = -1 \n",
" \n",
" no_improvement += 1 \n",
" \n",
" if no_improvement >= patience:\n",
" break\n",
" \n",
" sess.run(train_step, feed_dict={batchsize: samples_to_keep, labels: kept_samples})\n",
" \n",
" sess.close()\n",
" return bitstring, best_fitness"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"(array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
" 1., 1., 1., 1., 1., 1., 1.], dtype=float32), 20.0)"
]
},
"execution_count": 43,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mimic(20, sum)"
]
},
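{
"cell_type": "markdown",
"metadata": {},
"source": [
"MIMIC's sampling hyperparameters can be adjusted as well; a hypothetical call using the `leading_ones` objective from earlier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hypothetical call: smaller latent input, more samples per round, keep the top 100.\n",
"mimic(40, leading_ones, latentlen=50, num_samples=200, samples_to_keep=100)"
]
},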
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
}
},
"nbformat": 4,
"nbformat_minor": 0
}