@danielkelshaw
Last active September 29, 2020 10:04

PSO Introduction

Particle Swarm Optimisation (PSO) is a fairly simple population-based, gradient-free optimisation technique which can be applied to a wide variety of problems. This Gist serves as a basic introduction to the topic.

Qualitative Overview

Imagine that you and 49 friends are lost in a mountain range. Your objective is to reach the lowest point, but there are a few challenges:

  1. All of you are blindfolded and cannot see which direction to head - though you do have GPS.
  2. At any point you know how high you are through the use of a barometer.
  3. You can communicate with your friends in order to collaborate.

In order to solve this you can do the following:

  1. Take a few steps in one direction.
  2. Everyone measures their height using their barometer.
  3. If this is the best height that you have achieved - note down the position you're at.
  4. Communicate your position with your friends - work out who has been in the best position so far.
  5. Through a combination of your current heading, personal best position, and the best position from all of your friends combined - alter your trajectory.
  6. Repeat this process until you find the lowest point.

Quantitative Overview

Instead of 50 friends lost in a mountain range, think of 50 Particles on the surface of an N-dimensional function; the mountain range in the example above corresponds to a function, f(x, y).

Each Particle has an associated position and velocity. The velocity determines which direction the Particle moves in, as well as how large its steps are. At each step you measure the Particle's Fitness; this corresponds to the barometer reading in the previous example, and the lower the fitness the better.

The optimisation procedure can be summarised as follows:

  1. fitness = f(position)
  2. if (fitness < pbest_fitness); pbest_fitness = fitness, pbest_position = position
  3. if (fitness < gbest_fitness); gbest_fitness = fitness, gbest_position = position
  4. velocity = calculate_velocity(velocity, pbest_position, gbest_position, \theta)
  5. position = position + velocity
  6. Iterate until optimised.

In the process above I refer to the global best as gbest and the personal best as pbest.
Step 4 makes use of an update formula, and \theta just refers to some constants.
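The six steps above can be sketched as a complete optimisation loop in Python. This is a minimal illustration rather than a reference implementation: the function name, defaults, and search bounds are all assumptions, and the velocity update uses the standard formulation in which two fresh uniform random numbers weight the pulls towards the personal and global bests.

```python
import random

def pso_minimise(f, dim, n_particles=50, iters=200,
                 w=0.7, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    """Minimal PSO sketch following the six steps above.

    `f` maps a list of floats to a scalar fitness (lower is better).
    All parameter names and defaults are illustrative, not canonical.
    """
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest_pos = [p[:] for p in pos]
    pbest_fit = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest_pos, gbest_fit = pbest_pos[g][:], pbest_fit[g]

    for _ in range(iters):
        for i in range(n_particles):
            # Steps 1-3: evaluate fitness, update personal and global bests.
            fit = f(pos[i])
            if fit < pbest_fit[i]:
                pbest_fit[i], pbest_pos[i] = fit, pos[i][:]
            if fit < gbest_fit:
                gbest_fit, gbest_pos = fit, pos[i][:]
            # Steps 4-5: update velocity, then position, one dimension at a time.
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest_pos[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
    return gbest_pos, gbest_fit
```

For example, minimising the sphere function `f(x) = sum(x_i^2)` in two dimensions with `pso_minimise(lambda x: sum(xi * xi for xi in x), dim=2)` should return a position close to the origin.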

Velocity Update Formula

When updating the velocity there are three main contributions: inertial, cognitive, and social. In order to tweak the optimisation there are a few constants which can be used:

  1. Inertial Weight: w ~= 0.7
  2. Cognitive Weight: c1 ~= 2.0
  3. Social Weight: c2 ~= 2.0

Tying these together, and introducing two uniform random numbers r1, r2 ~ U(0, 1) drawn fresh at each update, we arrive at the formula:

velocity = w * velocity + c1 * r1 * (pbest_position - position) + c2 * r2 * (gbest_position - position)
velocity = inertial + cognitive + social

Note that the cognitive and social terms measure the displacement from the current position towards the best positions, so each Particle is pulled towards them rather than pushed away.
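For a single particle, the three contributions can be computed component-wise. This is a sketch; the function name is illustrative, and `r1`, `r2` are the stochastic weights, drawn uniformly from [0, 1) at every update so that the pull towards each best position varies from step to step.

```python
import random

def calculate_velocity(velocity, position, pbest_position, gbest_position,
                       w=0.7, c1=2.0, c2=2.0):
    """One velocity update for a single particle (all vectors as lists of floats)."""
    new_velocity = []
    for v, x, pb, gb in zip(velocity, position, pbest_position, gbest_position):
        r1, r2 = random.random(), random.random()  # stochastic weights in [0, 1)
        inertial = w * v
        cognitive = c1 * r1 * (pb - x)  # pull towards the personal best
        social = c2 * r2 * (gb - x)     # pull towards the global best
        new_velocity.append(inertial + cognitive + social)
    return new_velocity
```

When the particle already sits at both best positions, the cognitive and social terms vanish and the update reduces to pure inertia.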

Summary

Particle Swarm Optimisation is a very simple algorithm in its most basic form, yet it can still be remarkably effective. As there is no gradient requirement, PSO makes a powerful black-box optimiser: it only needs to be able to evaluate the function being optimised.

The most important thing to consider when running an optimisation procedure is the parameterisation of the problem. These parameters essentially describe the position of each Particle in the Swarm.
