Thursday, 2 April 2020


Particle Swarm Optimization Algorithm - Explained


The PSO algorithm is a nature-inspired meta-heuristic used to find optimal solutions in large search spaces where brute-force and linear searches are not viable in terms of time and resources. What makes PSO stand out from algorithms such as the Genetic Algorithm and Ant Colony Optimization is that PSO excels in continuous spaces (real numbers), which is useful in applications such as finding neural network weights, solving equations, and anything else involving real-valued numbers.

PSO also has better resilience against getting stuck in local optima (solutions that are not globally best). PSO actually started as a simulation model for the flocking behavior of birds, until its creators realized it showed potential for optimization. That is why a particle-based vocabulary is used to describe the algorithm.

The Algorithm - Digitized

Initialize Population

A population, otherwise known as the swarm of particles, is created and placed randomly within the search space. The size of the population ranges anywhere from 20 to 150 particles, depending on the size of the search space and the resources required by the evaluation function. The population keeps track of the best solution found while the algorithm runs, also called the swarm's best position.

Coder's Insight

The population holds all the particles, keeps track of the best solution found so far, and passes that information to the particles on every update iteration.

Evaluate Particles

The Evaluation Function, also known as the Objective Function, takes the particle's position as input and outputs a fitness score used to compare particles to each other. It can be a distance metric, a reward system, a penalty count, or even an ordered list if it must be.
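As a sketch, an evaluation function for solving a small system of equations might look like this. The equations and the scoring scheme are invented for illustration; the only real requirement is that a better position gets a higher score, matching the "bigger is better" convention used in the rest of this article:

```javascript
// Hypothetical objective: find x and y that solve x + 2y = 10 and 3x - y = 1.
// The further the position is from satisfying both equations, the larger the
// error, so we invert it: a score of 1 means a perfect solution.
function evaluationFunction(position) {
  const [x, y] = position;
  const error = Math.abs(x + 2 * y - 10) + Math.abs(3 * x - y - 1);
  return 1 / (1 + error);
}
```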

Update Particle Positions

Particles adjust their velocity (speed and direction) according to the swarm's (the algorithm's) best found position and the particle's own best found position, with random jump factors that allow over-jumping. This aids exploration of the search space and prevents particles from getting stuck in local optima. Here's the famed equation:

Pvel = w*Pvel + C1*R1*(Pbest - Ppos) + C2*R2*(Sbest - Ppos)
Ppos = Ppos + Pvel

w - Inertia (velocity decay) factor
Pvel - Particle velocity vector
R1, R2 - Random numbers from 0 to 1
C1, C2 - Maximum jump constants
Pbest - Best position vector found by the particle
Ppos - Particle's current position vector
Sbest - Best position vector found by the swarm during the algorithm

(Pbest - Ppos) gives the distance and direction toward the particle's best known past position. C1 determines the maximum jump factor in that direction; a value of 2 would mean the particle can jump twice the vector length. R1 is a random number from 0 to 1 that determines how far the particle jumps along that vector. C1 usually defaults to between 1.5 and 2 for search-space exploration reasons.

(Sbest - Ppos) gives the distance and direction toward the swarm's (the algorithm's) best position. C2 determines the maximum jump factor and is normally the same as C1. R2 is a random number from 0 to 1 that determines how far the particle jumps along that vector; R2 is drawn independently of R1.

w*Pvel keeps the particle heading in the same direction as before, making it more resilient to falling into local optima. w is the inertia factor that decays the particle's heading, allowing the velocity to be influenced more easily. w is usually a number from 0 to 1, where 1 means no decay; a common default is 0.72.
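To make the arithmetic concrete, here is one velocity update in a single dimension, with the random draws pinned to fixed values so the numbers are easy to follow (all values are illustrative):

```javascript
const w = 0.72, C1 = 2, C2 = 2;
const R1 = 0.5, R2 = 0.25;        // normally these would be Math.random()
const Pvel = 1.0, Ppos = 3.0;     // particle heading and current position
const Pbest = 5.0, Sbest = 9.0;   // personal best and swarm best positions

// Pvel = w*Pvel + C1*R1*(Pbest - Ppos) + C2*R2*(Sbest - Ppos)
const newVel = w * Pvel + C1 * R1 * (Pbest - Ppos) + C2 * R2 * (Sbest - Ppos);
// = 0.72*1.0 + 2*0.5*(5 - 3) + 2*0.25*(9 - 3) = 0.72 + 2 + 3 = 5.72
const newPos = Ppos + newVel;     // = 3.0 + 5.72 = 8.72
```

Note how the inertia term (0.72) is tiny next to the two attraction terms here; as the particle closes in on both best positions, the attraction terms shrink and inertia dominates.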

Coder's Insight

Particles have position and velocity vectors (arrays), and each particle keeps track of the best position it has been in before. Here's the equation written for coders:

P.vel[i] = w*P.vel[i] + rand()*C1*(P.best[i] - P.pos[i]) + rand()*C2*(S.best[i] - P.pos[i])
P.pos[i] = P.pos[i] + P.vel[i]


Terminate

The algorithm stops when the swarm's best position gains a high enough score, or when it ceases to progress satisfactorily.
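A coder might sketch that stopping rule as a pair of checks, one for a good-enough score and one for stagnation (the function name and thresholds here are illustrative, not part of the canonical algorithm):

```javascript
// Stop when the target score is reached, or when the swarm has gone
// too many iterations without improving its best score ("patience").
function shouldStop(bestScore, targetScore, iterationsSinceImprovement, patience) {
  return bestScore >= targetScore || iterationsSinceImprovement >= patience;
}
```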


Hope this article improved your understanding of PSO. This article is a work in progress and comments are welcome. Thank you for reading.

Quick Code! (Alpha)

class Particle {
  constructor(searchSpaceSize, searchSpaceMaximum) {
    // position and velocity vectors sized according to search space
    this.position = Array(searchSpaceSize).fill(0);
    this.velocity = Array(searchSpaceSize).fill(0);

    // Search space bound (can be an array of bounds for each element)
    this.searchSpaceMaximum = searchSpaceMaximum;

    // memory of the best position found so far
    this.bestPosition = Array(searchSpaceSize).fill(0);

    // score/fitness values
    this.score = 0;
    this.bestScore = 0;
  }

  placeRandomlyInSearchSpace() {
    for (let i = 0; i < this.position.length; i++) {
      this.position[i] = Math.random() * this.searchSpaceMaximum;
    }
  }

  updatePosition(swarmBestPosition) {
    // Decay and maximum jump constants
    const w = 0.72;
    const C1 = 2;
    const C2 = 2;

    for (let i = 0; i < this.position.length; i++) {
      // Generate random numbers
      const R1 = Math.random();
      const R2 = Math.random();

      // Update velocity
      this.velocity[i] =
        w * this.velocity[i] +
        R1 * C1 * (this.bestPosition[i] - this.position[i]) +
        R2 * C2 * (swarmBestPosition[i] - this.position[i]);

      // Update position
      this.position[i] = this.position[i] + this.velocity[i];
    }
  }

  evaluatePosition(evaluationFunction) {
    // Evaluate the current position and get its score
    this.score = evaluationFunction(this.position);

    // Check if it's the best position found so far by this particle
    if (this.score > this.bestScore) {
      this.bestScore = this.score;
      this.bestPosition = [...this.position];
    }

    return this.score;
  }
}

class Swarm {
  constructor(numberOfParticles, searchSpaceSize, searchSpaceMaximum, evalFunc) {
    // Evaluation Function as a callback
    this.evaluationFunction = evalFunc;

    this.particles = Array(numberOfParticles);

    // Best position found in the history of the object
    this.bestPosition = [];
    this.bestScore = 0;

    // Some params defining the search space
    this.searchSpaceSize = searchSpaceSize;
    this.searchSpaceMaximum = searchSpaceMaximum;
  }

  // Creates Particles and passes search space info to them
  initializeParticles() {
    for (let i = 0; i < this.particles.length; i++) {
      this.particles[i] = new Particle(
        this.searchSpaceSize,
        this.searchSpaceMaximum
      );
      this.particles[i].placeRandomlyInSearchSpace();
    }
  }

  run(maxIterations, targetScore) {
    for (let iteration = 0; iteration < maxIterations; iteration++) {
      for (let i = 0; i < this.particles.length; i++) {
        const score = this.particles[i].evaluatePosition(this.evaluationFunction);

        // Check if Particle Score is better than ever found
        // If so, save it
        if (score > this.bestScore) {
          this.bestPosition = [...this.particles[i].position];
          this.bestScore = score;
        }
      }

      // Stop once the swarm's best score is high enough
      if (this.bestScore >= targetScore) break;

      for (let i = 0; i < this.particles.length; i++) {
        this.particles[i].updatePosition(this.bestPosition);
      }
    }

    return this.bestPosition;
  }
}

Tuesday, 17 February 2015


The Genetic Algorithm - Explained

Paradigm of Evolution

Every organism contains an array of traits, called DNA. When two parents mate they produce a child containing a mixture of their DNA. Whether those traits pass into future generations depends on how well they work together to help the child survive long enough to reproduce. Rarely, a random trait enters a child's DNA that was not inherited from the parents; we call this mutation. If the mutation is beneficial, the organism will survive long enough to carry the mutation over to future generations. So the cycle continues: after many generations, the population keeps being optimized by the "mixing and matching" and "trial and error" of DNA.


Genetic Algorithms are used to find optimal solutions by means of evolution-inspired search and optimization. They are generally used in problems where linear/brute-force searches are not viable in terms of time, such as the Travelling Salesman Problem, timetable scheduling, finding neural network weights, Sudoku, trees (the data structure), etc. The first requirement is an encoding scheme suitable for representing individuals; the second is an evaluation function representing the fitness of an individual.
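Those two requirements can be sketched on a toy problem. Here individuals are encoded as bit arrays and the fitness function simply counts the ones ("OneMax"), so evolution should drive the population toward the all-ones string. The operators and parameters are illustrative, not a canonical implementation:

```javascript
// Encoding: an individual is an array of 0/1 bits
function randomIndividual(length) {
  return Array.from({ length }, () => (Math.random() < 0.5 ? 0 : 1));
}

// Evaluation function: count the ones (higher is fitter)
function fitness(individual) {
  return individual.reduce((sum, bit) => sum + bit, 0);
}

// Single-point crossover: splice two parents at a random cut
function crossover(a, b) {
  const cut = 1 + Math.floor(Math.random() * (a.length - 1));
  return a.slice(0, cut).concat(b.slice(cut));
}

// Mutation: flip each bit with a small probability
function mutate(individual, rate) {
  return individual.map(bit => (Math.random() < rate ? 1 - bit : bit));
}

function evolve(popSize, bits, generations) {
  let population = Array.from({ length: popSize }, () => randomIndividual(bits));
  for (let g = 0; g < generations; g++) {
    // Keep the fitter half as parents, breed children to refill the population
    population.sort((a, b) => fitness(b) - fitness(a));
    const parents = population.slice(0, popSize / 2);
    const children = [];
    while (children.length < popSize - parents.length) {
      const p1 = parents[Math.floor(Math.random() * parents.length)];
      const p2 = parents[Math.floor(Math.random() * parents.length)];
      children.push(mutate(crossover(p1, p2), 0.01));
    }
    population = parents.concat(children);
  }
  population.sort((a, b) => fitness(b) - fitness(a));
  return population[0];
}
```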


Thursday, 8 January 2015


Neural Network Illustrated – Step by Step

I1 and I2 are the inputs, scaled to [-1, 1] or [0, 1] depending on the activation function used.
f() = activation function: Tanh(), Sigmoid() or any differentiable function.
W = the current neuron's input weights, initialized randomly between [-1, 1].
Wb = bias weight, connected to nothing, used as a threshold; initialized the same as W.
N = the output of the current neuron.
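The definitions above boil down to one line of arithmetic per neuron: a weighted sum of the inputs plus the bias weight, squashed by the activation function. A minimal sketch, using Tanh() as f() (the values in the test are illustrative):

```javascript
// N = f(Wb + I1*W1 + I2*W2 + ...), with f() = tanh here
function neuronOutput(inputs, weights, biasWeight) {
  let sum = biasWeight; // Wb acts as the threshold
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return Math.tanh(sum);
}
```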

Published January 08, 2015

Markov Chains - Explained

A Markov chain is a probabilistic process that relies on the current state to predict the next state. For Markov chains to be effective, the current state has to depend on the previous state in some way. For instance, from experience we know that if it looks cloudy outside, the next state we expect is rain. We can also say that when the rain starts to subside into cloudiness, the next state will most likely be sunny. Not every process has the Markov property; the lottery, for example, does not: this week's winning numbers have no dependence on the previous week's winning numbers.

Published January 08, 2015

Bloom Filters - Explained

The Bloom filter is a space-efficient, probabilistic data structure designed to test the membership of elements in a set. The trade-off for the space efficiency is that it may return false positives, but it always returns definite negatives: a Bloom filter can accurately test an element for non-membership of a set, but can only probabilistically test for membership. Bloom filters find application where testing for non-membership saves resources, such as avoiding calls to a web server or checking a proxy cache. Google has used Bloom filters in the Chrome browser as a preliminary check for malicious URLs.
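A toy sketch of the idea: adding an element sets k bits chosen by k hash functions, and a lookup answers "definitely not in the set" if any of those bits is clear. The hash scheme below is a simple illustrative one (a salted polynomial string hash), not production quality:

```javascript
class BloomFilter {
  constructor(size, hashCount) {
    this.size = size;
    this.hashCount = hashCount;
    this.bits = new Array(size).fill(false);
  }

  // Simple string hash, salted by `seed` to simulate independent hash functions
  hash(item, seed) {
    let h = seed;
    for (let i = 0; i < item.length; i++) {
      h = (h * 31 + item.charCodeAt(i)) >>> 0;
    }
    return h % this.size;
  }

  add(item) {
    for (let seed = 0; seed < this.hashCount; seed++) {
      this.bits[this.hash(item, seed)] = true;
    }
  }

  contains(item) {
    for (let seed = 0; seed < this.hashCount; seed++) {
      if (!this.bits[this.hash(item, seed)]) return false; // definite negative
    }
    return true; // probably a member (may be a false positive)
  }
}
```

Note the asymmetry: `contains` returning false is a guarantee, while returning true is only a probabilistic claim, which is exactly the trade-off described above.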
