Introduction to Biologically Inspired Computation

This assignment was conducted in October 2017 in the Biologically Inspired Computation class at Heriot-Watt University and received the grade of 86%.

It mainly consisted of studying different parameters of a Multi-Layer Perceptron (MLP), one of the most basic forms of Artificial Neural Network (ANN), and of developing a small Evolutionary Algorithm (EA) to solve a problem: in this case, the Travelling Salesman Problem (TSP).

The code is hosted here.

Solving the TSP with an Evolutionary Algorithm#

Introduction to Evolutionary Algorithms by solving a Travelling Salesman Problem.

Definition:

The TSP can be defined as follows: given a collection of cities and the cost of travelling between each pair of them, the goal is to find the shortest route that visits each city exactly once and returns to the starting point.
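The cost of a candidate route is what the EA tries to minimise (in this project it lives in TSP_Cost.py). A minimal sketch of the idea, with illustrative names rather than the repository's actual API:

```python
import math

def tour_length(cities, tour):
    """Total length of a closed tour: the sum of distances between
    consecutive cities, plus the return leg to the starting city."""
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = cities[tour[i]]
        x2, y2 = cities[tour[(i + 1) % len(tour)]]  # wraps back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# A unit square: the optimal tour visits the corners in order, length 4.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(tour_length(cities, [0, 1, 2, 3]))  # 4.0
```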

Structure:

➜  TSP git:(master) tree
.
├── bioTSP
│   ├── ChainedList.py
│   ├── Individual.py
│   ├── Mutation.py
│   ├── Population.py
│   ├── Selection.py
│   ├── TSP_Cost.py
│   ├── Update.py
│   └── __init__.py
├── map
│   ├── ei1001.tsp
│   ├── ei15.tsp
│   └── ei8246.tsp
└── results
    ├── ei1001_tsp
    │   └── ...
    ├── ei15_tsp
    │   └── ...
    ├── ei8246_tsp
    │   └── ...
    └── hillclimbing
        └── ...

The .tsp files in the map folder are different versions of the same file (ei8246.tsp). The name of each file indicates the number of points the algorithm has to visit to solve the problem. I created smaller versions of this file to run faster experiments with the algorithm and to check the influence of the maximum number of iterations without any update.

How to run#

Go to the TSP/ folder and run the following command in a shell:

$ python3 main.py [file to use] [method] [number of generations]

The default arguments are ./map/ei8246.tsp hillclimbing 5, which will run 5 iterations of the hill-climbing algorithm on the file ei8246.tsp. [method] also accepts main as an argument to run tests using a tournament selection. More details are given in the PDF report.

In the main file, the arguments sizeOfPop, sizeOfSelection, maxNbIter and MaxNbIterWithoutUpdate are the main parameters of the EA to tune. The hill-climbing algorithm is essentially identical to the main program: with a population of one individual, that individual is always selected, so the main loop stays the same.

The main classes of the EA are in Individual.py, which describes the individuals of the population, and in Population.py, which represents the population itself. The key method of this class is runEvolution(), which selects an individual, clones it, mutates the clone, checks its fitness, and then updates the population if the mutant's fitness is better.
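The select/clone/mutate/update loop just described can be sketched as follows. This is an illustrative simplification, not the code from Population.py: it selects the current best individual rather than running the tournament selection described in the report, and it uses a simple swap mutation.

```python
import random

def swap_mutation(tour):
    """Return a mutated clone of the tour: two random positions swapped."""
    child = list(tour)                          # clone first, never mutate the parent
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def run_evolution(population, fitness, generations=100):
    """Select an individual, clone and mutate it, and replace the worst
    member of the population if the mutant improves on it
    (lower fitness value = better)."""
    for _ in range(generations):
        parent = min(population, key=fitness)    # selection step
        child = swap_mutation(parent)            # clone + mutation
        worst = max(population, key=fitness)
        if fitness(child) < fitness(worst):      # update step
            population[population.index(worst)] = child
    return min(population, key=fitness)

# Toy objective: how far each value sits from its sorted position.
random.seed(1)
fitness = lambda t: sum(abs(v - i) for i, v in enumerate(t))
pop = [random.sample(range(8), 8) for _ in range(4)]
best = run_evolution(pop, fitness, generations=300)
```

With a population of size 1 the same loop degenerates into hill-climbing, which is exactly the equivalence mentioned above.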

Conclusion of the assignment#

The hill-climbing method seems to give results similar to those of the tournament selection method on this travelling salesman problem. However, when an algorithm requires several hours to find a solution that is not necessarily the best one, due to the random aspect of the generation, the hill-climbing method is more interesting because a population of one individual speeds up the computation.


Artificial Neural Network#

Introduction to Neural Networks (NN) and the Multi-Layer Perceptron (MLP). This folder contains the Python package I developed for the assignment.

How to run:#

 $ python3 main.py

or

 $ python3 main_simplified.py

Structure:

➜  ANN git:(master) ✗ tree
.
├── ANN
│   ├── ANN.py
│   ├── __init__.py
│   ├── activationFunctions.py
│   └── functionsToStudy.py
├── main.py
├── main_simplified.py
└── results
    ├── fct_cubic__af_sigmoid_1-5-1_20171021_141212.png
    ├── fct_cubic__af_sigmoid_1-5-1_20171021_141212.txt
    ├── ...
    ├── fct_sine_af_sigmoid_scale_1-5-1_20171021_173715.txt
    └── feedforward_only
        ├── Figure_1_1-5-1_cubic_sigmoid.png
        ├── ...
        └── Figure_2_1-5-1_cubic_sigmoid.txt

Adjust settings used for the assignment:

The main function takes the following arguments:

  • [int, [int], int]: architecture of the MLP
    • int: number of input neurons
    • [int]: the length of the list is the number of hidden layers; each value is the number of neurons in that layer
    • int: number of output neurons
  • functionToStudy: the function we try to approximate
  • activationFunction
  • scaleOrNorm: bool, default: False. Whether or not to scale/normalise the input vectors.
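The result filenames above reference a sigmoid activation. A minimal sketch of that function and the derivative needed for backpropagation, illustrative rather than the exact code in activationFunctions.py:

```python
import math

def sigmoid(x):
    """Logistic activation: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    """Derivative used during backpropagation, expressed via sigmoid itself."""
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))        # 0.5
print(sigmoid_prime(0.0))  # 0.25
```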

Use the package#

The package can be used outside the scope of the assignment, as I wanted to make it independent. To create an ANN object, the required arguments are:

  • the number of neurons on the input layer,
  • a list whose length is the number of hidden layers and whose values are the numbers of neurons in the different hidden layers,
  • the number of neurons on the output layer,
  • the activation function.

The ANN is then initialised with the method initANN(min, max), which takes the minimum and maximum values of the randomly generated weights. The ANN is trained either with the method runTraining(inputSet, expectedOutput), for batch training, or with the method runOnlineTraining(inputSet, expectedOutput), for online training. Once the ANN has been trained, use the method useANN(inputSet) to get its outputs.
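The two core ideas behind this API, random weight initialisation in a range and the feed-forward pass, can be sketched in a few self-contained lines. This is a toy illustration of the mechanism, not the package's actual implementation; the function names here are invented:

```python
import math
import random

def init_weights(sizes, lo=-0.5, hi=0.5):
    """One weight matrix per pair of consecutive layers, with values drawn
    uniformly from [lo, hi] (mirrors the idea behind initANN(min, max))."""
    return [[[random.uniform(lo, hi) for _ in range(sizes[i])]
             for _ in range(sizes[i + 1])]
            for i in range(len(sizes) - 1)]

def feed_forward(weights, inputs, activation):
    """Propagate an input vector through the network layer by layer."""
    values = inputs
    for layer in weights:
        values = [activation(sum(w * v for w, v in zip(neuron, values)))
                  for neuron in layer]
    return values

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
weights = init_weights([1, 5, 1])   # the 1-5-1 architecture from the results folder
output = feed_forward(weights, [0.3], sigmoid)
```

Training then amounts to adjusting these weight matrices from the error between `output` and the expected value, which is what runTraining() and runOnlineTraining() do in batch and online fashion respectively.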

© Antoine Cougny - 2020 - Theme used is @hugo-theme-codex