A friend of mine, Corrie daCosta, and I shared a small but interesting discussion on evolution during the Christmas holiday. Corrie is a scientist (a biologist) who researches topics I don’t understand. However, on evolution we found a very interesting middle ground between biology and computer science. After the holidays, Corrie sent me an interesting article from Seed Magazine called Algorithmic Inelegance, which contrasted the complexity of genes with the elegance of well-designed computer algorithms.
The article reminded me of an artificial life simulation I created while studying Genetic Algorithms. This was a simple simulation which replicated the behavior of an ant colony using artificially generated algorithms for the ants. Allow me to explain:
Ant Colony Development Simulation using a Genetic Algorithm:
- Each ant has a primitive set of sensor inputs:
  - Can it detect pheromone? (Which direction is stronger, which is weaker?)
  - Is it next to food?
  - Is it carrying food?
- Each ant has a set of primitive actions:
  - walk around randomly
  - follow the pheromone from weakest to strongest
  - follow the pheromone from strongest to weakest
  - pick up food
  - release pheromone
- There is a large environment with a nest at the center and groups of randomly distributed food sources. The food sources slowly deplete as the ants harvest them, and the pheromones decay over time.
Each simulated ant within the colony had a unique logic tree, a computer-generated algorithm of inputs and actions, which dictated the ant’s actions within the environment and colony. Every time an ant moved, it would execute its logic tree to decide what to do next. The logic tree, or algorithm, is analogous to the ant’s genes.
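To make the "logic tree as genes" idea concrete, here is a minimal sketch of how such a tree could be represented and executed. The sensor and action names, tree shape, and probabilities are my own illustrative assumptions, not the original simulation's code.

```python
import random

# Sensors and actions from the ant description above (names are illustrative).
SENSORS = ["senses_pheromone", "next_to_food", "carrying_food"]
ACTIONS = ["walk_random", "follow_pheromone_weak_to_strong",
           "follow_pheromone_strong_to_weak", "pick_up_food",
           "release_pheromone"]

def random_tree(depth=3):
    """Build a random logic tree: dicts are decisions, strings are actions."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(ACTIONS)
    return {"test": random.choice(SENSORS),
            "yes": random_tree(depth - 1),
            "no": random_tree(depth - 1)}

def decide(tree, sensors):
    """Walk the tree using the ant's current sensor readings, return an action."""
    while isinstance(tree, dict):
        tree = tree["yes"] if sensors[tree["test"]] else tree["no"]
    return tree

# Each simulation step, an ant executes its tree against what it senses now.
ant = random_tree()
action = decide(ant, {"senses_pheromone": False, "next_to_food": False,
                      "carrying_food": True})
```

Because the trees are just nested data, they can be randomly generated, spliced, and mutated, which is exactly what the reproduction step below needs.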
The simulation started off with a population of 500 ants, all with randomly generated logic trees. A generation would run for a limited time, and each ant’s success (fitness) was measured by the number of food objects it collected and brought home. At the end of the first generation, none of the ants were successful.
After each generation, the simulation would produce 500 new offspring ants based on natural selection and evolution from the previous generation. The better an ant performed, the more offspring it would have. In logic: choose two ants from the top 20% of food collectors (natural selection), split and merge their logic trees (reproduction), and add some random mutation to the genes, because replication is high fidelity but not perfect.
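The select–crossover–mutate loop above can be sketched as follows. For brevity this uses flat lists of "genes" as a stand-in for the real logic trees; the elite fraction, mutation rate, and gene alphabet are illustrative assumptions.

```python
import random

GENES = "ABCD"  # hypothetical gene alphabet for this sketch

def crossover(a, b):
    """Split two parents at a random point and merge the halves."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(genome):
    """Replication is high fidelity but not perfect: flip one gene."""
    genome = list(genome)
    genome[random.randrange(len(genome))] = random.choice(GENES)
    return genome

def next_generation(population, fitness, pop_size=500, elite_frac=0.2,
                    mutation_rate=0.05):
    """fitness[i] is the food count of population[i]; higher is better."""
    ranked = [g for _, g in sorted(zip(fitness, population),
                                   key=lambda pair: pair[0], reverse=True)]
    elite = ranked[:max(2, int(len(ranked) * elite_frac))]  # top 20%
    offspring = []
    while len(offspring) < pop_size:
        a, b = random.sample(elite, 2)   # two ants from the top collectors
        child = crossover(a, b)          # split and merge their genes
        if random.random() < mutation_rate:
            child = mutate(child)
        offspring.append(child)
    return offspring
```

Note that selection acts only on the fitness number; nothing in this loop looks at how large or tangled a genome is, which becomes important later.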
The simulation would be repeated for many generations. Surprisingly, it took only a small number of generations before ant-like behavior started to emerge, and only a few more before the simulation produced full-fledged ants that worked as a cooperative colony to collect food.
Ant logic for behaving as a cooperative colony, the ideal algorithm:
- walk around randomly until you find pheromone or food.
- when you find pheromone, follow it to food.
- when you find food, follow the pheromone home, and emit pheromone along the way.
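The three rules above can be written as a single, straight-line decision function. The sensor and action names are my own illustrative labels; in the simulation, the equivalent logic had to be discovered as an evolved tree rather than written down like this.

```python
def ideal_ant(carrying_food, next_to_food, senses_pheromone):
    """Return the ideal colony-cooperation actions for one simulation step."""
    if carrying_food:
        # Rule 3: head home along the trail, reinforcing it as you go.
        return ["release_pheromone", "follow_pheromone_home"]
    if next_to_food:
        return ["pick_up_food"]
    if senses_pheromone:
        # Rule 2: follow the trail out to the food source.
        return ["follow_pheromone_to_food"]
    # Rule 1: no signal yet, so explore.
    return ["walk_random"]
```

Written this way, the ideal is a handful of branches, which is the benchmark the evolved trees get compared against below.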
There are many interesting ant development plateaus during the generations. For instance, an ant would not learn to follow pheromone until a peer ant learned how to drop pheromone properly. In order for these traits to propagate across many descendants, the two traits had to be used simultaneously to harvest more food reward than other ants without them. However, once discovered, it only took a few generations for this emergent behavior to spread to the entire colony. This is not the point of our discussion, but it is interesting how phenotypes emerge in symbiotic and competitive clusters. In a competitive simulation I built between a hockey goalie and a forward on a breakaway, the forward would not learn how to deke until the goalie learned how to save a simple shot.
Ok, that was a lot of setup and hopefully it makes sense.
The interesting part is that the ideal ant algorithm can be described simply in a few lines. It is also a very simple logic tree for a human to design and construct, containing about 10-15 inputs and actions. The evolutionary approach, on the other hand, created logic trees 10 to 100 times larger than necessary. That is to say, they contained many extra, unnecessary steps and extra complexity. These final logic trees are next to impossible to read, but with interpretation they could be reduced to the simple ideal.
The main reason for this is that the reproduction system lacks a selection force which prefers smaller logic trees to larger ones. The system has neither a gene complexity penalty nor a gene simplicity bonus. The reward for new genes which yield a superior individual far outweighs the neutral effect of losing a redundant gene. What matters is the amount of food collected, and this simulation rewards positive influences, penalizes negative influences, and is simply indifferent to the addition or removal of redundant influences. It is these redundant genes which we consider sloppy, inelegant, and overly complex.
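If the simulation had wanted elegance, the fix would be a small parsimony pressure: subtract a penalty proportional to tree size from each ant's food count, so that with food collected being equal, smaller trees out-reproduce larger ones. This is a sketch of that missing selection force; the penalty weight and tree representation are illustrative assumptions.

```python
def tree_size(tree):
    """Count nodes in a logic tree (dicts are decisions, leaves are actions)."""
    if not isinstance(tree, dict):
        return 1
    return 1 + tree_size(tree["yes"]) + tree_size(tree["no"])

def parsimonious_fitness(food_collected, tree, size_penalty=0.01):
    """Fitness that rewards food but mildly penalizes gene complexity."""
    return food_collected - size_penalty * tree_size(tree)
```

A small weight matters here: the penalty should only break ties between equally productive ants, not discourage genuinely useful new genes.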
“That’s all evolution needs from developmental processes: something that works well enough, no matter how awkward or needlessly complex it may seem.” – PZ Myers, Algorithmic Inelegance
We can take a lot from this simulation because it is so simple. We can fully comprehend the elegance of the ideal algorithm, and we can compare the results that the genetic algorithm produces against this ideal. In this experiment, the results fully support the article in that evolution yields a far more complex structure than a deliberately designed solution. Furthermore, from this example we can see that while the elegant designed solution is more satisfying to us, it has little to no bearing on the success of the ants in the experiment. There are many solutions which get the job done, but only a small percentage would be considered elegant. Without a selection force which prefers simpler genes, there is simply no tangible reason for genes to be elegant.