Elysia Brain Simulation
Andrew Carroll      Daniel Horn     Nghia Vuong
New applications of AI apply the exponentially increasing computing power available in the same ways as in the past, in the hopes that intelligence will "emerge" from handwritten code. This approach is analogous to trying to join two pieces of glass with a hammer and nail. No matter how big you make your hammer, you will fail, because the approach is wrong. It would be better to find some glue.
We are arrogant to think we could code a system as complicated as intelligence. Just because computer power increases exponentially it does not follow that our ability to meaningfully code does as well. We should consider nature's solution: allow intelligence to wire itself.
This project is an attempt to bring a biological approach to AI. Our goal is to generate a framework that encompasses all of the meaningful properties of neurons (brain cells) and place them in an architecture where they have the greatest possible freedom to associate and evolve. Where similar attempts in the past have simplified many aspects of the biology, we believe that a complete representation of the important biology is essential to success.
The above screenshot shows a present model for our artificial brain. This is the user interface we use for wiring the brain. The large, transparent boxes are lobes and the small blue boxes are neurons. The green connections are synapses between the lobes, and the blue connections represent firing synapses. At the left, the properties of the highlighted neuron are shown, including the strength of the connection, whether the synapse is excitatory or inhibitory, the chemical released, the output function, the tiredness of the neuron, and so on.
In the next section I will detail the specific features of our approach. I have two goals here. Most importantly, I want to excite you about the possibilities of combining human expertise in biology with computer science. The section will also get you up to speed on the ideas we have and the decisions we have made.
Biology of Neurons
Faithfully representing neural biology is essential to this approach. In biology, each neuron is the fundamental computational unit of the brain. Where previous artificial neurons have been treated like simple state machines, applying a simple function to input and passing it to output, we try to capture all of the rich "internal life" of a neuron. Neurons are modeled with the following features:
Synapse: The synapse represents the output of the neuron. When a neuron fires, it activates all of its synapses to send a signal onto their connections. These synapses can form onto another neuron, which then activates that neuron. They can also form onto other synapses, which causes them to serve similarly to a logic gate. When this occurs, they will influence the firing of that synapse, but will not cause the synapse to fire without its parent neuron firing. Synapses can be activating or inhibiting on their targets, and can release different chemicals (more on that later).
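A minimal sketch of this gating behavior (the function name and signature are our own invention, not the project's API). The key property from the text is that a synapse formed onto another synapse can modulate its signal but can never produce output unless the parent neuron fires:

```python
def synapse_output(signal, weight, inhibitory=False, modulation=0.0):
    """Output of one synapse when its parent neuron fires with strength `signal`.

    `modulation` is the summed influence of synapses formed onto this synapse;
    it gates the signal like a logic gate but cannot produce output on its
    own -- the parent neuron must fire for anything to pass.
    """
    if signal <= 0.0:  # parent neuron did not fire: modulators alone do nothing
        return 0.0
    out = max(0.0, signal * weight + modulation)
    return -out if inhibitory else out
```

Note that a strongly negative modulation can silence a firing synapse entirely, which is how synapse-on-synapse connections act as gates.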
Thresholds: A neuron will fire when the chemicals on it are between a lower and an upper threshold. This allows neurons to achieve the qualitative behaviors found in biology. Standard neurons have a low lower threshold and a high upper threshold, while certain neurons have a raised lower threshold so they respond only to a window of input. Other neurons can have a lower threshold of 0 and a low upper threshold, so they are inactivated by stimulus. New neurons that develop would randomly choose to be "standard ones," with the others developing by mutation to fill specific niches.
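The window rule is simple enough to state in a few lines. The three niches below are the ones described above; the specific threshold values are illustrative, not taken from the simulation:

```python
def should_fire(chemical_level, lower, upper):
    """A neuron fires only while its accumulated chemical sits in [lower, upper]."""
    return lower <= chemical_level <= upper

# Three niches from the text (threshold values are invented for illustration):
STANDARD = (0.1, 10.0)   # low lower threshold, high upper threshold
WINDOW   = (2.0, 4.0)    # responds only to a narrow band of input
SHUTOFF  = (0.0, 1.0)    # inactivated by any strong stimulus
```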
Chemical Type: There are different chemicals in the brain. Right now they are differentiated only by decay rate. This enables neurons to fire to generate a long or short lasting signal. Sensory information that is constantly present tends to be short lasting while emotional state neurons tend to have long lasting chemicals.
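Since decay rate is currently the only distinguishing property, a chemical type can be sketched as a single exponential-decay parameter (this is our reading; the simulation's actual decay curve may differ):

```python
import math

def chemical_level(initial, decay_rate, steps):
    """Exponential decay of a released chemical. A small decay rate gives a
    long-lasting 'emotional' signal; a large one gives a short-lived
    'sensory' signal."""
    return initial * math.exp(-decay_rate * steps)
```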
Habituation: Habituation represents how easily a neuron gets tired. Biological neurons often tire to ignore unimportant annoying stimuli that do not warrant a response. It is also essential in preventing seizures in the brain. Some neurons do not habituate in our model (some sensory neurons) while most other neurons habituate by one of two models. In one they habituate until they reach a threshold then do not fire for a recovery period. In the other model the neurons ramp up their lower thresholds to the point where they do not fire. This enables certain neurons to act as a change detector or differentiator.
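The two habituation models can be sketched as follows (class names and parameter values are ours; the simulation's constants will be tuned by evolution):

```python
class ThresholdHabituator:
    """Model 1: fire until fatigue crosses a limit, then rest for `recovery` steps."""
    def __init__(self, limit=3, recovery=2):
        self.limit, self.recovery = limit, recovery
        self.fatigue, self.resting = 0, 0

    def step(self, stimulated):
        if self.resting:                  # in the recovery period: ignore input
            self.resting -= 1
            return False
        if stimulated:
            self.fatigue += 1
            if self.fatigue >= self.limit:
                self.fatigue, self.resting = 0, self.recovery
            return True
        return False

class RampHabituator:
    """Model 2: a constant stimulus ramps the lower threshold up until the
    neuron stops firing, so it ends up responding mainly to *changes*."""
    def __init__(self, ramp=0.5):
        self.lower, self.ramp = 0.0, ramp

    def step(self, level):
        fired = level >= self.lower
        # the threshold chases the input, so a steady input is eventually ignored
        self.lower = max(0.0, self.lower + (self.ramp if fired else -self.ramp))
        return fired
```

The second model is what lets a neuron act as a change detector: a constant input fades out, while a jump in input fires again.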
Firing function: The firing function determines how a neuron translates input into output. There is a continuous range of functions: a linear function that scales input directly into output, an asymptotic function that responds steeply to input just above threshold but similarly to all high values, a flatline that fires at the same strength regardless of input, and an inverted function that fires more weakly at higher inputs.
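One concrete family with these four shapes (the formulas are our illustrative choices; the simulation interpolates over a continuous range rather than picking from four cases):

```python
def firing_output(x, kind):
    """Illustrative firing-function family for a non-negative input x."""
    if kind == "linear":
        return x                    # output scales directly with input
    if kind == "asymptotic":
        return x / (1.0 + x)        # rises steeply near zero, saturates below 1
    if kind == "flat":
        return 1.0                  # same strength regardless of input
    if kind == "inverted":
        return 1.0 / (1.0 + x)      # stronger input, weaker output
    raise ValueError(kind)
```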
Lobes: Neurons are contained in lobes. A lobe represents information about the functions of the neurons within it (do they tend to be sensory or processing) and governs where these neurons tend to synapse, what kinds of functions, chemical types, and habituation types neurons within the lobe tend to have, and the types of connections they make onto neurons in other lobes. Though each neuron is capable of independent evolution contrary to the general parameters of the lobe, the lobe contains guidelines that allow evolution to group changes together in a meaningful way. If a set of neurons is to develop for function X, instead of them having to independently find the necessary values, the lobe enables them to evolve together as a functional unit.
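A lobe's role as a set of guidelines can be sketched as a gene record plus a spawning function (all field names here are hypothetical; they just mirror the properties listed above):

```python
from dataclasses import dataclass
import random

@dataclass
class LobeGene:
    """Guidelines a lobe gives its neurons; individual neurons may still
    mutate away from these defaults."""
    role: str            # e.g. "sensory" or "processing"
    avg_synapses: int    # average number of synapses per neuron
    target_lobes: list   # lobes its neurons tend to synapse onto
    chemical: str = "fast"
    habituation: str = "threshold"

def spawn_neuron(gene, rng):
    """Create a neuron that mostly follows its lobe's guidelines."""
    n_syn = max(0, round(rng.gauss(gene.avg_synapses, 1)))
    return {"chemical": gene.chemical,
            "habituation": gene.habituation,
            "targets": [rng.choice(gene.target_lobes) for _ in range(n_syn)]}
```

Because every neuron in the lobe is drawn from the same gene, a single mutation to `LobeGene` moves the whole functional unit together rather than forcing each neuron to rediscover the change independently.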
Though the principles behind the brain are meant to be general, we are making our first test in a text-based world. This choice was made because it is well known how to make worlds and poorly known how to make brains.
We wanted a simple world we could easily expand as our brain became more powerful. To start with, there are four types of objects in the world: food, poison, snark, and grue. Snarks and grues are the objects with brains. Snarks have to eat food to survive, while grues have to eat snarks. Grues can only eat snarks in the darkness. At the same time, there is less food in the light, pressuring snarks to venture into the dark when their population grows. In addition, poison can be combined with food to make meal, a much better energy source. However, simply carrying poison is an energy cost, which forces snarks to decide whether the prospect of finding food to cook a poison with outweighs the cost of potentially being stuck with the poison. Since these odds will change based on what part of the world a snark is in and how many other snarks there are, good reasoning brains will survive far better than poor ones.
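The poison-carrying decision is an expected-value tradeoff, which can be made concrete (all energy values and probabilities here are invented for illustration; in the simulation a snark must learn these odds from experience):

```python
def carry_poison_value(p_find_food, meal_energy, food_energy, carry_cost):
    """Expected extra payoff of picking up a poison to cook later, versus
    leaving it: the chance of upgrading a plain food into a meal, minus
    the guaranteed cost of lugging the poison around."""
    return p_find_food * (meal_energy - food_energy) - carry_cost
```

With generous odds (for example an 80% chance of finding food) carrying pays off, while in a food-poor region the same poison is a net loss, so the right answer shifts with location and competition.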
In this text based world, snarks and grues can evolve to change their descriptions. A grue can evolve to look like a food or another snark, and snarks can evolve to camouflage themselves as well, forcing the brains to evolve measures to defeat the camouflage.
Each character in this text based world acts as a property, similar to how "red" or "round" are properties in our world. Analogously, food is food in this world because it has a bun "f", two pieces of cheese "o", and mustard "d".
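This decomposition is just treating a description as a set of characters, which also shows how camouflage works: two objects sharing characters share properties (a sketch of the idea, not the simulation's parser):

```python
def properties(description):
    """Break a text-world object into its character 'properties'.
    'food' shares the property 'o' with 'poison', much as two real-world
    objects can share the property 'red'."""
    return set(description)
```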
The snarks and grues behave like clients that connect to the world server. They can issue a set of commands (get, seek mate, yes mate, eat, go north, etc...) This will allow different users to contribute processing power or "compete" their populations against each other.
Genetics, Learning, and Evolution
We have not finalized the details of our genetics and learning algorithms. Our current plan is to combine learning and genetics so that when a useful learned connection reaches a certain threshold, it is saved in the genetic code and passed on to children (similar to the idea of a giraffe stretching its neck to eat high leaves and giving its children longer necks). This decision was made to accelerate evolution so that a creature is "evolving" over its entire life. Each lobe has a gene associated with it that determines its properties. Very strong neuron connections, as well as their parent neurons, also get genes. In addition to this, certain patterns are treated like "chromosomes," which represent a neuron type across lobes throughout the brain (generating, for example, a set of neurons that correspond to an object across many lobes in the brain so that processing can be done on them).
The current plan for learning is that it occurs by increasing the strength of a synapse: raising the chance that a firing of the precursor neuron will cause the recipient neuron to fire. When many synapses onto a single neuron fire and receive reward, we plan to strengthen the connection of each, as well as increase the lower threshold, so that the neuron will learn to respond to that specific pattern and not spurious firings.
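That two-part update can be sketched in a few lines (the learning rate and the exact form of the threshold increase are our guesses; the text only commits to "strengthen the firers and raise the lower threshold"):

```python
def reward_update(weights, fired, lower_threshold, rate=0.1):
    """Reward-driven update: strengthen every synapse that fired onto the
    rewarded neuron, and raise the lower threshold in proportion to how
    many fired, so the neuron comes to demand that specific pattern."""
    new_weights = [w + rate if f else w for w, f in zip(weights, fired)]
    new_threshold = lower_threshold + rate * sum(fired)
    return new_weights, new_threshold
```

Raising the threshold alongside the weights is what filters out spurious firings: a lone synapse, even a strengthened one, can no longer push the neuron over its (now higher) lower threshold.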
In terms of evolution, using the computer gives us great advantages. We can evolve from very lousy brains by dumping in brains to the world and repeating with mutated copies of those that lasted the longest. We can experiment with many parameters for our defaults (chemical decay rates, learning and mating algorithms, etc...) and do a fitness assay (study how rapidly the population improves its survival and reproduction) to determine a good set of parameters. We can also impose a cost based on the number of organisms in the world so selective pressure is very high in a crowded world and more permissive in an empty one (or vice-versa).
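The dump-select-mutate loop described above is a plain evolutionary algorithm; a generic sketch (function names and the survivor count are arbitrary choices for illustration):

```python
import random

def evolve(population, fitness, mutate, survivors=4, rng=None):
    """One generation: keep the fittest (longest-surviving) individuals and
    refill the world with mutated copies of them."""
    rng = rng or random.Random()
    ranked = sorted(population, key=fitness, reverse=True)
    kept = ranked[:survivors]
    children = [mutate(rng.choice(kept), rng)
                for _ in range(len(population) - survivors)]
    return kept + children
```

A fitness assay then amounts to running this loop under different parameter defaults and comparing how quickly the population's fitness climbs.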
Our current plan is to test a large number of learning algorithms in the above manner. We are VERY receptive to any ideas you may have about a good way to reward the brain in a meaningful way.
Random Connections and the Birth of Special Connections
Neurons that are found to have a high importance (rewarded many times) will acquire their own gene and be passed down as their own inheriting unit. Diversity comes from lobes specified to hold neurons that have not yet acquired that confidence. These are wired with some randomness at the creation of the brain. The gene for a lobe governs the type of these neurons, the average number of synapses they have, and onto which lobes the synapses tend to form. If the random connections are useless, they will detach and seek new connections; if they are rewarded, they will acquire a name and inherit as a useful unit. Lobe and neuron genes can duplicate, mutate, and be deleted just like real genes. Duplicated lobes will be able to specialize in different ways, since the redundancy protects some of their essential functions when mutation occurs.
The above figure shows a randomly wired brain placed into our world.
The current sensory system consists of a general method for taking a set of objects and breaking each object into its properties. These properties are then pooled together. The sensory system learns which properties are the most interesting and picks them out. The most interesting set of properties is traced back to an object in the room. This object is selected as one to pay attention to, its input is subtracted from the remaining room input, and the system repeats until the brain chooses a number of objects (presently 7) that it will pay attention to. These are used for further input, while the remaining complex input that was not interesting enough to warrant attention is ignored.
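The score-select-subtract loop can be sketched as a greedy selection (the scoring by summed property interest is our simplification of "most interesting set of properties"):

```python
def select_attended(objects, interest, limit=7):
    """Attention loop sketch: score each object by the interest of its
    properties, pick the most interesting, subtract it from the room,
    and repeat until `limit` objects are attended; the rest is ignored."""
    remaining = dict(objects)   # name -> set of property characters
    attended = []
    while remaining and len(attended) < limit:
        best = max(remaining,
                   key=lambda o: sum(interest.get(p, 0) for p in remaining[o]))
        attended.append(best)
        del remaining[best]     # its input is subtracted from the room
    return attended
```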
The brain uses asynchronous processing steps for this sensory information. It cycles very rapidly between the objects it pays attention to and puts each piece of information into neurons with chemicals that persist long enough for it to continuously perceive the objects. All of this calculation is done using the neuron properties themselves, except for some code the brain uses to tell the world what objects it is paying attention to and a comparison algorithm against the things in the room.
A driving aspect of behavior is the internal context of the mind as well as the input from the environment. Complex creatures require internal emotions to help them modulate their behavior. Presently emotions are represented by neurons with very slow chemical decay. Emotions will synapse broadly onto the brain to influence which connections fire. Neurons associated with hunger might, for example, activate neurons associated with finding and eating food while inhibiting other neurons.
Our present model for the brain involves the addition of a Bayesian reasoning program that can receive input from the brain and provide suggestions based on that input. Bayesian reasoning is a method for determining, from a series of nodes (or events), correlations between the events and assigning a probability to each correlated event. The Bayesian reasoner is capable of looking at the previous actions the brain experienced and deducing the relationship between the actions and the outcome. This is useful for reward prediction, where the brain has to decide between a number of competing plans (trying to mate, trying to cook food, and so on) that balance immediate and distant rewards. As the creature learns, its Bayesian reasoner becomes more accurate and powerful.
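A minimal count-based version of such a reasoner (our own sketch, far simpler than a full Bayesian network): it tallies the action/outcome pairs the brain reports and estimates P(reward | action) with Laplace smoothing so unseen actions are not written off immediately.

```python
from collections import Counter

class OutcomeReasoner:
    """Tally (action, reward) observations and suggest the action with the
    highest estimated probability of reward."""
    def __init__(self):
        self.rewarded = Counter()
        self.tried = Counter()

    def observe(self, action, was_rewarded):
        self.tried[action] += 1
        if was_rewarded:
            self.rewarded[action] += 1

    def p_reward(self, action):
        # Laplace smoothing: an untried action starts at probability 1/2
        return (self.rewarded[action] + 1) / (self.tried[action] + 2)

    def suggest(self, actions):
        return max(actions, key=self.p_reward)
```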
Reasoning systems of this nature are well known and reasonably powerful. They excel in simple situations, demolishing opposition in most simple games. In more complicated worlds it becomes difficult for a Bayesian reasoner to make sense of which cause resulted in which effect. Eventually the time required for it to draw a reasonable conclusion is prohibitive.
This is why the Bayesian reasoner receives input from the brain. The brain can learn which inputs tend to be important to the causal patterns. It may learn, for example, that taste inputs tend to relate to getting sick later on (eating bad food) and choose to limit the information passed to the reasoner to taste, giving the reasoner the best chance of drawing good conclusions. Similarly, the brain can ignore advice from the reasoner in situations where it has learned the reasoner is inadequate (new situations, systems with rapidly changing rules, and so on). The inclusion of these two processes reflects research on the human brain showing that certain decisions are governed by Bayesian reasoning (determining expected results) and others are governed by emotional response (economic exchange, social interaction, and moral dilemmas).
Almost all of the processing occurs before the output functions. There is a lobe associated with possible actions the brain can take; the output neuron that first exceeds a certain threshold generates an action. The action neurons then reset and the decision for the next action builds.
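This race-to-threshold selection can be sketched as follows (our illustration of the mechanism; the threshold value and the dictionary representation are arbitrary):

```python
def next_action(activations, inputs, threshold=1.0):
    """Action neurons accumulate input each step; the first to cross
    `threshold` wins and all action neurons reset so the next decision
    builds from scratch. Returns the chosen action name, or None."""
    for name, inp in inputs.items():
        activations[name] = activations.get(name, 0.0) + inp
        if activations[name] >= threshold:
            for k in activations:
                activations[k] = 0.0
            return name
    return None
```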
I think we have some good ideas and we've had good results implementing them so far. We're always eager to bring people in who are enthusiastic about artificial intelligence and understanding what makes the human brain the incredible instrument it is.
- Finish wiring of emotional networks
- Determine sensible constants for randomly associating lobes
- Generate the bayesian reasoner and tie it to the brain
- Determine a sensible data structure for the exchange of genetic information during mating
- Optimize so brain can run as rapidly as possible
- Run many parallel simulations to find best learning algorithms and neuron defaults
- User interface finished
- Biology of neurons finalized and put into structure
- Random lobe wiring completed
- Artificial text based world finished
- Sensory system has been wired and successfully parses complex input
How to explore currently running simulation