The networks of neurons in your brain consequently combine information, with each layer building up a more detailed picture of what you are experiencing. Information can also be combined across the senses, with links between neural networks devoted to working out what you are touching, smelling, looking at and hearing. As your networks of neurons build up a picture of what you are experiencing, the brain also filters out redundant information. A neuron only fires, sending an electrical current down its axon and a signal on to other cells, if enough of its synapses are triggered. What this means is that if there is little evidence to suggest you are looking at a picture of Jennifer Aniston, your Jennifer Aniston neurons will not fire.
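The idea that a neuron fires only when enough of its synapses are triggered can be sketched as a simple threshold rule. The snippet below is purely illustrative, not a biological model; the function name and inputs are invented for the example:

```python
# A toy threshold "neuron": it fires only when enough of its input
# synapses are active at the same time. Purely illustrative.

def neuron_fires(active_synapses, threshold):
    """Return True if the number of active synapses meets the threshold."""
    return sum(active_synapses) >= threshold

# Plenty of evidence: many active synapses, so the neuron fires.
print(neuron_fires([1, 1, 1, 1, 0, 1], threshold=4))  # True

# Very little evidence: the "Jennifer Aniston neuron" stays silent.
print(neuron_fires([1, 0, 0, 0, 0, 0], threshold=4))  # False
```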

A Network of Neurons

There is significant overlap in what neurons do within the brain. Several neurons in a part of the brain called the hippocampus could fire if you see a picture of Jennifer Aniston. Other neurons might also fire that are in some way associated with Jennifer Aniston: for example, a neuron linked to Courteney Cox, who, along with Aniston, starred in Friends. This suggests that some neurons may code for the abstract concept of Friends rather than just one of the actresses. So what happens when different neurons fire that encode different information? This response has been studied in detail in another part of the brain, the neocortex.

Much of the heavy lifting your brain does in terms of identifying patterns happens in your neocortex. It is folded in complex ways, and is typically between 2 and 3 millimetres thick. If it were removed after death and laid out flat, your neocortex would cover a square with sides of about 45 centimetres. Through careful measurement with high-powered microscopes, researchers have estimated that there are 100,000 neurons within each rice-grain-sized volume of your neocortex, linked by half a billion synapses. These neurons are arranged into between one and two million pillars known as cortical columns. In humans, each of these columns contains a little over one hundred neurons, arranged into six layers.

The majority of neurons in the neocortex are arranged in a plane running from the skull towards the centre of the brain. Other neurons are oriented across this plane, shuttling information between different parts of the neocortex. These perpendicular neurons somehow appear able to achieve a consensus on what you are experiencing. For example, if you are looking at a picture of Jennifer Aniston, and twenty Aniston neurons fire while only three Courteney Cox neurons fire, the consensus reached will be that you are looking at a picture of Jennifer Aniston. The details of exactly how this is achieved are not yet fully understood, although progress in neuroscience is so rapid that it likely won’t be long before we understand it better.
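The twenty-against-three vote above can be sketched as a winner-take-all tally. This is an enormous simplification of whatever the neocortex actually does, and the labels are invented for the example:

```python
# A toy winner-take-all vote over firing events. Each event is labelled
# with the concept its neuron encodes; the most common label wins.
from collections import Counter

def consensus(firing_events):
    """Return the concept backed by the largest number of firing neurons."""
    votes = Counter(firing_events)
    return votes.most_common(1)[0][0]

# Twenty Aniston neurons fire against three Courteney Cox neurons.
events = ["Aniston"] * 20 + ["Cox"] * 3
print(consensus(events))  # Aniston
```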

Another area of active research is how we lay down memories. To know what you are looking at, you must have seen it, or pictures of it, before. We don’t instinctively know what something looks, smells or tastes like; instead, we need to learn, and we do so as we experience things. For example, most of us have never met Jennifer Aniston and have instead learned who she is from watching her act in blockbuster movies or TV shows. We learn the faces of our friends, acquaintances and work colleagues through meeting and talking with them, and synapses form that allow us to recognize them. Although scientists have yet to work out the details of how new memories are formed and stored, and old ones are lost, it is clear that sleep is when much of the work is done. Getting a good night’s sleep really does help you remember.

I have so far described how the brain identifies pictures or smells, rather than a dynamic series of events, such as a movie or a smell getting worse. I have explained how the brain interprets a picture of Jennifer Aniston, but not what she is doing in the film Marley and Me. The brain is very good at identifying stills, but it can do so much more. To survive in the real world, we need more than a series of static snapshots. Instead, the simulation our brain produces needs to be dynamic. It is more like a video game than a picture postcard, and it needs to predict. To create a dynamic simulation of the world, the brain needs to monitor moving objects, whether visually, aurally, or via touch or smell, and to predict where they will be in the future. The way the brain does this is a little like the autofocus tracking feature on modern cameras, which can maintain focus on objects as they move across the viewfinder. But rather than tracking just one object, the brain can track many simultaneously, can work out the distances between them, and can predict the consequences of these objects being rotated in three dimensions.
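The simplest version of predicting where a moving object will be is to estimate its velocity from two observations and extrapolate forward in a straight line. The brain's machinery is far richer than this, but the sketch below, with invented names, captures the basic idea:

```python
# A toy constant-velocity tracker: estimate velocity from two sightings,
# then project the position forward in time. Purely illustrative.

def predict_position(pos_then, pos_now, dt, horizon):
    """Estimate velocity from two observations dt seconds apart,
    then extrapolate `horizon` seconds into the future."""
    velocity = tuple((now - then) / dt for then, now in zip(pos_then, pos_now))
    return tuple(now + v * horizon for now, v in zip(pos_now, velocity))

# A ball seen at (0, 0) and, one second later, at (2, 1):
print(predict_position((0, 0), (2, 1), dt=1.0, horizon=3.0))  # (8.0, 4.0)
```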

The way that the neocortex creates the dynamic simulation of the world in our head is described by something called frames of reference theory. A frame of reference is a way of using coordinates to describe the location and movement of objects, and is most intuitively understood by focusing on vision, although the logic can be extended to hearing, touch and even to language and complex thought. Frames of reference are used to describe the origin, orientation and distances between objects, and their rates of change with time. The origin is the central point, and in the case of the brain, it is you, the observer. The orientation describes how objects are rotated with respect to one another. For example, if you are standing upright in front of a clock face looking at the hour hand, it appears vertical and pointing upwards when it is noon, vertical and pointing downwards when it is pointing to the number six, and horizontal and pointing left or right when pointing to the nine and three numerals respectively. If you rotate the clock face in any direction you wish, the orientation will change. Our brain is excellent at tracking the changing orientations of objects.
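The orientation of the hour hand can be described with coordinates: treat the hand as a 2-D vector and rotating the clock face as rotating that vector. The standard rotation formula below is an illustration of the coordinate bookkeeping, not a claim about how neurons compute:

```python
# Rotating a 2-D vector about the origin with the standard rotation formula.
import math

def rotate(vector, degrees):
    """Rotate a 2-D vector counter-clockwise about the origin."""
    x, y = vector
    a = math.radians(degrees)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# The hour hand at noon points straight up: (0, 1).
noon = (0.0, 1.0)

# Rotating the clock face by 90 degrees leaves the hand pointing left,
# as it would at nine o'clock.
x, y = rotate(noon, 90)
print(round(x, 6), round(y, 6))  # -1.0 0.0
```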

The distance between two objects allows the brain to work out how close or distant they are from one another. The distance is usually thought of as being between the observer and the object, but a frame of reference system can be used to work out the distance and orientation between any two objects, regardless of whether one of them is at the origin. You can tell when two cars are on a collision course, and can predict when the crash will happen, by repeatedly tracking the distance to each car and the distance between them, and noting how quickly that gap is closing.
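Tracking a closing gap between two objects can be sketched with a little relative-motion geometry: work in one car's frame of reference and ask when the separation is smallest. This is a toy calculation under the assumption of constant velocities, not a description of the brain's method:

```python
# Time of closest approach for two objects moving at constant velocity.
# Work in A's frame of reference: track B's relative position and velocity.

def time_to_closest_approach(pos_a, vel_a, pos_b, vel_b):
    """Return the time at which the two objects are nearest each other.
    On a direct collision course, this is the moment of impact.
    Returns None if the gap is not closing."""
    # Relative position and velocity of B as seen from A.
    rx, ry = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    vx, vy = vel_b[0] - vel_a[0], vel_b[1] - vel_a[1]
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0:
        return None  # no relative motion
    t = -(rx * vx + ry * vy) / speed_sq
    return t if t >= 0 else None

# Two cars 100 metres apart, driving straight at each other at 10 m/s each:
print(time_to_closest_approach((0, 0), (10, 0), (100, 0), (-10, 0)))  # 5.0
```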

A frame of reference describes how objects move relative to one another and relative to you, the observer. The brain monitors how the orientation and distance between objects change with time. Our neocortex is remarkable in that it can simultaneously work out frames of reference for multiple objects in our fields of sensory perception. It does this partly through large numbers of synapses firing as they are stimulated by a particular pattern, and partly by using what are known as grid and place cells to construct a map of the world around us. Our remarkable brains can take all this information, construct a simulation of the world and predict what is about to happen.