Some claims of quantum biology are still controversial, and more evidence will be required to convince sceptical scientists. Many biological molecules are too large to behave like single atoms, and some researchers have argued that the warm, wet world inside our cells would prevent life from exploiting the quantum properties of the very small. Nonetheless, given our biological understanding that interactions between pairs of molecules impact many cellular processes, and that this is the scale at which random quantum interactions happen, there is potentially a route via which randomness could make our existence a matter of chance. Other scientists now argue that, far from being a hindrance, the warm, wet environment inside cells is ideal for quantum processes to scale up and influence the dynamics of whole organisms. However, this does not answer how quantum mechanics might result in consciousness and free will. At this point the evidence becomes sparse, and I am straying into conjecture. Many Worlds proponents may now see an opportunity to accuse me of generating an untestable hypothesis. But I would argue that my theory is more testable than theirs.
In the chapter on the evolution of consciousness I described how the neocortex processes information. The neocortex, a highly folded structure that is very large in humans, is the part of the brain that makes us intelligent. Inputs to the neocortex from our sense organs – the eyes, ears, nose, skin and tongue – lead to a propagation of signals through networks of cells until neurons fire that allow us to recognize particular objects: remember the Jennifer Aniston neuron. Neurons in different clusters across the neocortex may fire, with signals from multiple neurons processed before a consensus is reached about what we are experiencing. Other neurons then propose how we should respond and a decision is made.
Although data are largely lacking, a number of psychologists, mathematicians, philosophers and physicists have proposed a role for quantum mechanics in consciousness. Given that sentience appears to arise from the electrical activity of neurons interacting via synapses, there is certainly potential for stochastic quantum processes at the level of interacting molecules to contribute to consciousness, and ultimately to free will. Exactly how is currently a matter of speculation. Nonetheless, we cannot rule out a role for quantum randomness across the hundreds of thousands of synapses firing every second that contribute to the way we make decisions. I have strayed from what science can currently tell us into guesswork, so let me bring things back to a line of evidence that is a little more concrete.
Many scientists and philosophers have drawn comparisons between computers and the human brain, usually pointing out how aspects of the way computers operate mimic the workings of bits of the human brain. Such comparisons are made ever more frequently as artificial intelligence becomes embedded in our everyday lives. Many of the algorithms that computers use to make choices when playing a game such as chess, or to find optimal, or near-optimal, solutions to problems that are currently analytically intractable, rely on random numbers. ChatGPT, for example, draws on random numbers each time you ask it to complete a task, and computer scientists use randomness to help solve all sorts of computational problems that, today at least, cannot be solved without recourse to chance. Randomness can be used to identify solutions to problems, and if genetic mutations are due to chance, as seems probable, then we owe our existence to random events. It might seem counterintuitive that randomness can help evolution find solutions that make organisms more competitive, or that computer scientists can use randomness to find optimal solutions to problems, so I will briefly give an example of how I have used random numbers in my own research to solve problems.
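The claim that chance can do useful computational work is easy to demonstrate. Below is a minimal sketch in Python – an illustration of the general point, not an example from the text – that estimates π by scattering random points in a unit square and counting the fraction that land inside the quarter-circle, a fraction that approaches π/4.

```python
import random

def estimate_pi(n_points: int, seed: int = 1) -> float:
    """Monte Carlo estimate of pi: scatter points in the unit square and
    count the fraction landing inside the quarter-circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_points

print(estimate_pi(1_000_000))  # roughly 3.14, sharpening as n_points grows
```

No clever deterministic shortcut is required: throwing more random points simply buys more accuracy, which is the essence of why randomness is so useful in computation.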
My Ph.D. research investigated the impact of seed predators and herbivores on tree reproduction in forests in the US and the UK. In hindsight, it didn’t move the field forward much, but I don’t think it set it back either. I did learn a lot, and one thing I learned was how to use a random number generator on the computer to construct a statistical model of seed and seedling predation by squirrels. I had collected quite a lot of data from experiments in which I placed acorns at various densities across the forest floor, some in deer-, chipmunk- or squirrel-proof enclosures, others not. I would then return to see how many had been eaten. Next I wrote down a mathematical model describing the impact of vertebrate seed predators on the likelihood of seeds in different places germinating. The final job was to combine the mathematical model and the experimental data to identify the parameter values that best explained the variation in the data I had collected.
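A hedged toy version of that kind of model may make this concrete. The Python sketch below is not the model from my thesis: it simply assumes, for illustration, that a seed’s chance of escaping predation declines with local seed density along a logistic curve governed by two invented parameters, alpha and beta, and scores candidate parameter values by the binomial log-likelihood of some made-up plot data.

```python
import math

def survival_prob(density: float, alpha: float, beta: float) -> float:
    """Hypothetical logistic model: survival chance falls as seed density rises."""
    return 1.0 / (1.0 + math.exp(-(alpha - beta * density)))

def log_likelihood(alpha: float, beta: float, plots) -> float:
    """Binomial log-likelihood of the plot data given alpha and beta.
    The binomial coefficients are dropped: they do not depend on the parameters."""
    total = 0.0
    for density, n_seeds, n_survived in plots:
        p = survival_prob(density, alpha, beta)
        total += n_survived * math.log(p) + (n_seeds - n_survived) * math.log(1.0 - p)
    return total

# Invented data: (seed density, acorns placed, acorns surviving) per plot.
plots = [(5, 20, 14), (10, 20, 9), (20, 20, 4), (40, 20, 1)]
print(log_likelihood(1.0, 0.1, plots))
```

Different values of alpha and beta give different log-likelihoods; fitting the model means finding the pair that makes this number as large as possible, which is exactly the search problem the algorithm described next is designed to solve.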
I wrote a program in a computer language called Turbo Pascal to find the most likely values of each parameter in the model. The program implemented something called the Metropolis–Hastings algorithm, which uses random numbers to home in on the most likely solution. The way the algorithm works is akin to finding the highest point in a mountain range in thick fog at night using a teleportation device and random numbers.
At each location in the finding-the-highest-point analogy you can measure the altitude. Having done so, you randomly select a direction to move in and then teleport a fixed, pre-determined distance before measuring the altitude at this new location. You then compare the old and the new altitude. If the new location is higher, you always move to it. If it is lower, you sometimes move anyway – the bigger the drop, the less likely you are to accept the move – and otherwise you stay put. Occasionally accepting downhill moves is what stops you becoming stranded on the first small peak you happen to climb. You do this again, and again, exploring the mountain range. In the Metropolis–Hastings algorithm you are exploring a likelihood surface. Instead of the latitude and longitude coordinates of the surface of our planet, and an altitude, the lats and longs are replaced by parameter names – perhaps alpha and beta – and altitude is replaced by a statistical quantity called the likelihood. The likelihood describes how well the model predicts the data you have collected given the values of alpha and beta; the probability of accepting a downhill move is the ratio of the new likelihood to the old. The highest value of the likelihood is called the maximum likelihood, and it is analogous to the peak of the highest mountain in a range. The Metropolis–Hastings algorithm is one way to explore the likelihood surface and find the maximum likelihood.
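A minimal sketch of that random walk in Python follows. To keep it self-contained it climbs an invented, smooth two-parameter log-likelihood surface rather than one built from real data (the toy seed-predation likelihood above could be dropped in instead); the accept/reject rule is exactly the uphill/downhill one just described, applied on the log scale.

```python
import math
import random

rng = random.Random(42)

def log_lik(alpha: float, beta: float) -> float:
    """Stand-in log-likelihood surface whose single peak sits at alpha=2, beta=-1."""
    return -((alpha - 2.0) ** 2 + (beta + 1.0) ** 2)

alpha, beta = 0.0, 0.0            # an arbitrary starting point in the fog
current = log_lik(alpha, beta)
step = 0.5                        # the fixed teleport distance
best = (current, alpha, beta)

for _ in range(10_000):
    # Teleport: jump a random, step-sized amount in a random direction.
    a_new = alpha + rng.gauss(0.0, step)
    b_new = beta + rng.gauss(0.0, step)
    proposed = log_lik(a_new, b_new)
    # Uphill moves are always accepted; downhill moves are accepted with
    # probability equal to the likelihood ratio, i.e. exp(log difference).
    if rng.random() < math.exp(min(0.0, proposed - current)):
        alpha, beta, current = a_new, b_new, proposed
        if current > best[0]:
            best = (current, alpha, beta)

print(best)   # log-likelihood near 0.0, at roughly alpha=2.0, beta=-1.0
```

Because the proposal is symmetric – a jump from A to B is as likely as one from B to A – the simple likelihood ratio is all the accept/reject step needs.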
The mountain-range analogy breaks down a little when a model has more than two parameters. If a model has, say, four parameters called alpha, beta, gamma and delta, then in place of the latitude and longitude of the mountain-range analogy there are four dimensions to explore – one for each parameter. With altitude, latitude and longitude give a total of three dimensions, while with four parameters plus the likelihood my example model has five. I cannot visualize a surface that has more than three dimensions, but mathematically it does exist. I find it much easier to imagine a mountain range, and that is why I use the analogy. The Metropolis–Hastings algorithm works in any number of dimensions, allowing scientists to find the values of all the parameters in a model that maximize its likelihood.
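In code, nothing about the earlier sketch is tied to two dimensions: hold the parameters in a list and the proposal jumps along every axis at once. A minimal, assumed illustration (not from the text):

```python
import random

rng = random.Random(0)

def propose(theta: list[float], step: float) -> list[float]:
    """Propose a new point by jumping a random amount along every parameter
    axis at once; theta might be [alpha, beta, gamma, delta]."""
    return [t + rng.gauss(0.0, step) for t in theta]

print(propose([0.0, 0.0, 0.0, 0.0], 0.5))  # one random step in four dimensions
```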
There are two tricks to getting the Metropolis–Hastings algorithm to work. First, you need to select the distance to move such that you can explore the entire likelihood surface. If the distance between where you are now and the next place you sample is too small, all that happens is you end up on top of the mountain you started out on. You might be high, but there may be taller peaks elsewhere in the range, which means you won’t be at the highest point. In maths-speak, you would be at a local maximum rather than the global one you are looking for. Second, you need to keep an eye on how often proposed moves are accepted. Tuning the step size so that a moderate fraction of proposals – somewhere between roughly a quarter and three-quarters – is accepted tends to work well, though the best acceptance rate depends on the specifics of the problem being addressed.
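In practice the step size is usually tuned during a warm-up phase by watching the acceptance rate. The sketch below shows one simple, hypothetical tuning rule (many variants exist): nudge the step up when too many proposals are being accepted and down when too few are.

```python
def tune_step(step: float, accepted: int, proposed: int, target: float = 0.4) -> float:
    """Nudge the step size toward a target acceptance rate during warm-up.
    The 0.4 target is illustrative; the best rate is problem-specific."""
    rate = accepted / proposed
    if rate > target:
        return step * 1.1   # accepting too often: the steps are too timid
    return step * 0.9       # rejecting too often: the steps are too bold

print(tune_step(0.5, accepted=70, proposed=100))  # 0.55: stride out further
```

The logic of the rule is the fog analogy again: steps that are always accepted are probably too timid to cross valleys, while steps that are almost never accepted waste effort leaping off cliffs.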