Can we choose between alternative courses of action? Yes of course we can. We do it all the time. But do we make those choices under the control of our own spirit, or are we robots acting mechanically?
One thing’s for sure, because it’s already happening – the robots of the future will have something that looks pretty much like free will.
So will they really have it?
Do we have it now?

 . . . and one more thing – since January 2024 I have been posting my essays on Substack, with audio too! Please follow me there and give them likes and shares. Pretty please, all that good stuff.

Free Will

I never used to think much about free will. We make choices all the time, right? Well maybe, maybe not. Josh Szeps invited Robert Sapolsky onto one of his excellent ‘Uncomfortable Conversations,’ and I can’t say I was impressed by Sapolsky’s argument against the existence of free will. I’m not denying his conclusion. I’m just saying that his argument was thin.

However that may be, does it matter? I believe it does. This discussion has profound implications for our future in a world of AI and/or human disillusionment, so I’m rather annoyed that Sapolsky didn’t satisfy. But first I’ll paraphrase his argument here.

He describes a thought experiment.

Let’s say you were expecting to get a good result in an exam because you know your stuff, but a sledgehammer falls on your foot on your way into the examination hall. Let’s say that you sit the exam anyway with pain pulsing through your body. Now anyone would understand that your mental ability and therefore the answers you give in the exam would be affected by this event in your life history. So why can’t everyone agree with the view that all our responses to the world follow the same process? The only difference is that this example shows one big event and an immediate reaction, but in real life we are talking about millions of small events in our whole life history and their combined effects on all our actions. 

In Sapolsky’s view, the differences are only in degree and number. So we don’t actually have the power to choose anything. When we think we are choosing responses to alternatives using our free will, those choices are only the mechanical result of everything that has previously happened in our lives.

Now I’m reserving my position on the outcome of the discussion, but I can say immediately that this is a lousy argument. Sapolsky’s thought experiment doesn’t prove anything. He describes something that may be true, and then he jumps from that description of something simple to a conclusion about one of the cornerstones of complex human experience.

First, he doesn’t discuss how the sledgehammer affects the exam answers. Perhaps they would just be marred by bad grammar or by the omission of some clever discussion? We aren’t invited to consider, for example, that if we go into an exam with the belief that dogs are nicer than cats, then a sledgehammer will make us say otherwise. If we could all agree on that, his example might be more impressive.

My bigger objection is that from this contrived and simple scenario, he extrapolates in one jump to extreme complexity. Complex systems, by definition, aren’t simple. 

Give me a moment here to invent a new law of reasoning. Ahem. James’s Law states that for any rule that governs simple systems, there is a threshold of ‘sufficient complexity’ that makes nonsense of that rule. Einstein only makes nonsense of Newton when we get close to the speed of light. Gödel’s incompleteness theorems only apply to formal systems of mathematics that are of sufficient complexity to be self-referential. It’s as if Sapolsky is saying ‘Apples fall off trees, so Einstein got it wrong,’ or ‘2 + 2 = 4, so Gödel must have made a mistake.’

Now it may be that he has a formal proof derived from this exam-sledgehammer story tucked away somewhere, and that he thought it was too rarefied for Josh’s listeners. But even though I don’t like his chain of logic, he’s given me enough for what I want to say, so I’ll quit the rant and take it from there.

I’d rather look at this from a different angle. 

First. Lack of free will doesn’t imply that history and the future are predetermined – or (cue spooky music) . . . does it? Josh and Sapolsky discuss what people or societies ought to do in various circumstances. Sapolsky says for example that a guilty verdict on a murderer may be sufficient to detain the criminal in quarantine to save society from further harm, but we shouldn’t blame him or punish him for his crime any more than we blame or punish a car with faulty brakes. This kind of statement shows that even while proposing the strict absence of free will, a philosopher can acknowledge that we still make decisions all the time. How can that be?

Sapolsky would agree that we can decide what actions to take; it’s just that these decisions are not produced by human spirit but are absolutely constrained by our history (including the immediate history, which includes the stimulus). He’s not even saying that decisions are predetermined, just that the owner of the decision is acting mechanically, not freely. In his universe, people without free will make decisions all the time. But that doesn’t mean that the process and outcome are calculable. Why not? How can that be?

You might say that the result of a dice throw is a calculable function of the geometry of my hand and the dice before the toss, the masses and accelerations of those components at the moment of the toss, maybe a bit of air friction, and the physical and geometrical characteristics of the table where the dice land. Just a few variables and a result that is constrained, but in practice incalculable. Even in such a simple example, pseudo-random and random are impossible to separate in practice.
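To make that concrete, here is a toy sketch in Python. It is not a physical model of a real die; I’m using the chaotic logistic map as a stand-in for the tumbling dynamics, and the starting value and step count are arbitrary choices of mine. The rule is completely deterministic, yet nudging the starting state by one part in a trillion soon produces a different ‘toss.’

```python
def trajectory(x0, steps=60):
    """Iterate the logistic map x -> 4x(1-x), a fully deterministic
    but chaotic rule, and return every intermediate state."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

def face(x):
    """Read a die face (1..6) off a state in [0, 1]."""
    return min(int(x * 6) + 1, 6)

a = trajectory(0.123456789)          # one toss...
b = trajectory(0.123456789 + 1e-12)  # ...and a near-identical twin
gap = max(abs(p - q) for p, q in zip(a, b))
print(f"largest divergence between the twin tosses: {gap:.3f}")
print("faces:", face(a[-1]), face(b[-1]))
```

Constrained, yes; predictable, no. To forecast the face you would need the starting state to absurd precision, which is exactly the sense in which a deterministic toss is incalculable in practice.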

So with everything that has influenced my life, and all the thirteen-billion-year history of the universe that influenced my environment, predicting a dice toss is simple arithmetic compared with calculating the chance of ‘without-free-will-me’ choosing chocolate ice cream tomorrow. (The chance that I chose it yesterday is of course 100%!)

So to sum up, in a world without free will: decisions are taken by people, and they lead to actions which in slightly different circumstances might have been different. The results may be unpredictable by anyone, even the decider himself. Bottom line: a world in which we have free will looks exactly the same as Sapolsky’s world where we don’t. Neither he nor the rest of us can trace the paths leading to choices that have already been made, or predict the choices that will be made. The only significant point here is that he argues against assigning guilt or triumph to those who have made those choices.

I can’t remember which of his episodes it was, but I first started questioning my own views on this subject after Alex O’Connor mentioned a few months back that he doesn’t think we have free will either. If I remember correctly, his argument was very different though. He said something to the effect that we all act selfishly. Even when we make acts of charity, we only ever do so because we calculate that the feel-good or social income (to ourselves) will outweigh the cost. He said we have to do that. A change of mind just before a decision, for example, is nothing but a re-evaluation of the scoreboard. So we must do what our calculation tells us, even if we have made an error. I wondered at the time what he would think about dice or coin tosses.

Say I promise to donate $1,000 to charity on a ‘heads.’ Then it’s still my decision after the coin toss to comply or to break my promise. Alternatively, beforehand, I could have rigged myself up to some machine of torture to force my compliance – but in a world of free will that too would have been my decision, and therefore in a world without free will it is not my decision. Alex’s argument is deeper and richer than Sapolsky’s, even if the result is the same. But there is still nothing in our observation of the world, either before or after the decision is made, that definitively proves or disproves the existence of free will. Maybe that’s why, according to Wikipedia: “These questions predate the early Greek stoics, and some modern philosophers lament the lack of progress over all these centuries.”

So again I think Alex’s argument, though more compelling, is still not sufficient.

Thank you, Josh Szeps, for your input, because you brought us back from the unknowable hall of mirrors to the real-world effects that this stuff has on life as it is lived. So what are the effects of these discussions on our culture? Why did I say at the start of this essay, ‘This discussion has profound implications for our future in a world of AI and/or human disillusionment’?

I actually said that for two different reasons. One is about design and the nature of mind; the other is about behaviour and influence. Here goes.

The simpler one for me to consider is the second: behaviour and influence. Josh mentioned that he once went to a lecture by a writer who had spoken about the need for aspiring writers to keep getting back up from rejection and disappointment. That inspiration made a positive impact on him. Josh resisted Sapolsky’s arguments – not on logical grounds, but on the practical stance that an optimistic view of one’s own agency is a good thing. Yes, Josh. 100%! Thank you!

In last week’s essay, I talked about nihilism and miserable young men. Feed Sapolsky into nihilism, and you have the perfect vicious circle – downward spiral – perpetual-motion generator of depression. Scream with me here: ‘Not only is there no meaning in my life, but I don’t even have the power to disagree.’ Please, please, miserable people. Don’t listen to this stuff! Go sniff a rose! Get drunk! Anything but this! How ironic that someone purporting to deny free will might precipitate suicidal decisions in others!

But apart from the possible reverberations of that discussion in an uncertain age, my other avenue of thought wonders about a possible resolution of the discussion itself. Maybe, rather than being proven by deduction, this question will be demonstrated by our current experiments with Artificial General Intelligence.

This discussion has implications for machine consciousness, or maybe I should say self-awareness. If we don’t have free will, and perhaps we don’t, then why does the experience of being human feel so much as if we do? There are echoes here of Alan Turing: ‘A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.’ OK, Alan. The world passed that point on 17 June 2022.

So, what about the next step on that ladder? If neither humans nor computers have free will, but we humans think we have, then maybe it’s no longer such a big stretch for a computer to think that it has too?

Photos from Unsplash by Robert Anasch (doors), Lyman Hansel Gerona (C3PO) Jens Lelie (forest path) and Nika Benedictova (dice)

Nick James



17 January 2024, Brittany

Header Image:

Drawing by the Author