Do billiard balls (or humans) have free will?
Billiard balls
If free will is “the power of acting without the constraint of necessity or fate; the ability to act at one’s own discretion,” then it’s pretty clear that billiard balls do not have free will.
Why? Because obviously when the cue ball strikes the eight ball, the eight ball doesn’t decide to bounce left or right. The eight ball doesn’t decide anything at all. Its movement is wholly constrained by physics.
Self-driving cars
With more complexity comes more sophisticated behavior.
Self-driving cars can ‘decide’ for themselves when and where to turn. The atoms in the microchips that run the software that steers the vehicle are governed by physics, but in a loose sense the car can act of its own accord.
More sophisticated behavior is not the same as free will, however. We don’t ascribe human-like agency to self-driving cars; they do only what they were programmed to do.
AI robots
What about some future artificially intelligent robot that mimics human behavior so thoroughly that it learns on its own and behaves convincingly enough to pass the Turing test?
For example, suppose that when traveling to the United Kingdom, the robot (while driving a car) figures out that it’s best to drive on the left side of the road and so chooses to drive on the left side, contrary to its original programming in the United States.
Is that free will? Could it have decided differently?
Free will
That’s not free will in the sense defined above, because the robot is still behaving solely in accord with its programming. The programming might have been something like: if new danger X is encountered, add it to the list of dangerous items and avoid X the next time it’s encountered. Driving on the right-hand side of the road in the United Kingdom is a new X.
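A minimal sketch of that kind of rule, in Python with hypothetical names, might look like this:

```python
class Robot:
    """Toy sketch of the avoidance rule described above (all names hypothetical)."""

    def __init__(self):
        self.dangerous = set()  # the robot's learned list of dangerous items

    def encounter(self, item, caused_harm):
        # If a new danger X is encountered, add it to the list...
        if caused_harm:
            self.dangerous.add(item)

    def should_avoid(self, item):
        # ...and avoid X the next time it's encountered.
        return item in self.dangerous


robot = Robot()
robot.encounter("driving on the right in the UK", caused_harm=True)
print(robot.should_avoid("driving on the right in the UK"))  # True
```

The robot ‘learns’, and its behavior changes, but every step is still dictated by the rule.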
Suppose we introduce a random number generator into the algorithm, so that our human-like robot becomes unpredictable, much as humans are unpredictable. Does that get us any closer to free will (or a ghost in the machine)?
No, it only makes the machine’s actions difficult if not impossible to predict. We still wouldn’t say that the machine decided to turn left instead of right. All it does is follow its programming.
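A toy illustration (hypothetical, of course): adding a random number generator changes which branch is taken, but the coin flip is sampled by the program, so the ‘decision’ is still the program’s.

```python
import random

def turn(rng):
    # The 'choice' is just a sample from the generator; the program decides.
    return "left" if rng.random() < 0.5 else "right"

print(turn(random.Random()))    # unseeded: unpredictable to an observer...
print(turn(random.Random(42)))  # ...but seeded, the same 'choice' every run
```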
The test
Suppose we put two identical AI robots in the same situation at the same time and at the same location and then we watch how they behave. We can’t do this in real life, of course, because two physical objects can’t occupy the same space at the same time.
So suppose instead that we build two Matrix-like worlds in cyberspace: two identical simulations. We set one robot loose in each, at the same time and place, under identical conditions, so that the worlds are the same. Exactly the same.
If our duplicated robots are exactly the same, and if they encounter the same stimulus at the same time and place, could our robot (singular) behave differently in cyberworld-one vs. cyberworld-two?
If so, what would account for the difference?
(That random number generator again?)
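In code, the thought experiment amounts to deterministic replay: same program, same seed, same stimuli, same behavior. A sketch under those assumptions:

```python
import random

def run_world(seed, stimuli):
    """Simulate one cyberworld: one robot, one stream of stimuli."""
    rng = random.Random(seed)  # that random number generator again
    return ["left" if rng.random() < 0.5 else "right" for _ in stimuli]

stimuli = ["fork in the road", "pedestrian", "roundabout"]
world_one = run_world(seed=7, stimuli=stimuli)
world_two = run_world(seed=7, stimuli=stimuli)
print(world_one == world_two)  # True: identical worlds, identical behavior
```

The only way to get a different run is to change something: the seed, the stimuli, the program. But then the worlds are no longer exactly the same.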
The human upgrade
Suppose next that we enhance our robot in such a way that it’s self-reflecting. That is, suppose we add subroutines to the robot’s ‘brain’ routine so that it’s able to reflect upon itself, and upon how others react to it.
Eventually our robot may come to behave (‘think’, act) as if it’s the master of its own destiny. It may come to believe that it’s really deciding, that it could have turned right instead of left, even though we, as the master programmers, know that it could not.
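One crude way to picture those subroutines (purely hypothetical): a fixed, deterministic choice rule plus a self-model that narrates the choice as if it were free.

```python
class ReflectiveRobot:
    """Toy self-model: a fixed choice rule plus a subroutine that narrates it."""

    def __init__(self):
        self.last_reflection = ""

    def choose(self, options):
        choice = min(options)  # the underlying rule is fixed and deterministic
        # The 'self-reflection' subroutine models the act as a free decision.
        self.last_reflection = (
            f"I weighed {options} and decided on {choice!r}; "
            "I could have chosen otherwise."
        )
        return choice


robot = ReflectiveRobot()
robot.choose(["left", "right"])
print(robot.last_reflection)
# I weighed ['left', 'right'] and decided on 'left'; I could have chosen otherwise.
```

The robot’s report is sincere, so far as it goes. It is also false.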
Could it have chosen otherwise?
If we answer, “No, in identical worlds our robot would [must] behave the same,” then it has no free will in the sense defined above. It could not have chosen otherwise, even though it ‘thinks’ it could have.
Is that what we mean by ‘free will’ when we say that humans don’t have free will? Do we mean that even though it feels like you chose to read this, that in fact you had no real choice at all, no more than the billiard ball could choose to go left instead of right?
We live as if we do have free will, of course. For example, we choose not to drink and drive. We choose who to vote for. We choose who to marry.
But the question remains: Do we really choose, in the free-will sense defined above, or are we actually evolution-generated meat robots who merely deceive ourselves into thinking we choose?
Are we self-deceiving billiard balls with an attitude, or do we actually have agency in the sense that we could have chosen otherwise?