Your parents were involved in a car accident this morning, a collision between two autonomous vehicles on their way to lunch. The other driver, a young woman, survived unscathed. The logs from your parents' car reveal it had been travelling at a constant speed, with no obstacles in its path, when the other car altered course and smashed into its side.
Who is she?
The young woman is a member of a wealthy family, which meant she had abortive collision insurance installed: a programme which crashed the car in such a way as to ensure her survival, without consideration for other cars that don't carry the same insurance.
The young woman's vehicle sensed the impending collision and ran an escape programme which calculated every possible escape scenario. The car on her left-hand side had abortive collision insurance too, which meant her car was unable to veer that way. Unfortunately, your parents' car was the closest one without it.
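The escape logic the story describes can be sketched in a few lines. Everything here is hypothetical and invented for illustration — the class, the scoring rule, and the notion of "priority" insurance are not drawn from any real vehicle system:

```python
from dataclasses import dataclass

@dataclass
class EscapeOption:
    direction: str
    occupant_survival: float   # estimated survival chance for the insured occupant
    target_has_priority: bool  # does the car in this path carry the same insurance?

def choose_escape(options):
    # Paths toward other priority-insured cars are excluded outright;
    # among the remainder, only the occupant's own survival chance counts.
    eligible = [o for o in options if not o.target_has_priority]
    if not eligible:
        return None  # no permitted escape path
    return max(eligible, key=lambda o: o.occupant_survival)

options = [
    EscapeOption("left", 0.95, True),    # blocked: that car has the same insurance
    EscapeOption("right", 0.90, False),  # permitted: the nearest car without it
    EscapeOption("brake", 0.40, False),
]
print(choose_escape(options).direction)  # -> right
```

The disturbing part is how mundane it looks: the preferential treatment is a single filter at the top of the function.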
Welcome to the future and my biggest fear for the impending automation of industry and the growth of AI — human selfishness.
This is a huge philosophical question which harks back to the trolley problem
The worry that machines and robots will take over the world is overblown and misleading; the reality is that they won't. Most likely they will, at most, assist us in our everyday tasks, improving our performance far beyond what is currently imaginable.
That’s not to say AI shouldn’t be questioned or kept in check
The deeper question, though, delves into the possibilities that are about to emerge, which create far more questions that must be answered.
Active assists on vehicles are great; anything which improves the performance of a vehicle beyond human capabilities should be utilised as extensively as possible, so as to ensure the lowest possible loss of life.
But AI cannot be developed in such a way that preferential treatment is given to people who can afford programmes that preserve their lives above others when faced with the probability of catastrophe.
As with the above, the extension of those possibilities into alternative realms is entirely imaginable. Autonomous vehicles are the easiest to imagine because they will be rolled out widely within the next decade.
I, Robot provides a window into future probabilities
In the opening scene the robot saves the life of the human more likely to survive. This showcased the logical reasoning which I believe would be most sensible, but what if the opposite could also be true?
What if the wealthiest members of society wore hardware which superseded the logical reasoning of computers, meaning the machines would save them regardless of the alternative possibilities?
This brings forth a whole other level of moral reasoning, one which sits alongside the current conversation about what happens if machines kill humans.
Our fate could be decided by what we can afford
And that is what we must contend with. The detrimental effects of automation are an inevitability of progress, but the casualties that arise will be scrutinised fervently, because their injuries will not have been caused by the actions of somebody else; they will have resulted as a direct consequence of the actions of a computer.
The jump is stark
Human progress has gone from autonomy of self, to control of machines, through to autonomy of machines which take control.
For the first two we were responsible for the consequences; with the latter we are passengers to our own fate
We have ceded control in the hope that the time regained from menial and tedious tasks allows us to focus our effort on things that matter more, but it is essential that we first create a memorandum of rights which is universally applicable. In order to progress we need to be clear about the established protocol, so we understand the implications of technological development.
Oversight after the fact will not be good enough
And that is what we must understand. With automation, we need to know why something is going to happen before it does. We can, because it is all predictable: every eventuality is predicated by the code we give it. Autonomous vehicles, at the end of the day, are still governed by the laws that we fix.
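That predictability can be made concrete with a toy example — the situations, actions, and rule table below are entirely made up, but they show the point: the vehicle's behaviour is fully determined by rules its authors fixed in advance.

```python
# Hypothetical rule table: every situation the car can recognise is
# mapped to an action before the car ever moves.
RULES = {
    "obstacle_ahead": "brake",
    "pedestrian_detected": "brake",
    "clear_road": "maintain_speed",
}

def decide(situation):
    # Same input, same output, every time. Nothing here is emergent;
    # unrecognised situations fall back to the safest coded action.
    return RULES.get(situation, "brake")

print(decide("clear_road"))  # -> maintain_speed
```

Whatever the car does in a situation its authors never named was also decided in advance — by the default they chose.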
Whoever writes the code could be playing god