Life and Death — The future of AI

Chris Herd
4 min read · Sep 21, 2016


Both your parents are dead.

What happened?

They were involved in a car accident this morning, a collision between two autonomous vehicles while they were driving to lunch. The other driver, a young woman, survived unscathed. The logs from the car reveal your parents had been travelling at a constant speed, with no obstacles in their path, when the other car altered course and smashed into the side of your parents' car.

Who is she?

The young woman is a member of a wealthy family, which meant she had abortive collision software installed: software that crashed the car in a way that ensured her survival, with no care or attention for the loss of anyone else's life. And it is all completely legal; you'll receive compensation for your inconvenience.

Why?

The young woman's vehicle's on-board computer sensed the impending collision and ran an escape programme which instantaneously computed every possible scenario. The car on her left-hand side also had abortive collision software installed, and therefore her car was unable to veer that way. Unfortunately, your parents' car was the closest without it, and so they became the casualties in this incident.
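To make the logic of that scenario concrete, here is a purely hypothetical sketch of the kind of prioritisation the story imagines. No real vehicle works this way; the NearbyCar type, the has_abort_software flag, and choose_collision_target are invented for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NearbyCar:
    distance_m: float          # distance from the swerving vehicle, in metres
    has_abort_software: bool   # hypothetical flag: this car runs the same escape programme

def choose_collision_target(nearby: List[NearbyCar]) -> Optional[NearbyCar]:
    # Purely illustrative: exclude any car protected by the same software,
    # then pick the closest remaining car, as in the scenario above.
    candidates = [car for car in nearby if not car.has_abort_software]
    if not candidates:
        return None  # no "acceptable" target; fall back to braking
    return min(candidates, key=lambda car: car.distance_m)

# The car on the left (protected) is skipped; the unprotected car is chosen.
cars = [NearbyCar(distance_m=4.0, has_abort_software=True),
        NearbyCar(distance_m=6.5, has_abort_software=False)]
print(choose_collision_target(cars))
```

The point is not the code itself, but how trivially such a preference can be expressed once someone decides to write it.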

How?

Welcome to the future, and to my biggest fear for the impending automation of industry and the growth of AI: human selfishness.

The worry that machines and robots will take over the world is overblown and misleading. The reality is that they won't; at most they will assist us in our everyday tasks, improving our performance far beyond what is currently imaginable. That's not to say there aren't elements of AI which should be questioned; of course there are. The deeper one delves into the realms of AI and the possibilities about to emerge, the more questions arise that must be answered.

Active assists on vehicles are great; anything which improves the performance of a vehicle beyond human capabilities should be utilised as extensively as possible, so as to ensure the lowest possible loss of life. But AI cannot be developed in such a way that preferential treatment is given to people who can afford programmes that preserve their lives above others' when faced with the probability of catastrophe.

As with the above, the extension of those possibilities to alternative realms is entirely imaginable. Autonomous vehicles are the easiest to imagine, due to their proliferation into public consciousness and the fact that they will be rolled out widely within the next decade. I, Robot provides a window into the possibilities. In one scene, the robot saves the life of the human more likely to survive rather than the other. This showcased the logical reasoning which I believe would be most sensible, but what if the opposite could also be true?

What if the wealthiest members of society wore hardware which overrode the logical reasoning of computers and saved them regardless of the alternative possibilities? This brings forth a whole other level of moral reasoning, one that sits alongside the current conversation about what happens if machines kill humans. Now we are forcing the decision based on what we can afford.

And that is what we must contend with. Detrimental effects of automation are an inevitability of progress, but the casualties that arise will be scrutinised fervently, as their injuries will not have been caused by the actions of somebody else; they will have resulted as a direct consequence of the actions of a computer. The jump is stark. Human progress has gone from autonomy of self, to control of machines, to autonomy of machines which take control. For the first two, we were responsible for the consequences; in the latter, we are passengers to our own fate.

We have ceded control in the hope that the time regained from menial and tedious tasks allows us to focus our effort on things that matter more, but it is essential that we first create a memorandum of rights which is universally applicable. In order to progress, we need to be clear about the established protocol and understand the implications of technological development.

Oversight after the fact will not be good enough.

And that is what we must understand. With automation, we need to know why something is going to happen before it does. We need to see and hear about what will happen in advance, because it is all predictable. It is predictable because every eventuality is determined by the programme we give it. Autonomous vehicles, at the end of the day, are still governed by the laws that we fix.

So don't fear AI and autonomy; embrace them. But be ready to be furious at the way in which humanity utilises the most important tool of the 21st century.

At the end of the day we will only have ourselves to blame.

Enjoyed the read? I'd really appreciate it if you clicked the ❤ below to recommend it to other readers!

Want more like this? Follow me on Medium, Twitter, Facebook, or visit www.chrisherd.co.uk

You may repost this article on your blog, website, etc. as long as you include the following (including the links): “This article originally appeared here. Follow @Chris_Herd for more articles like this.”
