
Driverless cars

They are just around the corner and will appear on a road near you in the not-too-distant future. Audi has announced that its driverless A8 luxury limousine will be released in 2017. Nissan has declared that it will release a self-drive car by 2018. And Ford says its fully autonomous vehicles will be on the market by 2020.

Most major auto brands, plus tech giant Google, are in the race to put a robot chauffeur in our vehicles. In the coming years, sitting in the driver’s seat of a car and exclaiming, “Look, no hands!”, will not be a joke but a reality. Google has released a prototype of a self-driving car that has no steering wheel, accelerator or brake pedal.

Proponents of fully autonomous vehicles assert that they will eliminate human error, which is the cause of most accidents. Humans are emotional creatures who can and do make rash decisions when behind the wheel. As fallible drivers, we speed, cut corners, run red lights, stray into the wrong lane and get frustrated in heavy traffic.

Driverless cars, on the other hand, are devoid of these human foibles and operate on logic. The computer-driven “horseless carriages” of the future will be programmed to behave cautiously and obey instructions. A car’s control system will not get tired, drunk or impatient. It will always signal when turning and won’t put the pedal to the metal, talk on the phone or succumb to road rage.

It is claimed that driverless cars could dominate roads in the next 15 years. Driverless technology uses a range of technologies - including cameras, sensors, radar, GPS and computer vision - that constantly monitor a vehicle’s surroundings. These control systems interpret sensory information to identify appropriate navigation paths as well as obstacles.
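To make the idea concrete, the sense-then-decide loop described above can be sketched as a toy program. This is purely illustrative - the sensor names, distances and thresholds are invented for the example, not taken from any manufacturer's actual control system:

```python
# Toy illustration of a control system fusing independent sensor
# readings into one navigation decision. All distances and
# thresholds below are assumptions made for illustration.

def fuse_sensors(camera_m, radar_m, lidar_m):
    """Combine independent distance-to-obstacle estimates (metres)
    by taking the most conservative (closest) reading."""
    return min(camera_m, radar_m, lidar_m)

def choose_action(distance_m, safe_gap_m=30.0):
    """Pick a navigation action from the fused distance."""
    if distance_m < safe_gap_m / 2:
        return "emergency_brake"
    if distance_m < safe_gap_m:
        return "slow_and_replan"
    return "continue"

reading = fuse_sensors(camera_m=42.0, radar_m=38.5, lidar_m=40.1)
print(choose_action(reading))  # prints "continue": 38.5 m exceeds the 30 m gap
```

Taking the minimum of the readings is one deliberately cautious fusion choice; real systems weigh sensor reliability in far more sophisticated ways.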

Another claim is that driverless vehicles will “talk” to each other using vehicle-to-vehicle (V2V) communication technology. This should provide motorists with not only a quicker journey but a safer one - V2V can activate an emergency stop to prevent a collision. 
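The V2V emergency-stop idea can also be sketched in miniature: each car broadcasts its position and speed, and a receiving car brakes if the projected gap closes. The message format, the two-second projection horizon and the minimum gap are all assumptions for the sake of the example:

```python
# Toy V2V collision-avoidance sketch: a car receives another
# vehicle's broadcast and decides whether to trigger an emergency
# stop. The message fields and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class V2VMessage:
    sender_id: str
    position_m: float   # distance along the road
    speed_mps: float    # metres per second

def should_emergency_stop(own_pos, own_speed, msg,
                          horizon_s=2.0, min_gap_m=5.0):
    """Project both vehicles forward and stop if the gap would
    shrink below the minimum safe distance."""
    own_future = own_pos + own_speed * horizon_s
    other_future = msg.position_m + msg.speed_mps * horizon_s
    return abs(other_future - own_future) < min_gap_m

# A car ahead has braked hard: 30 m away and nearly stopped.
msg = V2VMessage("car_ahead", position_m=30.0, speed_mps=1.0)
print(should_emergency_stop(own_pos=0.0, own_speed=16.0, msg=msg))  # prints True
```

At 16 m/s the trailing car would cover the whole 30 m gap within the two-second horizon, so the sketch correctly triggers the stop.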

These purported benefits make state-of-the-art, self-drive vehicles sound like auto utopia. But is this really the case? Driverless cars will free us to work, rest, play and chat while in transit, so commute time will no longer be down time. However, I’m not ready to get in my car, go to sleep and wake up at my destination as if being transported like cargo.

I actually like driving and being in control of my vehicle. Also, I’m not convinced that all the computer bugs have been ironed out yet. What happens if glaring sun blinds a car’s cameras? How will a driverless car react to road rage from another vehicle? What about insurance? Who pays if your robot-driven car causes an accident? 

Test trials of driverless vehicles have shown that the technology’s record is not squeaky clean. Google’s test fleet of autonomous cars has been involved in 11 accidents. Nonetheless, I have no doubt that the technology will get better and that sooner rather than later driverless cars will get the green light from traffic authorities. But I won’t be an early adopter of robotic cars. 

In 1942, science fiction writer Isaac Asimov suggested “rules” to govern the behaviour of robots. These three rules have since become known as Asimov’s Laws and state:

-  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

-  A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

-  A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
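The strict precedence in the three laws - each law yields to the ones above it - can be expressed as an ordered series of vetoes. The flag-based action model below is a deliberate simplification invented for this sketch; it shows the priority ordering, not how any real robot reasons:

```python
# Asimov's three laws as an ordered veto list: a candidate action
# is checked against each law in order of precedence. The simple
# true/false flags are an assumption made for illustration.

def permitted(action):
    """Return True if an action survives all three laws."""
    # First Law: may not injure a human, or allow harm by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: must obey human orders (an order to harm a human
    # has already been vetoed by the First-Law check above).
    if action.get("disobeys_order"):
        return False
    # Third Law: must not needlessly sacrifice itself.
    if action.get("sacrifices_self_needlessly"):
        return False
    return True

print(permitted({"harms_human": False, "disobeys_order": False}))  # prints True
print(permitted({"harms_human": True}))                            # prints False
```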

While Asimov tried to give machines a moral code, this is not possible in reality. So what will a driverless car do when faced with a moral dilemma? Ethicists say that you face a moral dilemma when you are obliged to perform each of two actions but can perform only one of them. Let me illustrate this dilemma with an example.

A driverless car has an obligation to keep its passengers safe. It also has an obligation to do no harm to pedestrians. But what if those two obligations clash? Let’s say a runaway truck is careering out of control and will crash head-on into a driverless vehicle. The vehicle’s computer has a duty of care to protect the passengers and take evasive action. 

But in taking the only evasive action available, the robot driver must mount the footpath (sidewalk) where a group of primary school children is standing. I can’t say with certainty what a mechanical driver would do in this situation, but I know what I would do as a human driver. 

The bottom line is that life is not always black and white. Sometimes the line between right and wrong is blurred. We humans rely on nuanced readings of complex situations, while machines have no such subtlety. That’s why the human brain is considered the most sophisticated machine in the universe. At this stage, I’d rather put my life in my own hands than in those of a machine.


Paul J. Thomas


Posted Monday, August 31, 2015
