Will we ever be able to trust driverless cars?
When you’re sitting in the driver’s seat at 60mph on a rain-lashed motorway, covering your eyes would normally be a dangerous, if not downright suicidal, move.
Putting on a virtual reality headset, obscuring the view of the road altogether, might seem even crazier.
But that’s exactly what I did recently.
To start with, I was looking at a computer simulation of the motorway in front of me. Then the road disappeared altogether, the car took off, and I began flying through an alien landscape.
This was the rather unsettling gimmick chosen by Renault to illustrate the potential of its new self-driving concept car, the Symbioz. The idea is that if you’re not driving, you can turn your mind, and eyes, to other things.
But will we ever be able to trust driverless technology enough to do that, and would we be right to do so?
My experience in the Symbioz – a car designed for fully hands-off driving – hardly filled me with confidence. The hi-tech sensors fogged up, the system stopped working, and a safety driver – usefully equipped with dual controls – had to take over.
To be fair, this was a prototype, and Renault admits it will be years before systems like this are ready to go onto the market.
But while there’s no doubt that fully autonomous self-driving cars are on their way, there are concerns that many of us may confuse assisted driving technologies – cruise control, lane keeping, automatic braking, collision avoidance systems and so on – with full autonomy.
And this could make us dangerously complacent.
Matthew Avery is a director of Thatcham Research, which tests new vehicles on behalf of the insurance industry. He says it is vital drivers know what they’re dealing with – and that a clear distinction is drawn between “hands-on” and “hands-off” set-ups.
“The systems we have got today are assisted-driver systems,” he says. “They are there to support the driver. But there is a risk that drivers become accustomed to them, and maybe think they’re automated when they’re not.
“There are really two levels. Either you’re assisted in your driving, but you’re still in the loop, or it’s automated driving, where the driver can even get in the back and read a book or go to sleep.
“We want cars to make that very, very clear.”
Tesla’s Autopilot system does many of the things you’d expect of a fully autonomous machine. It can brake, accelerate and steer by itself under certain conditions.
Other manufacturers, such as Volvo and Mercedes, offer similar systems on some models. And Audi’s new A8 enables completely hands-off driving in certain very specific circumstances.
But crucially, these cars are not designed to be left to their own devices. The driver is meant to be alert and able to take over at any moment, and for good reason.
In 2016, a Tesla owner was killed when his car failed to spot a lorry crossing its path. The US National Transportation Safety Board (NTSB) found that Tesla’s Autopilot system was partly to blame.
In 37 minutes of driving, the driver had his hands on the wheel for just 25 seconds, the NTSB found.
Since the accident, Tesla has introduced new safeguards, including turning off Autopilot and bringing the car to a halt if the driver lets go of the wheel for too long.
Cars like Renault’s Symbioz will probably be similar to conventional cars, but equipped with a system that acts like an advanced form of cruise control. Drivers will be able to use it on major motorways and over long distances, turning it on and off at will.
Meanwhile, companies like Google’s sister firm Waymo and the ride-hailing firms Uber and Lyft are developing driverless taxis as well.
But handing increasing amounts of control to computers comes with other risks, too, not least of which is the danger of being targeted by hackers.
Increasingly, modern cars come with internet connections, to help operate entertainment and navigation systems, or to allow them to be unlocked and started remotely using a phone.
That makes them vulnerable.
In 2015, for example, security researchers Chris Valasek and Charlie Miller made headlines when they showed how they could hack into a car remotely – and take control of key functions, including the brakes and the steering.
“If the car is connected, hackers can use that connection to remotely break in and take control of the vehicle,” says Kathleen Fisher, computer security professor at Tufts University, Massachusetts, and a former programme manager at the US defence research agency Darpa.
She believes companies simply don’t have enough economic incentives to make their products hacker-proof.
“Even if one car company was really motivated to make their cars as secure as technology knows how to do, the problem is that costs money,” she says.
Top-notch security is not necessarily a selling point, she believes, and advertising it may simply make customers more worried. It might also act as a challenge to would-be hackers.
But Chris Valasek, who now works for General Motors’ self-driving cars division Cruise, thinks the potential benefits of driverless cars outweigh the risks.
“They can’t drive drunk, they can’t drive tired, and they don’t look at Twitter on their phone while they drive,” he says.
“So while there’s the risk that someone could hack them, at the same time millions of people are going to be exponentially safer with this type of technology.”
And safety is the overriding benefit, experts say.
“More than 90% of the accidents that you see today are caused, one way or another, by human error,” says mobility consultant Sven Beiker, a former head of Stanford University’s Center for Automotive Research.
“On a global basis, that’s about 1.2 million people who die in traffic accidents. That’s motivation enough.”
So it does look as though cars are going to become more and more automated over the next few years.
When it comes to driving, it seems, human beings just aren’t good enough.