Jamie Bartlett

The driverless car revolution will open up all sorts of dilemmas


Philip Hammond wants fully autonomous driverless cars on our roads by 2021. That’s not too far away, is it? I know it sounds like a science fiction year, but it’s only about fifty months off. Technologically, it’s plausible. Earlier this year I travelled over 100 miles in a driverless truck across Florida with the BBC. True, it was on long straight highways and not through Slough town centre in the rain, but still. Millions are being spent on this technology, and in the race between Google, Uber, Tesla and the rest, there will be rapid progress. And there is no doubt that driverless cars will be safer than the killing machines currently operated by texting, confused and tired humans. Thousands are killed every year by human drivers. There will, of course, be some accidents, but the overall outcome will be positive.

But the real impediment won’t be technology. It’ll be us. First of all, think of the politics. One application of this technology – not immediately, but in time – will be to eliminate the jobs of those who get paid to drive buses, taxis and lorries. Given that the RMT goes on strike over pay disputes with a handful of train operators, expect some major opposition to the decimation of an industry’s worth of jobs. Of course no-one has a plan for these people, and no-one ever does. Least of all the people building the tech. Of course there’ll be new jobs – but I doubt the drivers will fill them. When I visited Silicon Valley recently, I was told repeatedly that unemployed truckers in their fifties should retrain as web developers and machine learning specialists. This is the kind of idealism that comes from not living in the real world. Driving employs an awful lot of people in this country – and it’s often a first rung on the job ladder for newly arrived migrants. We have until 2021 to work out how we might smooth this transition – otherwise the disputes over Uber’s licence will seem like a breeze in comparison to what’s coming.

And think of all the tedious Jeremy Clarkson articles about nothing-like-the-wind-in-your-hair-in-an-Alfa-Romeo you’ll have to wade through between now and then. About the roar of the Mustang, the freedom of the open road. That alone is enough to put me off the whole idea.  

But perhaps the most interesting question is which philosophy the driverless vehicles will be programmed to follow. Imagine a scenario in which someone dashes out in front of a robot car. Continue and the man dies. Swerve and a different man dies. Stop suddenly and the passenger in the car dies. How would it decide?

As it happens, there’s a whole branch of philosophy dedicated to this problem, known as ‘trolleyology’. The scenario is of a runaway train steaming toward five people tied to the track. You are standing nearby and can pull a lever to divert the train onto a side track. However, there is one person tied up on the side track too. So what do you do? The utilitarian would of course pull the lever, and save as many lives as possible. The deontologist might prefer not to commit a moral wrong by intervening, and let the train run its course. And from that premise, all sorts of scenarios can be sketched out. Would you pull the lever if you knew the person tied to track 1, but not the person on track 2? If the person on track 1 was very young, and the person on track 2 was extremely old? If the person on track 1 was a criminal, and the person on track 2 was a priest? And on and on.
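To see how little the code itself can settle, here is a toy sketch in Python of the two rules above. Everything in it – the Outcome class, the casualty counts, the option names – is invented for illustration; no real car runs anything this crude.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        action: str                   # e.g. "continue", "swerve", "brake hard"
        deaths: int                   # expected casualties from this action
        requires_intervention: bool   # does the car actively redirect harm?

    def utilitarian_choice(options):
        # Pull the lever: minimise expected deaths, nothing else matters.
        return min(options, key=lambda o: o.deaths)

    def deontological_choice(options):
        # Refuse to actively redirect harm: prefer any passive option,
        # even if more people die as a result.
        passive = [o for o in options if not o.requires_intervention]
        return passive[0] if passive else min(options, key=lambda o: o.deaths)

    # The scenario above: continue (a pedestrian dies), swerve (a different
    # pedestrian dies), brake hard (the passenger dies).
    options = [
        Outcome("continue", deaths=1, requires_intervention=False),
        Outcome("swerve", deaths=1, requires_intervention=True),
        Outcome("brake hard", deaths=1, requires_intervention=True),
    ]

    print(utilitarian_choice(options).action)    # "continue" - but only because
                                                 # min() breaks the three-way tie
                                                 # by taking the first option
    print(deontological_choice(options).action)  # "continue" - on principle

Notice that when every option costs one life, the utilitarian rule has nothing to minimise and simply returns whichever option happens to be listed first. The philosophy has to be chosen before a single line like this can honestly be written.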

I always thought this was some silly undergraduate philosopher’s game. But these are the sorts of dilemmas that the programmers of these driverless cars will face. I’m not sure which I’d choose, but I’d certainly be upset if a family member was paralysed because my utilitarian car decided to swerve in front of a bus to avoid a squadron of drunk cyclists. And just think of the lawsuits. Our legal system is not really set up for trawling through complex proprietary machine-learning algorithms to figure out whether a faulty line of code caused the car to make the wrong decision. Who is to blame: the engineer who wrote the code? The manager who signed it off? The pedestrian who stepped out? Human-driven cars at least benefit from a relatively clear determination of responsibility.

Until these human problems are resolved – and maybe they can’t be – driverless cars won’t be going anywhere. Personally, I’d like a little lever in my own car so I could decide for myself, perhaps switching it depending on the mood I’m in. Though I’d make sure the default setting was ‘save the driver’s life at all costs’, since I am, of course, a far safer and better driver than all those other maniacs on the road.