Cruise’s driverless car accident underlines the risks of AI

Disastrous incident could weaken trust in autonomous vehicle technology and provoke regulatory intervention

By RICHARD WATERS

It’s not unusual for a new tech product to be recalled. Promising new gadgets don’t always work perfectly from the outset. But it doesn’t usually happen after the technology in question has just collided with a pedestrian and then dragged them 20ft across the street.

The disastrous accident that has brought a halt to operations at Cruise, the General Motors driverless car division, is the kind of setback that champions of autonomous vehicles have long dreaded. It has the potential to weaken trust in the technology and provoke tough regulatory intervention, but it need not set the cause of robotaxis back years — as long as Cruise and its rivals act quickly and show they have truly taken the lessons to heart.

In early October, one of the company’s cars ran over a pedestrian who had been thrown into its path after being struck by another vehicle. The Cruise car stopped, but then moved a further 20ft in what the company described as a safety manoeuvre to make sure it didn’t cause a hazard — all the while with the seriously injured pedestrian trapped underneath. California authorities, which suspended the company’s licences to operate two weeks ago, also claimed that Cruise executives didn’t initially disclose the car’s second manoeuvre to regulators, though the company has denied this.

The mess has underlined a number of uncomfortable truths about autonomous vehicles and, by extension, much of the artificial intelligence industry. One is that the kind of race that breaks out around potentially world-changing new technologies creates an inevitable tension. On one hand there is the Silicon Valley culture of deploying new technologies rapidly; on the other, the safety cultures and processes that take years to evolve in more mature markets.

In the US, Cruise has been racing against Tesla and Waymo, part of Alphabet, to develop robotaxi services, and parent GM has set an ambitious revenue target of $1bn by 2025. Cruise has now voluntarily suspended all its operations and promised a complete overhaul of its safety processes and governance arrangements. This may be welcome, but it came only after California regulators barred the company from operating. Cruise and its rivals need to show they can get ahead of public expectations about safety rather than simply react to them.

A second uncomfortable truth is that deep learning, the technique behind today’s most advanced AI systems, is not yet capable of anticipating accidents like the one at Cruise. It may never be. The mishap is a reminder that supervised learning systems are only as good as the data they have been fed. And no matter how much data there is, it is simply impossible to train them on everything the world may throw at them.

Cruise can at least use this accident in future training: all its vehicles will now learn from the experience. It also estimates that an accident of this kind was likely to happen only once every 10mn-100mn miles of driving. Yet there will always be new situations that have not been encountered before.

To regain public trust, Cruise and its rivals will have to show not just that their cars have fewer accidents than human drivers, but that they don’t sometimes make the kinds of serious mistakes that a human could easily have avoided. That is still too high a bar for today’s technology.

A third issue raised by the accident concerns regulation. While there has been much discussion about how AI should be regulated, there has been less about who should actually do the regulating — and what say ordinary citizens and their elected representatives at different levels of government should have over a technology that may deeply affect their lives. In Cruise’s case, approval from California’s state-level regulators was enough to give its robotaxis free access to San Francisco’s streets, despite protests from city transit authorities, the mayor’s office and citizens’ groups that the vehicles hadn’t been fully tested.

Allowing greater city-level oversight would create a thicket of regulations that would make it hard for driverless car companies to scale. Yet the fallout from the Cruise accident suggests that the balance struck in California is inadequate.

All is not lost. Cruise’s response has been straight out of the crisis-management textbook, from the external investigations it has launched into its technology and its handling of the accident to the voluntary recall of its cars. But it needs to convince the world that this is a true turning point, not just a damage-limitation exercise.