UK start-up Wayve has created a self-driving car system that can explain its actions in what the company claims is a “first of its kind” that will give vehicle owners confidence in the technology.
The system aims to tackle one of the biggest criticisms of the latest advances in artificial intelligence: a lack of transparency over why an AI system produces a certain output or outcome.
The Wayve model, called Lingo-1, can answer questions in plain English to clarify and explain what informed its decisions, for example why it slowed down after spotting a pedestrian near a crossing.
“We recognised the challenge of interpretability and safety for these systems, and we thought that language would be a great way to solve it,” said Wayve chief executive Alex Kendall.
“This is the first time that language and robotics have been combined in this way [and] from a regulation point of view, you can just ask the system what it understands and doesn’t.”
When the Financial Times tested the model, the AI could explain why it was travelling at a certain speed and could identify details in its surroundings, such as how many storeys a building had.
However, when asked whether it could park on the left where there was a bus stop, it insisted it could, despite repeated questioning. Kendall accepted the model needed “a little more training”.
Tom Leggett, vehicle technology specialist at automotive research body Thatcham Research, said making AI more transparent to potential passengers of self-driving cars was essential to their widespread deployment.
“If you don’t understand the workings and the thought process of the AI, I don’t believe consumers are ever going to trust these systems. And without that, we’ll never have safe adoption,” said Leggett, whose research is funded by the insurance industry to understand automotive sector risks.
Wayve, which was founded in Cambridge in 2017, raised $200mn in a series B round in January last year led by Eclipse Ventures. It has not disclosed its valuation. Other investors include Baillie Gifford, Microsoft and angel investor and AI pioneer Yann LeCun.
Competition in the autonomous car industry has been concentrated mainly in San Francisco, where Alphabet-owned Waymo and GM-owned Cruise are both based and operate commercial services.
Wayve says its technology works differently to most other autonomous driving systems. Instead of following rules written by engineers — the approach that Waymo and Cruise started out with more than a decade ago — Wayve’s system teaches itself to drive by processing videos of humans driving.
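To make the distinction concrete, the sketch below contrasts the two philosophies in simplified form. Neither function reflects Wayve’s, Waymo’s or Cruise’s actual code; the names, thresholds and placeholder logic are hypothetical illustrations only.

```python
import random


def rules_based_speed(pedestrian_near_crossing: bool, speed_limit: float) -> float:
    """Hand-written rule of the kind engineers write in a rules-based stack:
    slow down when a pedestrian is near a crossing."""
    if pedestrian_near_crossing:
        return min(speed_limit, 10.0)  # engineer-chosen threshold, km/h
    return speed_limit


class LearnedPolicy:
    """Stand-in for an end-to-end model trained on videos of human driving.
    In practice this would be a neural network mapping camera frames to
    driving commands; here it is a stub that returns a plausible speed."""

    def predict_speed(self, camera_frame) -> float:
        # A real model would infer behaviour from the pixels alone, with no
        # hand-written pedestrian rule anywhere in the code.
        return 10.0 + random.random()  # placeholder output


print(rules_based_speed(True, 50.0))        # rule fires: 10.0
print(LearnedPolicy().predict_speed(None))  # learned behaviour, no explicit rule
```

The trade-off the article describes follows from this structure: in the first function the reason for slowing down is readable in the code; in the second it is buried in the model’s learned weights.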
Lingo-1 seeks to address the main drawback of abandoning the rules-based approach: it is harder for engineers to understand how the AI makes decisions, and more difficult to convince regulators that such a system is sufficiently safe.
Securing regulatory sign-off was one of the biggest hurdles to rolling out passenger trials of self-driving fleets in the US. The question of exactly how safe an autonomous car needs to be remains unanswered, with regulators reluctant to endorse systems that cause fewer fatalities than human drivers but could still lead to a fatal accident.
Wayve’s so-called “end-to-end” self-driving system is similar to the approach that Tesla announced in May. Wayve’s technology also incorporates rules from the Highway Code into its training.
Lingo-1 draws on recent advances in large language models, the technology behind AI chatbots such as ChatGPT, pairing driving data with human drivers’ commentary on their own behaviour to produce answers in text.
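A minimal sketch of how such pairing might be structured is shown below. Wayve has not published its data format; the class, field names and example record here are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class DrivingSample:
    """Hypothetical training record pairing a driving clip with commentary."""
    camera_frames: list  # file names for a short clip of sensor input
    controls: dict       # e.g. {"speed_kmh": 12.0, "brake": 0.6}
    commentary: str      # the human driver's spoken explanation


samples = [
    DrivingSample(
        camera_frames=["frame_0.jpg", "frame_1.jpg"],
        controls={"speed_kmh": 12.0, "brake": 0.6},
        commentary="Slowing down because a pedestrian is waiting at the crossing.",
    ),
]

# Training would align each clip and its controls with the commentary, so the
# model learns to answer questions about its behaviour in plain English.
for s in samples:
    prompt = f"Observation: {s.controls}. Question: Why are you slowing down?"
    target = s.commentary  # supervision signal for the language output
    print(prompt, "->", target)
```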
Regulators and insurers also have concerns about the accuracy of the explanations built into such systems, as it is difficult to verify whether the AI’s language output matches its actual driving behaviour.
Wayve said Lingo-1 was about 60 per cent accurate compared to human answers and was improving rapidly. The company has not given a date for when the model will be incorporated into its self-driving cars.
“There is still a lot of work to be done [because] if you want to hold somebody culpable for an accident, you cannot 100 per cent guarantee that the explanations justify the underlying model,” said Daniel Omeiza, a researcher at the Oxford Robotics Institute.