Tesla gambles on ‘black box’ AI tech for robotaxis

(Reuters) – Tesla aims to stun investors Thursday night with its long-awaited “robotaxi unveil,” a potential milestone after a decade of Elon Musk’s unfulfilled promises to deliver self-driving vehicles.

The automaker is widely expected to showcase a prototype called the “Cybercab” rather than a road-ready driverless taxi.

Convincing regulators and passengers of the vehicle’s safety could prove much harder and take much longer — while its main competitors, such as Alphabet’s Waymo, expand robotaxi fleets they’re already operating in select cities today.

Tesla has to date pursued a different technological path than all of its major self-driving rivals – one with potentially higher rewards but also higher risks to both its business and its passengers, according to Reuters interviews with more than a dozen executives, consultants and academics specializing in self-driving technology and three former Tesla autonomous-vehicle engineers.

Tesla’s strategy relies solely on “computer vision,” which aims to use cameras the way humans use eyes, combined with an artificial-intelligence technology called end-to-end machine learning that instantly translates the images into driving decisions.

That technology already underpins its “Full Self-Driving” driver-assistance feature that, despite its name, can’t be operated safely without a human driver. Musk has said Tesla is using the same approach to develop fully autonomous robotaxis.

Tesla’s competitors – including Waymo, Amazon’s Zoox, General Motors’ Cruise and a host of Chinese firms – use the same technology but typically layer on redundant systems and sensors such as radar, lidar and sophisticated mapping to ensure safety and win regulatory approval for their driverless vehicles.

Tesla’s strategy is simpler, and much cheaper, but has two critical weaknesses, industry executives, autonomous-vehicle experts and one of the Tesla engineers told Reuters. Without the layered technologies used by its peers, Tesla’s system struggles more with so-called “edge cases” – rare driving scenarios that self-driving systems and their human engineers find difficult to anticipate.

The other major challenge: The end-to-end AI technology is a “black box,” the Tesla engineer said, making it “nearly impossible” to “see what went wrong when it misbehaves and causes an accident.” The inability to precisely identify such failures, he said, makes it difficult to safeguard against them.

Tesla did not respond to a request for comment on its technology.

Nvidia founder and CEO Jensen Huang used the same “black box” term in an interview to describe the weaknesses of end-to-end technology, without specifically addressing Tesla’s system. End-to-end artificial intelligence involves training a computer to make decisions directly from raw data, with no intermediate steps requiring additional engineering or programming.
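To make the concept concrete, the sketch below shows what such a system looks like in code: a single neural network that maps raw camera pixels straight to steering and acceleration commands, with no hand-built perception or planning stages in between. The architecture, layer sizes and names are illustrative assumptions, not Tesla’s or Nvidia’s actual software.

```python
# Minimal sketch of an end-to-end driving policy: raw camera pixels in,
# steering and acceleration commands out, with no hand-engineered
# perception or planning stages in between. Illustrative only; the
# architecture and dimensions are assumptions, not any company's system.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoder turns camera frames into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # A small head maps the features straight to driving commands.
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),  # [steering angle, acceleration]
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

# Training is ordinary supervised learning against logged driving data:
# because the network is fit end to end, no intermediate outputs (lane
# lines, object lists) are produced or inspected -- hence "black box."
model = EndToEndDriver()
frames = torch.randn(1, 3, 224, 224)  # one dummy camera frame
controls = model(frames)              # tensor of shape (1, 2)
```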

Nvidia, the world’s leading producer of AI-computing chips, also uses end-to-end technology in autonomous-driving systems it’s developing and plans to sell to automakers. But Nvidia, Huang told Reuters, combines that approach with more conventional computing systems and additional sensors such as radar and lidar.

The end-to-end technology usually — but not always — makes the best driving decisions, said Huang, which is why Nvidia takes a more conservative approach. “We have to build the future step-by-step,” he said. “We cannot go directly to the future. It’s too unsafe.”
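Huang did not detail Nvidia’s design, but one generic way to build “step-by-step” is to let a rule-based layer, fed by independent sensors such as radar and lidar, veto the learned system’s output when the measurements disagree. The sketch below, with made-up thresholds and function names, illustrates that pattern only; it is not Nvidia’s or any automaker’s implementation.

```python
# One generic pattern for layering a conventional check on top of an
# end-to-end planner: accept the learned command only if independent
# radar/lidar measurements confirm the path ahead is clear.
from dataclasses import dataclass

@dataclass
class Command:
    steering: float       # radians
    acceleration: float   # m/s^2

def fused_decision(e2e_command: Command,
                   lidar_clearance_m: float,
                   radar_closing_speed_mps: float) -> Command:
    """Override the learned command when redundant sensors flag danger."""
    # Hypothetical thresholds, chosen purely for illustration.
    time_to_collision = (lidar_clearance_m / radar_closing_speed_mps
                         if radar_closing_speed_mps > 0 else float("inf"))
    if lidar_clearance_m < 5.0 or time_to_collision < 2.0:
        # Rule-based fallback: brake and hold the wheel straight.
        return Command(steering=0.0, acceleration=-4.0)
    return e2e_command  # sensors agree the path is clear; trust the network
```

Unlike the end-to-end network itself, an override layer like this one is fully inspectable after a failure, which is the kind of transparency the layered approach is meant to buy.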

ROBOTAXI PIVOT

Tesla’s ability to deliver robotaxis has taken on heightened importance this year as its sales and profits have declined amid softening electric-vehicle demand globally and fierce competition from rising Chinese EV makers.

If Tesla can overcome the technical challenges of its autonomous strategy, the payoff could be enormous. While competitors like Waymo already have robotaxis on the road, they’re operating far more expensive vehicles in relatively small, comprehensively mapped zones.

Tesla aims to sell affordable robotaxis that can drive themselves anywhere.

Musk has a long history of bold promises about self-driving cars. In 2016, he predicted drivers would be able to summon their vehicles from across the country within two years. In 2019, Musk predicted Tesla would produce operational robotaxis by 2020.

The announcement of this week’s robotaxi reveal came on April 5, the day Reuters exclusively reported that Tesla had abandoned plans to build a $25,000 electric vehicle for the masses known informally as the Model 2, initially sending Tesla shares down. Musk responded by posting later that day on his X social-media platform: “Robotaxi unveil on 8/8,” sparking intense investor speculation. Tesla later delayed the event until this week.

That April day marked a fundamental shift in Musk’s stated priorities. He had previously promised to make Tesla a Toyota-sized EV giant, an expectation that underpinned Tesla’s soaring stock price, making it the world’s most valuable automaker. Now he vowed to dominate self-driving tech.

Abrupt cost-cutting measures followed, including mass layoffs, as Musk diverted investment away from EV-manufacturing priorities such as battery development, gigacasting, and expansion of the automaker’s Supercharger network.

The retreat from mass-market EVs only intensified investor pressure on Tesla’s autonomous-vehicle development. Musk leaned into the scrutiny, saying in April that anyone doubting Tesla will “solve autonomy” shouldn’t invest in the company.

Nicholas Mersch, portfolio manager at Purpose Investments, a Tesla investor, said Musk “has a lot of convincing to do.”

Still, Mersch called Musk’s autonomy strategy a “really bold bet” with a potentially immense payoff, even if it takes Tesla substantially more time to crack the code. “You have to keep the big picture in mind, in terms of how much iterative innovation is happening” at Tesla, he said. “I wouldn’t discount them.”

DATA DRIVEN

For now, unlike its robotaxi competitors, Tesla offers only semi-autonomous driver assistance through its “Autopilot” and “Full Self-Driving” features. The naming and marketing of those systems have sparked investigations and lawsuits over whether Tesla has put drivers at risk by overstating its vehicles’ self-driving capabilities.

A U.S. National Highway Traffic Safety Administration (NHTSA) investigation published in April found that 542 crashes, including 14 with fatalities, had occurred in Tesla vehicles with Autopilot or FSD engaged between January 2018 and August 2023.

Putting Autopilot and FSD into high-volume models, however, does give Tesla a distinct competitive advantage: A massive trove of data, collected by cameras on millions of vehicles, that it can analyze and use to develop self-driving tech.

Two of the former Tesla engineers said the relatively low cost of its technology enables the massive scale of its data collection, compared with relatively tiny fleets of competitors like Waymo. One of the engineers said Tesla’s high-resolution cameras cost far less than lidar and could eventually allow the automaker to produce fully autonomous vehicles customers can afford.

Lidar uses lasers to produce three-dimensional images of a vehicle’s surroundings as it navigates around obstacles.
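Each lidar return is, in essence, a distance measured along a known beam direction; turning a scan’s worth of returns into a three-dimensional “point cloud” is simple trigonometry. The snippet below, using invented sample values, sketches that conversion.

```python
# Converting a single lidar return (range plus beam angles) into a 3-D point.
# Sample values are invented for illustration.
import math

def lidar_return_to_xyz(range_m: float, azimuth_rad: float,
                        elevation_rad: float) -> tuple[float, float, float]:
    """Spherical-to-Cartesian conversion for one laser return."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

# A full scan is just many such returns; stacking them yields the
# three-dimensional image of the vehicle's surroundings.
point = lidar_return_to_xyz(12.5, math.radians(30), math.radians(-2))
```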

Speaking to analysts and investors this summer, Musk boasted of “exponential” improvement and predicted Tesla might achieve unsupervised driving “by the end of this year,” adding that he would be “shocked if we cannot do it next year.”

Sasha Ostojic – a former driverless car engineer and software-development executive at Nvidia, Cruise and Zoox – said he believes it will take Tesla at least “three-plus years” just to match the level of autonomous driving Waymo achieves today. Ostojic now advises a Palo Alto venture capital firm, Playground Global, on technology investments.

“I don’t see Tesla converging toward truly ‘eyes off, brain off’” autonomous driving, he said, “on the timelines Elon Musk has been promising.”

ERROR RATES AND EDGE CASES

Tesla once dabbled in multiple autonomous-driving technologies, too, but it began removing radar from its vehicles in 2021 and 2022, and by last year had removed ultrasonic sensors, which detect objects with sound waves.

The company’s sole reliance on AI-enabled computer vision leaves it with the challenge of eliminating a small but unacceptable error rate, one that could result in injuries and deaths if left unchecked with no human driver as a backstop, specialists in autonomous-driving technology said.

Missy Cummings, a robotics and AI professor at George Mason University and a former advisor to NHTSA, cited several studies that have shown computer vision is highly accurate but still fails to recognize objects about 3% of the time.

“What happens if it doesn’t see a pedestrian crossing the road or on the sidewalk?” she asked.
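As a rough, back-of-the-envelope illustration of why even a roughly 3% miss rate worries researchers, the calculation below assumes, purely for argument’s sake, that each object a vehicle must recognize is an independent chance to fail; under that assumption, the odds of at least one miss climb quickly with exposure.

```python
# Back-of-the-envelope illustration only. The independence assumption is
# mine, not the studies'; it simply shows how per-object miss rates compound.
miss_rate = 0.03

for objects_encountered in (10, 100, 1000):
    p_at_least_one_miss = 1 - (1 - miss_rate) ** objects_encountered
    print(f"{objects_encountered:>5} objects -> "
          f"{p_at_least_one_miss:.1%} chance of missing at least one")

# Roughly: 10 objects -> ~26%, 100 -> ~95%, 1000 -> essentially certain.
```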

John Krafcik, Waymo’s former CEO, told Reuters the company’s use of additional sensors, including radar and lidar, makes it “orders of magnitude more capable than Tesla” in perceiving objects. Its technology is also more transparent when something goes wrong: The inability of end-to-end machine-learning systems to pinpoint dangerous glitches is a problem that “may be an intractable one for a company serious about safety,” Krafcik said.

“If one of your cars has a significant at-fault crash,” he said, “one should be able to explain why it happened.”

Waymo did not comment.

The former Tesla engineer who called its technology a “black box” said it’s never clear how the automaker’s system arrives at driving decisions. And that makes it hard to tell whether Tesla is close — and, if so, how close — to producing safe and fully autonomous vehicles. The engineer called it “impossible” for AI systems or their human engineers to anticipate every “edge case,” no matter how much data they analyze.

“You could argue there are an infinite number of crazy things happening on the road,” the engineer said.

Reporting by Norihiko Shirouzu in Austin, Texas and Chris Kirkham in Los Angeles; editing by Anna Driver and Brian Thevenot