Do Driverless Cars Hallucinate Electric Sheep?


Waymo is operating driverless taxis in Los Angeles, Phoenix, and San Francisco, partnering with Uber in Atlanta and Austin, and expanding into Miami and Washington DC soon. Volkswagen is making a driverless ID Buzz available to ride-hailing and taxi companies anywhere. Tesla has begun offering robotaxis in Austin, with some opening glitches. Amazon is set to build 10,000 driverless taxis a year. A company called Aurora is testing driverless trucks between Dallas and Houston.

People have responded to the growth of driverless vehicle programs with predictable horror. Some worry that driverless cars will increase congestion and take people’s jobs. Others fear that they will be dangerous on the highway, especially if they begin “hallucinating.” Perhaps due to fears like these, ICE protesters set five Waymo cars on fire in Los Angeles.

Some of these fears are without foundation. Although large language models like ChatGPT, game-playing programs such as AlphaGo and Deep Blue, and autonomous vehicles have all been labeled “artificial intelligence,” these are in fact very different things.

Wikipedia says that the term “artificial intelligence” embraces such things as machine problem solving, planning and decision making, social intelligence, and natural language processing, but these are non-overlapping programs. While researchers may have an ultimate goal of creating an artificial general intelligence that could do all of these things, they are nowhere near that and probably won’t be for decades.

ChatGPT and similar programs do natural language processing, and they do it well enough that they could probably pass a Turing test. Yet they are unreliable at simple arithmetic, and more complex analysis is well beyond them. They are unable to tell the difference between truth and falsehood, or even to understand the meaning of truth.

Someone has posted a database of cases in which lawyers relied on such programs to write their legal briefs, only to discover (after submitting the briefs to a court) that the program fabricated legal citations. The idea that these language models, which some have called “stochastic parrots,” will soon evolve into a general intelligence or become self-aware is laughable, yet it makes a great fund-raising tool for attracting gullible investors.

Read the rest of this piece at The Antiplanner.


Randal O'Toole, the Antiplanner, is a policy analyst with nearly 50 years of experience reviewing transportation and land-use plans and the author of The Best-Laid Plans: How Government Planning Harms Your Quality of Life, Your Pocketbook, and Your Future.

Photo: Volkswagen’s ID Buzz configured for driverless taxi service. Courtesy the Antiplanner.
