By Jeff Sanford
Toronto, Ontario — July 2, 2015 — Auto industry media went wild recently with a story that suggested two self-driving cars had come close to colliding.
The close call was said to have occurred on San Antonio Road in Palo Alto, California. According to the reports, a Google-operated self-driving Lexus nearly collided with a prototype self-driving Audi Q5 operated by parts-maker Delphi. The head of Delphi’s local lab, John Absmeier, was a passenger in the Audi at the time. He later spoke with a Reuters reporter and seemed to suggest that the Google car had suddenly cut off the Delphi vehicle. Reuters published the story, it quickly went viral, and the automotive media was all over it, with good reason.
This would have been the first close encounter between two self-driving cars in history. Until now, self-driving cars have been lauded as all but perfectly safe, and the idea that two of them had nearly crashed seemed to undercut that reputation. The story appealed directly to a deep fear: could there be a flaw buried in the computer code operating these vehicles? Could these new robo-cars actually be dangerous? Maybe humans can’t be replaced? Maybe this whole self-driving car trend is going to be a flop?
Not quite, as it turns out. The incident isn’t quite what Absmeier seemed to suggest to the Reuters reporter. Google quickly hit back after the Reuters story went out on the wires, telling the BBC that early reports of a near miss between the two cars were “inaccurate.” Both vehicles reacted as they were designed to: the Google car changed lanes, and the Delphi car responded appropriately. Both Delphi and Google later released statements denying that anything like a close call had taken place.
Delphi has since retracted the anecdote told to the Reuters reporter, clarifying that there was no “near miss” between the two vehicles. Reuters, however, stands by its initial report.