Toronto, Ontario — Tesla’s decision in October to cut ultrasonic sensors from production in favour of a more cost-effective computer vision system was met with skepticism from a largely LiDAR-centric auto industry, but recent analysis suggests the leading EV maker may be onto something.
The removal of ultrasonic sensors made Tesla an outlier among current trends in autonomous driving development. Instead, the company has put its eggs in the basket of Tesla Vision—a proprietary computer vision system built on Nvidia’s CUDA platform—according to analysis from Makeuseof.com.
Where LiDAR and radar systems rely on laser light and radio waves, respectively, Tesla Vision is a camera-based, machine-learning platform that continually trains itself on real-world driving performance.
A 2019 research paper from Cornell University showed that stereo cameras (two lenses acting in tandem to produce a single image, similar to the setup on a modern iPhone) can generate a 3-D map of nearly equivalent quality to that produced by a LiDAR system.
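The stereo-camera approach described in the Cornell research comes down to simple triangulation: depth is proportional to the camera’s focal length times the distance between the two lenses, divided by the pixel disparity of a feature between the two images. A minimal sketch of that relationship (the focal length, baseline, and disparity figures below are illustrative only, not values from Tesla or the paper):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate the depth of a feature seen by a stereo camera pair.

    focal_px     -- camera focal length, in pixels
    baseline_m   -- distance between the two lenses, in metres
    disparity_px -- horizontal pixel shift of the feature between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (feature must appear in both images)")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a rig with a 700 px focal length and a 12 cm
# baseline that measures a 35 px disparity places the object at 2.4 m.
print(depth_from_disparity(700.0, 0.12, 35.0))
```

Note how depth precision falls off as disparity shrinks: distant objects shift by only a pixel or two between lenses, which is one reason matching LiDAR quality at range was considered a notable result.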
The biggest difference for Tesla, however, is the comparatively minuscule cost to equip a vehicle with a simple camera system and Tesla Vision integration, versus a LiDAR device that averages about US$7,500.
As just about any collision repair technician working today will tell you, more sensors rarely equate to a simpler repair, and it would appear that Tesla has taken that notion to heart.
Given the limited network of repair facilities certified to repair its vehicles, as compared to the relative ubiquity of legacy OEM procedures, keeping repairs as simple as possible may actually be the wisest move possible for Tesla as it continues to refine its first-party repair and insurance services.
Where do you stand on the debate between cameras and sensors? Does the efficacy of sensors outweigh their complexity? Or do you prefer a simpler camera system?