Lex Fridman (@lexfridman) conducts research in human-centered artificial intelligence, deep learning, autonomous vehicles, and robotics at MIT. During a presentation on the state of deep learning in 2020, Fridman compared the benefits and drawbacks of Waymo's approach to self-driving systems with those of Tesla's development process, framing the comparison as lidar (Waymo) versus vision (Tesla).
The benefits of Waymo's development approach, which relies primarily on lidar and mapping, are that the process is explainable, consistent, and accurate, and that it requires less data. The drawbacks, according to Fridman, are that the system is less amenable to machine learning, expensive (owing to the high cost of lidar units), and dependent on a safety driver or teleoperation fallback. Tesla's development process, which uses vision sensors and deep learning, Fridman describes as offering the highest resolution of information, making it feasible to collect data at scale and learn from it, being cheap, and being well suited to roads that were designed for human eyes. The challenges of Tesla's process are that it needs large amounts of data to be accurate, it is less explainable, and the driver must remain vigilant.
The reason deep learning "is the cake" for Tesla is the vast amount of data generated by Teslas already in consumers' hands (more than 2 billion miles of driving data versus Waymo's roughly 20 million miles). It is this data that drives the deep learning required for Tesla's self-driving vision system. In fact, during an April 2019 artificial intelligence podcast, Musk stated that his company has 99% of all self-driving data thanks to the full sensor suite on Tesla cars already on the road.
Disclosure: I own shares of Alphabet and Tesla stock