Tesla Believes Deep Learning A.I. is the Future of Autonomous Vehicles

Photo Credit: Alex Knight 

Forever the iconoclast, Tesla Motors CEO Elon Musk unsurprisingly takes a very different view of the future of cars than the rest of the automobile industry.

For starters, Tesla has built a name for itself by promoting sustainable products such as electric vehicles, solar panels, and energy storage devices. While each of these ideas has helped the company attract interest from around the world, they are all somewhat radical departures from what most people would expect from a car company.

Perhaps that is a great part of why Tesla continues to be such an amazing company to watch: they don’t follow the traditional corporate rule book, and they are in the process of shredding the one governing the automotive industry.

For better or worse, Tesla is one company leading the charge to dramatically transform our world and the products we use.

As a result, it should come as no surprise that Musk sees the future of driverless car technology in a very different way than his contemporaries, whether they be American, Japanese, Korean, or German.

Instead of relying on LiDAR and High Definition Maps, Tesla imagines a future where driverless cars rely chiefly on artificial intelligence to safely navigate.

“The two main crutches that should not be used — and, in retrospect, will be obviously false and foolish — are Lidar and HD maps. Mark my words,” the Tesla CEO was recently quoted as saying.

These are bold words coming at a time when there is a global scramble underway among automotive manufacturers to secure the professionals, materials, and resources they need to master self-driving technology.

For most car companies, LiDAR is one of the most essential components of self-driving technology. This is why it is so intriguing that Tesla CEO Elon Musk believes his company has found a way to circumvent this technology altogether.


Self-Driving Innovation without LiDAR?

The premise of LiDAR is rather simple: a laser sends out pulses of light in various directions, and a sensor measures how long each pulse takes to bounce off a nearby object and return. From that round-trip time, the system can accurately calculate how far away the object is.

As this process repeats, a self-driving car equipped with LiDAR technology is able to build up a map of sorts that allows it to understand where it is and which objects, whether pedestrians, other cars, signs, or other obstacles, could be in the way.
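
To make the math concrete, here is a minimal sketch, an illustration of the general principle rather than any particular vendor's implementation, of how a single lidar return becomes a distance and then a point in 3D space: the sensor times the pulse's round trip, converts that time into a range, and combines the range with the beam's direction.

    import math

    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def range_from_time_of_flight(round_trip_seconds):
        # The pulse travels out and back, so only half the round trip counts.
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    def point_from_return(round_trip_seconds, azimuth_deg, elevation_deg):
        # Convert one lidar return into an (x, y, z) point relative to the sensor.
        r = range_from_time_of_flight(round_trip_seconds)
        az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
        return (r * math.cos(el) * math.cos(az),
                r * math.cos(el) * math.sin(az),
                r * math.sin(el))

    # A pulse that returns after ~200 nanoseconds hit something roughly 30 m away.
    print(range_from_time_of_flight(200e-9))                        # ≈ 29.98
    print(point_from_return(200e-9, azimuth_deg=45.0, elevation_deg=0.0))

Repeating this for millions of returns per second is what builds up the map described above.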

Up until now, the LiDAR industry has seen incredible growth as car companies from around the world have jockeyed to get their hands on the fastest and most innovative solutions.

To date, most LiDARs use one of four types of laser beam steering technologies to run their systems:

  • Spinning lidar. Velodyne created the modern lidar industry around 2007 when it introduced a lidar unit that stacked 64 lasers in a vertical column and spun the whole thing around many times per second. Velodyne’s high-end sensors still use this basic approach, and at least one competitor, Ouster, has followed suit. This approach has the advantage of 360-degree coverage, but critics question whether spinning lidar can be made cheap and reliable enough for mass-market use.

  • Mechanical scanning lidar uses a mirror to redirect a single laser in different directions. Some lidar companies in this category use a technology called a micro-electro-mechanical system (MEMS) to drive the mirror.

  • Optical phased array lidar uses a row of emitters that can change the direction of a laser beam by adjusting the relative phase of the signal from one transmitter to the next.

  • Flash lidar illuminates the entire field with a single flash. Current flash lidar technologies use a single wide-angle laser. This can make it difficult to reach long ranges since any given point gets only a small fraction of the source laser’s light. At least one company (Ouster) is planning to eventually build multi-laser flash systems that have an array of thousands or millions of lasers—each pointed in a different direction.

According to Tesla CEO Elon Musk, none of these technologies will be able to outlast the innovations his company is focused on delivering. Bold words and promises from a man who, if nothing else, is truly one of the world’s great Svengalis.

Boldly Going Where No Car Company Has Gone Before

Tesla believes that its production vehicles already have enough sensing technology and on-board computer systems to drive themselves safely and reliably. The company plans to release a software update by the end of the year that will allow its cars to go into fully autonomous mode.

It is presumed that Tesla will seek to achieve this by employing artificial intelligence software based around a neural network.

One major snag in this plan, however, is how governments around the world will respond to the world’s first fully autonomous vehicle. Another open question is how insurance companies will proceed.

The European Union recently released its “Ethics Guidelines for Trustworthy AI”. This document contains some key information which could drastically impact companies like Tesla, especially if they are seeking to employ neural networks in place of technologies like LiDAR.

According to the guidelines, trustworthy AI should be:

(1) lawful –  respecting all applicable laws and regulations

(2) ethical – respecting ethical principles and values

(3) robust – both from a technical perspective and in terms of its social environment

The guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches

  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm can also be minimized and prevented.

  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.

  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.

  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.

  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.

  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

One of the key challenges Tesla will face in proving its technologies to the world will be whether or not it can establish that its self-driving system can explain its decision-making processes. This has been a major hurdle for artificial intelligence, and specifically for processes guided by neural networks.
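
The article does not say how Tesla might demonstrate this, but one widely used (if partial) way researchers probe a vision network's decision is a gradient-based saliency map, which highlights the input pixels that most influenced the output. The sketch below uses a tiny placeholder PyTorch model purely for illustration; it shows the generic technique, not Tesla's method.

    import torch
    import torch.nn as nn

    # Tiny placeholder model standing in for a perception network.
    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(8 * 32 * 32, 3),   # three placeholder classes
    )
    model.eval()

    image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in camera frame
    logits = model(image)
    predicted_class = int(logits.argmax(dim=1))

    # Gradient of the winning class score with respect to the input pixels:
    # pixels with large gradient magnitude most influenced the decision.
    logits[0, predicted_class].backward()
    saliency = image.grad.abs().max(dim=1).values            # one heat value per pixel

    print("predicted class:", predicted_class)
    print("saliency map shape:", tuple(saliency.shape))      # (1, 32, 32)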

If Tesla has truly been able to resolve this issue, it will not only be a bold achievement for the automotive industry but perhaps the dawning of a new age for humanity.


The Audacity to Dream

Elon Musk likened LiDAR and HD Mapping technology to “crutches”.

You might be wondering how he came to this conclusion…

LiDAR technology, despite recent innovations, is exceptionally expensive. Tesla believes that by using a silicon microchip in place of a pricey sensor, it will be able to drastically reduce the cost of offering self-driving cars to the general public.

To truly accomplish this however, Tesla will need to find a way to radically scale the production of its microchips. In 2018, more than 1.5 billion smartphones were sold. At this volume, it is understandable why components such as the chips for these phones have become so much cheaper than they were even a few short years ago.

Musk is betting that he will be able to pull off a similar feat with cars. The only problem with this logic, however, is that in 2018 only 82 million cars were sold. In addition, it will likely be some time before a majority of the cars on the market are driverless.
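
A quick back-of-the-envelope calculation, using the sales figures above and a purely hypothetical development budget, shows why volume matters so much when amortizing the fixed cost of a custom chip.

    # Illustrative only: the development budget below is a made-up figure.
    fixed_development_cost = 1_000_000_000    # hypothetical $1B chip-development budget

    smartphone_units = 1_500_000_000          # smartphones sold in 2018 (per the article)
    car_units = 82_000_000                    # cars sold in 2018 (per the article)

    print(f"per smartphone: ${fixed_development_cost / smartphone_units:.2f}")  # ≈ $0.67
    print(f"per car:        ${fixed_development_cost / car_units:.2f}")         # ≈ $12.20

At car-industry volumes, the same fixed cost weighs roughly eighteen times more heavily on each unit sold.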

HD mapping systems, according to Musk, are flawed because they only work in areas which have already been mapped. When traveling in new areas, or in areas that have been modified by construction or temporary detours (during a parade, for example), these systems fail and create danger.

High-definition maps, meanwhile, are used to help driverless cars understand their surroundings, reducing the amount of raw data they need to collect and process. This makes it necessary to “geofence” them, only allowing them to travel in areas that have been very precisely mapped.
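
In practice, “geofencing” often comes down to a simple containment check before the HD-map-dependent mode is allowed to engage. The sketch below is a generic point-in-polygon test with hypothetical coordinates, not any carmaker’s actual code.

    def inside_geofence(lat, lon, fence):
        # Ray-casting point-in-polygon test: True if (lat, lon) lies inside
        # the polygon given by `fence`, a list of (lat, lon) vertices.
        inside = False
        n = len(fence)
        for i in range(n):
            lat1, lon1 = fence[i]
            lat2, lon2 = fence[(i + 1) % n]
            crosses = (lon1 > lon) != (lon2 > lon)
            if crosses and lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
                inside = not inside
        return inside

    # Hypothetical fence around a precisely mapped neighbourhood.
    mapped_area = [(37.40, -122.16), (37.40, -122.10), (37.44, -122.10), (37.44, -122.16)]
    print(inside_geofence(37.42, -122.13, mapped_area))  # True: HD-map mode may engage
    print(inside_geofence(37.50, -122.13, mapped_area))  # False: outside the mapped area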

Instead of relying on geofenced HD maps, Tesla envisions a future where its cars can actually see the world around them thanks to an array of cameras placed around the vehicle. As backups, the cars will also use a forward-facing radar and a dozen ultrasonic sensors to help determine where they are in relation to other objects or people in the road.

Tesla believes it has the hardware and software innovations required to allow its vehicles to circumvent the need for LiDAR or HD mapping technologies entirely.

Two weeks ago, the company released information about a computer chip it had designed that is able to process tremendous amounts of image data, allowing its cars to navigate safely in a novel way. Nvidia, known for producing microchips used in the A.I. systems of other cars, even acknowledged Tesla’s ability to “raise the bar”.

To guide its cars, Tesla plans to use an artificial intelligence process known as deep learning. This approach creates an artificial neural network which is modelled after the way humans and other animals process visual information.

To date, it has been incredibly challenging for data scientists to train neural networks. They only perform reliably on things they have seen over and over again. To further complicate matters, humans must label and annotate these objects or stimuli so that the system can eventually learn how to identify them on its own.
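
To make the labeling-and-repetition point concrete, here is a deliberately toy sketch of that workflow: images paired with human-assigned labels are fed through a small convolutional network over and over until it can classify them on its own. The data, labels, and architecture are placeholders, not Tesla’s network.

    import torch
    import torch.nn as nn

    # Toy stand-in for a labeled dataset: 64 random "camera images" (3x32x32),
    # each paired with a human-assigned label (0 = pedestrian, 1 = vehicle, 2 = sign).
    images = torch.rand(64, 3, 32, 32)
    labels = torch.randint(0, 3, (64,))

    # A small convolutional network, far simpler than any production perception model.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 3),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # "Seeing the same thing over and over": repeated passes over the labeled examples.
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

    print("final training loss:", loss.item())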

It has been a steep hill to climb, but one which Tesla seems invigorated to tackle in the years to come.

Welcome to the Pilot Study

Tesla currently has 400,000 cars on the road. Each of these is equipped with various cameras as well as software that records, in real time, the actions, choices, and behaviors of drivers around the world.

At the company’s Palo Alto headquarters, leading data scientists from around the world are analyzing the data the company is collecting and using it to fine-tune the deep learning technology at the core of Tesla’s ambitious autonomous driving program.

What this means is that every current Tesla driver is participating, whether they realize it or not, in the largest artificial intelligence pilot study ever conducted.

Despite the wealth of data being collected and analyzed, neural networks are a particularly tough nut to crack. Even with real-world information, there is always the chance that the system will encounter something it has not seen before and will not know how to react to.

Despite these issues, CEO Elon Musk is confident that by feeding enough information into his deep learning system, his company will be able to create game changing self-driving solutions.

Sources: European Commission, Financial Times, Ars Technica