The battle to win the self-driving market is an ambitious project for Elon Musk. Is Tesla falling short of expectations?
EVs currently rule the automobile industry's charts, and with synthetic fuel in play, the debate over banning petrol is far from settled. But what about AVs? Plenty of partnerships have taken the form of joint ventures, but are the results hitting their targets?
Musk claimed in 2020 that Tesla was close to achieving level-five autonomy with no human driver input; critics rebutted that the company was only at level 2. Autopilot, Tesla's suite of ADAS technology, includes lane centering, traffic-aware cruise control, self-parking, semi-autonomous navigation of roads, automatic lane changes, etc.
“We are in bad shape when it comes to transportation. We have these metallic objects traveling really quickly with really high kinetic energy. We are putting meat in the control system; it is quite undesirable.” said Andrej Karpathy, senior director of AI at Tesla, at the CVPR 2021 event.
So what technology is Tesla using to improve its ADAS?
Full Self-Driving software runs on AI inference chips that squeeze out every small architectural and micro-architectural improvement to maximize silicon performance per watt. The work spans floor-planning, timing, and power analyses, robust test writing and scoreboards for functionality and performance, and driver implementation that communicates with the chip for performance optimization and redundancy.
AI training chips power the Dojo system, implementing bleeding-edge technology from the smallest training nodes to multi-die training tiles, designed and architected for maximum performance.
The system is built from silicon firmware interfaces up to high-level software APIs. It solves hard problems with state-of-the-art technology for high-power delivery and cooling, and with control loops and monitoring software that scale.
At the CVPR 2021 event, Karpathy introduced a predecessor to Dojo that he touted as one of the world's fastest supercomputers. He said the unnamed machine has '720 nodes, each powered by eight of Nvidia's A100 GPUs (the 80GB model), for a whopping 5,760 A100s throughout the system'.
Problems ranging from perception to control are solved with deep neural networks. Per-camera networks analyze raw images to perform semantic segmentation, object detection, and monocular depth estimation, while birds-eye-view networks take video from all cameras to plot the road layout, static infrastructure, and 3D objects.
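The per-camera-then-fuse structure described above can be sketched in a few lines. Everything in this example is an illustrative assumption (toy shapes, a random linear map standing in for a CNN backbone, element-wise max as the fusion rule); it is not Tesla's actual architecture, only a minimal picture of how per-camera feature maps could be combined into a single birds-eye-view tensor.

```python
import numpy as np

# Toy sketch: fuse per-camera feature maps into one birds-eye-view (BEV) grid.
# Shapes, names, and the max-fusion rule are illustrative assumptions.

N_CAMERAS, FEAT_DIM = 8, 16   # Tesla vehicles carry 8 cameras; FEAT_DIM is made up
BEV_H, BEV_W = 32, 32         # toy top-down grid resolution

rng = np.random.default_rng(0)

def per_camera_network(image: np.ndarray) -> np.ndarray:
    """Stand-in for a per-camera backbone: maps a raw image to a
    (BEV_H, BEV_W, FEAT_DIM) feature map notionally warped to the ground plane."""
    # A real system would run a CNN and project features via camera geometry;
    # a fixed random linear map keeps this example self-contained.
    w = rng.standard_normal((image.size, FEAT_DIM))
    flat = image.reshape(-1) @ w              # one FEAT_DIM vector per camera
    return np.tile(flat, (BEV_H, BEV_W, 1))   # broadcast it over the BEV grid

def fuse_to_bev(camera_images: list) -> np.ndarray:
    """Fuse all camera feature maps into one BEV tensor by element-wise max."""
    maps = np.stack([per_camera_network(img) for img in camera_images])
    return maps.max(axis=0)                   # (BEV_H, BEV_W, FEAT_DIM)

images = [rng.random((24, 32)) for _ in range(N_CAMERAS)]
bev = fuse_to_bev(images)
print(bev.shape)   # (32, 32, 16)
```

Downstream heads (road layout, static infrastructure, 3D objects) would then read from this shared BEV tensor rather than from any single camera.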
It makes use of some of the most complicated and diverse scenarios in the world to train the vehicle on real-time situations. The Autopilot neural network consists of 48 networks that take 70,000 GPU hours to train and output 1,000 distinct tensor predictions at each timestep.
For the neural networks to predict the surroundings correctly, algorithms combine information from the car's sensors across space and time to produce large-scale ground-truth data.
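The "across time" part of that idea can be illustrated with a deliberately simple example: noisy single-frame position estimates of one object are smoothed over a clip to yield a cleaner label track. The moving-average smoother and all numbers here are assumptions for illustration; production auto-labelling pipelines are far more sophisticated.

```python
import numpy as np

# Illustrative sketch of combining information across time for auto-labelling:
# smooth noisy per-frame x/y estimates of one object into a cleaner track.

def smooth_track(noisy_xy: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoothing of a (T, 2) sequence of position estimates."""
    T = len(noisy_xy)
    out = np.empty_like(noisy_xy)
    for t in range(T):
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        out[t] = noisy_xy[lo:hi].mean(axis=0)
    return out

rng = np.random.default_rng(1)
true_xy = np.stack([np.linspace(0, 10, 50), np.zeros(50)], axis=1)  # true path
noisy_xy = true_xy + rng.normal(0, 0.5, true_xy.shape)              # per-frame noise
label_xy = smooth_track(noisy_xy)

# Averaging across frames pulls the labels closer to the true trajectory.
raw_err = np.abs(noisy_xy - true_xy).mean()
smoothed_err = np.abs(label_xy - true_xy).mean()
print(smoothed_err < raw_err)
```

The point of the sketch is only that aggregating many weak per-frame estimates yields a stronger label than any single frame provides.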
The company is also developing a general-purpose, bi-pedal humanoid robot capable of tasks that are unsafe, repetitive, or boring.
The Autopilot feature optimises the route and makes the adjustments needed for the vehicle to steer automatically through highway interchanges and exits depending on the destination. Autosteer+ navigates complex roads using advanced cameras, sensors, and computing power, and Smart Summon manoeuvres the vehicle around obstacles.
The company is also switching away from radar sensors to a vision-only approach.
Finally, the company builds open- and closed-loop, hardware-in-the-loop evaluation tools and infrastructure at large scale to drive innovation, track performance improvements, and avoid regressions.
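The regression-avoidance half of that workflow reduces, at its core, to comparing a candidate build's metrics against a stored baseline. The metric names, values, and tolerance below are invented for illustration and are not Tesla's actual evaluation metrics.

```python
# Toy sketch of regression tracking in an evaluation harness: flag any metric
# where a candidate build is meaningfully worse than the stored baseline.
# Metric names, values, and the tolerance are invented for illustration.

baseline = {"intervention_rate_per_1k_km": 1.8, "lane_center_error_m": 0.12}
candidate = {"intervention_rate_per_1k_km": 1.7, "lane_center_error_m": 0.15}
TOLERANCE = 0.10   # allow up to 10% degradation before flagging

def regressions(baseline: dict, candidate: dict, tol: float) -> list:
    """Return metric names where the candidate exceeds baseline * (1 + tol).
    All metrics here are lower-is-better."""
    return [k for k in baseline if candidate[k] > baseline[k] * (1 + tol)]

flagged = regressions(baseline, candidate, TOLERANCE)
print(flagged)   # lane_center_error_m degraded by 25%, beyond tolerance
```

A continuous-integration gate would run such a check on every build and block releases whose flagged list is non-empty.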