We are witnessing the advent of Level 3 autonomous vehicles in 2018, a sign that automated-car technology is gradually seeping into the commercial mainstream. Audi's A8 sedan with Traffic Jam Pilot (Figure 1) and GM's Cadillac CT6 with Super Cruise are just two examples.
Level 3 autonomous cars still require a steering wheel and a driver ready to take over vehicle control if a problem arises. It's a modest level of automated driving, but it clearly marks a milestone in the race toward fully autonomous cars.
In other words, these are still the early days for self-driving cars, but they are surely coming. For naysayers who dismiss autonomous cars as science-fiction transport, it's worth recalling the president of the Michigan Savings Bank, who in 1903 warned against investing in Ford Motor: "The horse is here to stay, but the automobile is only a novelty, a fad."
At the same time, however, when it comes to the future of autonomous cars, the answer to the recurring question "Are we there yet?" is apparently "No." What we see now is constant wrestling with technical hurdles, especially those relating to two fundamental technology building blocks: sensors and processing.
Sensors Plus Co-Processors
Cameras, lidar, and radar—acting as the eyes of autonomous vehicles—complement each other by compensating for one another's flaws. For instance, current radars lack the imaging capability and 360-degree coverage that lidars provide. But lidar performance degrades in fog and heavy precipitation, conditions that radar handles well.
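The complementarity argument can be made concrete with a small sketch: every sensing condition the vehicle cares about should be covered by at least one modality in the suite. The capability ratings below are simplified illustrations for this sketch, not benchmark data.

```python
# Sketch of sensor complementarity: check that a chosen sensor suite
# leaves no required sensing condition uncovered. The capability sets
# are simplified illustrations, not measured specifications.

CAPABLE = {
    "camera": {"imaging", "color"},                  # passive; needs light
    "radar":  {"night", "fog", "velocity"},          # robust, but coarse
    "lidar":  {"imaging", "night", "360_coverage"},  # active; degrades in fog
}

def uncovered(conditions, sensors):
    """Return the conditions that no selected sensor can handle."""
    covered = set().union(*(CAPABLE[s] for s in sensors))
    return conditions - covered

needs = {"imaging", "night", "fog", "360_coverage"}

# Radar plus lidar covers everything; radar alone leaves gaps.
print(uncovered(needs, {"radar", "lidar"}))  # empty set: fully covered
print(uncovered(needs, {"radar"}))           # missing imaging and 360 coverage
```

The same check also explains the cost pressure: dropping any one modality reopens a gap somewhere, which is why suppliers instead try to upgrade a cheaper sensor until it covers a pricier one's strengths.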
However, using both radar and lidar increases cost and design overhead. Now, upstarts like EnSilica claim that lidar can be bypassed by first increasing the radar's resolution and then aiding it with an imaging co-processor that handles large amounts of sensor data in real time.
That, in turn, frees the main processor to focus on safety-critical tasks like target identification and fusing radar data with a camera's video stream. A new breed of co-processors, such as EnSilica's eSi-ADAS, boasts computer-vision and artificial-intelligence capabilities for cameras and radars while handling challenges like data overload and resolution.
These co-processors are aided by dedicated engines for pedestrian detection, vehicle detection, lane detection, and moving-object detection.
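The division of labor described above can be sketched in a few lines: dedicated engines digest raw frames into detections, and the main processor consumes only those results. The engine functions and data shapes here are hypothetical placeholders, not EnSilica's actual interfaces.

```python
# Sketch of the co-processor division of labor: dedicated detection
# "engines" (modeled as plain functions) preprocess a raw sensor frame,
# so the main processor sees detections, not pixels. All names and
# data shapes are hypothetical.

def pedestrian_engine(frame):
    # Placeholder: a real engine runs a trained detector in hardware.
    return [o for o in frame["objects"] if o["type"] == "pedestrian"]

def vehicle_engine(frame):
    return [o for o in frame["objects"] if o["type"] == "vehicle"]

def lane_engine(frame):
    return frame.get("lanes", [])

def main_processor(frame):
    """Main CPU consumes pre-digested detections for safety decisions."""
    detections = {
        "pedestrians": pedestrian_engine(frame),
        "vehicles": vehicle_engine(frame),
        "lanes": lane_engine(frame),
    }
    # Safety-critical logic (braking, steering) would act on these.
    return detections

frame = {
    "objects": [{"type": "pedestrian", "x": 3.1},
                {"type": "vehicle", "x": 25.0}],
    "lanes": ["left", "right"],
}
print(main_processor(frame))
```

The design point is that each engine can run on cheap, specialized silicon at sensor frame rates, leaving the main CPU's cycles for the decisions that carry safety certification requirements.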
Another automotive arena where a lot is happening in 2018 is sensor fusion. NXP, for example, has announced that its S32 platform will be available by mid-2018. The leading automotive chipmaker asserts that it's the only automotive platform that spans the entire vehicle, allowing designers to optimize code reuse, software, and other design resources.
The sensor-fusion story, as NXP's S32 technology shows, is closely tied to automotive processing platforms, which brings us to the second key ingredient of the autonomous-car design recipe: the main CPU.
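At the algorithmic level, the simplest form of sensor fusion is combining two noisy estimates of the same quantity, weighting each by how much it can be trusted. The following is a minimal inverse-variance sketch of that idea, not NXP's implementation; production stacks use full Kalman or Bayesian filters over many state variables.

```python
# Minimal sensor-fusion sketch: combine two noisy measurements of the
# same quantity (e.g., distance to a lead vehicle) by weighting each
# with the inverse of its variance. Illustrative only.

def fuse(m1, var1, m2, var2):
    """Inverse-variance weighted fusion of two scalar measurements."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Assumed noise figures: radar ranges accurately (small variance),
# while a camera's range estimate is noisier.
radar_range, radar_var = 50.2, 0.25   # meters, meters^2
camera_range, camera_var = 48.9, 4.0

dist, var = fuse(radar_range, radar_var, camera_range, camera_var)
print(f"fused distance: {dist:.2f} m, variance: {var:.3f} m^2")
```

Note that the fused variance is smaller than either sensor's alone, which is the mathematical payoff of fusion: the combined estimate is more certain than any single input.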
Auto Processor Hits 320 TOPS
The picture looks even rosier on the main-processor front, the brain of the autonomous vehicle with its extreme processing requirements. Here, Nvidia has been making waves with its third-generation DRIVE PX platform, codenamed Pegasus, which takes automotive computing to a whole new level: 320 trillion operations per second (TOPS).
The multichip platform, the size of a car number plate, boasts datacenter-class processing power and can handle the massive amounts of data produced by autonomous vehicles. Pegasus comprises up to two Xavier automotive SoCs and two discrete GPUs based on the new Volta architecture, alongside hardware acceleration.
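To put 320 TOPS in perspective, a rough back-of-the-envelope budget shows what such a platform could spend per camera frame. The sensor counts and frame rates below are illustrative assumptions for this sketch, not Nvidia's published figures.

```python
# Rough compute-budget sketch for a 320-TOPS platform.
# Camera count and frame rate are assumed values for illustration.

TOPS = 320                      # trillion operations per second
total_ops = TOPS * 1e12         # operations per second

cameras = 8                     # assumed camera suite
fps = 30                        # frames per second per camera
frames_per_sec = cameras * fps  # 240 frames to process every second

ops_per_frame = total_ops / frames_per_sec
print(f"{ops_per_frame:.2e} ops available per camera frame")
```

Under these assumptions, roughly 1.3 trillion operations remain available per frame, which is why platforms in this class can run large neural-network inference on every frame from every camera while leaving headroom for radar and lidar processing.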
It also offers extensive redundancy and fallback mechanisms while running a QNX operating system with ASIL-D safety features. Pegasus, a production-ready platform due in the second half of 2018, will initially be deployed in geofenced areas and gated communities.
The German courier firm DHL, for instance, is planning to launch a pilot fleet that will allow drivers to distribute a neighborhood’s packages on foot while the delivery van waits at the end of the block. DHL is partnering with Nvidia and ZF, a tier 1 automotive supplier, to deploy a fleet of autonomous light trucks in 2019.
The autonomous car's journey from Level 1 to Level 5 is a story of evolution, not revolution. So while there is broad consensus in the industry that fully autonomous vehicles will arrive in 2021 or beyond, that doesn't mean the cultural shift toward driverless cars must wait until then.
This article has shown that we have already reached Level 3 autonomous driving on that journey and that the processing resources necessary for Level 5 designs are starting to become available, as demonstrated by Nvidia's Pegasus platform.
Next, thorny integration issues relating to sensor fusion could well be handled by emerging platforms like NXP's S32. A lot is happening during 2018, and that's exactly what's needed to turn cars into driverless supercomputers.