One of the biggest challenges in developing functional autonomous vehicles is detecting and classifying objects on the road.
The solution is a suite of sensors that must accurately assess the vehicle's surroundings so the car can adjust safely to traffic, roadway regulations and obstacles. That suite combines cameras, lidar and radar with computing and mapping technologies.
Intel Corp. recently broke down the different levels of autonomy for self-driving vehicles, and the company has also explained how its sensors will work in the autonomous vehicle system it is developing with Mobileye, BMW and Continental.
Cameras
There are 12 cameras in Intel’s 360° configuration: eight to support self-driving and four short-range cameras to support near-field sensing for self-driving and self-parking. Each camera is a high-resolution sensor capable of hundreds of millions of samples per second, and the cameras detect both shape and texture so the car can see other vehicles, pedestrians, road markers, traffic signs and more.
Artificial intelligence and computer vision capabilities help build a full sensing state of the environment from the camera data.
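To make the idea concrete, the minimal Python sketch below (an illustration, not Intel or Mobileye code) shows one way per-camera detections could be merged into a single sensing state; the class names, fields and confidence threshold are assumptions for the example.

```python
# Hypothetical sketch: merging detections from several cameras into one
# "sensing state" for the vehicle. All names and thresholds are illustrative.
from dataclasses import dataclass, field


@dataclass
class Detection:
    label: str         # e.g. "vehicle", "pedestrian", "traffic_sign"
    camera_id: int     # which of the 12 cameras produced the detection
    x: float           # position in the vehicle frame, metres (forward)
    y: float           # position in the vehicle frame, metres (left)
    confidence: float  # classifier confidence in [0, 1]


@dataclass
class SensingState:
    """Aggregated 360-degree view built from all camera detections."""
    objects: list = field(default_factory=list)

    def add(self, det: Detection, min_confidence: float = 0.5) -> None:
        # Keep only detections the vision stack is reasonably confident about.
        if det.confidence >= min_confidence:
            self.objects.append(det)


if __name__ == "__main__":
    state = SensingState()
    state.add(Detection("pedestrian", camera_id=3, x=12.0, y=-1.5, confidence=0.92))
    state.add(Detection("vehicle", camera_id=7, x=40.0, y=3.2, confidence=0.34))  # dropped
    print(f"{len(state.objects)} object(s) in the sensing state")
```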
Lidar
Intel’s self-driving vehicle system has six sector lidars in total: three in the front and three in the rear. Lidar sensors detect objects by measuring reflected laser light pulses. The system uses lidar, in combination with radar, as a fully independent source of shape detection.
This technology works alongside the camera system and is reserved for specific tasks such as long-distance ranging and road contour detection. This lower workload for lidar reduces cost compared with lidar-centric systems and makes manufacturing at volume easier, Intel said.
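The ranging principle itself fits in a few lines. The sketch below computes distance from a pulse's round-trip time; the function name and the example timing are illustrative assumptions, not Intel or Mobileye parameters.

```python
# Illustrative sketch of lidar ranging: distance follows from the round-trip
# time of a reflected laser pulse.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second


def lidar_range(round_trip_time_s: float) -> float:
    """Distance to the reflecting object, given the pulse's round-trip time."""
    # The pulse travels to the object and back, so halve the total path length.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0


if __name__ == "__main__":
    # A pulse that returns after ~1 microsecond indicates an object ~150 m away.
    print(f"{lidar_range(1.0e-6):.1f} m")
```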
Radar
In the Intel self-driving system, six radar units will be used to provide a 360° cocoon of coverage around the vehicle. Radar uses reflected radio waves to detect objects and determine their speed. It is particularly good at picking up metallic objects and remains effective in inclement weather.
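As a rough illustration of the speed measurement, the sketch below converts the Doppler shift of a reflected wave into a radial speed. The 77 GHz carrier is an assumed example value from a common automotive radar band, not a confirmed detail of Intel's system.

```python
# Illustrative sketch: radar infers relative speed from the Doppler shift of a
# reflected radio wave. Example numbers are assumptions.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second


def radial_speed(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative (radial) speed of the reflector implied by the Doppler shift."""
    # For a reflected wave the shift is doubled: f_d = 2 * v * f0 / c.
    return doppler_shift_hz * SPEED_OF_LIGHT / (2.0 * carrier_hz)


if __name__ == "__main__":
    # A ~5.1 kHz shift at 77 GHz corresponds to roughly 10 m/s (about 36 km/h).
    print(f"{radial_speed(5.1e3):.1f} m/s")
```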
Radar will be used in combination with lidar to provide independent object-detection systems, enabling true redundancy with the camera system.
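One simple way to picture that redundancy is sketched below: each chain produces its own object list, and either list alone is enough to report an object. The merge rule is an assumption for illustration, not a description of Intel's fusion logic.

```python
# Simplified sketch of redundant detection chains: the camera chain and the
# lidar/radar chain each produce an independent object list.
def merge_redundant(camera_objects: set, radar_lidar_objects: set) -> set:
    """Union of two independent detection chains.

    Either chain alone is enough to report an object, which is what makes the
    second chain a true redundancy rather than just a confirmation filter.
    """
    return camera_objects | radar_lidar_objects


if __name__ == "__main__":
    cameras = {"car_ahead", "pedestrian_left"}
    radar_lidar = {"car_ahead", "truck_behind"}
    print(sorted(merge_redundant(cameras, radar_lidar)))
```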
Computing
Intel will employ its Atom microprocessor as well as Mobileye’s EyeQ system-on-chip (SoC), which is optimized for a variety of deep neural network, computer vision, signal processing and machine learning tasks.
Intel is developing its hardware and software platforms under the same roof to provide advantages in computational performance, power consumption and cost versus the general-purpose chips offered by competitors.
Mobileye’s test fleet of autonomous vehicles is powered by four EyeQ4 SoCs, which provide just 10% of the computational power Intel will ultimately deploy in production versions of its L4/L5 system. The production version of the system will use an Intel Atom-based chip with two EyeQ5 SoCs, Intel said.
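Taking the 10% figure at face value, the back-of-the-envelope arithmetic below shows the implied scale-up from the test fleet to the production platform.

```python
# Rough arithmetic from the figures above: if four EyeQ4 SoCs supply about 10%
# of the production system's compute, the production L4/L5 platform needs
# roughly ten times the test fleet's computational power.
test_fleet_share = 0.10  # four EyeQ4 SoCs ~ 10% of production compute
scale_factor = 1.0 / test_fleet_share
print(f"Production compute ~ {scale_factor:.0f}x the current test-fleet compute")
```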
Roadbook
Roadbook is a high-definition (HD) map that serves as a redundancy to the camera system for textural information, such as driving-path geometry and other static scene semantics including lane markings, road boundaries and traffic signals.
The HD map is crowdsourced through series-production vehicles equipped with a front-facing camera that supports Mobileye EyeQ SoC-enabled advanced driver assistance systems (ADAS). These non-autonomous vehicles are used to build the map, sending data packets to the cloud where the information is aggregated into HD maps that can be used by highly autonomous vehicles. Because the map is crowdsourced, it is inexpensive to maintain and is constantly updated, Intel said.
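As a rough illustration of the aggregation step, the sketch below averages many cars' reports for the same road segment into a single map entry; the data layout and averaging rule are assumptions for the example, not a description of the Roadbook format.

```python
# Hypothetical sketch of the crowdsourcing idea: many production cars upload
# compact observations of the same road segment, and the cloud aggregates them.
from collections import defaultdict
from statistics import mean


def aggregate(observations):
    """observations: iterable of (segment_id, lane_width_m) reports from many cars."""
    per_segment = defaultdict(list)
    for segment_id, lane_width in observations:
        per_segment[segment_id].append(lane_width)
    # One aggregated value per segment; more reports give a more trustworthy estimate.
    return {seg: round(mean(widths), 2) for seg, widths in per_segment.items()}


if __name__ == "__main__":
    reports = [("A12", 3.48), ("A12", 3.52), ("A12", 3.50), ("B07", 3.30)]
    print(aggregate(reports))  # {'A12': 3.5, 'B07': 3.3}
```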