Intel Corp. and its subsidiary Mobileye have announced the first phase of testing a fleet of 100 autonomous vehicles in Jerusalem, intended to demonstrate the companies' Responsibility-Sensitive Safety (RSS) model and to gather data on how the technology handles the city's aggressive traffic conditions.
In the coming months, Intel said it will expand the fleet to the U.S. and other regions. The Intel-Mobileye driving technology uses computer vision and artificial intelligence designed to meet goals of safety and economic scalability.
Intel chose Jerusalem because Mobileye is based in Israel and because the company wanted to demonstrate that the technology can work in any geographic location and under all driving conditions. Jerusalem, Intel noted, is known for aggressive driving, poorly marked roads and pedestrians who routinely ignore crosswalks.
Intel said the environment has allowed the company to test the cars and the technology while refining the driving policy, or decision-making, of its autonomous vehicles.
During the first phase, the fleet will rely on cameras alone. In a 360-degree configuration, each vehicle uses 12 cameras: eight providing long-range surround view and four used for parking.
The goal of the first phase is to prove that Intel can build an end-to-end solution that processes only camera data.
“We characterize an end-to-end AV solution as consisting of a surround view sensing state capable of detecting road users, drivable paths and the semantic meaning of traffic signs/lights; the real-time creation of HD-maps as well as the ability to localize the AV with centimeter-level accuracy; path planning (i.e., driving policy); and vehicle control,” said Amnon Shashua, Senior Vice President at Intel and CEO and CTO of Mobileye. “The sensing state is depicted in the videos above as a top-view rendering of the environment around the AV while in motion.”
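The pipeline Shashua describes — sensing, mapping/localization, driving policy, vehicle control — can be sketched roughly as the following Python. All class and function names here are illustrative placeholders, not Mobileye's actual software.

```python
from dataclasses import dataclass

# Hypothetical sketch of the end-to-end AV pipeline described above.
# Names and structures are assumptions for illustration only.

@dataclass
class SensingState:
    road_users: list       # detected vehicles, pedestrians, cyclists
    drivable_paths: list   # candidate free-space corridors
    traffic_signals: dict  # semantic meaning of signs/lights

def sense(camera_frames):
    """Surround-view sensing: build the environment model from cameras."""
    return SensingState(road_users=[], drivable_paths=[], traffic_signals={})

def localize(state, hd_map):
    """Place the AV on a real-time HD map with centimeter-level accuracy."""
    return (0.0, 0.0)  # placeholder (x, y) pose on the map

def plan(state, pose):
    """Driving policy: pick a trajectory from the drivable paths."""
    return state.drivable_paths[0] if state.drivable_paths else None

def control(trajectory):
    """Vehicle control: turn the chosen trajectory into actuation."""
    return {"steering": 0.0, "throttle": 0.0}
```

The point of the sketch is the staged structure: each step consumes the previous step's output, so the camera-only phase exercises the entire chain from raw frames to actuation commands.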
Intel calls this camera-only approach “true redundancy”: a sensing system consisting of multiple independently engineered sensing systems, each of which can support fully autonomous driving on its own. This differs from fusing raw sensor data from multiple sources, which yields a single sensing system.
Intel said this provides two major advantages: first, the amount of data required to validate the perception system is massively lower; and second, if one of the systems fails, the vehicle can continue to operate safely.
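The “true redundancy” idea — independent sensing channels, each sufficient on its own, with the vehicle falling back to a surviving channel if one fails — can be sketched as follows. Function names are hypothetical, not Mobileye's API.

```python
# Illustrative sketch of "true redundancy": two independently engineered
# sensing systems, each producing a complete environment model on its own.
# Contrast with raw-data fusion, where both streams feed one model and a
# failure in either can corrupt the single combined output.

def camera_perception(frames):
    """Full environment model built from cameras alone (placeholder)."""
    return {"ok": True, "objects": []}

def radar_lidar_perception(returns):
    """Full environment model built from radar/lidar alone (placeholder)."""
    return {"ok": True, "objects": []}

def sensing_state(frames, returns):
    """Each channel is validated independently; if one fails, the other
    still supplies a complete model, so the vehicle can keep operating."""
    for model in (camera_perception(frames), radar_lidar_perception(returns)):
        if model["ok"]:
            return model
    raise RuntimeError("no sensing channel available")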