A new study on improving traffic flow, energy efficiency, road safety and air quality is underway at the University of California, Irvine (UCI), where 25 intersections around the campus will be equipped with Velodyne’s lidar systems.
Velodyne’s intelligent infrastructure lidar was selected by the HORIBA Institute for Mobility and Connectivity (HIMaC2) for the study, which is funded by a $6 million grant awarded to Horiba by the Vehicle Technologies Office of the U.S. Department of Energy. The partners claim the deployment will be the largest lidar-based traffic monitoring solution in the world.
HIMaC2 plans to create a public road network platform for the development, evaluation and deployment of emerging and future connected and autonomous vehicle technologies. Using Velodyne’s lidar, the platform will be able to monitor traffic and public spaces across the network.
Velodyne said coordinating intersections and autonomous vehicles can reduce congestion by 20% to 30% and emissions by 5% to 15% while also improving safety.
“The program looks to advance connected and autonomous transportation and show how they can contribute to smarter, safer infrastructure for our communities,” said Scott Samuelsen, principal investigator in the HIMaC2 program and professor of engineering at UCI. “By deploying Velodyne’s automated monitoring and control in an intersection network, backbone data can be generated and utilized to demonstrate improved safety, energy efficiency and traffic flow to which cities aspire.”
The study
The project also includes Bluecity, Argonne National Laboratory (ANL), the UCI Institute of Transportation Studies, Toyota Motor North America, Pony.ai and Hyundai Mobis.
HIMaC2 will study how traffic coordination can be improved through data and analytics generated by the lidar. Using advanced infrastructure monitoring as a vehicle-to-everything (V2X) solution, the program will generate critical data on traffic and crowd flow and path planning, and help protect road users in all weather and lighting conditions.
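To give a sense of what that kind of intersection data could look like, the sketch below aggregates hypothetical lidar detections into per-approach vehicle and pedestrian counts and average speeds, the sort of summary a signal controller or V2X broadcast might consume. The data model, field names and values are illustrative assumptions, not part of the HIMaC2 or Velodyne systems.

```python
# Illustrative sketch only: rolls up hypothetical per-intersection lidar
# detections into approach-level traffic summaries. All names are assumptions.
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean


@dataclass
class Detection:
    track_id: int        # anonymous track ID; no facial or identity data
    approach: str        # e.g. "northbound"
    speed_mps: float     # object speed in metres per second
    object_class: str    # "vehicle", "pedestrian", "cyclist"


def summarize(detections: list[Detection]) -> dict[str, dict]:
    """Aggregate detections into per-approach counts and mean speeds."""
    by_approach: dict[str, list[Detection]] = defaultdict(list)
    for d in detections:
        by_approach[d.approach].append(d)
    return {
        approach: {
            "vehicles": sum(1 for d in ds if d.object_class == "vehicle"),
            "pedestrians": sum(1 for d in ds if d.object_class == "pedestrian"),
            "mean_speed_mps": round(mean(d.speed_mps for d in ds), 1),
        }
        for approach, ds in by_approach.items()
    }


if __name__ == "__main__":
    sample = [
        Detection(1, "northbound", 8.2, "vehicle"),
        Detection(2, "northbound", 7.9, "vehicle"),
        Detection(3, "eastbound", 1.4, "pedestrian"),
    ]
    print(summarize(sample))
```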
The lidar does not identify individuals’ facial characteristics, and the system can operate with as few as one unit per intersection, which supports scalability.