As the technologies enabling autonomous self-driving vehicles mature en route to Level 5 (fully automated driving, as defined in the SAE's J3016 standard), redundant fail-safe and fail-operational systems will be crucial in ensuring the safety of all road users.
Already in Level 3 vehicles, safety-critical functions are placed in the ‘hands’ of the vehicle under certain traffic or environmental conditions — although a human driver is still in place to take over if required. In Levels 4 and 5, however, there is no driver in the loop (DiL), meaning that the fully autonomous vehicle (AV) has to operate with exceptional precision, accuracy and safety, and with almost zero tolerance for failure.
To achieve this under all driving conditions — even with a key-component failure — requires redundancy to be engineered into all safety-critical systems. But when it comes to redundancy of sensing systems, not all AV architectures meet the requirements.
This was highlighted by the events that led to the first recorded AV fatality in 2016, when the camera-based system on a Tesla Model S failed to distinguish a light-colored semi-trailer crossing the road against a brightly lit sky, and so did not identify it as an obstacle. In this case, a second redundant system, such as lidar or radar, might have prevented the incident if correctly fused into the vehicle’s architecture. The Model S was equipped with radar, but the vehicle’s perception system was not fully configured for functional redundancy.
One major challenge in designing SAE Level 3 to 5 automated driving systems (ADS) is to define requirements for the perception system that support a sound argument for safe operation. The safety requirements of the perception system can only be fulfilled through redundancy in the sensor hardware. It is, however, a challenge to specify exactly what redundancy the sensor system requires.
Sensors alone do not improve redundancy
An increasingly complex range of on-board smart sensors such as cameras, radar, ultrasonic, infrared and lidar are being added to gather data in real-time so that the connected automated car can create a picture of its environment at any point in time. Each of these sensors, however, comes with its own strengths and weaknesses:
- Ultrasonic is good for judging a car’s distance to objects, but only at short ranges.
- Radar can detect objects at long ranges regardless of the weather, but has low resolution.
- Lidar has high resolution, but loses sight in heavy snow and rain, and depending on the type, can be costly.
- Cameras, on the other hand, lead the way in classification and texture interpretation. By far the cheapest and most available sensors, cameras unfortunately generate massive amounts of data, and also rely on good visibility.
It is common practice to combine arrays of these sensors to create a detailed digital representation of the vehicle’s environment. However, the individual shortcomings of each sensor type cannot be overcome merely by using the same sensor type multiple times. Instead, it is essential to make judicious use of the sensor data by combining the information from the different types of sensors through “sensor fusion” to best interpret the surroundings and, at the same time, ensure the system’s functional redundancy.
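As a minimal illustration of the idea, two independent distance estimates — say a coarse radar reading and a more precise lidar reading of the same object — can be combined by inverse-variance weighting, a standard building block of sensor fusion. The readings and variances below are hypothetical:

```python
def fuse(estimates):
    """Fuse independent (mean, variance) estimates by inverse-variance weighting.

    More trustworthy (lower-variance) sensors pull the result toward their
    reading, and the fused variance is lower than any single sensor's.
    """
    total_weight = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / total_weight
    return mean, 1.0 / total_weight

# Hypothetical readings of the same obstacle, in meters:
# radar (long range, coarse) and lidar (short range, precise).
radar = (42.0, 4.0)    # mean 42 m, variance 4 m^2
lidar = (40.5, 0.25)   # mean 40.5 m, variance 0.25 m^2
fused_mean, fused_var = fuse([radar, lidar])
```

The fused estimate sits close to the lidar reading, reflecting its higher precision, while still incorporating the radar data — and the combined variance is smaller than either sensor's alone.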
What is more, any of the sensors can at any time report a false positive or a false negative. The self-driving car typically relies on AI to verify the accuracy of the data, but without planned and intentional system redundancy this can be hard to do. The AI will typically canvass all its sensors to try to determine whether any one unit is transmitting incorrect data.
Some AI-driven systems judge which sensor is right by pre-determining that certain sensors perform better than others under particular conditions, or the system might revert to a voting protocol: if X sensors vote that something is there and Y do not, and X exceeds Y by some majority, the decision is carried.
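A strict-majority vote of this kind can be sketched in a few lines. The sensor labels here are purely illustrative:

```python
from collections import Counter

def majority_vote(detections):
    """Return the verdict reported by a strict majority of sensors, else None.

    Each element of `detections` is one sensor's verdict; returning None
    signals that no verdict carried a majority and the system should fall
    back to a safe behavior.
    """
    counts = Counter(detections)
    label, votes = counts.most_common(1)[0]
    return label if votes > len(detections) / 2 else None

# Two of three sensors agree there is an obstacle: the vote carries.
verdict = majority_vote(["obstacle", "obstacle", "clear"])
```

A real system would weight votes by sensor reliability in the current conditions rather than counting them equally, as the preceding paragraph notes.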
Another popular method is the Random Sample Consensus (RANSAC) approach. RANSAC is an iterative technique that estimates the parameters of a model by random sampling of the observed data. Given a dataset whose elements contain both inliers and outliers, RANSAC uses a voting scheme to find the best-fit result.
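A minimal RANSAC sketch, fitting a line to 2-D points that include outliers — the dataset, iteration count and threshold below are illustrative, not from any production system:

```python
import random

def ransac_line(points, iterations=200, threshold=0.5, seed=0):
    """Fit y = m*x + b to points containing outliers via RANSAC.

    Each iteration samples a minimal set (two points), hypothesizes a line,
    and counts the points that fall within `threshold` of it (the inliers).
    The model with the largest consensus set wins.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample, cannot express as y = m*x + b
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + b)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# Example: points mostly on y = 2x + 1, plus two gross outliers.
points = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 9), (2, 20), (3, -5)]
model, inliers = ransac_line(points)  # recovers (2.0, 1.0), rejecting outliers
```

The outliers at (2, 20) and (3, −5) never accumulate a large consensus set, so they are excluded from the final fit — the same mechanism that lets a perception stack discard a sensor reading inconsistent with the rest.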
A different approach currently under development by Mobileye — an Israeli subsidiary of Intel — is to separate the sensors into two channels, one for cameras and another for radar and lidar, and task both with sensing all elements of the driving environment.
In this way, the company argues it can achieve full system redundancy by having each of the channels create their own independent and diverse world models, each filtered independently through its Responsibility-Sensitive Safety framework. When combined into a complete, production-ready AV, the camera-only system will form the backbone, while the radar/lidar subsystem will serve as a diversified and redundant safety back-up.
While sensors are key components in the perception and localization of the AV in its environment, there is another element — control or actuation — which is equally critical to the safe operation of the AV and therefore also needs to be discussed in the context of safety and system redundancy.
Redundancy in braking, steering systems
If autonomous vehicles are to gain consumer confidence, they will have to be proven safe, remaining maneuverable even when signal transmission is interrupted or a subsystem fails. The challenge in developing such systems lies in creating executable solutions for key-system redundancies at an acceptable cost.
Typically, redundant control systems are designed to fail in one of two modes:
- Fail safe, where for instance, in an electrically assisted power steering system an electrical failure would simply result in control reverting back to manual steering.
- Fail operational, where, for example, a solution would be to add a second steering motor.
While the common solution to redundancy is to double up on the critical components, such as Bosch’s Electric Power Steering with its fail-operational function, these systems add complexity, cost and weight.
In an attempt to avoid these penalties several institutions are carrying out research into controlling the vehicle’s direction using differential braking, either through traditional friction brakes or via torque-vectoring on electrified powertrains.
In a research paper, titled “Steering redundancy for self-driving vehicles using differential braking,” M. Jonasson and M. Thor from Volvo Cars and The Royal Institute of Technology respectively, proposed just such a redundant system using the braking system as a fail operational redundant backup.
To retain control after a complete loss of steering torque from the electric power-assisted steering (EPAS), the paper describes how applying the brakes on the wheels on the inside of the intended curve can still turn the vehicle — differential braking acting as a fail-operational backup for self-driving vehicles.
There are however physical limits on the magnitude of curvature and lateral acceleration that could be achieved, compared with steering.
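The geometry behind this is straightforward: braking one side of the vehicle produces a yaw moment roughly equal to the net brake force times half the track width. A toy calculation with hypothetical numbers (this ignores tire dynamics, load transfer and the deceleration that braking also causes):

```python
def diff_brake_yaw_moment(brake_force_n, track_width_m):
    """Approximate yaw moment (N*m) from braking one side of the vehicle only.

    The braked side's force acts at half the track width from the vehicle
    centerline, so the turning moment is force * (track / 2). A simplified
    rigid-body estimate, not a vehicle-dynamics model.
    """
    return brake_force_n * track_width_m / 2.0

# Hypothetical: 3000 N applied to the inside wheels, 1.6 m track width.
moment = diff_brake_yaw_moment(3000.0, 1.6)  # 2400 N*m of yaw moment
```

Because the moment arm is only half the track width, the achievable curvature is modest compared with what the steering rack can command — which is the physical limit the paper identifies.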
And, of course, this assumes that the braking system is functional, with its own built-in redundancy, which raises the question: Would a brake-by-wire system be able to deliver?
The answer to that is quite simple — according to Bastien Russery, customer line-manager at Chassis Brakes International (recently acquired by Hitachi): “If you have four autonomous brakes, you have natural redundancy. Should you have a defect with one, the other three are still available to stop the vehicle and can be individually modulated to keep the vehicle under control.”
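Russery's point about modulating the remaining actuators can be sketched as redistributing the braking demand across whichever corners still work. This illustrative function (the corner names and failure handling are assumptions, not Chassis Brakes International's design) splits the demand evenly; a real controller would also balance left and right forces to avoid inducing an unwanted yaw moment:

```python
def redistribute(target_force_n, working_corners):
    """Split a total braking demand evenly across functional brake actuators.

    `working_corners` lists the corners still operational, e.g.
    ["FL", "FR", "RL", "RR"]. Raises if no actuator remains.
    """
    if not working_corners:
        raise RuntimeError("no functional brake actuators")
    per_corner = target_force_n / len(working_corners)
    return {corner: per_corner for corner in working_corners}

# Hypothetical: the rear-right actuator has failed; the other three
# absorb a 9000 N total braking demand.
demand = redistribute(9000.0, ["FL", "FR", "RL"])
```

With one corner lost, each remaining actuator takes a third of the demand — the "natural redundancy" of four independently modulated brakes.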
The company’s ‘Smart Brake’ brake-by-wire concept also eliminates the vacuum booster, hoses, clamps and hydraulic fluid, as well as the “evacuate and fill” process required to fill and bleed the system.
In order to realize fully automated driving on the path to an accident-free future, redundancy in safety-critical systems — such as perception, positioning and control — is an absolute requirement.
What is more, the importance of redundancy in the rollout of automated driving goes beyond the technological function: it will ultimately build consumer confidence as drivers come to understand that these systems are engineered, in depth, to safely handle any situation.