The development of autonomous vehicles (AVs) has raised questions about trust and responsibility in engineering. Surveys find that U.S. consumers are not entirely comfortable with AVs: a March 2016 survey by the American Automobile Association (AAA) found that 75 percent of Americans are afraid to ride in a self-driving car. This anxiety stems in part from the fact that responsibility for a potentially fatal AV crash remains muddled. AV software may also bear the “responsibility” of making moral decisions that differ from those a human operator would make, or that conflict with current laws.
Autonomous Vehicles and Deadly Dilemmas
A well-known ethical thought experiment applied to autonomous vehicle safety is the trolley problem, introduced by Philippa Foot in 1967. The experiment imagines a runaway trolley barreling toward five people tied up on the tracks ahead. An onlooker is standing next to a switch that, if pulled, would divert the trolley onto a second track where only one person is tied up. The onlooker has two options: do nothing and watch the trolley kill the five people, or divert it and kill only one.
Depending on one’s ethical viewpoint, there are multiple defensible answers. The onlooker’s role is key here. A utilitarian view holds that he is morally obligated to throw the switch and kill only one person. An opposing view holds that the harm is already in motion through no fault of the onlooker, so actively throwing the switch makes him partly responsible for a death, whereas doing nothing leaves him a passive bystander.
Doug Newcomb posed a similar AV-related question in PC Magazine in 2014. If a self-driving car approaching a single-lane tunnel encounters a child stumbling into the road inside it, is it “right” for the car to protect its occupants and hit the child, or to swerve into the tunnel entrance, sparing the child but potentially injuring the occupants? AV programmers face many questions of this kind. Should the AV apply explicit, pre-programmed rules in emergencies? Should it cede control to the human driver? Or should it follow a utilitarian principle and try to maximize the number of survivors, hitting the child whenever more than one person is riding in the car? The sketch below makes the contrast between the first and last of these policies concrete.
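The following is a minimal, hypothetical sketch, not any manufacturer’s actual decision logic; the Maneuver type, the two policy functions and the fatality estimates are all invented for illustration. It shows how a fixed rule and a utilitarian rule can rank the same two maneuvers in opposite orders.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the Maneuver type, the two policy functions
# and the fatality estimates below are invented for illustration and are
# not any manufacturer's actual decision logic.

@dataclass
class Maneuver:
    name: str
    occupant_fatalities: int    # expected deaths inside the vehicle
    pedestrian_fatalities: int  # expected deaths outside the vehicle

def rule_based_choice(options: list[Maneuver]) -> Maneuver:
    """Fixed, written-out rule: never strike a pedestrian if any
    alternative exists; among those alternatives, protect occupants."""
    sparing = [m for m in options if m.pedestrian_fatalities == 0]
    candidates = sparing or options
    return min(candidates, key=lambda m: m.occupant_fatalities)

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Utilitarian rule: minimize total expected fatalities, making
    no distinction between occupants and pedestrians."""
    return min(options, key=lambda m: m.occupant_fatalities
                                      + m.pedestrian_fatalities)

# Newcomb's tunnel dilemma with two occupants aboard:
tunnel = [
    Maneuver("continue and hit the child", 0, 1),
    Maneuver("swerve into the tunnel entrance", 2, 0),
]

print(rule_based_choice(tunnel).name)   # swerve into the tunnel entrance
print(utilitarian_choice(tunnel).name)  # continue and hit the child
```

That the two policies disagree on identical inputs is precisely why the choice between them is an ethical and legal question, not a purely technical one.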
Moral responsibility adds a wrinkle to the problem. If an AV sensor fails in bad weather and causes a fatal accident, who is responsible? It is not clear whether liability falls on the vehicle manufacturer, the sensor manufacturer, the vehicle dealer or even the human driver. A human who makes the wrong split-second decision is easily forgiven, but AV programmers bear greater responsibility because they have far more time to design an acceptable response.
Responsible Engineering: A Problem of Many Hands
Engineers view the trolley problem differently. From their viewpoint, the dilemma facing the onlooker or driver is the result of a number of prior design decisions. A well-designed trolley, for example, would incorporate a fail-safe such as a dead man’s switch (sketched below) to prevent the situation from arising in the first place. An engineer is therefore more likely to assign responsibility for the deadly accident not to the onlooker but to the designer who failed to include such a device.
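As an illustration of the fail-safe idea, here is a minimal dead man’s switch sketch, assuming a periodic operator heartbeat; the class name, interface and two-second timeout are invented for the example, not drawn from any real transit-system design.

```python
import time

# Minimal dead man's switch sketch. The class name, heartbeat interface
# and 2-second timeout are illustrative assumptions, not a real
# transit-system design.

class DeadMansSwitch:
    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Called periodically while the operator is alert and in control."""
        self.last_heartbeat = time.monotonic()

    def should_engage_brakes(self) -> bool:
        """True once the operator has been silent longer than the timeout."""
        return time.monotonic() - self.last_heartbeat > self.timeout_s

switch = DeadMansSwitch()
switch.heartbeat()                        # operator is present
assert not switch.should_engage_brakes()  # no intervention needed yet
# If the heartbeats stop, should_engage_brakes() eventually returns True
# and the control loop brakes the trolley itself, so no bystander ever
# faces the dilemma.
```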
The onlooker’s decision to act depends directly on the earlier failure to include the safety device; this dependence of present options on prior design choices is known as path dependence. Research, design and innovation are complex processes involving many hands, and more responsible engineering throughout a design’s history reduces the incidence of dilemmas later.
Greater Responsibility in Research and Innovation
Questioning technology from an ethical viewpoint may be more important today than ever before. Emerging technologies such as nanomaterials, stem cell research, artificial intelligence and genetically modified organisms are often contested because of perceived threats to societal needs and ethics. The risks and concerns surrounding a new technology are usually weighed only just before market introduction, after a long period of research, development and investment. If those risks prove significant, the product or service may face public or commercial moratoriums, and the time and money invested cannot be recovered.
The European Union (EU) introduced the concept of responsible research and innovation (RRI) to reduce these risks and streamline research, in part through more efficient ethical questioning. The 2013 European Commission report Options for Strengthening Responsible Research and Innovation describes RRI in detail.
The EU defines RRI as a comprehensive approach of proceeding in research and innovation in ways that allow all research and innovation stakeholders to accomplish three objectives at earlier stages in the innovation process:
- Obtain relevant knowledge on the consequences of the outcomes of their actions and on the range of options open to them
- Effectively evaluate both outcomes and options in terms of societal needs and moral values
- Use these two points as functional requirements for design and development of new research, products and services
Building on these three requirements, RRI describes a research and innovation framework that actively considers ethics and contributes to societal needs. The EU hopes to use RRI to reshape and align differing policies on a multinational scale, cutting waste and leveraging technology to address societal challenges. RRI-inspired policy is a key piece of Europe 2020, a European Commission strategy aimed at “smart, sustainable, inclusive growth” through stronger coordination of European policy.
Can RRI Help Radical Innovation?
Radical innovations like AVs can be revolutionary but frightening to the public. They tend to introduce new values and disrupt current systems, leading to new ways of living. Consider life before and after the internet, for example. Despite the fear and negativity now surrounding autonomous vehicles, they have the potential to revolutionize transportation and significantly reduce traffic congestion, emissions and accidents.
Understanding the needs and inputs of all stakeholders early on, as RRI suggests, is a sound way of balancing important values such as sustainability, safety, ethics, transparency and accountability in the design process. Tough questions about emerging disruptive technologies may cause public anxiety, but assuming responsibility for them at an early stage prevents far greater losses of time, money and even life later in the game.
References
European Commission—Responsible Research and Innovation
IEEE Spectrum—75% of Americans Fear Self-driving Cars, But It’s an Easy Fear to Get Over