A groundbreaking new study from the Institute of Cognitive Science at the University of Osnabrück has found that human morality can be modeled, suggesting that machine-based moral decisions are, in principle, possible. The research, published in Frontiers in Behavioral Neuroscience, used immersive virtual reality to study human behavior in simulated road traffic scenarios.
Participants in the study were asked to drive a car through a suburban neighborhood on a foggy day. They encountered unavoidable driving dilemmas in which inanimate objects, animals, and humans appeared in their path, and they had to decide which was to be spared. The observed behavior was then captured in statistical models, yielding rules with an associated degree of explanatory power. The research showed that moral decisions, in the constrained scope of unavoidable traffic collisions, can be explained and modeled by a single value-of-life assigned to every human, animal, or inanimate object.
Leon Sütfeld is the first author of the study. He says that, until now, it had been assumed that moral decisions are strongly context-dependent and therefore cannot be modeled or described algorithmically. "But we found quite the opposite," Sütfeld says. "Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal or inanimate object." This finding implies that human moral behavior can be described by algorithms and could therefore be used by machines as well.
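To make the idea concrete, the following is a minimal sketch of what such a single value-of-life choice model can look like. It is not the authors' code, and the obstacle categories, numeric values, and temperature parameter are illustrative assumptions: each category is assigned one scalar value, and the probability of sparing one obstacle over another is a logistic function of the difference between their values.

```python
import math

# Hypothetical value-of-life scores on an arbitrary scale (illustrative only;
# in a study like this they would be estimated from participants' recorded
# choices rather than set by hand).
VALUE_OF_LIFE = {
    "adult": 10.0,
    "child": 11.0,
    "dog": 4.0,
    "goat": 2.5,
    "trash_can": 0.2,
}

def p_spare(a: str, b: str, temperature: float = 1.0) -> float:
    """Probability that the driver swerves to spare obstacle `a`, hitting `b`.

    A logistic choice rule: the larger the value difference, the more
    deterministic the decision; `temperature` controls decision noise.
    """
    diff = VALUE_OF_LIFE[a] - VALUE_OF_LIFE[b]
    return 1.0 / (1.0 + math.exp(-diff / temperature))

if __name__ == "__main__":
    # Large value differences give near-certain choices; similar values give
    # near-chance behavior.
    print(f"spare child over trash can: {p_spare('child', 'trash_can'):.3f}")
    print(f"spare dog over goat:        {p_spare('dog', 'goat'):.3f}")
    print(f"spare adult over child:     {p_spare('adult', 'child'):.3f}")
```

The point of the sketch is only that a single number per category, combined with a simple choice rule, is enough to generate behavior in these dilemmas; in the study itself, the values are fitted to the participants' actual decisions.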
The findings also have major implications for the debate about the behavior of self-driving cars and other self-operating machines in unavoidable situations. For example, the German Federal Ministry of Transport and Digital Infrastructure (BMVI) has defined 20 ethical principles for self-driving vehicles, including how they should behave in unavoidable accidents. These guidelines rest on the assumption that human moral behavior cannot be modeled.
Professor Gordon Pipa, a senior author of the study, says it now seems possible that machines can be programmed to make human-like moral decisions. He says it is crucial that society starts an urgent and serious debate: "We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behavior by imitating human decisions? Should they behave along ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?"
Under the German ethical principles, for example, a child running onto the road would be classified as significantly involved in creating the risk, and therefore less qualified to be saved than an adult standing on the sidewalk as a non-involved party. The research attempts to determine whether this is a moral value held by most people and how much room for interpretation exists.
"Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma," explains Prof. Peter König, a senior author of the paper. "Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machine ... act just like humans."
The authors say that autonomous cars are just the beginning, as hospital care robots and other artificial intelligence systems become increasingly commonplace. The researchers warn that society is now at the beginning of a new epoch and will need clear rules; otherwise, machines will start making decisions without human input.