Artificial intelligence (AI) is rapidly advancing. In many applications, AI is as good as, or better than, humans at performing the same tasks. For example, AI based on machine learning techniques like neural networks can defeat the best human Go players, diagnose cancers as accurately as trained physicians and drive cars autonomously.
And there is no good reason to expect that progress in AI will end any time soon. A long-term goal of AI researchers is to develop artificial general intelligence (AGI): AI that escapes the specialized niches of current systems and applies its intelligence as competently as a human across the full range of tasks humans can perform.
A survey of AI experts carried out in 2012 and 2013 found that respondents put the odds at 50:50 that such high-level machine intelligence will be developed by 2040, and at 90% that it will exist by 2075.
Already, researchers are making strides toward a potentially crucial aspect of AGI – self-awareness – with robots that can generate their own self-models, analogous to people thinking about their own bodies.
Because AI is increasingly permeating human society at the same time it is advancing in capability, it is wise to consider the ethical and moral implications involved with this field of technology.
Automated algorithms
One way to view the ethical issues associated with AI is to examine AI’s effect on humanity. As AI takes over roles traditionally carried out by humans, it gains authority over decisions that affect society and human lives. Since it directly affects people in these situations, AI’s actions take on ethical significance: its conduct is subject to judgment as morally "good" or "bad."
Consider automated algorithms that evaluate mortgage applications or military drones that decide whether to deploy weapons. The possibility of bad decisions by AI in these cases is cause for concern. Negative outcomes might include denial of mortgage applications due to an unintended racial bias based on neighborhood location, or a drone firing missiles on suspected terrorists who turn out to be civilians.
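As a simplified illustration of how such bias can arise, consider a scoring model that never sees an applicant’s race but weights a neighborhood-derived feature. The feature names, weights and threshold in the sketch below are invented for this example and do not describe any real lender’s system.

```python
# Hypothetical illustration of proxy bias in an automated mortgage score.
# Feature names, weights and the threshold are invented for this sketch;
# no real lender's model is implied.

APPROVAL_THRESHOLD = 0.6

# The model never sees race, but "zip_code_risk" is derived from historical
# default rates by neighborhood, which can correlate strongly with race.
WEIGHTS = {
    "income_score": 0.5,
    "credit_score": 0.3,
    "zip_code_risk": -0.4,   # proxy feature: penalizes certain neighborhoods
}

def mortgage_score(applicant: dict) -> float:
    """Weighted sum of normalized applicant features (all in [0, 1])."""
    return sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)

def decide(applicant: dict) -> str:
    return "approve" if mortgage_score(applicant) >= APPROVAL_THRESHOLD else "deny"

# Two applicants with identical income and credit profiles can receive
# different decisions purely because of where they live.
applicant_a = {"income_score": 0.8, "credit_score": 0.9, "zip_code_risk": 0.1}
applicant_b = {"income_score": 0.8, "credit_score": 0.9, "zip_code_risk": 0.9}

print(decide(applicant_a))  # approve
print(decide(applicant_b))  # deny
```

Because neighborhood can stand in for race, two otherwise identical applicants receive different decisions, which is exactly the kind of unintended bias described above.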
These issues are accompanied by other societal risks stemming from advances in AI, like the potential for rising unemployment due to intelligent machines displacing humans in the job market, or the undermining of personal privacy as widespread surveillance and monitoring systems are enabled by automation.
Who should be accountable if AI makes a wrong decision that, perhaps, leads to human suffering? The algorithm itself, its human creators, the corporation or the government agency employing the AI?
Nick Bostrom, professor of philosophy at Oxford University and founding director of the Future of Humanity Institute, and Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, outline several principles that should be adhered to when developing AI algorithms, including that they should be:
- “Transparent to inspection,” so that people can figure out how and why AI algorithms make the decisions they do;
- “Predictable to those they govern,” so that an environment of stability is maintained and seemingly arbitrary decisions don’t lead to chaos; and
- “Robust against manipulation,” so that AI algorithms remain secure and cannot be exploited for nefarious purposes.
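To make the first two principles concrete, here is a minimal, hypothetical sketch of a decision procedure that is transparent to inspection and predictable: the rules are explicit, human-readable and deterministic, and every decision is returned together with the reasons behind it. The rules themselves are invented for illustration and are not drawn from Bostrom and Yudkowsky’s work.

```python
# Hypothetical sketch of "transparent to inspection" and "predictable":
# explicit, human-readable rules, with the full audit trail returned
# alongside each decision. The rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str        # human-readable description of the rule
    check: callable  # function: case -> bool

def evaluate(case: dict, rules: list[Rule]) -> tuple[bool, list[str]]:
    """Apply every rule and return the decision plus the reasons for it."""
    reasons = []
    passed = True
    for rule in rules:
        ok = rule.check(case)
        reasons.append(f"{rule.name}: {'pass' if ok else 'fail'}")
        passed = passed and ok
    return passed, reasons

rules = [
    Rule("income above minimum", lambda c: c["income"] >= 30000),
    Rule("no defaults in last 5 years", lambda c: c["recent_defaults"] == 0),
]

approved, audit_trail = evaluate({"income": 45000, "recent_defaults": 1}, rules)
print(approved)           # False
for line in audit_trail:  # the "why" is available for inspection
    print(line)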
Artificial general intelligence
Ethical issues surrounding today's nascent AI will remain ongoing concerns into the near-term future. But a different set of moral questions will arise when humans succeed in creating synthetic intelligence with the same competencies as themselves – AGI.
What are the moral implications of creating such a being? Is such advanced AI truly sentient, sapient, conscious and self-aware in the same way that humans are? If the answer is yes, then some technologists argue that AI should have the same moral status as humans, with all of the rights and protections that come along with that.
To clarify the issue, Bostrom and Yudkowsky introduced a “Principle of Substrate Non-Discrimination”:
"If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status."
If AGI has the same moral status as humans, is it right to assign it tasks to perform for humans? If the AGI decides it doesn’t want to work, should it have the freedom to pursue its own ambitions?
Moral norms that are taken for granted when applied to humans may no longer apply in a world with AGI. Take, for example, the issue of reproductive freedom. Parents have the right to have as many children as they want. If they cannot take care of their babies, society cares for them in their stead.
AI might someday also have the desire and ability to reproduce. For humans, a biological limit restricts how many babies are brought into the world; babies must gestate for nine months before birth and children take over a decade to reach maturity.
AI, however, might reproduce by making copies of itself extremely rapidly. But digital storage and computing power are finite resources, just as food and living space are for humans. Should a future society pay for additional computer hardware to allow AI to reproduce with no restrictions? In this scenario, is it morally correct for AI to have complete reproductive freedom?
Superintelligence
The ethical issues raised by simple automated algorithms, and even the moral implications of human-level AI, may turn out to be short-term concerns. Humanity as a species has sought better living standards through economic growth and technological progress, and it seems reasonable to assume that AI will likewise seek to improve its own condition.
There exists a strong possibility that AI will, by modifying and improving itself, advance beyond the capabilities of even the most intelligent humans. Such superintelligence may arrive sooner rather than later, perhaps within the next century.
In the same survey of AI experts, respondents estimated a 75% chance that superintelligence will exist within 30 years of the advent of AGI.
What code of ethics will superintelligence abide by? Will it afford humanity the same rights and protections that it currently enjoys? Will it evolve to such a state of superiority that it sees humans in the same way as people look upon insects today?
Nearly a third (31%) of the surveyed AI experts think that the long-term effect of advanced AI on humanity will be negative, with 13% expecting it to be on balance bad and 18% expecting it to be catastrophic.
Superintelligent AI could very well present an existential threat to humanity. Academics and business leaders like Stephen Hawking, Bill Gates and Elon Musk have warned of potentially dangerous outcomes of AI. Geoffrey Hinton, considered by some to be the “Godfather of Deep Learning,” has observed that “there is not a good track record of less-intelligent things controlling things of greater intelligence.”
But a grim AI future is not necessarily preordained. Indeed, a slim majority of surveyed respondents (52%) think AI will have a good or extremely good long-term effect on humanity. Perhaps instead of a dystopian future in which humanity is ignored, constrained, segregated or even eliminated by superintelligent AI, the human species advances together with it. Along the path of its ascension, AI might speed development of technologies like microchip neural interfaces, which could allow humans to merge minds and bodies with AI to create a whole that is greater than the sum of its parts.
Promoting friendly AI
AI is already functioning as a beneficial tool in many sectors of society and its advance continues unabated. A record $19.2 billion across over 1,600 deals was invested in AI startups globally in 2018 alone, according to CB Insights. And that’s not counting the majority of AI research, which takes place at universities and large corporations like Google and IBM.
In light of AI progress and the technology’s great potential for both benefits and dangers, it is prudent to consider how we can ensure positive AI outcomes instead of negative ones.
Fortunately, several organizations are working to make certain that current and future AI remains beneficial for humanity. Non-profit groups like the Machine Intelligence Research Institute and OpenAI are performing research and building tools in pursuit of safe and friendly AI.
Ironically, OpenAI considers one of its most recent tools too dangerous to release to the public. Its GPT-2 language model is so good at generating realistic text that the organization worries bad actors could use it for malicious purposes, such as generating fake news stories or impersonating others online.
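For a sense of how easily such a model produces fluent text, the sketch below loads the smaller, publicly available GPT-2 checkpoint through the Hugging Face transformers library; the library choice and prompt are assumptions made for this example, not details given by OpenAI.

```python
# Minimal sketch: open-ended text generation with the small, publicly
# released GPT-2 checkpoint, using the Hugging Face "transformers" library
# (a tooling choice made for this example, not one specified by OpenAI).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])  # fluent, plausible-sounding continuation
```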
The Future of Life Institute has held two conferences bringing together leading AI researchers and experts in ethics, philosophy, economics and law to discuss AI issues. The most recent Asilomar conference established 23 principles to guide AI development, including recommendations for how research should be conducted and guidelines around ethics and values.
IEEE launched the Global Initiative on Ethics of Autonomous and Intelligent Systems, which is developing several global standards around ethical AI. In 2019, the initiative released the first edition of a treatise called Ethically Aligned Design, aimed at placing ethical issues at the top of technologists’ list of priorities.
AI is being developed and deployed in an effort to harness its immense potential to improve daily lives. But as AI becomes increasingly responsible for decisions affecting society and advances to a level of intelligence on par with and even surpassing that of humans, the risk of unintended consequences exists. A positive outcome is not guaranteed. But the chances of a good result may well increase if engineers focus on aligning AI with ethical and moral values.
Further reading
Bostrom, N. and Yudkowsky, E. (2018). “The Ethics of Artificial Intelligence.” In Artificial Intelligence Safety and Security. Chapman and Hall. Previously published in The Cambridge Handbook of Artificial Intelligence (2014).
Torresen, J. (2018). “A Review of Future and Ethical Perspectives of Robotics and AI.” Frontiers in Robotics and AI 4:75. doi: 10.3389/frobt.2017.00075