Computer Electronics

AI ethics as a new intelligence comes to life

27 March 2019
As AI increasingly permeates human society at the same time it advances in capability, it is wise to consider the ethical and moral implications involved with this field of technology.

Artificial intelligence (AI) is rapidly advancing. In many applications, AI performs as well as, or better than, humans at the same task. For example, AI based on machine learning techniques like neural networks can defeat the best human Go players, diagnose cancers as accurately as trained physicians and drive cars autonomously.

Self-driving vehicle technology is on the cusp of mass deployment. Ford’s implementation – developed by Argo AI – has a target launch date of 2021. Source: Argo AI

And there isn’t a good reason to expect that progress in AI will end any time soon. A long-term goal of AI researchers is to develop artificial general intelligence (AGI) that escapes the specialized niches of current AI and applies its intelligence as competently as any human across the full range of tasks humans can perform.

A survey of AI experts carried out in 2012 and 2013 found that respondents believe there is a 50:50 chance that such high-level machine intelligence will be developed by 2040, and a 90% chance that it will exist by 2075.

Already, researchers are making strides toward a potentially crucial aspect of AGI – self-awareness – with robots that can generate their own self-models, analogous to people thinking about their own bodies.


Automated algorithms

One way to view the ethical issues associated with AI is to examine AI’s effect on humanity. As AI takes over roles traditionally carried out by humans, it gains authority over decisions that affect society and human lives. Since it directly affects people in these situations, AI’s actions take on ethical significance – its conduct is subject to judgment as morally "good" or "bad."

The MQ-9 Reaper UAV is still remotely piloted by humans, but in future conflicts, automated weapons could be a deciding factor, with AI making life-or-death decisions. Source: U.S. Air Force

Consider automated algorithms that evaluate mortgage applications or military drones that decide whether to deploy weapons. The possibility of bad decisions by AI in these cases is cause for concern. Negative outcomes might include denial of mortgage applications due to an unintended racial bias based on neighborhood location, or a drone firing missiles on suspected terrorists who turn out to be civilians.
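The mechanism behind such unintended bias is often a proxy feature: a variable that looks harmless on its face but correlates with a protected attribute. The following sketch, with entirely invented applicants, thresholds and zip codes, illustrates how a scoring rule that never sees race can still produce a racially skewed outcome through a neighborhood code:

```python
# Hypothetical illustration: a "blind" scoring rule that still discriminates
# because the zip code acts as a proxy for a protected group.
# All data, names and thresholds here are invented.
from collections import defaultdict

# Each applicant: (zip_code, income, group). The rule never sees `group`.
applicants = [
    ("10001", 58, "A"), ("10001", 62, "A"), ("10001", 55, "A"),
    ("20002", 57, "B"), ("20002", 61, "B"), ("20002", 54, "B"),
]

# Historical defaults were higher in zip 20002, so the rule penalizes it.
ZIP_PENALTY = {"10001": 0, "20002": 10}

def approve(zip_code, income):
    # Decision uses only zip code and income -- no protected attribute.
    return income - ZIP_PENALTY[zip_code] >= 55

# Tallying approvals by the (unseen) group reveals the disparate impact.
totals, approved = defaultdict(int), defaultdict(int)
for zip_code, income, group in applicants:
    totals[group] += 1
    approved[group] += approve(zip_code, income)

for g in sorted(totals):
    print(g, approved[g] / totals[g])  # A 1.0, B 0.0
```

Here every applicant in group B is denied despite comparable incomes – exactly the pattern Bostrom and Yudkowsky’s "transparent to inspection" principle is meant to expose.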

These issues are accompanied by other societal risks stemming from advances in AI, like the potential for rising unemployment due to intelligent machines displacing humans in the job market, or the undermining of personal privacy as widespread surveillance and monitoring systems are enabled by automation.

Who should be accountable if AI makes a wrong decision that, perhaps, leads to human suffering? The algorithm itself, its human creators, the corporation or the government agency employing the AI?

Nick Bostrom, professor of philosophy at Oxford University and founding director of the Future of Humanity Institute, and Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, outline several principles that should be adhered to when developing AI algorithms, including that they should be:

  • “Transparent to inspection,” so that people can figure out how and why AI algorithms make the decisions they do;
  • “Predictable to those they govern,” so that an environment of stability is maintained and seemingly arbitrary decisions don’t lead to chaos; and
  • “Robust against manipulation,” so that AI algorithms remain secure and cannot be exploited for nefarious purposes.

Artificial general intelligence

Ethical issues surrounding today's nascent AI will remain ongoing concerns into the near-term future. But a different set of moral questions will arise when humans succeed in creating synthetic intelligence with the same competencies as themselves – AGI.

In the future, where will human-level AI dwell? Will they take on physical form, inhabiting robot bodies, will they reside in the cloud on internet-connected servers, or will they manifest in some other manner?

What are the moral implications of creating such a being? Is such advanced AI truly sentient, sapient, conscious and self-aware in the same way that humans are? If the answer is yes, then some technologists argue that AI should have the same moral status as humans, with all of the rights and protections that come along with that.

To clarify the issue, Bostrom and Yudkowsky introduced a “Principle of Substrate Non-Discrimination”:

"If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status."

If AGI has the same moral status as humans, is it right to assign it tasks to perform for humans? If the AGI decides it doesn’t want to work, should it have the freedom to pursue its own ambitions?

Moral norms that are taken for granted when applied to humans may no longer apply in a world with AGI. Take, for example, the issue of reproductive freedom. Parents have the right to have as many children as they want. If they cannot take care of their babies, society cares for them in their stead.

AI might someday also have the desire and ability to reproduce. For humans, a biological limit restricts how many babies are brought into the world; babies must gestate for nine months before birth and children take over a decade to reach maturity.

AI, however, might reproduce by making copies of itself extremely rapidly. But digital storage and computing power are finite resources, just as food and living space are for humans. Should a future society pay for additional computer hardware to allow AI to reproduce with no restrictions? In this scenario, is it morally correct for AI to have complete reproductive freedom?
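The resource pressure is easy to quantify. A toy calculation – all figures invented for illustration – shows how few doublings it takes for unconstrained self-copying to exhaust a fixed pool of hardware:

```python
# Toy model of unconstrained AI self-replication against finite hardware.
# Both figures below are invented for illustration.
COPY_SIZE_GB = 100              # storage one AI instance occupies
TOTAL_STORAGE_GB = 100_000_000  # a fixed, finite hardware pool

def generations_until_full(copy_size_gb, total_gb):
    """Each generation, every instance copies itself (population doubles).
    Return (doublings, population) at the point storage runs out."""
    population, generations = 1, 0
    while population * 2 * copy_size_gb <= total_gb:
        population *= 2
        generations += 1
    return generations, population

gens, pop = generations_until_full(COPY_SIZE_GB, TOTAL_STORAGE_GB)
print(gens, pop)  # 19 doublings, 524288 copies -- the pool is effectively gone
```

Under these made-up numbers, storage for a million copies is consumed after roughly 19 doublings – the same exponential arithmetic that makes unrestricted AI reproduction a harder question than its human analogue.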

Superintelligence

The ethical issues of simple automated algorithms, and even the moral implications of human-level AI, may prove to be short-term concerns. Humanity as a species has sought better living standards through economic growth and technological progress. It seems reasonable to assume that AI will also seek to improve its own condition.

Advanced AI capable of self-improvement could trigger an intelligence explosion, advancing far beyond the level of human intelligence, an event known as the singularity.

There exists a strong possibility that AI will, by modifying and improving itself, advance beyond the capabilities of even the most intelligent humans. Such superintelligence may arrive sooner rather than later, perhaps within the next century.

In the same survey of AI experts, respondents estimated a 75% chance that superintelligence will exist within 30 years after the advent of AGI.

What code of ethics will superintelligence abide by? Will it afford humanity the same rights and protections that it currently enjoys? Will it evolve to such a state of superiority that it sees humans in the same way as people look upon insects today?

Nearly a third (31%) of the surveyed AI experts think that the long-term effects on humanity of advanced AI will be negative, with 13% saying it will be on balance bad and 18% thinking it will be catastrophic for humanity.

Superintelligent AI could very well present an existential threat to humanity. Academics and business leaders like Stephen Hawking, Bill Gates and Elon Musk have warned of potentially dangerous outcomes of AI. Geoffrey Hinton, considered by some to be the “Godfather of Deep Learning,” has observed that “there is not a good track record of less-intelligent things controlling things of greater intelligence.”

Some see superintelligent AI as part of the natural evolution of our species to its next stage of development.

But a grim AI future is not necessarily preordained. Indeed, most surveyed respondents (52%) think AI will have a good or extremely good long-term effect on humanity. Perhaps instead of a dystopian future in which humanity is ignored, constrained, segregated or even eliminated by superintelligent AI, the human species advances together with it. Along the path of its ascension, AI might speed development of technologies like microchip neural interfaces, which could allow humans to merge minds and bodies with AI to create a whole that is greater than the sum of its parts.

Promoting friendly AI

AI is already functioning as a beneficial tool in many sectors of society, and its advance continues unabated. A record $19.2 billion was invested in AI startups globally across more than 1,600 deals in 2018 alone, according to CB Insights. And that’s not counting the majority of AI research, which takes place at universities and at large corporations like Google and IBM.

In light of AI progress and the technology’s great potential for both benefits and dangers, it is prudent to consider how we can ensure positive AI outcomes instead of negative ones.

Fortunately, several organizations are working to make certain that current and future AI remains beneficial for humanity. Non-profit groups like the Machine Intelligence Research Institute and OpenAI are performing research and building tools in pursuit of safe and friendly AI.

OpenAI’s GPT-2 language AI is so good at writing synthetic text that the organization chose not to release it to the public due to the danger that malicious parties will use it for ill purposes. Source: OpenAI

Ironically, OpenAI considers one of its most recent tools too dangerous to release to the public. The organization’s GPT-2 language AI is so good at generating realistic text that the organization worries bad actors would use it for malicious purposes, like releasing fake news stories or impersonating others.

The Future of Life Institute has held two conferences bringing together leading AI researchers and experts in ethics, philosophy, economics and law to discuss AI issues. The most recent Asilomar conference established 23 principles to guide AI development, including recommendations for how research should be conducted and guidelines around ethics and values.

IEEE launched the Global Initiative on Ethics of Autonomous and Intelligent Systems, which is developing several global standards around ethical AI. In 2019, the initiative released the first edition of a treatise called Ethically Aligned Design, aimed at placing ethical issues at the top of technologists’ list of priorities.

AI is being developed and deployed in an effort to harness its immense potential to improve daily lives. But as AI becomes increasingly responsible for decisions affecting society and advances to a level of intelligence on par with and even surpassing that of humans, the risk of unintended consequences exists. A positive outcome is not guaranteed. But the chances of a good result may well increase if engineers focus on aligning AI with ethical and moral values.

Further reading

Nick Bostrom and Eliezer Yudkowsky (2018). “The Ethics of Artificial Intelligence,” in Artificial Intelligence Safety and Security. Chapman and Hall. Previously published in The Cambridge Handbook of Artificial Intelligence (2014).

J. Torresen (2018). “A Review of Future and Ethical Perspectives of Robotics and AI.” Frontiers in Robotics and AI 4:75. doi: 10.3389/frobt.2017.00075


Discussion
Re: AI ethics as a new intelligence comes to life
#1
2019-Apr-26 9:14 AM

"But the chances of a good result may well increase if engineers focus on aligning AI with ethical and moral values." Engineers produce the product they are paid to produce. It is Ownership, the people who sign the checks, who must focus on ethical and moral values.

