Commentary

A Busy Week for AI Ethics

05 June 2018

When it comes to ethical arguments about technology, AI is some of the lowest-hanging fruit. Artificially intelligent systems are designed to act like, or in place of, actual human beings, and ethical conduct matters to real humans. Ergo, we have AI ethics.

This week has seen a proliferation of news about AI ethics — let’s dig in.

Google Exits Project Maven

On June 1, Google announced that it will not renew its contract with the U.S. Department of Defense's Project Maven, which uses AI to identify people and objects in drone footage. The tech behemoth's decision to join the DoD project months earlier prompted about a dozen employees to resign, and roughly 4,000 workers signed a petition asking the company to cancel the contract. The contract itself doesn't expire until next year, so Google will continue to work on Project Maven until then.

Google's exit from the military surveillance project is an interesting twist for a company whose historical motto was "Don't be evil." (It has since been watered down to "Do the right thing.") That said, something of an AI arms race is still afoot among Russia, China, the United States and other world military powers. The Department of Defense's budget for AI, big data and cloud computing is steadily increasing. Last month, Gizmodo reported on a two-year, $10 billion DoD cloud computing contract that is tantalizing even to the biggest tech giants. (If you're horrified at the thought of military secrets and data floating around in the cloud, you're not alone.)

All signs point to AI being the future of both military research and most other areas of tech-driven life, so other tech companies like Amazon, Microsoft and Oracle will likely have to weigh their ideologies against lucrative military contracts in the future. And they’ll have to clearly define what an “evil” business practice actually is, although that’s a topic for another article.

MIT Creates a Psychopathic AI

MIT has developed a "psychopathic" AI, dubbed "Norman" after the villainous Norman Bates of Alfred Hitchcock's 1960 horror film "Psycho." Norman was officially announced on April 1 of this year and received minimal coverage, presumably because the announcement coincided with April Fools' Day.

It turns out that Norman is a real, semi-serious project. MIT researchers fed Norman content from "the darkest corners of Reddit" and used him as a case study of an AI system trained on biased or otherwise bad data. After training, Norman was asked to caption Rorschach inkblots, and his responses were compared with those of a "normally trained" captioning AI. As expected, Norman produced some pretty dark captions: where the standard AI saw "a person is holding an umbrella in the air," Norman saw "man is shot to death in front of his screaming wife." The point of the exercise is that training data plays a strong role in an AI's worldview and ultimate behavior.
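
To make the idea concrete, here's a minimal toy sketch in Python (my own illustration, not MIT's actual method, and the two stand-in corpora below are invented): two copies of the same trivial bigram text generator are trained on a "neutral" corpus and a "dark" corpus, and identical code produces very different captions.

    import random
    from collections import defaultdict

    # Toy illustration only: the same bigram generator, trained on
    # different data, yields very different "captions."

    def train_bigrams(corpus):
        """Build a bigram table mapping each word to the words that follow it."""
        table = defaultdict(list)
        for sentence in corpus:
            words = sentence.lower().split()
            for current_word, next_word in zip(words, words[1:]):
                table[current_word].append(next_word)
        return table

    def generate(table, start, max_words=8, seed=0):
        """Generate a short caption by randomly walking the bigram table."""
        rng = random.Random(seed)
        words = [start]
        for _ in range(max_words - 1):
            followers = table.get(words[-1])
            if not followers:
                break
            words.append(rng.choice(followers))
        return " ".join(words)

    # Invented stand-in corpora for illustration.
    neutral_corpus = [
        "a person is holding an umbrella in the park",
        "a person is flying a kite near the lake",
        "a bird is sitting on a branch in the sun",
    ]
    dark_corpus = [
        "a person is trapped in a burning building",
        "a person is lost in a dark abandoned house",
        "a shadow is lurking in a silent empty street",
    ]

    normal_model = train_bigrams(neutral_corpus)
    norman_model = train_bigrams(dark_corpus)

    print("normal:", generate(normal_model, "a"))
    print("norman:", generate(norman_model, "a"))

Architecture held constant, the training data alone determines what each model "sees": that is Norman's lesson in miniature.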

While it's tempting to believe that MIT researchers created Norman purely for legitimate study of a deranged AI, I suspect they're having some light-hearted AI fun as well. The same group created Nightmare Machine, a deep learning program that morphs ordinary photos of faces and places into horror-tinged images, in 2016. Last year, they debuted Shelley, an AI horror writer trained on scary stories from the /nosleep subreddit, much as Norman was.

The concept of training an AI on less-than-desirable human traits reminded me of an iconic scene in the sci-fi film "The Fifth Element," in which the humanoid savior-of-the-world Leeloo learns the ins and outs of human civilization by watching a rapid succession of images in alphabetical order. When she reaches "War," her rosy view of humanity is shattered and she sheds very real human tears. Perhaps AI will somehow follow the same path.

Stay Tuned for Official Google AI Principles

Finally, spurred by the controversy surrounding its Project Maven involvement, Google CEO Sundar Pichai is due to release a set of AI principles this week. Google will then join the Future of Life Institute, IEEE, the Japanese Society for Artificial Intelligence and other organizations that have published ethical principles for future AI development. While this is purely conjecture at this point, the principles will likely address what counts as weaponized AI and the company's future plans for its development.


