What if something could be created that could “trick” machines into thinking they were “seeing” something else? Artificial neural networks are increasingly part of everyday life -- think of Google’s Photos app, Skype’s translation function, and Microsoft’s Cortana. Neural networks are also used in technologies such as self-driving cars, where they watch for and recognize objects, and in the near future these systems could be employed to help identify explosives in security lines. The ability to fool such a system could therefore have serious consequences, and understanding how that could be accomplished is imperative in order to prevent its occurrence.
(Image: This tabby cat was misidentified as guacamole by Google’s InceptionV3 image classifier. Photo credit: MIT CSAIL.)
In a new paper, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) revealed a method of producing real-world 3D objects that consistently fool neural networks. The team demonstrated that a gun could be presented to a neural network without being identified as such, and that by changing an object’s texture only slightly, a bomb could be made to register as, for example, a tomato. In a worst-case scenario, the technique could render an object effectively invisible to the system. In the study, a 3D-printed toy turtle was misclassified as a rifle and a baseball was identified as an espresso, regardless of the viewing angle.
Repositioning an object often helps a classifier identify it correctly; this recent study shows that “adversarial examples” can be produced that remain effective no matter how the object is repositioned. Although there is no evidence that this kind of manipulation is taking place today, the safety of self-driving cars and other systems that rely on neural networks depends on continued research in this area.
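The core idea can be sketched in a few lines of code. The snippet below is a minimal, illustrative approximation and not the authors’ implementation: instead of optimizing a perturbation for a single view of an image, it optimizes one that fools a pretrained InceptionV3 classifier on average across random rotations and shifts, a 2D stand-in for the 3D viewpoint changes the CSAIL team handles. The model choice, transformation ranges, and hyperparameters are assumptions made for illustration only.

import random
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

# Pretrained ImageNet classifier; its weights are frozen, only the perturbation is optimized.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

def random_transform(x):
    # Apply a random rotation and translation, simulating a change of viewpoint.
    angle = random.uniform(-30.0, 30.0)
    dx, dy = random.randint(-16, 16), random.randint(-16, 16)
    return TF.affine(x, angle=angle, translate=(dx, dy), scale=1.0, shear=0.0)

def robust_adversarial(image, target_class, steps=200, eps=0.05, lr=0.01, samples=8):
    # `image` is assumed to be a normalized (3, 299, 299) tensor.
    # Perturb it so that, averaged over random transformations, the classifier
    # assigns it to `target_class` while the change stays visually small.
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(samples):                    # Monte Carlo estimate of the
            view = random_transform(image + delta)  # expectation over transformations
            logits = model(view.unsqueeze(0))
            loss = loss + F.cross_entropy(logits, torch.tensor([target_class]))
        opt.zero_grad()
        (loss / samples).backward()
        opt.step()
        delta.data.clamp_(-eps, eps)                # bound the perturbation size
    return (image + delta).detach()

Because the perturbation is trained against many random views at once rather than a single fixed view, it tends to survive repositioning -- the property the study demonstrates with physical 3D-printed objects.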
The paper is now under review for the 2018 International Conference on Learning Representations.