AI has been taking the world by storm. Whether it is detecting cyber attacks, classifying customers, or understanding, processing, and generating natural language, artificial neural networks are among the most exciting technologies in today’s rapidly changing world. Within the next decade, the technology is expected not only to tag you in your friend’s Facebook pictures through facial recognition, but also to play a fundamental role in autonomous driving and eventually to automate airport security scans entirely, as machines will be able to classify the items in your luggage more quickly and accurately than any human.
What if I told you that one of the most powerful engines of this technology, Google’s InceptionV3 image recognition engine, is 99% confident that the cat on the left is actually guacamole?
By altering individual pixels in ways that do not change how humans perceive the image, researchers tricked the AI into classifying images as virtually any object they wanted. As AI becomes more prevalent in our daily lives and underpins innovations such as self-driving cars, image classification engines that hallucinate and can be tricked into misclassifying stop signs, or even bikers and pedestrians, pose a huge security risk. More importantly, the leaders in AI innovation have no reliable way to prevent such attacks, because the algorithms operate as black boxes. “Researchers have essentially created artificially intelligent systems that ‘think’ in different ways than humans do, and no one is quite sure how they work.” (Matsakis, 2017) Yet it will be crucial to find better methods to determine how machines learn, and to ensure that human intelligence does not abuse its cognitive power to trick AI into thinking that rifles are helicopters, skiers are dogs, and cats are guacamole.
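To make the pixel-altering attack concrete, here is a minimal sketch of the fast gradient sign method, a simple gradient-based attack, applied to a toy logistic-regression “classifier.” This is an illustration only, not the exact technique used against InceptionV3 (which relied on iterative optimization against the full network); the weights, input image, and perturbation size below are all hypothetical:

```python
import numpy as np

# Toy stand-in for an image classifier: logistic regression on a
# flattened 4x4 "image" with pixel values in [0, 1].
rng = np.random.default_rng(0)
w = rng.normal(size=16)          # hypothetical learned weights
x = rng.uniform(0, 1, size=16)   # hypothetical input image

def predict(img):
    """Confidence the model assigns to the true class (e.g. 'cat')."""
    return 1.0 / (1.0 + np.exp(-w @ img))

# Fast gradient sign method: nudge every pixel by a tiny epsilon in the
# direction that increases the loss for the true class. For this model
# the input gradient points along w, so the attack step is -sign(w).
epsilon = 0.05
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print(predict(x))      # confidence on the original image
print(predict(x_adv))  # lower confidence after an imperceptible change
```

Because no pixel moves by more than epsilon, the perturbed image looks identical to a human, yet the model’s confidence in the true class drops; against deep networks, repeated steps of this kind can push the prediction to an arbitrary target label.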
Matsakis, Louise. “Researchers Make Google AI Mistake a Rifle For a Helicopter.” Wired, Conde Nast, 21 Dec. 2017, www.wired.com/story/researcher-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter/.
Simonite, Tom. “AI Has a Hallucination Problem That’s Proving Tough to Fix.” Wired, Conde Nast, 12 Mar. 2018, www.wired.com/story/ai-has-a-hallucination-problem-thats-proving-tough-to-fix/.
“Fooling Neural Networks in the Physical World.” Labsix, www.labsix.org/physical-objects-that-fool-neural-nets/.