What if AI Fails?
Despite artificial intelligence research dating back to the 1940s, we still know very little about these technologies. Of course, the AI of that era was far different from what we call AI today; think IBM Watson, self-driving cars, and the like. AI has certainly come a long way, but our understanding of it has not kept pace. In fact, we know so little that we're often unsure how to fix it when things go wrong. When an AI technology needs debugging, technologists aren't quite sure what to do. After all, "debugging is based on understanding," so debugging something one doesn't truly understand is nearly impossible.
To complicate the issue even further, bugs in AI technologies may not make themselves apparent until they've already done damage. With AI infiltrating our lives at a rapid pace and in drastic ways, it's important that we identify bugs and fix them quickly to avoid harm. Imagine a self-driving car with an unidentified or misunderstood bug that causes it to crash. It is imperative that we work on this debugging problem before AI is allowed to make fatal mistakes. How cautious should we be with AI in the meantime? Do you agree with those who say we should halt the use of autonomous cars after Uber's crash in Arizona?