Artificial intelligence has seen a massive wave of advancements in recent years, and the technology has become powerful enough that it is worth pondering its possible ramifications. One area of rapid progress is facial generation software. Created by Philip Wang, a software engineer at Uber, the website ThisPersonDoesNotExist.com uses research published by Nvidia to create pictures of people. What makes this technology impressive is that none of the pictures are of real people; every face is created entirely by the AI. Using a type of neural network known as a generative adversarial network (GAN), the software learns to produce these faces from a huge database of real examples: one network generates candidate images while another tries to tell them apart from real photos, and the two improve by competing with each other. This type of image generation is even being used to create art and new fonts. As seen above, the faces on the left were generated in 2014 and the faces on the right in 2018, which shows just how quickly this technology is evolving.
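To make the adversarial idea concrete, here is a minimal toy sketch of a GAN in numpy. This is purely illustrative and nothing like the large convolutional networks Nvidia actually uses: the "data" are just numbers drawn from a normal distribution standing in for real photos, and the generator and discriminator are tiny linear models with hand-derived gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def generator(z, w):
    # Generator: turns random noise z into a "fake" sample.
    return w[0] * z + w[1]

def discriminator(x, v):
    # Discriminator: estimated probability that x is a real sample.
    return sigmoid(v[0] * x + v[1])

# "Real" data: a normal distribution centred at 4 (a stand-in for real photos).
real = rng.normal(4.0, 0.5, size=5000)

w = np.array([1.0, 0.0])  # generator parameters (toy model)
v = np.array([0.0, 0.0])  # discriminator parameters (toy model)
lr = 0.02

for step in range(5000):
    z = rng.normal(size=64)
    fake = generator(z, w)
    xr = rng.choice(real, size=64)

    # Train the discriminator: push D(real) toward 1 and D(fake) toward 0.
    dr, df = discriminator(xr, v), discriminator(fake, v)
    grad_v0 = np.mean((1 - dr) * xr) - np.mean(df * fake)
    grad_v1 = np.mean(1 - dr) - np.mean(df)
    v = v + lr * np.array([grad_v0, grad_v1])  # ascent on D's objective

    # Train the generator: push D(fake) toward 1, i.e. fool the discriminator.
    df = discriminator(generator(z, w), v)
    grad_w0 = np.mean((1 - df) * v[0] * z)
    grad_w1 = np.mean((1 - df) * v[0])
    w = w + lr * np.array([grad_w0, grad_w1])

fake_mean = generator(rng.normal(size=10000), w).mean()
print(fake_mean)  # the generator's output drifts toward the real data
```

The generator starts out producing samples centred at 0, but because it is rewarded only for fooling the discriminator, its output distribution is gradually pulled toward the real data. The same competition, scaled up to millions of parameters and image-shaped outputs, is what drives the faces on the site.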
As with any new technology, the possible implications need to be considered. There has already been immense controversy over social media and the legitimacy of online profiles: how is a platform supposed to distinguish between a real user and a bot made to spread propaganda about a politician? If AI can create these seemingly real faces, fake accounts that use them will be far more successful at passing as genuine people online. However, this technology is still far from perfect. Refresh the site long enough and you will see some pretty strange-looking faces that don't quite resemble a real person. Even so, it's only a matter of time before this technology becomes so good that we can't tell the difference.
What other possible ramifications could this technology have? Do you think platforms should require some kind of authentication to prove that a user is real?