Masks and fills are nothing new in digital art, but Nvidia’s application of generative adversarial networks, or GANs, could change how we create images.
The concept was introduced only in 2014 by Ian Goodfellow, then a Ph.D. student and now a research scientist at Google. Instead of training a single neural network, which requires large amounts of data and labor, researchers pit a second, competing neural network against the first: one network generates candidate images while the other judges them, and each improves by trying to outdo the other. Nvidia founder and CEO Jensen Huang described the concept:
“After training, what you end up with is a network that is able to paint like Picasso, and you have another network that is able to recognize images and paintings at an unheard-of level of discrimination.”
Nvidia’s prototype lets users draw a simple segmentation map of what they want to create, and an AI trained on more than a million images fills in the labeled segments accordingly. The image-generating network is continuously challenged by the competing discriminator, so the tool learns realism, picking up the shadows, reflections, and patterns present in the images it trains on.
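The generator-versus-discriminator dynamic described above can be sketched in a few lines of code. This is a toy illustration, not Nvidia’s model: here the “generator” is just a 1-D affine map producing fake numbers, the “discriminator” is a single logistic unit, and both are trained with hand-derived gradients on the standard GAN losses. All names and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: samples centered at 4.0.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator G(z) = a*z + b turns random noise into fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0

lr, batch = 0.03, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean((d_real - 1) * x_real + d_fake * x_fake)
    c -= lr * np.mean((d_real - 1) + d_fake)

    # Generator update: push D(fake) toward 1 (non-saturating loss),
    # i.e. learn to fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g = (d_fake - 1) * w  # gradient of the generator loss w.r.t. x_fake
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

# The generator started producing samples near 0; adversarial pressure
# should have pulled it toward the real data's mean of 4.0.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"generated mean after training: {fake_mean:.2f} (real mean is 4.0)")
```

The same alternating update, scaled up to deep convolutional networks and image data, is what lets Nvidia’s tool turn labeled segments into photorealistic textures.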
The tool isn’t available to the public yet, but Nvidia suggests it could save time in urban planning, architecture, landscape development, and game design. More importantly, GANs in general are becoming a promising way to generate synthetic training data for future AI work in strictly regulated fields such as medicine, where access to real data is limited.
What other research could benefit from the use of GANs?
Could AI disrupt the digital creative industry?