A couple of weeks after my summer project studying Generative Adversarial Networks (GANs), I learned about two cool art programs: Google DeepDream and ArtBreeder. Both of these programs build on the GAN and neural-net ideas that I discussed before.
(Read my post about Creative Computers where I talk about GANs first! - https://www.artistry4happiness.org/post/creative-computers)
Let's look at Google DeepDream first. DeepDream explores neural nets through overprocessed, trippy images.
This program uses neural nets to find common archetypes in the noise of images (for example, dogs in a picture of spaghetti).
DeepDream then alters the original image to look more like that archetype. So, how exactly does it work?
A neural network is trained with millions of pictures of, in this case, dogs.
The network learns how to identify parts of dogs in images, using layers of processing (layers for color, edges, and so on) to essentially "learn" what a dog looks like.
DeepDream then recreates the object within a specified image: if the network sees potential features of a dog in spaghetti, each layer accentuates those features to ultimately generate dogs.
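The amplification step above can be sketched with a toy example (everything here, including the linear "dog detector," is my own simplification rather than Google's actual code): the image is repeatedly nudged in whatever direction makes a feature detector respond more strongly, which is the gradient-ascent idea at the heart of DeepDream.

```python
import numpy as np

def deepdream_step(image, feature, lr=0.1):
    """One gradient-ascent step: nudge the image so the toy linear
    'dog detector' responds more strongly. Real DeepDream does this
    with the activations of a deep convolutional layer instead."""
    grad = feature  # d(activation)/d(image) when activation = sum(image * feature)
    return image + lr * grad

rng = np.random.default_rng(0)
image = rng.random((8, 8))    # stand-in for the input photo
feature = rng.random((8, 8))  # stand-in for a learned "dog" feature

before = float(np.sum(image * feature))  # detector response on the original
for _ in range(10):
    image = deepdream_step(image, feature)
after = float(np.sum(image * feature))   # response after amplification
# `after` ends up larger than `before`: the patterns the network "saw"
# have been exaggerated, which is what makes dog shapes emerge.
```

The real program backpropagates through the whole network and repeats this at several image scales, but the core move is exactly this: climb the gradient of a layer's activation.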
Here are some more pictures of DeepDream:
The next program that I learned about is called ArtBreeder (it does exactly what it sounds like). This program uses GANs to develop art images.
This program takes elements from each of the original pictures and incorporates them into a computer-generated picture, just like biological breeding: components of each parent mesh together to form a child with elements of both. This specific example is one of the most basic, with equal influence of content and style from each painting.
Here's how ArtBreeder works:
- It is structured as though there are parents and their offspring.
- The "parent generation" is the set of original images that you are combining.
- The resulting pictures are known as the "children generation."
- After the parents have "crossbred," you are able to edit the genes of the images.
- You can choose the prevalence of style or of content, how sharp the edges are, how large the ground is, and how much of a particular color is present.
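Conceptually, the "crossbreeding" and "gene editing" steps operate on the GAN's latent vector, the list of numbers the generator turns into a picture. Here is a minimal numpy sketch of that idea (the vector size, the equal 50/50 blend, and the notion of one dimension controlling something like sharpness are illustrative assumptions, not ArtBreeder's actual internals):

```python
import numpy as np

def crossbreed(parent_a, parent_b, alpha=0.5):
    """Blend two latent 'gene' vectors; alpha weights parent_a.
    A GAN generator would then render the blended vector as the child image."""
    return alpha * parent_a + (1 - alpha) * parent_b

def edit_gene(latent, index, delta):
    """'Gene editing': nudge a single latent dimension (imagine one that
    happens to control sharpness or color) by delta."""
    edited = latent.copy()
    edited[index] += delta
    return edited

rng = np.random.default_rng(1)
parent_a = rng.normal(size=16)  # latent code behind painting A
parent_b = rng.normal(size=16)  # latent code behind painting B

child = crossbreed(parent_a, parent_b)         # equal influence from each parent
tweaked = edit_gene(child, index=3, delta=0.5) # small edit to one "gene"
```

Small deltas matter here, which matches what you see in the editor: a little goes a long way.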
Here's a video of me editing an ArtBreeder image using each of their features. As you can see, a little goes a long way.
The technology behind ArtBreeder was inspired by NeuroEvolution of Augmenting Topologies (NEAT)-based genetic art:
- It was among the first approaches to generate images that look like known objects, relating them to the world.
- It uses a process called complexification: neural networks become larger and more complex as they evolve.
- It can start with a simple image and a general idea.
- It represents patterns of familiar features.
This is an example of a graph of what the network is doing. As you can see, some features remain as the network branches out, and some don't. Either way, the network is growing more and more complex: complexification.
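The branching in that graph comes from NEAT's structural mutations. The core one can be sketched in a few lines (a toy genome, not the real NEAT implementation): an existing connection is split by inserting a new hidden node, so the network gains nodes and connections with each generation.

```python
import random

class TinyNeatGenome:
    """Minimal sketch of NEAT-style complexification: a genome starts
    small and gains structure as it evolves."""
    def __init__(self):
        self.nodes = ["in", "out"]
        self.connections = [("in", "out")]

    def add_node(self):
        """Split a random existing connection by inserting a hidden node;
        this is the complexification mutation that makes networks grow."""
        src, dst = random.choice(self.connections)
        hidden = f"h{len(self.nodes)}"
        self.nodes.append(hidden)
        self.connections.remove((src, dst))
        self.connections.extend([(src, hidden), (hidden, dst)])

random.seed(0)                 # for a reproducible toy run
genome = TinyNeatGenome()
for _ in range(3):             # three generations of structural mutation
    genome.add_node()
# Each mutation removes one connection and adds two, so the network
# ends up strictly larger than it started: complexification.
```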
These are just some of the many art and AI programs out there, and my explanations of their processes barely scratch the surface.
A couple of weeks after my summer project studying Generative Adversarial Networks (GANs), I found out about a different algorithm known as Creative Adversarial Networks (CANs). This algorithm attempts to build on GANs to develop novel art pieces.
We can call them GANs with a twist:
- The generator in a CAN attempts to create works of art that do not fit into any specific genre of art, as evaluated by the discriminator.
- But the discriminator must still recognize the pieces as real images.
This algorithm could be the future of these art programs, as it does a very good job of mimicking human creativity. But will it ever be perfect?