Keras-based Deepdream experiment based on VGG19
I’ve just published a repository (https://github.com/giuseppebonaccorso/keras_deepdream) with a Jupyter notebook containing a Deepdream (https://github.com/google/deepdream) experiment created with Keras and a pre-trained VGG19 convolutional network. The experiment (still a work in progress) is based on some suggestions provided by the Deepdream team in their blog post, but it works in a slightly different way: I use a Gaussian pyramid and average the rescaled result of each layer with the next one. A total variation loss could also be employed, but after some experiments I preferred to remove it because of its blurring effect.
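The pyramid-and-blend idea can be sketched without a network: downscale the image into octaves, then upscale each processed octave and average it with the next larger one. The following is a minimal NumPy/SciPy sketch of that scheme; the function names, the `scale` factor, and the blending weight `alpha` are my own illustrative choices, not the exact values used in the notebook.

```python
import numpy as np
from scipy.ndimage import zoom

def build_gaussian_pyramid(image, n_octaves=4, scale=1.4):
    """Return a list of progressively downscaled copies, smallest first.

    image: float array of shape (height, width, channels).
    """
    pyramid = [image]
    for _ in range(n_octaves - 1):
        prev = pyramid[-1]
        # Rescale height and width only; keep the channel axis untouched
        pyramid.append(zoom(prev, (1.0 / scale, 1.0 / scale, 1.0), order=1))
    return pyramid[::-1]

def blend_octaves(pyramid, alpha=0.6):
    """Upscale each octave and average it with the next larger one.

    In the real experiment each octave would first be modified by
    gradient ascent on a VGG19 layer activation; here we only show
    the rescale-and-average step.
    """
    blended = pyramid[0]
    for larger in pyramid[1:]:
        h, w = larger.shape[:2]
        up = zoom(blended,
                  (h / blended.shape[0], w / blended.shape[1], 1.0),
                  order=1)
        blended = alpha * larger + (1.0 - alpha) * up
    return blended
```

In the actual Deepdream loop, the gradient-ascent step on a chosen layer would run on each octave before blending, so details discovered at coarse scales are carried up to finer ones.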
Here are some examples obtained with different layer settings and numbers of iterations:
It’s possible to create amazing videos by repeatedly zooming into the same image. This is an example created with 1500 frames:
This video was created with the notebook at https://github.com/giuseppebonaccorso/keras_deepdream, a Deepdream experiment built with Keras and a VGG19 convolutional network, following (with some modifications) the suggestions provided by the Deepdream team in http://googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html.
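The zoom effect itself is simple: each frame crops the centre of the previous (dreamed) frame and rescales the crop back to the original size before feeding it through the network again. A minimal sketch of that per-frame step, assuming a float image array and an illustrative `zoom_factor` of my own choosing:

```python
import numpy as np
from scipy.ndimage import zoom as nd_zoom

def next_zoom_frame(frame, zoom_factor=1.02):
    """Crop the centre of the frame and rescale it back to full size.

    frame: float array of shape (height, width, channels).
    Applying this once per frame (after the Deepdream pass) produces
    the continuous zoom-in effect seen in the video.
    """
    h, w = frame.shape[:2]
    ch, cw = int(h / zoom_factor), int(w / zoom_factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    # Bilinear upscaling back to the original resolution
    return nd_zoom(crop, (h / ch, w / cw, 1.0), order=1)
```

Iterating this function 1500 times, dreaming on each intermediate frame, yields a video of the length shown above.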
Artistic style transfer using neural networks is a technique proposed by Gatys, Ecker, and Bethge in the paper "A Neural Algorithm of Artistic Style" (arXiv:1508.06576 [cs.CV]). It exploits a trained convolutional network to reconstruct the elements of a picture while adopting the artistic style of a particular painting.
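In that paper, style is captured through Gram matrices of convolutional feature maps: the generated image is optimized so that its Gram matrices match those of the painting. A minimal NumPy sketch of the style loss for a single layer follows; the function names and the normalization constant are as in the paper's formulation, but this is an illustrative fragment, not the full optimization loop.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (height, width, channels) activation map.

    Entry (i, j) is the correlation between filter channels i and j,
    which encodes texture/style information independent of layout.
    """
    c = features.shape[-1]
    f = features.reshape(-1, c)
    return f.T @ f

def style_loss(style_feat, generated_feat):
    """Squared Frobenius distance between Gram matrices, normalized
    by (2 * channels * spatial_size)^2 as in Gatys et al."""
    s = gram_matrix(style_feat)
    g = gram_matrix(generated_feat)
    c = style_feat.shape[-1]
    size = style_feat.shape[0] * style_feat.shape[1]
    return np.sum((s - g) ** 2) / (4.0 * c ** 2 * size ** 2)
```

The full method sums this loss over several layers, adds a content loss on higher-layer activations, and runs gradient descent on the pixels of the generated image.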