
# Neural artistic style transfer


Based on: Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, "A Neural Algorithm of Artistic Style", arXiv:1508.06576
See also: https://github.com/fchollet/keras/blob/master/examples/neural_style_transfer.py
See some examples at: https://www.bonaccorso.eu/2016/11/13/neural-artistic-style-transfer-experiments-with-keras/

## Usage

There are three possible canvas setups:

  • Picture: The canvas is filled with the original picture
  • Style: The canvas is filled with the style image (resized to match picture dimensions)
  • Random from style: The canvas is filled with a random pattern generated starting from the style image
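
The "random from style" option can be approximated by noise whose statistics match the style image. A minimal NumPy sketch of that idea (the function name `random_from_style_canvas` and the per-channel Gaussian approach are illustrative assumptions, not the exact implementation in `neural_styler.py`):

```python
import numpy as np

def random_from_style_canvas(style, seed=0):
    """Gaussian noise whose per-channel mean/std match the style image."""
    rng = np.random.default_rng(seed)
    mean = style.mean(axis=(0, 1))   # per-channel mean
    std = style.std(axis=(0, 1))     # per-channel standard deviation
    noise = rng.normal(mean, std, size=style.shape)
    return np.clip(noise, 0.0, 255.0).astype(np.float32)

style = np.random.uniform(0.0, 255.0, (128, 128, 3))
canvas = random_from_style_canvas(style)
```

Starting from such a canvas, the optimizer keeps the style statistics roughly in place while the picture loss pulls the content in.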

Some usage examples (valid for both VGG16 and VGG19):

  • Picture and style over a random canvas: `canvas='random_from_style', alpha_style=1.0, alpha_picture=0.25, picture_layer='block4_conv1'`
  • Style over picture: `canvas='picture', alpha_style=0.0025, alpha_picture=1.0, picture_layer='block4_conv1'`
  • Picture over style: `canvas='style', alpha_style=0.001, alpha_picture=1.0, picture_layer='block5_conv1'`

For a mix of style transfer and deepdream generation, see the examples below.

## Code snippets

```python
neural_styler = NeuralStyler(picture_image_filepath='img\\GB.jpg',
                             style_image_filepath='img\\Magritte.jpg',
                             destination_folder='\\destination_folder',
                             alpha_picture=0.4,
                             alpha_style=0.6,
                             verbose=True)

neural_styler.fit(canvas='picture', optimization_method='L-BFGS-B')
```

```python
neural_styler = NeuralStyler(picture_image_filepath='img\\GB.jpg',
                             style_image_filepath='img\\Magritte.jpg',
                             destination_folder='\\destination_folder',
                             alpha_picture=0.25,
                             alpha_style=1.0,
                             picture_layer='block4_conv1',
                             style_layers=('block1_conv1',
                                           'block2_conv1',
                                           'block3_conv1',
                                           'block4_conv1',
                                           'block5_conv1'))

neural_styler.fit(canvas='random_from_style', optimization_method='CG')
```
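
The `optimization_method` values ('L-BFGS-B', 'CG') are method names for SciPy's minimizer, so the fitting loop presumably follows the standard `scipy.optimize.minimize` pattern. A self-contained sketch of that pattern, with a placeholder quadratic loss standing in for the actual composite style/picture loss (the names here are illustrative, not the repository's API):

```python
import numpy as np
from scipy.optimize import minimize

SHAPE = (8, 8, 3)  # tiny canvas for illustration

def loss_and_grads(x_flat):
    # Placeholder loss: pulls every pixel toward 0.5. The real code
    # would evaluate the Keras composite loss and its gradients here.
    x = x_flat.reshape(SHAPE)
    loss = float(np.sum((x - 0.5) ** 2))
    grads = (2.0 * (x - 0.5)).ravel()
    return loss, grads

x0 = np.random.default_rng(0).uniform(0.0, 1.0, SHAPE).ravel()
result = minimize(loss_and_grads, x0, method='L-BFGS-B',
                  jac=True, options={'maxiter': 100})
styled = result.x.reshape(SHAPE)
```

Swapping `method='L-BFGS-B'` for `'CG'` (or any other gradient-based SciPy method) changes only the optimizer, not the loss.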

## Examples

(With different settings and optimization algorithms)

  • Cezanne
  • Magritte
  • Dalì
  • Matisse
  • Picasso
  • Rembrandt
  • De Chirico
  • Mondrian
  • Van Gogh
  • Schiele

## Mixing style transfer and deep dreams

I'm still working on some experiments based on a loss function that tries to maximize the L2 norm of the last convolutional block (layers 1 and 2). I've excluded those layers from the style_layers tuple and tuned the parameters to render a "dream" together with the styled image. You can try the following snippet:

```python
# Dream loss function
dream_loss_function = -5.0 * K.sum(K.square(convnet.get_layer('block5_conv1').output)) - \
                       2.5 * K.sum(K.square(convnet.get_layer('block5_conv2').output))

# Composite loss function
composite_loss_function = (self.alpha_picture * picture_loss_function) + \
                          (self.alpha_style * style_loss_function) + \
                          dream_loss_function
```

The composite loss function isn't "free" to maximize the norm as in the Keras DeepDream example, because the MSE with the gramian terms forces the filters to resemble the style; nevertheless, it's possible to obtain interesting results. The following pictures show the famous Tübingen styled with a Braque painting and forced to render "random" elements (similar to animal heads and eyes), as in a dream:
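
The gramian terms mentioned above are Gram matrices of the style-layer activations: the style loss compares the Gram matrix of the canvas's features with that of the style image's features, following Gatys et al. A minimal NumPy sketch of that computation (illustrative, not the exact code in `neural_styler.py`):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of an (h, w, c) activation map."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # one row per spatial position
    return flat.T @ flat                # (c, c) matrix

def style_loss(style_features, canvas_features):
    """Normalized squared distance between the two Gram matrices."""
    h, w, c = style_features.shape
    diff = gram_matrix(style_features) - gram_matrix(canvas_features)
    return np.sum(diff ** 2) / (4.0 * (c ** 2) * ((h * w) ** 2))

rng = np.random.default_rng(0)
f_style = rng.normal(size=(16, 16, 8))
f_canvas = rng.normal(size=(16, 16, 8))
```

Because the Gram matrix discards spatial positions, this loss matches texture statistics rather than layout, which is exactly what constrains the dream loss from running free.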

This example, instead, has been created using a VGG19 with a Cezanne painting and:

```python
style_layers = ('block1_conv1',
                'block2_conv1',
                'block3_conv1',
                'block4_conv1',
                'block5_conv1',
                'block5_conv2')

# Dream loss function
dream_loss_function = -10.0 * K.sum(K.square(convnet.get_layer('block5_conv1').output)) - \
                        5.0 * K.sum(K.square(convnet.get_layer('block5_conv2').output))
```

(Original image by Manfred Brueckels - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6937538)

## Requirements

  • Python 2.7-3.5
  • Keras
  • Theano or TensorFlow
  • SciPy