Recommendations and Feedbacks
The vast majority of B2C services are quickly discovering the strategic importance of solid recommendation engines to improve conversion rates and establish stronger loyalty with their customers. The most common strategies are based on the segmentation of users according to their personal features (age range, gender, interests, social interactions, and so on) or to the ratings they gave to specific items. The latter approach normally relies on explicit feedback (e.g. a rating from 0 to 10) which summarizes the overall experience. Unfortunately, both cases have drawbacks. Personal data are becoming harder to retrieve, and the latest regulations (i.e. the GDPR) allow users to interact with a service without consenting to the collection of their data. Moreover, a reliable personal profile must be built using many attributes that are often hidden and can only be inferred using predictive models. Conversely, implicit feedback is easy to […]
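As a minimal illustration of the rating-based segmentation mentioned above (not part of the original post), the following NumPy sketch builds a toy user-item rating matrix and groups users by the cosine similarity of their rating vectors; all data and dimensions are hypothetical.

```python
import numpy as np

# Hypothetical explicit-feedback matrix: rows = users, columns = items,
# entries = ratings from 0 to 10 (0 also means "not rated" in this toy example)
ratings = np.array([
    [8, 0, 7, 9, 0],
    [7, 1, 6, 8, 0],
    [0, 9, 1, 0, 8],
    [1, 8, 0, 1, 9],
], dtype=np.float64)

def cosine_similarity(a, b):
    # Cosine similarity between two rating vectors (epsilon avoids division by zero)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Pairwise user-user similarity matrix
n_users = ratings.shape[0]
similarity = np.array([[cosine_similarity(ratings[i], ratings[j])
                        for j in range(n_users)]
                       for i in range(n_users)])

print(np.round(similarity, 2))
# Users 0-1 and 2-3 fall into two well-separated segments, which can then be
# used to recommend items that similar users rated highly.
```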
Word2Vec (https://code.google.com/archive/p/word2vec/) offers a very interesting alternative to classical NLP based on term-frequency matrices. In particular, as each word is embedded into a high-dimensional vector, it’s possible to consider a sentence as a sequence of points that determine an implicit geometry. For this reason, the idea of using 1D convolutional classifiers (usually very efficient with images) became a concrete possibility. As you know, a convolutional network trains its kernels so as to capture, initially, coarse-grained features (like the orientation) and, as the kernel size decreases, more and more detailed elements (like eyes, wheels, hands and so forth). In the same way, a 1D convolution works on one-dimensional vectors (in general, temporal sequences), extracting pseudo-geometric features. The rationale is provided by the Word2Vec algorithm: as the vectors are “grouped” according to a semantic criterion, so that two similar words have very close representations, a sequence can be […]
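As a minimal sketch of the idea (not the post’s actual code), the following Keras model stacks a trainable embedding layer and 1D convolutions over padded word-index sequences; vocabulary size, sequence length, embedding dimension, and the binary output are arbitrary placeholders.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Embedding, Conv1D,
                                     GlobalMaxPooling1D, Dense)

vocab_size = 10000     # hypothetical vocabulary size
max_len = 100          # padded sentence length
embedding_dim = 128    # dimensionality of the word vectors

model = Sequential([
    Input(shape=(max_len,)),
    # Each word index is mapped to a dense vector, so a sentence becomes
    # a (max_len, embedding_dim) "geometric" sequence of points
    Embedding(vocab_size, embedding_dim),
    # 1D convolutions slide along the word axis, extracting local
    # pseudo-geometric features from groups of adjacent word vectors
    Conv1D(64, kernel_size=5, activation='relu'),
    Conv1D(64, kernel_size=3, activation='relu'),
    GlobalMaxPooling1D(),
    Dense(1, activation='sigmoid')  # e.g. binary sentiment classification
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()

# Toy input: a batch of 2 padded sentences encoded as word indices
x = np.random.randint(0, vocab_size, size=(2, max_len))
print(model.predict(x).shape)  # -> (2, 1)
```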
Autoencoders are a very interesting deep learning application because they allow a consistent dimensionality reduction of an entire dataset with a controllable loss level. The Jupyter notebook for this small project is available in the GitHub repository: https://github.com/giuseppebonaccorso/lossy_image_autoencoder. The structure of a generic autoencoder is shown in the figure below. The encoder is a function that processes an input matrix (image) and outputs a fixed-length code. In this model, the encoding function is implemented using a convolutional layer followed by flattening and dense layers. The code is then fed into the decoder, which reconstructs a lossy version of the original image. The decoder is implemented using a deconvolutional (separable convolution) layer with 3 filters (one per channel). The model is trained by minimizing the L2 loss. For the experiment, I’ve used the CIFAR-10 dataset (https://www.cs.toronto.edu/~kriz/cifar.html), using only the training samples (50000 32 x 32 RGB images) and the Keras wrapper: from […]
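A minimal Keras sketch following the structure described above (convolution + flatten + dense encoder, dense + reshape + transposed-convolution decoder, L2 loss); it is not the notebook’s actual code: the code length, filter counts, and the use of Conv2DTranspose in place of the separable deconvolution are assumptions.

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import (Conv2D, Flatten, Dense, Reshape,
                                     Conv2DTranspose)
from tensorflow.keras.datasets import cifar10

code_length = 128  # hypothetical size of the fixed-length code

# Encoder: convolution -> flatten -> dense code
inputs = Input(shape=(32, 32, 3))
x = Conv2D(32, (3, 3), strides=2, padding='same', activation='relu')(inputs)
x = Flatten()(x)
code = Dense(code_length, activation='relu')(x)

# Decoder: dense -> reshape -> transposed convolution back to 3 channels
x = Dense(16 * 16 * 32, activation='relu')(code)
x = Reshape((16, 16, 32))(x)
outputs = Conv2DTranspose(3, (3, 3), strides=2, padding='same',
                          activation='sigmoid')(x)

autoencoder = Model(inputs, outputs)
# L2 (mean squared error) reconstruction loss, as described in the post
autoencoder.compile(optimizer='adam', loss='mse')

# Training samples only, rescaled to [0, 1]
(X_train, _), (_, _) = cifar10.load_data()
X_train = X_train.astype(np.float32) / 255.0

autoencoder.fit(X_train, X_train, epochs=1, batch_size=128)
```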
I’ve just published a repository (https://github.com/giuseppebonaccorso/keras_deepdream) with a Jupyter notebook containing a Deepdream (https://github.com/google/deepdream) experiment created with Keras and a pre-trained VGG19 convolutional network. The experiment (which is a work in progress) is based on some suggestions provided by the Deepdream team in this blog post, but works in a slightly different way. I use a Gaussian pyramid and average the rescaled results of a layer with the next one. A total variation loss could be employed too, but after some experiments, I’ve preferred to remove it because of its blurring effect. Some examples obtained with different settings in terms of layers and number of iterations: It’s possible to create amazing videos by zooming into the same image. This is an example created with 1500 frames: Deepdream animation with Keras and VGG19 (video created using the notebook https://github.com/giuseppebonaccorso/keras_deepdream).
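A minimal sketch of the core Deepdream step (not the notebook’s actual code, and without the Gaussian-pyramid averaging described above): gradient ascent on the mean activation of a chosen VGG19 layer. The layer name, step size, and random starting image are arbitrary assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG19

# Pre-trained VGG19 without the classification head
base = VGG19(weights='imagenet', include_top=False)
layer_name = 'block4_conv2'  # hypothetical choice of layer to "dream" on
dream_model = tf.keras.Model(base.input, base.get_layer(layer_name).output)

def dream_step(image, step_size=0.01):
    with tf.GradientTape() as tape:
        tape.watch(image)
        # Loss: mean activation of the chosen layer
        loss = tf.reduce_mean(dream_model(image))
    # Gradient ascent: move the image towards higher activations
    grads = tape.gradient(loss, image)
    grads /= tf.math.reduce_std(grads) + 1e-8
    return image + step_size * grads, loss

# Start from random noise (a real experiment would start from a preprocessed photo)
img = tf.convert_to_tensor(
    np.random.uniform(0, 1, (1, 224, 224, 3)).astype(np.float32))

for i in range(20):
    img, loss = dream_step(img)
    img = tf.clip_by_value(img, 0.0, 1.0)

print(float(loss))
```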
I’ve just moved my Keras-based Neural Artistic Style Transfer GIST to a dedicated repository: https://github.com/giuseppebonaccorso/Neural_Artistic_Style_Transfer. Please always refer to it, because the GIST is no longer maintained. See also: Neural artistic style transfer experiments with Keras – Giuseppe Bonaccorso.
Artistic style transfer using neural networks is a technique proposed by Gatys, Ecker and Bethge in the paper arXiv:1508.06576 [cs.CV], which exploits a trained convolutional network in order to reconstruct the elements of a picture while adopting the artistic style of a particular painting. I’ve written a Python program (available in the GitHub repository: https://github.com/giuseppebonaccorso/Neural_Artistic_Style_Transfer), based on Keras and VGG16/19 convolutional networks, that can be used to perform some experiments. In fact, considering the huge number of variables and parameters, this kind of problem is very sensitive to the initial conditions, and a different starting state can lead to different minima whose content doesn’t meet our requirements. In the script, it’s possible to choose among six initial canvas types (a minimal sketch of these initializations follows the list):
Random: RGB random pixels from a uniform distribution
Random from style: random pixels sampled from the painting
Random from picture: random pixels sampled from the picture
Style/Picture: painting or picture full […]
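A minimal NumPy sketch of the canvas initializations described above (not the script’s actual code; the function name, array shapes, and mode strings are hypothetical):

```python
import numpy as np

def initial_canvas(mode, picture, style, shape=(400, 400, 3)):
    # picture and style are RGB arrays with values in [0, 255]
    if mode == 'random':
        # RGB random pixels from a uniform distribution
        return np.random.uniform(0, 255, size=shape)
    if mode == 'random_from_style':
        # Random pixels sampled (with replacement) from the painting
        flat = style.reshape(-1, 3)
        idx = np.random.randint(0, flat.shape[0], size=shape[0] * shape[1])
        return flat[idx].reshape(shape).astype(np.float64)
    if mode == 'random_from_picture':
        # Random pixels sampled from the picture
        flat = picture.reshape(-1, 3)
        idx = np.random.randint(0, flat.shape[0], size=shape[0] * shape[1])
        return flat[idx].reshape(shape).astype(np.float64)
    if mode == 'style':
        # Start directly from the full painting
        return style.astype(np.float64).copy()
    if mode == 'picture':
        # Start directly from the full picture
        return picture.astype(np.float64).copy()
    raise ValueError('Unknown canvas mode: %s' % mode)

# Example usage with dummy images
picture = np.random.randint(0, 256, size=(400, 400, 3))
style = np.random.randint(0, 256, size=(400, 400, 3))
canvas = initial_canvas('random_from_style', picture, style)
print(canvas.shape)  # -> (400, 400, 3)
```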
(Updated on July 24th, 2017 with some improvements and Keras 2 style, but still a work in progress) CIFAR-10 is a small-image (32 x 32) dataset made up of 60000 images subdivided into 10 main categories. Check the web page in the reference list for further information and to download the whole set. Considering current screen resolutions, it’s not difficult to see that these images are no bigger than icons, and indeed some of them are very hard to classify even for human beings. Using Keras, I’ve modeled a deep convolutional network (VGGNet-like) in order to try a classification. I’m still investigating the best architecture (on the CIFAR home page there are very interesting references to papers and other results); however, I think it can be a good starting point. As the output is a softmax layer, it can also be interesting to evaluate mixed […]
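As an illustrative sketch (not the post’s actual architecture), a small VGGNet-like Keras model for CIFAR-10 could look like the following; the number of blocks, filters, and training settings are assumptions.

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

# VGG-style blocks: stacked 3x3 convolutions followed by max pooling
inputs = Input(shape=(32, 32, 3))
x = Conv2D(32, (3, 3), padding='same', activation='relu')(inputs)
x = Conv2D(32, (3, 3), padding='same', activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), padding='same', activation='relu')(x)
x = Conv2D(64, (3, 3), padding='same', activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
outputs = Dense(10, activation='softmax')(x)  # one probability per category

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Load CIFAR-10, rescale to [0, 1], and one-hot encode the labels
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=1, batch_size=64)
```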