“Do not worry about your difficulties in Mathematics. I can assure you mine are still greater.” (A. Einstein) Machine Learning is becoming more and more widespread and, day after day, new computer scientists and engineers take their first leap into this wonderful world. Unfortunately, the number of theories, algorithms, applications, papers, books, videos and so forth is so huge that it can disorient anyone who doesn’t have a clear picture of what they need to learn to improve their skills. In this short post, I want to share my experience and suggest a feasible path for learning the essential concepts quickly, so as to be ready to dig deeper into the most complex topics. Of course, this is only a personal proposal: every student can choose to dedicate more attention to the topics they find most interesting, based on their experience. Prerequisites Machine Learning is based on Mathematics. It’s not an optional, theoretical approach: it’s a fundamental pillar […]
Fork I’ve just published a repository (https://github.com/giuseppebonaccorso/keras_deepdream) with a Jupyter notebook containing a Deepdream (https://github.com/google/deepdream) experiment created with Keras and a pre-trained VGG19 convolutional network. The experiment (which is a work in progress) is based on some suggestions provided by the Deepdream team in this blog post, but it works in a slightly different way: I use a Gaussian pyramid and average the rescaled results of each layer with the next one. A total variation loss could be employed too, but after some experiments I decided to remove it because of its blurring effect. Here are some examples obtained with different settings (in terms of layers and number of iterations): It’s possible to create amazing videos by zooming into the same image. This is an example created with 1500 frames: Deepdream animation with Keras and VGG19 This video has been created using the notebook https://github.com/giuseppebonaccorso/keras_deepdream which is a Deepdream experiment based on some suggestions provided […]
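The pyramid-averaging idea can be sketched with NumPy alone; the following is a toy, illustrative version (the `downscale`/`upscale` helpers and the 0.5 blending weight are my assumptions, not code from the repository):

```python
import numpy as np

def downscale(img):
    """Halve the resolution by averaging 2x2 blocks (one pyramid step)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale(img):
    """Double the resolution with nearest-neighbour repetition."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def pyramid_average(img, levels=3):
    """Blend each pyramid level with the rescaled next (coarser) one."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downscale(pyramid[-1]))
    # Walk back from the coarsest level, averaging adjacent levels
    result = pyramid[-1]
    for level in reversed(pyramid[:-1]):
        result = 0.5 * (level + upscale(result)[:level.shape[0], :level.shape[1]])
    return result

img = np.random.rand(64, 64).astype(np.float32)
blended = pyramid_average(img, levels=3)
print(blended.shape)  # (64, 64)
```

In the actual notebook the averaging is applied to the rescaled layer results of the network, and a proper Gaussian filter would replace the plain 2x2 block averaging used here for brevity.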
After reading the article “How to Learn to Add Numbers with seq2seq Recurrent Neural Networks” by Jason Brownlee (which I suggest reading before going on), I decided to try an experiment with more complex expressions like -(10+5) or 4+ -2, etc. The code (with some extra information) is published in this GIST: https://goo.gl/ZmH6Tf, where there are also some test results. Unfortunately, the results are not extraordinary and there are still many errors; however, I think this depends on the size of the dataset and on the limited generalization ability that Seq2Seq networks show. I’m working on an enhanced version that should allow a bit more generalization. Complete Python script (Keras 2 with Theano/TensorFlow is required; moreover, I’ve used Scikit-Learn for the binarization): View the code on Gist. See also: Hopfield Networks addendum: Brain-State-in-a-Box model – Giuseppe Bonaccorso The Brain-State-in-a-Box is a neural model proposed by Anderson, Silverstein, Ritz and Jones in 1977, that […]
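For readers curious about the binarization step, here is a minimal sketch of how such expressions can be one-hot encoded, character by character, into fixed-length sequences (the alphabet, the padding symbol and the maximum length are hypothetical choices of mine; the GIST itself relies on Scikit-Learn for this):

```python
import numpy as np

# Hypothetical alphabet: digits, operators, parentheses and a padding space
ALPHABET = '0123456789+-() '
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}
MAX_LEN = 10  # fixed sequence length, shorter expressions are padded

def binarize(expr):
    """One-hot encode an expression into a (MAX_LEN, len(ALPHABET)) matrix."""
    padded = expr.ljust(MAX_LEN)
    encoded = np.zeros((MAX_LEN, len(ALPHABET)), dtype=np.float32)
    for i, c in enumerate(padded):
        encoded[i, CHAR_TO_IDX[c]] = 1.0
    return encoded

x = binarize('-(10+5)')
print(x.shape)        # (10, 15)
print(x.sum(axis=1))  # exactly one active unit per time step
```

A Seq2Seq model then consumes these matrices as input sequences and is trained to emit the encoded result of the expression.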
Fork I’ve just moved my Keras-based Neural Artistic Style Transfer GIST to a dedicated repository: https://github.com/giuseppebonaccorso/Neural_Artistic_Style_Transfer. Please always refer to the repository, as the GIST is no longer maintained. See also: Neural artistic style transfer experiments with Keras – Giuseppe Bonaccorso Artistic style transfer using neural networks is a technique proposed by Gatys, Ecker and Bethge in the paper arXiv:1508.06576 [cs.CV], which exploits a trained convolutional network in order to reconstruct the elements of a picture while adopting the artistic style of a particular painting.
Fork The BBC News dataset (available for download on the Insight Project Resources website) is made up of 2225 news articles classified into 5 categories (Politics, Sport, Entertainment, Tech, Business) and, similarly to Reuters-21578, it can be adopted to test both the efficacy and the efficiency of different classification strategies. In the repository https://github.com/giuseppebonaccorso/bbc_news_classification_comparison, I’ve committed a Jupyter (IPython) notebook (based on Scikit-Learn, NLTK, Gensim and Keras with Theano or TensorFlow) where I’ve collected some experiments aimed at comparing four different algorithms:

- Multinomial Naive-Bayes with Count (TF) vectorizer
- Multinomial Naive-Bayes with TF-IDF vectorizer
- SVM (linear and kernelized) with Doc2Vec (Gensim-based) vectorization
- MLP (Keras-based) with Doc2Vec vectorization

(Every experiment has been performed on tokens without stop-words, processed with WordNet lemmatization.) As expected (thanks to several research projects – see the references for further information), Naive-Bayes performs even better than the other strategies, in particular when its simplicity is compared with the complexity of SVM and neural […]
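As a minimal sketch of the second approach (TF-IDF vectorization followed by Multinomial Naive-Bayes), assuming Scikit-Learn and a toy two-category corpus standing in for the real BBC dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the BBC dataset (2 of the 5 categories)
docs = ['the match ended with a late goal',
        'the striker scored twice in the final',
        'parliament approved the new budget law',
        'the minister announced a tax reform']
labels = ['sport', 'sport', 'politics', 'politics']

# TF-IDF vectorization followed by Multinomial Naive-Bayes
model = make_pipeline(TfidfVectorizer(stop_words='english'), MultinomialNB())
model.fit(docs, labels)

print(model.predict(['the goal was scored in the final minute']))  # ['sport']
```

In the notebook, the same pipeline is of course fitted on the full lemmatized, stop-word-free corpus and evaluated with a proper train/test split.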
Fork Artistic style transfer using neural networks is a technique proposed by Gatys, Ecker and Bethge in the paper arXiv:1508.06576 [cs.CV], which exploits a trained convolutional network in order to reconstruct the elements of a picture while adopting the artistic style of a particular painting. I’ve written a Python program (available in the GitHub repository: https://github.com/giuseppebonaccorso/Neural_Artistic_Style_Transfer) based on Keras and the VGG16/19 convolutional networks, which can be used to perform some experiments. Considering the huge number of variables and parameters, this kind of problem is very sensitive to the initial conditions, and a different starting state can lead to different minima whose content doesn’t meet our requirements. In the script, it’s possible to choose among six initial canvas types:

- Random: RGB random pixels drawn from a uniform distribution
- Random from style: random pixels sampled from the painting
- Random from picture: random pixels sampled from the picture
- Style/Picture: painting or picture full […]
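The first three canvas types can be sketched as follows (an illustrative NumPy-only version; the function name and signature are hypothetical and do not correspond to the script's actual API):

```python
import numpy as np

def initial_canvas(mode, shape, style_img=None, picture_img=None, seed=0):
    """Build an initial (H, W, 3) canvas for the optimization.

    mode: 'random'         -> uniform RGB noise
          'random_style'   -> random pixels sampled from the painting
          'random_picture' -> random pixels sampled from the picture
    """
    rng = np.random.RandomState(seed)
    h, w, c = shape
    if mode == 'random':
        return rng.uniform(0, 255, size=shape).astype(np.float32)
    source = style_img if mode == 'random_style' else picture_img
    pixels = source.reshape(-1, c)
    idx = rng.randint(0, len(pixels), size=h * w)
    return pixels[idx].reshape(shape).astype(np.float32)

# Toy usage: sample the canvas pixels from a (constant) "painting"
style = np.zeros((4, 4, 3), dtype=np.uint8) + 200
canvas = initial_canvas('random_style', (8, 8, 3), style_img=style)
print(canvas.shape)  # (8, 8, 3)
```

Starting from pixels sampled from the painting (or the picture) biases the optimization toward minima that keep more of that image's statistics, which is exactly why the choice of canvas matters so much.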
(Updated on July 24th, 2017 with some improvements and Keras 2 style, but still a work in progress) CIFAR-10 is a small-image (32 x 32) dataset made up of 60000 images subdivided into 10 main categories. Check the web page in the reference list for further information and to download the whole set. Considering our current screen resolutions, it’s fair to say that those images are no larger than icons, and indeed some of them are very hard to classify even for human beings. Using Keras, I’ve modeled a deep convolutional network (VGGNet-like) in order to try a classification. I’m still investigating the best architecture (on the CIFAR home page there are very interesting references to papers and other results); however, I think it can be a good starting point. As the output is a softmax layer, it can also be interesting to evaluate mixed […]
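Since the output is a softmax layer, one illustrative way to evaluate its predictions beyond plain accuracy is top-k accuracy; here is a NumPy sketch on toy logits (these are not the actual network's outputs, just an example of the metric):

```python
import numpy as np

def softmax(logits):
    """Row-wise, numerically stable softmax."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def top_k_accuracy(probs, y_true, k=5):
    """Fraction of samples whose true class is among the k highest scores."""
    top_k = np.argsort(probs, axis=1)[:, -k:]
    return float(np.mean([y in row for y, row in zip(y_true, top_k)]))

rng = np.random.RandomState(0)
logits = rng.randn(100, 10)            # toy logits for the 10 CIFAR-10 classes
y_true = rng.randint(0, 10, size=100)  # toy ground-truth labels
probs = softmax(logits)
print(top_k_accuracy(probs, y_true, k=5))
```

With k equal to the number of classes the metric is trivially 1.0, while k=1 reduces it to standard accuracy.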
In this benchmark, I’ve used a Windows 10 Pro 64-bit computer with an Intel Core i7-6700HQ at 2.60 GHz, 32 GB of RAM and an NVIDIA GeForce GTX 960M. As a programming environment, I’ve used Python 2.7 (Anaconda distribution) and Jupyter. The task is very simple: integrating this expression (simple but effective):

The code I’ve written is the following (without the matplotlib plotting functions, and with float32 numbers in order to use the GPU):

import math
from datetime import datetime

import numpy as np
import matplotlib.pyplot as plt

import theano
import theano.tensor as T
from theano import function, shared

# Define constants
a = 0
b = math.pi
precision = 10000000.0
delta = (b - a) / precision

# Define x linear space
xs = np.linspace(a, b, num=int(precision)).astype(np.float32)

# Define Theano function
xss = shared(xs, 'xss')
deltas = shared(delta, 'delta')
sinvx = T.sum(T.sin(xss) * deltas)
sf = function([], sinvx)

# Number of iterations
num_executions = 500
execution_times = […]
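For comparison, the same Riemann sum can be computed on the CPU with NumPy alone (a sketch I'm adding here for reference, not part of the original benchmark; from the code above, the integrand is sin(x) on [0, π], whose analytic integral is 2):

```python
import math
import numpy as np

# Same Riemann-sum integration of sin(x) over [0, pi], CPU-only
a, b = 0.0, math.pi
precision = 10000000
delta = (b - a) / precision

xs = np.linspace(a, b, num=precision).astype(np.float32)
result = float(np.sum(np.sin(xs) * np.float32(delta)))

print(result)  # close to 2.0, the analytic value of the integral
```

Timing this loop against the compiled Theano function is what makes the GPU speed-up visible.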