ML Algorithms Addendum: Hopfield Networks

Hopfield networks (named after the physicist John Hopfield) are a family of recurrent neural networks with bipolar thresholded neurons. Even if they have been replaced by more efficient models, they represent an excellent example of associative memory, based on the shaping of an energy surface. In the following picture, there’s…
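As a quick illustration of the idea (a minimal sketch, not the code from the full post), a Hopfield network can be trained with the Hebbian outer-product rule and queried by iterating the bipolar threshold update until the state falls into a minimum of the energy surface:

import numpy as np

def train_hopfield(patterns):
    # Hebbian outer-product rule; the diagonal is zeroed to remove self-connections
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=10):
    # Asynchronous bipolar updates: each neuron takes the sign of its local field
    x = x.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(x)):
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]], dtype=float)
W = train_hopfield(patterns)
noisy = np.array([1., -1., -1., -1., 1., -1.])  # first pattern with one bit flipped
print(recall(W, noisy))                         # recovers the stored pattern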

Quickprop: an almost forgotten neural training algorithm

Standard back-propagation is probably the best neural training algorithm for shallow and deep networks; however, it is based on the chain rule of derivatives, and an update in the first layers requires knowledge back-propagated from the last layer. This non-locality, especially in deep neural networks, reduces the biological plausibility…
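For reference, the core of Quickprop is a per-weight secant step that treats the error surface as a parabola: Δw_t = Δw_{t-1} · g_t / (g_{t-1} − g_t). A minimal sketch on a toy quadratic loss (mu = 1.75 is the maximum growth factor quoted for Fahlman’s algorithm; the rest is illustrative):

import numpy as np

def quickprop_step(w, dw_prev, g, g_prev, lr=0.1, mu=1.75):
    # Secant (parabolic) approximation of the loss along each weight
    denom = g_prev - g
    safe = np.where(np.abs(denom) > 1e-12, denom, 1.0)
    dw = np.where(np.abs(denom) > 1e-12, dw_prev * g / safe, -lr * g)
    # Growth limit: never jump more than mu times the previous step
    limit = mu * np.abs(dw_prev) + 1e-12
    dw = np.clip(dw, -limit, limit)
    return w + dw, dw

# Toy quadratic loss L(w) = (w - 3)^2, gradient g = 2(w - 3)
w, dw, g_prev = np.array([0.0]), np.array([-0.1]), None
for _ in range(20):
    g = 2 * (w - 3)
    if g_prev is None:
        w, dw = w - 0.1 * g, -0.1 * g  # plain gradient step to bootstrap
    else:
        w, dw = quickprop_step(w, dw, g, g_prev)
    g_prev = g
print(w)  # approaches 3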

A model-free collaborative recommendation system in 20 lines of Python code

Model-free collaborative filtering is a “lightweight” approach to recommendation systems. It’s always based on the implicit “collaboration” (in terms of ratings) among users, but it is computed in-memory without the use of complex algorithms like ALS (Alternating Least Squares) that can be executed in a parallel environment (like Spark). If we assume…
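The flavour of the approach can be sketched in a few lines (a hypothetical example with a toy rating matrix, not necessarily the post’s exact code): users are compared with cosine similarity, and unseen items are scored by a similarity-weighted average of the other users’ ratings.

import numpy as np

# Toy user-item rating matrix (0 = not rated); rows are users, columns items
R = np.array([[5, 4, 0, 1, 0],
              [4, 0, 4, 1, 0],
              [0, 2, 1, 5, 4],
              [1, 0, 0, 4, 5]], dtype=float)

def predict(R, user):
    # Cosine similarity between the target user and all other users
    norms = np.linalg.norm(R, axis=1)
    sims = (R @ R[user]) / (norms * norms[user] + 1e-12)
    sims[user] = 0.0
    # Predicted ratings: similarity-weighted average of the other users' rows
    return (sims @ R) / (np.abs(sims).sum() + 1e-12)

scores = predict(R, user=0)
unseen = np.where(R[0] == 0)[0]
print(unseen[np.argsort(-scores[unseen])])  # items to recommend to user 0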

An annotated path to start with Machine Learning

“Do not worry about your difficulties in Mathematics. I can assure you mine are still greater.” (A. Einstein) Machine Learning is becoming more and more widespread and, day after day, new computer scientists and engineers begin their long jump into this wonderful world. Unfortunately, the number of theories, algorithms, applications,…

ML Algorithms Addendum: Instance Based Learning

Contrary to the majority of machine learning algorithms, Instance-Based Learning is model-free, meaning that there are no strong assumptions about the structure of regressors, classifiers or clustering functions. They are “simply” determined by the data, according to an affinity induced by a distance metric (the most common name for this approach…
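A k-nearest-neighbours classifier is the canonical example. As a minimal sketch (toy data and a Euclidean metric assumed), the “model” is nothing more than the stored instances plus a distance-based vote:

import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # The model is the data itself: rank stored instances by Euclidean distance
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    # Majority vote among the k nearest neighbours
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.8, 0.9])))  # -> 1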

ML Algorithms Addendum: Hebbian Learning

Hebbian Learning is one of the most famous learning theories, proposed by the Canadian psychologist Donald Hebb in 1949, many years before his results were confirmed by neuroscientific experiments. Artificial Intelligence researchers immediately understood the importance of his theory when applied to artificial neural networks and, even if more efficient algorithms…
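In its plain form, Hebb’s rule strengthens a weight proportionally to the product of pre- and post-synaptic activity (Δw = η·x·y), which is unstable on its own; a common stabilized variant is Oja’s rule, sketched below on toy data (the dataset and learning rate are purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
# Anisotropic 2D data: most of the variance lies along the first axis
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    # Oja's rule: Hebbian term minus a decay that keeps ||w|| bounded
    w += eta * y * (x - y * w)

print(w)  # converges towards the first principal component, roughly (+/-1, 0)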

Hodgkin-Huxley spiking neuron model in Python

The Hodgkin-Huxley model (published in 1952 in The Journal of Physiology [1]) is the most famous spiking neuron model (even if there are simpler alternatives like the “Integrate-and-fire” model, which performs quite well). It’s made up of a system of four ordinary differential equations that can be easily integrated using several…
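As a concrete sketch (standard squid-axon parameters in the modern convention, resting potential near −65 mV; the injected current and integration grid are illustrative), the four equations integrate directly with SciPy’s odeint:

import numpy as np
from scipy.integrate import odeint

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # capacitance and conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.387            # reversal potentials (mV)

def hh(y, t, I_ext):
    V, m, h, n = y
    # Voltage-dependent opening/closing rates of the gating variables
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    # Membrane equation plus the three gating-variable ODEs
    dV = (I_ext - g_Na * m**3 * h * (V - E_Na)
                - g_K * n**4 * (V - E_K)
                - g_L * (V - E_L)) / C_m
    return [dV,
            a_m * (1 - m) - b_m * m,
            a_h * (1 - h) - b_h * h,
            a_n * (1 - n) - b_n * n]

t = np.linspace(0.0, 50.0, 5000)        # time grid (ms)
y0 = [-65.0, 0.05, 0.6, 0.32]           # approximate resting state
sol = odeint(hh, y0, t, args=(10.0,))   # 10 uA/cm^2 injected current
print(sol[:, 0].max())                  # peak membrane potential (mV)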

ML Algorithms Addendum: Mutual information in classification tasks

Many classification algorithms, both in machine and deep learning, adopt the cross-entropy as a cost function. This is a brief explanation of why minimizing the cross-entropy increases the mutual information between the training and learned distributions. If we call p the training set probability distribution and q the corresponding learned…
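The standard decomposition behind the argument can be written as follows (the notation may differ from the post’s):

H(p, q) = -\sum_{x} p(x) \log q(x) = H(p) + D_{\mathrm{KL}}(p \,\|\, q)

Since H(p) is fixed by the training set, minimizing the cross-entropy H(p, q) is equivalent to minimizing the Kullback-Leibler divergence, i.e. driving q toward p.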

Twitter Sentiment Analysis with Gensim Word2Vec and Keras Convolutional Networks

Word2Vec (https://code.google.com/archive/p/word2vec/) offers a very interesting alternative to classical NLP based on term-frequency matrices. In particular, as each word is embedded into a high-dimensional vector, it’s possible to consider a sentence as a sequence of points that determine an implicit geometry. For this reason, the idea of considering 1D…
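A minimal sketch of that idea (the layer sizes, sequence length, and tensorflow.keras import are assumptions for illustration, not the post’s exact architecture), assuming each tweet is already a padded sequence of Word2Vec vectors:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D, Dense

max_len, emb_dim = 15, 100  # illustrative sequence length and embedding size

model = Sequential([
    # 1D convolutions slide along the word axis, reading the embeddings as channels
    Conv1D(32, kernel_size=3, activation='relu', input_shape=(max_len, emb_dim)),
    GlobalMaxPooling1D(),
    Dense(1, activation='sigmoid'),  # positive / negative sentiment
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Dummy batch standing in for pre-embedded tweets and their labels
X = np.random.normal(size=(8, max_len, emb_dim)).astype('float32')
y = np.random.randint(0, 2, size=(8, 1))
model.fit(X, y, epochs=1, verbose=0)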