ML Algorithms Addendum: Hopfield Networks

Hopfield networks (named after the scientist John Hopfield) are a family of recurrent neural networks with bipolar thresholded neurons. Even if they have been replaced by more efficient models, they represent an excellent example of associative memory, based on the shaping of an energy surface. In the following picture, there’s…
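As a quick illustration of the associative-memory idea (not the code from the full post), here is a minimal sketch that stores two bipolar patterns with a Hebbian outer-product rule and recalls one of them by iterating the thresholded update until the state settles into a minimum of the energy surface; the pattern values and iteration limit are arbitrary assumptions.

```python
import numpy as np

# Two bipolar (-1/+1) patterns to store (arbitrary toy examples)
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])

# Hebbian outer-product rule; zero the diagonal (no self-connections)
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    """Iterate the bipolar thresholded update until the state stops changing."""
    s = state.copy()
    for _ in range(steps):
        new_s = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new_s, s):
            break
        s = new_s
    return s

# A corrupted version of the first pattern is attracted back to it
noisy = np.array([1, -1, 1, 1, 1, -1])
print(recall(noisy))  # -> [ 1 -1  1 -1  1 -1]
```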

Quickprop: an almost forgotten neural training algorithm

Standard Back-propagation is probably the best neural training algorithm for shallow and deep networks; however, it is based on the chain rule of derivatives, and an update in the first layers requires knowledge back-propagated from the last layer. This non-locality, especially in deep neural networks, reduces the biological plausibility…
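The excerpt stops before the update rule itself, so here is a hedged per-weight sketch of the classic Quickprop step (Fahlman's rule, not necessarily the exact formulation used in the full post): each weight's error curve is approximated by a parabola through the current and previous gradient, and the step jumps toward its estimated minimum. The learning rate and growth factor below are arbitrary illustrative values.

```python
def quickprop_step(w, grad, prev_grad, prev_dw, lr=0.01, mu=1.75):
    """One per-weight Quickprop update (a sketch of Fahlman's rule)."""
    if abs(prev_grad - grad) < 1e-12:
        dw = -lr * grad                  # degenerate parabola: plain gradient step
    else:
        # Jump toward the minimum of the parabola fitted through the
        # current and previous gradient of this single weight
        dw = prev_dw * grad / (prev_grad - grad)
        # Limit growth to mu times the previous step (standard safeguard)
        dw = max(-mu * abs(prev_dw), min(mu * abs(prev_dw), dw))
    return w + dw, dw
```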

A model-free collaborative recommendation system in 20 lines of Python code

Model-free collaborative filtering is a “lightweight” approach to recommendation systems. It’s always based on the implicit “collaboration” (in terms of ratings) among users, but it is computed in memory without the usage of complex algorithms like ALS (Alternating Least Squares) that can be executed in a parallel environment (like Spark). If we assume…
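This is not the 20-line implementation from the post, only a minimal sketch of the in-memory idea: a user-user cosine similarity computed directly on the rating matrix, and a missing rating predicted as the similarity-weighted average of the other users' ratings. The toy matrix and the chosen user/item are assumptions.

```python
import numpy as np

# Toy user-item rating matrix (0 = not rated); purely illustrative values
R = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def predict(R, user, item, eps=1e-8):
    """Predict a missing rating as the similarity-weighted average of the
    ratings given to `item` by the other users (user-based, memory-only)."""
    # Cosine similarity between the target user and every other user
    norms = np.linalg.norm(R, axis=1) + eps
    sims = (R @ R[user]) / (norms * norms[user])
    sims[user] = 0.0                 # ignore self-similarity
    rated = R[:, item] > 0           # only users who actually rated the item
    if not rated.any():
        return 0.0
    w = sims[rated]
    return float(w @ R[rated, item] / (np.abs(w).sum() + eps))

print(predict(R, user=1, item=1))    # estimate user 1's missing rating of item 1
```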

ML Algorithms Addendum: Instance Based Learning

Contrary to the majority of machine learning algorithms, Instance-Based Learning is model-free, meaning that there are no strong assumptions about the structure of regressors, classifiers or clustering functions. They are “simply” determined by the data, according to an affinity induced by a distance metric (the most common name for this approach…
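A minimal sketch of the idea, using a k-nearest-neighbors classifier where the prediction is determined directly by the stored instances and a Euclidean distance; the toy data and the value of k are assumptions, not taken from the full post.

```python
import numpy as np

# Stored instances: the "model" is nothing more than the data itself
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8]])
y = np.array([0, 0, 1, 1])

def knn_predict(x, X, y, k=3):
    """Classify x by majority vote among its k nearest stored instances,
    with Euclidean distance as the affinity."""
    d = np.linalg.norm(X - x, axis=1)
    nearest = np.argsort(d)[:k]
    return np.bincount(y[nearest]).argmax()

print(knn_predict(np.array([0.8, 0.9]), X, y))  # -> 1
```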

ML Algorithms Addendum: Hebbian Learning

Hebbian Learning is one of the most famous learning theories, proposed by the Canadian psychologist Donald Hebb in 1949, many years before his results were confirmed through neuroscientific experiments. Artificial Intelligence researchers immediately understood the importance of his theory when applied to artificial neural networks and, even if more efficient algorithms…
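As a small illustration of the rule the post discusses, here is a sketch using Oja's normalized variant of the Hebbian update (not necessarily the exact formulation in the full post); the synthetic data, learning rate and number of passes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D inputs with a dominant correlation direction (toy data)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])

w = rng.normal(size=2)
eta = 0.005
for _ in range(10):                  # a few passes over the data
    for x in X:
        y = w @ x
        # Oja's rule: the plain Hebbian term eta * y * x plus a decay
        # term that keeps the weight vector bounded
        w += eta * y * (x - y * w)

print(w / np.linalg.norm(w))         # aligns with the dominant input direction
```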

ML Algorithms addendum: Mutual information in classification tasks

Many classification algorithms, both in machine and in deep learning, adopt the cross-entropy as cost function. This is a brief explanation of why minimizing the cross-entropy makes it possible to increase the mutual information between training and learned distributions. If we call p the training set probability distribution and q the corresponding learned…
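As a compact reminder of the first step in this kind of argument (not necessarily the exact derivation in the full post), the cross-entropy between p and q decomposes into the entropy of p plus a Kullback-Leibler term:

```latex
H(p, q) = -\sum_{x} p(x)\log q(x)
        = \underbrace{-\sum_{x} p(x)\log p(x)}_{H(p)}
          + \underbrace{\sum_{x} p(x)\log\frac{p(x)}{q(x)}}_{D_{KL}(p\,\|\,q)}
```

Since H(p) is fixed by the training set, minimizing the cross-entropy is equivalent to minimizing the Kullback-Leibler divergence, i.e. pushing the learned distribution q toward p, which is the step the excerpt's argument builds on.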