Machine Learning Algorithms – Second Edition

The second edition (fully revised, extended, and updated) of Machine Learning Algorithms has been published today and will soon be available through all channels. From the back cover: Machine learning has gained tremendous popularity for its powerful and fast predictions on large datasets. However, the true forces behind its powerful…

Mastering Machine Learning Algorithms

Today I’ve published my latest book, “Mastering Machine Learning Algorithms” (it will be available on all channels in a few days). From the back cover: Machine learning is a subset of AI that aims to make modern-day computer systems smarter and more intelligent. The real power of machine learning resides…

A glimpse into the Self-Organizing Maps (SOM)

Self-Organizing Maps (SOM) are neural structures first proposed by the computer scientist T. Kohonen in the early 1980s (which is why they are also known as Kohonen Networks). Their peculiarities are the ability to auto-cluster data according to the topological features of the samples and the approach to…
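As a taste of the topological update the post discusses, here is a minimal NumPy sketch of a SOM training loop; the map size, toy dataset, and exponential decay schedules are illustrative assumptions, not the post's code.

```python
import numpy as np

# Minimal SOM sketch: a 2-D grid of weight vectors trained with a
# Gaussian neighborhood around the Best Matching Unit (BMU).
rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3           # 10x10 map over 3-D inputs (assumed)
W = rng.random((grid_h, grid_w, dim))     # codebook / weight matrix
X = rng.random((500, dim))                # toy dataset

rows, cols = np.indices((grid_h, grid_w))
n_iter = 1000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    # BMU: the neuron whose weights are closest to the sample
    d = np.linalg.norm(W - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Exponentially decaying learning rate and neighborhood radius
    eta = 0.5 * np.exp(-t / n_iter)
    sigma = max(grid_h, grid_w) / 2 * np.exp(-t / n_iter)
    # Gaussian neighborhood centered on the BMU (the topological step)
    dist2 = (rows - bi) ** 2 + (cols - bj) ** 2
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
    W += eta * h * (x - W)
```

Neurons close to the winner on the grid receive almost the full update, so nearby units end up representing similar inputs, which is exactly the auto-clustering behavior mentioned above.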

ML Algorithms addendum: Passive Aggressive Algorithms

Passive Aggressive Algorithms are a family of online learning algorithms (for both classification and regression) proposed by Crammer et al. The idea is very simple, and their performance has been proven to be superior to many alternative methods, like the Online Perceptron and MIRA (see the original paper in the…
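For a quick illustration of the idea, here is a hedged sketch of the PA-I classification variant (passive when the margin is satisfied, aggressive otherwise); the data layout, epoch count, and `C` value are assumptions, not the paper's setup.

```python
import numpy as np

def pa_fit(X, y, C=1.0, epochs=5):
    """Online Passive-Aggressive (PA-I) binary classifier sketch.
    X: (n, d) samples; y: labels in {-1, +1}; C caps the step size."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            # Hinge loss on the current sample
            loss = max(0.0, 1.0 - t * (w @ x))
            if loss > 0.0:
                # "Aggressive" step: smallest update that fixes the margin,
                # bounded by C to tolerate noisy samples
                tau = min(C, loss / (x @ x))
                w += tau * t * x
            # else: "passive" — the sample is already classified with margin
    return w
```

When the loss is zero the weights are left untouched, which is where the "passive" in the name comes from.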

Hopfield Networks addendum: Brain-State-in-a-Box model

The Brain-State-in-a-Box is a neural model proposed by Anderson, Silverstein, Ritz, and Jones in 1977 that presents very strong analogies with Hopfield networks (read the previous post about them). The structure of the network is similar: recurrent, fully connected, with symmetric weights and non-null auto-recurrent connections. All neurons are bipolar (-1 and…
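A minimal sketch of the dynamics, assuming the common saturating-linear formulation of BSB; the weight matrix, feedback gain, and iteration count below are illustrative choices.

```python
import numpy as np

# BSB dynamics sketch: the state evolves inside the hypercube [-1, 1]^n
# under saturating linear feedback until it settles at (or near) a vertex.
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
W = (A + A.T) / 2                 # symmetric weights; diagonal (auto-recurrent) non-null
x = rng.uniform(-0.1, 0.1, n)     # small initial state inside the "box"
beta = 0.1                        # feedback gain (illustrative)
for _ in range(200):
    x = np.clip(x + beta * W @ x, -1.0, 1.0)
print(x)                          # most components saturate at -1 or +1
```

The clipping is what puts the "box" in the name: the state grows along the dominant eigendirections of W until it is pinned to a corner of the hypercube, which plays the role of an attractor.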

ML Algorithms Addendum: Hopfield Networks

Hopfield networks (named after the scientist John Hopfield) are a family of recurrent neural networks with bipolar thresholded neurons. Even if they have been replaced by more efficient models, they represent an excellent example of associative memory, based on the shaping of an energy surface. In the following picture, there’s…
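For concreteness, here is a small sketch of Hebbian storage and asynchronous recall; the stored patterns and the update schedule are illustrative assumptions, not the post's example.

```python
import numpy as np

# Hopfield sketch: Hebbian storage of bipolar patterns, then asynchronous
# recall that descends the energy surface E(s) = -0.5 * s^T W s.
rng = np.random.default_rng(0)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]
W = (patterns.T @ patterns) / n      # Hebbian rule
np.fill_diagonal(W, 0.0)             # no self-connections

s = patterns[0].copy()
s[:2] *= -1                          # corrupt the first pattern (flip two bits)
for _ in range(5):                   # a few asynchronous sweeps
    for i in rng.permutation(n):
        s[i] = 1 if W[i] @ s >= 0 else -1
print(s)                             # recovers patterns[0]
```

Each single-neuron update can only lower (or keep) the energy, so the corrupted state slides down the energy surface into the basin of the stored pattern — the associative-memory behavior described above.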

Quickprop: an almost forgotten neural training algorithm

Standard back-propagation is probably the best neural training algorithm for shallow and deep networks; however, it is based on the chain rule of derivatives, and an update in the first layers requires knowledge back-propagated from the last layer. This non-locality, especially in deep neural networks, reduces the biological plausibility…
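To show the core of Quickprop's update rule, here is a rough sketch on a toy one-dimensional loss; the loss function, growth factor, and fallback step are illustrative assumptions.

```python
import numpy as np

# Quickprop sketch (Fahlman, 1988): treat the loss as locally quadratic in
# each weight and jump toward the parabola's vertex using the secant of the
# last two gradients, instead of a plain gradient step.

def grad(w):
    return 2.0 * (w - 3.0)                  # gradient of the toy loss (w - 3)^2

w = 0.0
g_prev = grad(w)
dw_prev = -0.1 * g_prev                     # bootstrap with a gradient step
w += dw_prev
for _ in range(20):
    g = grad(w)
    denom = g_prev - g
    if abs(denom) > 1e-12:
        dw = (g / denom) * dw_prev          # secant step toward the vertex
    else:
        dw = -0.1 * g                       # fall back to gradient descent
    # Bound the step (Fahlman's "maximum growth factor") to avoid huge jumps
    dw = float(np.clip(dw, -1.75 * abs(dw_prev), 1.75 * abs(dw_prev)))
    w += dw
    g_prev, dw_prev = g, dw
print(w)                                    # approaches 3.0, the minimum
```

Note that the update for each weight only needs that weight's current and previous gradients, which is the local flavor that contrasts with back-propagation's end-to-end chain rule.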

A model-free collaborative recommendation system in 20 lines of Python code

Model-free collaborative filtering is a “lightweight” approach to recommendation systems. It is still based on the implicit “collaboration” (in terms of ratings) among users, but it is computed in-memory, without the use of complex algorithms like ALS (Alternating Least Squares) that can be executed in parallel environments (like Spark). If we assume…
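The sketch below assumes user-based cosine similarity over an in-memory rating matrix; the toy matrix and the similarity-weighted averaging are illustrative choices, not the post's 20-line implementation.

```python
import numpy as np

# Model-free, user-based CF sketch. Rows: users, columns: items, 0 = unrated.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4],
              [0, 1, 5, 4]], dtype=float)

U = R / np.linalg.norm(R, axis=1, keepdims=True)
S = U @ U.T                              # user-user cosine similarity
np.fill_diagonal(S, 0.0)                 # ignore self-similarity

# Predicted ratings: similarity-weighted average of the other users' ratings
P = (S @ R) / (S.sum(axis=1, keepdims=True) + 1e-9)

user = 0
unrated = np.where(R[user] == 0)[0]
print(unrated[np.argsort(-P[user, unrated])])   # items to suggest, best first
```

Everything here is a couple of dense matrix products, which is why this approach stays "lightweight": no factorization model is fitted, and the whole computation lives in memory.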