Hetero-Associative Memories for Non-Experts: How “Stories” Are Memorized with Image Associations

Think about walking along a beach. The radio of a small kiosk bar is turned on and a local DJ announces an ’80s song. Immediately, the image of a car comes to your mind. It’s your first car, a second-hand blue spider. While listening to the same song, you drove your girlfriend…

A glimpse into Self-Organizing Maps (SOMs)

Self-Organizing Maps (SOMs) are neural structures first proposed by the computer scientist T. Kohonen in the early 1980s (which is why they are also known as Kohonen Networks). Their peculiarities are the ability to auto-cluster data according to the topological features of the samples and the approach to…
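
To make the idea concrete, here is a minimal, self-contained sketch (a toy example with sizes and hyperparameters chosen by me, not taken from the post): a small grid of weight vectors is trained so that each sample pulls its best-matching unit toward itself and, through a shrinking Gaussian neighborhood, the surrounding cells too, which is what produces the topological ordering.

```python
# Minimal SOM sketch (toy example, assumptions mine): a 2D grid of weight
# vectors is pulled toward each sample, with a Gaussian neighborhood around
# the best-matching unit (BMU) that shrinks over time.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3          # 10x10 map of 3-dimensional weights
W = rng.random((grid_h, grid_w, dim))    # weight matrix
X = rng.random((500, dim))               # toy dataset (e.g., RGB colors)

n_iter, lr0, sigma0 = 2000, 0.5, 3.0
ii, jj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")

for t in range(n_iter):
    x = X[rng.integers(len(X))]
    # Best-matching unit: the grid cell whose weights are closest to the sample
    d = np.linalg.norm(W - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Exponentially decaying learning rate and neighborhood radius
    lr = lr0 * np.exp(-t / n_iter)
    sigma = sigma0 * np.exp(-t / n_iter)
    # Gaussian neighborhood centered on the BMU (the topological update)
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, :, None] * (x - W)
```

After training, neighboring cells of the grid hold similar weight vectors, so similar samples land on nearby units of the map.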

ML Algorithms Addendum: Hopfield Networks

Hopfield networks (named after the scientist John Hopfield) are a family of recurrent neural networks with bipolar thresholded neurons. Even if they have been replaced by more efficient models, they represent an excellent example of associative memory, based on the shaping of an energy surface. In the following picture, there’s…
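
For a quick taste of the mechanism, here is a minimal sketch (the toy patterns and sizes are my own choices): two bipolar patterns are stored with the Hebbian outer-product rule, and a corrupted version of one of them is recalled through asynchronous thresholded updates, each of which can only lower the energy E = -½ sᵀWs.

```python
# Minimal Hopfield network sketch (toy example, assumptions mine): patterns are
# stored with the Hebbian outer-product rule and recalled by asynchronous
# updates of bipolar (+1/-1) thresholded neurons.
import numpy as np

rng = np.random.default_rng(1)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian storage: sum of outer products, zero diagonal (no self-connections)
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

# Recall: start from a corrupted pattern, update one random neuron at a time
s = patterns[0].copy()
s[:2] *= -1                            # flip two bits as "noise"
print(energy(s))                       # energy of the corrupted state
for _ in range(200):
    i = rng.integers(n)
    s[i] = 1 if W[i] @ s >= 0 else -1  # bipolar thresholded update
print(energy(s), np.array_equal(s, patterns[0]))  # lower energy, pattern recovered
```

The corrupted state rolls downhill on the energy surface until it settles into the attractor corresponding to the stored pattern.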

Quickprop: an almost forgotten neural training algorithm

Standard back-propagation is probably the best neural training algorithm for shallow and deep networks; however, it is based on the chain rule of derivatives, and an update in the first layers requires knowledge back-propagated from the last layer. This non-locality, especially in deep neural networks, reduces the biological plausibility…
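
As a preview of the algorithm discussed in the post, here is a single-weight sketch of Fahlman's Quickprop update (the quadratic toy loss and the hyperparameters are my own choices): each step fits a parabola through the last two gradient values and jumps toward its estimated minimum, capped by a "maximum growth factor".

```python
# Minimal Quickprop sketch (assumptions mine: one weight, a quadratic toy loss).
# Quickprop models the error surface in each weight as a parabola and jumps to
# its estimated minimum: dw(t) = g(t) / (g(t-1) - g(t)) * dw(t-1),
# where g is the current gradient component.
def quickprop_step(w, dw_prev, g, g_prev, lr=0.1, mu=1.75):
    if g_prev is not None and g_prev != g and dw_prev != 0.0:
        dw = g / (g_prev - g) * dw_prev
        # Cap the step by the maximum growth factor mu to avoid wild jumps
        if abs(dw) > mu * abs(dw_prev):
            dw = mu * abs(dw_prev) * (1 if dw > 0 else -1)
    else:
        dw = -lr * g  # bootstrap with plain gradient descent
    return w + dw, dw

# Toy usage: minimize E(w) = (w - 3)^2, whose gradient is g(w) = 2 (w - 3)
w, dw_prev, g_prev = 0.0, 0.0, None
for _ in range(10):
    g = 2.0 * (w - 3.0)
    w, dw_prev = quickprop_step(w, dw_prev, g, g_prev)
    g_prev = g
print(round(w, 4))  # ~3.0: the parabola's minimum is found in a few steps
```

Note that the update only uses the weight's own current and previous gradient, which is exactly the locality that standard back-propagation lacks.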

Artificial Intelligence is a matter of Language

“The limits of my language mean the limits of my world.” (L. Wittgenstein) When Jacques Lacan proposed his psychoanalytical theory based on the influence of language on human beings, many listeners were initially astonished. Is language an actual limitation? In popular culture, it isn’t. It cannot be! But,…

ML Algorithms Addendum: Hebbian Learning

Hebbian Learning is one of the most famous learning theories, proposed by the Canadian psychologist Donald Hebb in 1949, many years before his results were confirmed through neuroscientific experiments. Artificial Intelligence researchers immediately understood the importance of his theory when applied to artificial neural networks and, even if more efficient algorithms…
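
To see the principle in action, here is a tiny sketch (the dataset and learning rate are my own choices) using Oja's normalized variant of Hebb's rule, which avoids the unbounded weight growth of the plain "fire together, wire together" rule and converges to the first principal component of the data.

```python
# Minimal Hebbian learning sketch (toy example, assumptions mine): Oja's rule,
# dw = eta * y * (x - y * w), applied to zero-mean 2D data. The weight vector
# converges to the direction of maximum variance (the first principal
# component), where plain Hebb's rule would diverge.
import numpy as np

rng = np.random.default_rng(2)
# Correlated zero-mean data: variance is largest along the [1, 1] diagonal
X = rng.multivariate_normal([0.0, 0.0], [[3.0, 2.0], [2.0, 3.0]], size=2000)

w = rng.random(2)
eta = 0.01
for x in X:
    y = w @ x                     # neuron output (Hebb: pre * post activity)
    w += eta * y * (x - y * w)    # Oja's rule: Hebbian term minus decay

print(w / np.linalg.norm(w))      # ~ +/-[0.707, 0.707], the first PC
```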

Hodgkin-Huxley spiking neuron model in Python

The Hodgkin-Huxley model (published in 1952 in The Journal of Physiology [1]) is the most famous spiking neuron model (even though there are simpler alternatives like the “integrate-and-fire” model, which performs quite well). It’s made up of a system of four ordinary differential equations that can be easily integrated using several…
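
As a preview, here is a compact version of that integration (using the standard textbook parameters, which may differ from the exact values in the post): the four ODEs for the membrane potential V and the gating variables m, h, n are solved with scipy.integrate.odeint under a constant injected current, producing the familiar spike train.

```python
# Minimal Hodgkin-Huxley integration sketch (standard textbook parameters,
# assumptions mine). Four coupled ODEs: membrane potential V plus the Na
# activation (m), Na inactivation (h) and K activation (n) gating variables.
import numpy as np
from scipy.integrate import odeint

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387
C_m = 1.0

# Voltage-dependent opening/closing rates of the channel gates
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def hh(y, t, I_ext):
    V, m, h, n = y
    # Ionic currents: Na (m^3 h), K (n^4) and the passive leak
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    dV = (I_ext - I_Na - I_K - I_L) / C_m
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    return [dV, dm, dh, dn]

t = np.linspace(0.0, 50.0, 5000)            # 50 ms of simulated time
y0 = [-65.0, 0.05, 0.6, 0.32]               # approximate resting state
sol = odeint(hh, y0, t, args=(10.0,))       # 10 uA/cm^2 step current
print(sol[:, 0].max())                      # peak of the spike train (mV)
```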