ML Algorithms Addendum: Hebbian Learning

Hebbian Learning is one of the most famous learning theories, proposed by the Canadian psychologist Donald Hebb in 1949, many years before his results were confirmed through neuroscientific experiments. Artificial Intelligence researchers immediately understood the importance of his theory when applied to artificial neural networks, and even though more efficient algorithms have since been adopted to solve complex problems, neuroscience continues to find evidence of natural neurons whose learning process is almost perfectly modeled by Hebb's equations.

Hebb's rule is very simple and can be discussed starting from the high-level structure of a neuron with a single output. We consider a linear neuron, so the output y is a linear combination of its input values x:

y = w · x = Σᵢ wᵢxᵢ

According to the Hebbian theory, if both pre- and post-synaptic units behave in the same way (firing or remaining in the steady state), the corresponding synaptic weight is reinforced; vice versa, if they behave in opposite ways, the weight is weakened. For the linear neuron above, this yields the update rule Δw = η y x, where η is the learning rate.
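The rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not the article's reference implementation: the learning rate, the random toy inputs, and the initial weights are all arbitrary choices made for the example.

```python
import numpy as np

# Minimal sketch of Hebb's rule for a single linear neuron.
# eta (learning rate) and the toy dataset are illustrative choices.
rng = np.random.default_rng(0)
eta = 0.01

X = rng.normal(size=(100, 3))            # 100 input samples, 3 features
w = rng.normal(scale=0.1, size=3)        # small random initial weights
w_init = w.copy()

for x in X:
    y = w @ x          # linear neuron: output is a weighted sum of inputs
    w += eta * y * x   # Hebb's rule: strengthen co-active connections
```

Note that each update has the form w ← (I + η x xᵀ) w, whose eigenvalues are all ≥ 1, so the weight norm never decreases: pure Hebbian learning grows without bound, which is why stabilized variants such as Oja's rule add a normalization term in practice.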