Giuseppe Bonaccorso

Artificial Intelligence – Machine Learning – Data Science

  • Blog
  • Books
  • Resume / CV
  • Bonaccorso’s Law
  • Essays
  • Contact
  • Testimonials
  • Disclaimer

Category: Computational Neuroscience

Hetero-Associative Memories for Non Experts: How “Stories” are memorized with Image-associations

12/31/2017 (updated 01/02/2018) | Artificial Intelligence, Computational Neuroscience, Convnet, Deep Learning, Generic, Machine Learning, Neural networks, Philosophy of Mind, Tensorflow | 4 Comments

Think about walking along a beach. The radio of a small kiosk-bar is turned on and a local DJ announces an 80’s song. Immediately, the image of a car comes to your mind. It’s your first car, a second-hand blue spider. While listening to the same song, you drove your girlfriend to the beach, about 25 years ago. It was the first time you made love to her. Now imagine a small animal (normally a prey, like a rodent) roaming around the forest and looking for food. A sound is suddenly heard and the rodent raises its head. Is it the pounding of water or a lion’s roar? We can skip the answer for now. Let’s only think about the ways an animal memory can work. Computer science drove us to think that memories must always be lossless, efficient, and organized like structured repositories. They can be split into standard-size slots […]

A glimpse into the Self-Organizing Maps (SOM)

10/22/2017 | Artificial Intelligence, Computational Neuroscience, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python | No Comments

Self-Organizing Maps (SOM) are neural structures proposed for the first time by the computer scientist T. Kohonen in the late 1980s (that’s why they are also known as Kohonen Networks). Their peculiarities are the ability to auto-cluster data according to the topological features of the samples and their approach to the learning process. Contrary to methods like Gaussian Mixtures or K-Means, a SOM learns through a competitive learning process. In other words, the model tries to specialize its neurons so as to produce a response only for a particular pattern family (it can also be a single input sample representing a family, like a handwritten letter). Let’s consider a dataset containing N p-dimensional samples; a suitable SOM is a matrix (other shapes, like toroids, are also possible) containing (K × L) receptors, each of which is made up of p synaptic weights. The resulting structure is a three-dimensional matrix W […]
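For readers who want to experiment right away, here is a minimal sketch of the competitive learning loop described above (plain NumPy, not the code from the post): a (K × L) grid of p-dimensional receptors, a best-matching unit found for each sample, and a Gaussian neighborhood update. The grid size, decay schedules and toy dataset are illustrative assumptions.

import numpy as np

np.random.seed(0)

K, L, p = 8, 8, 3          # grid shape and input dimensionality (illustrative values)
X = np.random.rand(500, p) # toy dataset with N = 500 samples
W = np.random.rand(K, L, p)

n_epochs = 20
sigma0, eta0 = 3.0, 0.5    # initial neighborhood radius and learning rate

rows, cols = np.meshgrid(np.arange(K), np.arange(L), indexing='ij')

for epoch in range(n_epochs):
    # Exponential decay of both the neighborhood radius and the learning rate
    sigma = sigma0 * np.exp(-epoch / n_epochs)
    eta = eta0 * np.exp(-epoch / n_epochs)

    for x in X:
        # Best Matching Unit: the receptor closest to the sample
        distances = np.linalg.norm(W - x, axis=2)
        bmu = np.unravel_index(np.argmin(distances), (K, L))

        # Gaussian neighborhood centered on the BMU
        grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
        h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))

        # Move the winner and its neighbors toward the sample
        W += eta * h[:, :, np.newaxis] * (x - W)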

A virtual Jacques Lacan discusses Artificial Intelligence

10/01/2017 | Artificial Intelligence, Complex Systems, Computational Neuroscience, Machine Learning, Philosophy of Mind | 2 Comments

“In other words, the man who is born into existence deals first with language; this is a given. He is even caught in it before his birth.” (J. Lacan)   A virtual discussion with Jacques Lacan is a very hard task, above all when the main topic is Artificial Intelligence, a discipline he may have heard about but that remained too far from the world he lived in. However, I believe that many concepts belonging to his theory are fundamental for any discipline that has to study the huge variety of human behaviors. Of course, this is a personal (and limited) reinterpretation that may make many psychoanalysts and philosophers smile, but I do believe in freedom of expression and all constructive comments are welcome. But let’s begin our virtual discussion! PS: Someone hasn’t understood that this is a dialog where I wrote all the utterances (believe it or not) and […]

ML Algorithms Addendum: Hopfield Networks

09/20/2017 (updated 09/30/2017) | Artificial Intelligence, Computational Neuroscience, Deep Learning, Generic, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python | 2 Comments

Hopfield networks (named after the scientist John Hopfield) are a family of recurrent neural networks with bipolar thresholded neurons. Even if they have been replaced by more efficient models, they represent an excellent example of associative memory, based on the shaping of an energy surface. In the following picture, there’s the generic schema of a Hopfield network with 3 neurons: Conventionally, the synaptic weights obey the following conditions: If we have N neurons, the generic input vector must also be N-dimensional and bipolar (-1 and 1 values). The activation function for each neuron is hence defined as: In the previous formula, the threshold for each neuron is represented by θ (a common value is 0, which implies a strong symmetry). Contrary to MLPs, in this kind of network there’s no separation between input and output layers. Each unit receives its input value, processes it, and outputs the result. According to the […]
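As a quick, hedged illustration of the ideas in this excerpt (my own toy sketch, not the code from the post), the snippet below stores two random bipolar patterns with a Hebbian rule and recalls a corrupted one with asynchronous thresholded updates; the network size, number of patterns and update count are arbitrary choices.

import numpy as np

np.random.seed(0)

N = 32
patterns = np.sign(np.random.randn(2, N))   # two random bipolar patterns (-1 / 1)

# Hebbian storage: sum of outer products, symmetric weights, no self-connections
W = np.zeros((N, N))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0.0)
W /= N

def recall(x, theta=0.0, n_steps=500):
    # Asynchronous recall: a randomly chosen neuron i is set to sign(W[i] . x - theta)
    x = x.copy()
    for _ in range(n_steps):
        i = np.random.randint(N)
        x[i] = 1.0 if np.dot(W[i], x) >= theta else -1.0
    return x

# Corrupt a stored pattern and check how close the recalled state gets to it
noisy = patterns[0].copy()
noisy[:4] *= -1
print(np.dot(recall(noisy), patterns[0]) / N)   # overlap with the original, ideally close to 1.0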

Quickprop: an almost forgotten neural training algorithm

09/15/2017 (updated 09/30/2017) | Artificial Intelligence, Computational Neuroscience, Deep Learning, Generic, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python | 3 Comments

Standard back-propagation is probably the best neural training algorithm for shallow and deep networks; however, it is based on the chain rule of derivatives, and an update in the first layers requires knowledge back-propagated from the last layer. This non-locality, especially in deep neural networks, reduces the biological plausibility of the model because, even if there’s enough evidence of negative feedback in real neurons, it’s unlikely that, for example, synapses in the LGN (Lateral Geniculate Nucleus) could change their dynamics (weights) considering a chain of changes starting from the primary visual cortex. Moreover, classical back-propagation doesn’t scale very well in large networks. For these reasons, in 1988 Fahlman proposed an alternative, local and quicker update rule, where the total loss function L is approximated with a quadratic polynomial function (using a Taylor expansion) for each weight independently (assuming that each update has a limited influence on the neighbors). The resulting weight update […]
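As a rough sketch of the quadratic, per-weight idea (a toy illustration of mine, not Fahlman's reference implementation or the post's code), the snippet below applies the update dw(t) = dw(t-1) * g(t) / (g(t-1) - g(t)) to a simple least-squares problem, with a plain gradient step to bootstrap the method and a clipping safeguard added for stability.

import numpy as np

np.random.seed(0)

# Toy linear regression problem: find w such that X w ~ y
X = np.random.randn(200, 3)
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true

def grad(w):
    # Gradient of the mean squared error loss 0.5 * ||X w - y||^2 / n
    return X.T @ (X @ w - y) / len(X)

w = np.zeros(3)
eta = 0.01                           # learning rate for the bootstrap step only
prev_grad, prev_dw = None, None

for _ in range(100):
    g = grad(w)
    if prev_grad is None:
        dw = -eta * g                # plain gradient descent to get a first dw
    else:
        denom = prev_grad - g
        denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)
        dw = prev_dw * g / denom     # Quickprop: quadratic (secant) step per weight
        dw = np.clip(dw, -1.0, 1.0)  # growth limit to keep the update bounded
    w += dw
    prev_grad, prev_dw = g, dw

print(w)  # should move toward w_true = [1.5, -2.0, 0.5]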

Artificial Intelligence is a matter of Language

09/11/2017 (updated 09/30/2017) | Artificial Intelligence, Complex Systems, Computational Neuroscience, Deep Learning, Generic, Machine Learning, Neural networks, NLP, Philosophy of Mind | No Comments

“The limits of my language mean the limits of my world.” (L. Wittgenstein)   When Jacques Lacan proposed his psychoanalytical theory based on the influence of language on human beings, many listeners were initially astonished. Is language an actual limitation? In popular culture, it isn’t. It cannot be! But, in a world where we keep on working with internal representations, it’s much more than a limitation: it’s a golden cage without a way out. First of all, an internal representation needs an external environment and, under some conditions, it must also be shared by a certain number of people. A jungle, just like any other natural place, is a perfect starting point. However, an internal representation is more than a placeholder. Thinking this way leads to dramatic mistakes. The word “tree” is not an actual tree and never will be, but the word “tree” is an entity that transforms […]

ML Algorithms Addendum: Hebbian Learning

08/21/2017 (updated 09/30/2017) | Artificial Intelligence, Computational Neuroscience, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python | 2 Comments

Hebbian Learning is one of the most famous learning theories, proposed by the Canadian psychologist Donald Hebb in 1949, many years before his results were confirmed through neuroscientific experiments. Artificial Intelligence researchers immediately understood the importance of his theory when applied to artificial neural networks and, even if more efficient algorithms have been adopted in order to solve complex problems, neuroscience continues to find more and more evidence of natural neurons whose learning process is almost perfectly modeled by Hebb’s equations. Hebb’s rule is very simple and can be discussed starting from a high-level structure of a neuron with a single output: We are considering a linear neuron, therefore the output y is a linear combination of its input values x: According to the Hebbian theory, if both pre- and post-synaptic units behave in the same way (firing or remaining in the steady state), the corresponding synaptic weight will be reinforced. Vice […]
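To make the rule concrete, here is a minimal sketch (mine, not the post's code) of a single linear neuron trained with dw = eta * y * x on a toy two-dimensional dataset; a per-step renormalization is added because the plain rule lets the weights grow without bound, and under that assumption the weight vector aligns with the data's principal direction.

import numpy as np

np.random.seed(0)

# Zero-centered toy dataset with a dominant direction along the first axis
X = np.random.randn(1000, 2) @ np.array([[3.0, 0.0], [0.0, 0.5]])
X -= X.mean(axis=0)

w = np.random.randn(2)
eta = 0.01

for x in X:
    y = np.dot(w, x)        # linear neuron: output is a linear combination of the inputs
    w += eta * y * x        # Hebb's rule: strengthen weights when input and output co-activate
    w /= np.linalg.norm(w)  # renormalize to keep the weights bounded

print(w)  # roughly aligned with the dataset's principal direction, close to +/-[1, 0]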

Hodgkin-Huxley spiking neuron model in Python

08/19/2017 (updated 09/30/2017) | Artificial Intelligence, Complex Systems, Computational Neuroscience, Neural networks, Python | No Comments

The Hodgkin-Huxley model (published in 1952 in The Journal of Physiology [1]) is the most famous spiking neuron model (even if there are simpler alternatives, like the “integrate-and-fire” model, which perform quite well). It’s made up of a system of four ordinary differential equations that can be easily integrated using several different tools. The main idea is based on an electrical representation of the neuron, considering only Potassium (K) and Sodium (Na) voltage-gated ion channels (even if it can be extended to include more channels). A schematic representation is shown in the following figure. The elements are: Cm, a capacitance per unit area representing the membrane lipid bilayer (adopted value: 1 µF/cm²); gNa, a voltage-controlled conductance per unit area associated with the Sodium (Na) ion channel (adopted value: 120 mS/cm²); gK, a voltage-controlled conductance per unit area associated with the Potassium (K) ion channel (adopted value: 36 mS/cm²); gl, a conductance per unit area associated with the leak channels (adopted […]
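For a hands-on impression, below is a compact sketch (an illustration under stated assumptions, not the post's implementation) that integrates the four ODEs with SciPy's odeint, using the standard parameter set quoted above plus the usual reversal potentials (ENa = 50 mV, EK = -77 mV, El ≈ -54.4 mV) and a constant stimulating current switched on at t = 10 ms.

import numpy as np
from scipy.integrate import odeint

# Membrane capacitance (µF/cm²), maximal conductances (mS/cm²) and reversal potentials (mV)
Cm, gNa, gK, gl = 1.0, 120.0, 36.0, 0.3
ENa, EK, El = 50.0, -77.0, -54.387

# Voltage-dependent opening/closing rates of the gating variables n, m, h
alpha_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
beta_n  = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)
alpha_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
beta_m  = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
alpha_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
beta_h  = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def I_ext(t):
    # Constant stimulating current (µA/cm²) switched on after 10 ms
    return 10.0 if t > 10.0 else 0.0

def hh(y, t):
    V, n, m, h = y
    I_Na = gNa * m ** 3 * h * (V - ENa)   # Sodium current
    I_K = gK * n ** 4 * (V - EK)          # Potassium current
    I_l = gl * (V - El)                   # Leak current
    dV = (I_ext(t) - I_Na - I_K - I_l) / Cm
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    return [dV, dn, dm, dh]

t = np.linspace(0.0, 50.0, 5000)
y0 = [-65.0, 0.32, 0.05, 0.6]   # resting potential and approximate resting gating values
sol = odeint(hh, y0, t)
print(sol[:, 0].max())          # peak membrane potential; a spike reaches roughly +40 mV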

Time for Emergence

08/15/2007 (updated 09/30/2017) | Artificial Intelligence, Complex Systems, Computational Neuroscience | No Comments

When Sir Tim Berners-Lee had his stroke of genius and invented the World Wide Web, he surely didn’t think about its extraordinary present developments, just as a father normally hopes for his children’s wellbeing but can seldom figure out every particular detail of their future. Such a behavior strikes everybody as a strange kind of myopia, like a bewitched gold-digger who stops in front of a slight glow and forgets its source. Even if there could be a strong temptation to believe it, several studies have shown that its deep nature is quite different and that probably no suitable attitude can avoid it. The first time I studied the concept of Emergence, I was attending a Complex Systems course at university, and my very first thought was one of pure “scientific skepticism”: a sparkling description for an idea that had soon gone up in smoke… When I realized its immense strength, by then lots of its […]

Follow Me

  • linkedin
  • twitter
  • googlescholar
  • facebook
  • github
  • amazon
  • medium

Latest blog posts

  • EphMrA 2019 Switzerland one day meeting 08/30/2019
  • Machine Learning Algorithms – Second Edition 08/28/2018
  • Recommendations and User-Profiling from Implicit Feedbacks 07/10/2018
  • Are recommendations really helpful? A brief non-technical discussion 06/29/2018
  • A book that every data scientist should read 06/22/2018

Subscribe to this blog

Join 2,191 other subscribers

Copyright © 2019 Giuseppe Bonaccorso. All Rights Reserved. Privacy policy - Cookie policy