Giuseppe Bonaccorso

Artificial Intelligence – Machine Learning – Data Science


Category: Python

Machine Learning Algorithms – Second Edition

09/22/2018 | Artificial Intelligence, Books, Convnet, Data Science, Deep Learning, Keras, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, NLP, Python, Scikit-Learn, Spark, Tensorflow | No Comments

The second edition (fully revised, extended, and updated) of Machine Learning Algorithms has been published today and will soon be available through all channels. From the back cover: Machine learning has gained tremendous popularity thanks to the powerful, fast predictions it delivers on large datasets. However, the true force behind its output is a set of complex algorithms, grounded in substantial statistical analysis, that churn through large datasets and generate actionable insight. This second edition of Machine Learning Algorithms walks you through the most prominent recent developments in machine learning algorithms and helps you strengthen and master statistical interpretation across the supervised, semi-supervised, and reinforcement learning areas. Once the core concepts of an algorithm have been exposed, you’ll explore real-world examples based on the most widespread libraries, such as scikit-learn, NLTK, TensorFlow, and Keras. You will discover new topics such as principal component […]

Mastering Machine Learning Algorithms

09/22/2018 | Artificial Intelligence, Convnet, Deep Learning, Keras, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python, Scikit-Fuzzy, Scikit-Learn, Tensorflow | No Comments

Today I’ve published my latest book, “Mastering Machine Learning Algorithms” (in a few days it will be available on all channels). From the back cover: Machine learning is a subset of AI that aims to make modern-day computer systems smarter and more intelligent. The real power of machine learning resides in its algorithms, which enable machines to handle even the most difficult tasks. However, as technology advances and data requirements keep growing, machines will have to become smarter than they are today to meet the overwhelming demand; mastering these algorithms and using them optimally is the need of the hour. Mastering Machine Learning Algorithms is your complete guide to quickly getting to grips with popular machine learning algorithms. You will be introduced to the most widely used algorithms in supervised, unsupervised, and semi-supervised machine learning, and will learn how to use them in the […]

Fundamentals of Machine Learning with Scikit-Learn

09/08/2018 | Artificial Intelligence, Machine Learning, Python, Scikit-Learn | No Comments

A tutorial video (2 hours) derived from the book Machine Learning Algorithms has been released: Fundamentals of Machine Learning with Scikit-Learn. From the notes: As the amount of data continues to grow at an almost incomprehensible rate, being able to understand and process it is becoming a key differentiator for competitive organizations. Machine learning applications are everywhere, from self-driving cars and spam detection to document search, trading strategies, and speech recognition. This makes machine learning well-suited to the present-day era of big data and data science. The main challenge is how to transform data into actionable knowledge. In this course you will learn all the important machine learning algorithms that are commonly used in the field of data science. These algorithms can be used for supervised as well as unsupervised learning, reinforcement learning, and semi-supervised learning. A few famous algorithms covered in this course are: linear regression, logistic regression, […]

Getting Started with NLP and Deep Learning with Python

09/08/2018 | Artificial Intelligence, Deep Learning, Keras, Machine Learning, Neural networks, NLP, Python, Scikit-Learn, Tensorflow | No Comments

A tutorial video (2 hours) derived from the book Machine Learning Algorithms has been released: Getting Started with NLP and Deep Learning with Python. From the notes: As the amount of data continues to grow at an almost incomprehensible rate, being able to understand and process it is becoming a key differentiator for competitive organizations. Machine learning applications are everywhere, from self-driving cars and spam detection to document search, trading strategies, and speech recognition. This makes machine learning well-suited to the present-day era of big data and data science. The main challenge is how to transform data into actionable knowledge. In this course, you’ll be introduced to Natural Language Processing and recommendation systems, which help you run multiple algorithms simultaneously. You’ll also learn about deep learning and TensorFlow. Finally, you’ll see how to create an ML architecture. ISBN: 9781789138894. Link to the publisher page: https://www.packtpub.com/big-data-and-business-intelligence/getting-started-nlp-and-deep-learning-python-video

A glimpse into the Self-Organizing Maps (SOM)

10/22/2017 | Artificial Intelligence, Computational Neuroscience, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python | No Comments

Self-Organizing Maps (SOMs) are neural structures proposed for the first time by the computer scientist T. Kohonen in the early 1980s (that’s why they are also known as Kohonen Networks). Their peculiarities are the ability to auto-cluster data according to the topological features of the samples, and their particular approach to the learning process. Unlike methods such as Gaussian Mixtures or K-Means, a SOM learns through a competitive process. In other words, the model tries to specialize its neurons so that each one responds only to a particular pattern family (which can also be represented by a single input sample, like a handwritten letter). Let’s consider a dataset containing N p-dimensional samples; a suitable SOM is a matrix (other shapes, like toroids, are also possible) containing (K × L) receptors, each made up of p synaptic weights. The resulting structure is a three-dimensional matrix W […]
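
As a quick illustration of the competitive learning loop described above, here is a minimal NumPy sketch (the grid size, decay schedules, and other hyperparameters are assumptions chosen for readability, not values from the post):

```python
import numpy as np

# Minimal SOM sketch: a (K x L) grid of p-dimensional receptors (weights).
rng = np.random.default_rng(1000)
K, L, p = 8, 8, 3
W = rng.uniform(size=(K, L, p))          # weight matrix (K x L x p)
X = rng.uniform(size=(500, p))           # N p-dimensional samples (synthetic)

eta0, sigma0, n_iter = 0.5, 3.0, 1000    # assumed hyperparameters
grid = np.stack(np.meshgrid(np.arange(K), np.arange(L), indexing='ij'), axis=-1)

for t in range(n_iter):
    x = X[rng.integers(len(X))]
    # Competitive step: find the best matching unit (BMU)
    dist = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dist), (K, L))
    # Cooperative step: Gaussian neighborhood around the BMU,
    # shrinking together with the learning rate over time
    eta = eta0 * np.exp(-t / n_iter)
    sigma = sigma0 * np.exp(-t / n_iter)
    d2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    W += eta * h[..., None] * (x - W)
```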

ML Algorithms addendum: Passive Aggressive Algorithms

10/06/2017 (updated 10/08/2017) | Artificial Intelligence, Generic, Machine Learning, Machine Learning Algorithms Addenda, Python, Scikit-Learn | 4 Comments

Passive-Aggressive Algorithms are a family of online learning algorithms (for both classification and regression) proposed by Crammer et al. The idea is very simple and their performance has been proven to be superior to many alternative methods, like the Online Perceptron and MIRA (see the original paper in the reference section). Classification: let’s suppose we have a dataset whose samples are indexed by t, a choice that marks the temporal dimension; in this case, in fact, the samples can keep arriving for an indefinite time. Of course, if they are drawn from the same data-generating distribution, the algorithm will keep learning (probably without large parameter modifications), but if they are drawn from a completely different distribution, the weights will slowly forget the previous one and learn the new distribution. For simplicity, we also assume we’re working with a binary classification based on bipolar labels. Given a weight vector w, the prediction […]
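
As a hedged, runnable illustration of this online setting, the sketch below uses scikit-learn’s PassiveAggressiveClassifier in predict-then-update mode on a synthetic stream (the dataset and the aggressiveness parameter C are assumptions chosen for the example):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import PassiveAggressiveClassifier

# Synthetic binary stream (illustrative); samples arrive one at a time
X, y = make_classification(n_samples=1000, n_features=10, random_state=1000)

pa = PassiveAggressiveClassifier(C=0.05, random_state=1000)
correct = 0
for t, (x_t, y_t) in enumerate(zip(X, y)):
    x_t = x_t.reshape(1, -1)
    if t > 0:
        correct += int(pa.predict(x_t)[0] == y_t)  # predict before updating
    pa.partial_fit(x_t, [y_t], classes=[0, 1])

print(f"Online accuracy: {correct / (len(X) - 1):.3f}")
```

The classifier stays passive when the current sample is classified with a sufficient margin, and updates aggressively only when the hinge loss is positive, which is why the accuracy climbs quickly on a stationary stream.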

Linearly Separable? No? For me it is! A Brief introduction to Kernel Methods

09/28/2017 (updated 09/30/2017) | Artificial Intelligence, Generic, Machine Learning, Machine Learning Algorithms Addenda, Python, Scikit-Learn | 2 Comments

This is a crash introduction to kernel methods, and the best way to start is with a very simple question: is this bidimensional set linearly separable? Of course, the answer is yes, it is. Why? A dataset defined in a subspace Ω ⊆ ℜn is linearly separable if there exists an (n-1)-dimensional hyperplane that is able to separate all points belonging to one class from the others. Let’s consider the problem from another viewpoint, supposing, for simplicity, that we work in 2D. We have defined a hypothetical separating line, and we have also set an arbitrary point O as the origin. Let’s now draw the vector w, orthogonal to the line and pointing into one of the two sub-spaces, and consider the inner product between w and a random point x0: how can we decide whether x0 is on the side pointed to by w? Simple: the inner product is proportional to the cosine of […]
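
A short sketch of why kernels matter in practice (the dataset and hyperparameters are illustrative assumptions): two concentric circles are clearly not linearly separable in the original 2D space, yet an SVM with an RBF kernel handles them easily:

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original 2D space
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=1000)

linear_svc = SVC(kernel='linear')
rbf_svc = SVC(kernel='rbf', gamma=1.0)

print('Linear kernel CV accuracy:', cross_val_score(linear_svc, X, y, cv=5).mean())
print('RBF kernel CV accuracy:', cross_val_score(rbf_svc, X, y, cv=5).mean())
```

The RBF kernel implicitly projects the samples into a higher-dimensional space where a separating hyperplane exists, which is exactly the trick introduced in this post.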

PCA with Rubner-Tavan Networks

09/25/2017 (updated 12/04/2017) | Deep Learning, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python | No Comments

One of the most interesting effects of PCA (Principal Component Analysis) is the decorrelation of the input covariance matrix C, obtained by computing its eigenvectors and operating a base change with the matrix V whose columns are those eigenvectors. The eigenvectors are sorted in descending order of the corresponding eigenvalues, therefore Cpca is a diagonal matrix where the non-null elements are λ1 ≥ λ2 ≥ … ≥ λn. By selecting the top p eigenvalues, it’s possible to perform a dimensionality reduction by projecting the samples onto the new sub-space determined by the top p eigenvectors (Gram-Schmidt orthonormalization can be used if they don’t have unitary length). The standard PCA procedure works with a bottom-up approach, obtaining the decorrelation of C as a final effect; however, it’s also possible to employ neural networks, imposing this condition as an optimization step. One of the most effective models was proposed by Rubner and Tavan (and it’s named after them). Its […]
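
The sketch below is not the Rubner-Tavan network itself, but a quick NumPy check of the decorrelation effect just described, using a plain eigendecomposition (the synthetic correlated dataset is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1000)
# Correlated 2D Gaussian samples (illustrative)
X = rng.multivariate_normal([0.0, 0.0], [[2.0, 1.5], [1.5, 2.0]], size=1000)

C = np.cov(X.T)                          # input covariance matrix
eigvals, V = np.linalg.eigh(C)           # eigh returns ascending eigenvalues
idx = np.argsort(eigvals)[::-1]          # re-sort in descending order
V = V[:, idx]

Z = X @ V                                # base change
C_pca = np.cov(Z.T)                      # diagonal up to numerical noise
print(np.round(C_pca, 4))
```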

Hopfield Networks addendum: Brain-State-in-a-Box model

09/22/2017 (updated 09/30/2017) | Artificial Intelligence, Complex Systems, Deep Learning, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python | No Comments

The Brain-State-in-a-Box (BSB) is a neural model proposed by Anderson, Silverstein, Ritz, and Jones in 1977 that presents very strong analogies with Hopfield networks (read the previous post about them). The structure of the network is similar: recurrent and fully connected, with symmetric weights and non-null self-recurrent connections. All neurons are bipolar (-1 and 1), so with N neurons it’s possible to imagine an N-dimensional hypercube. The main differences with respect to a Hopfield network are the activation function and the dynamics, which in this case are synchronous: all neurons are updated at the same time. The activation function is linear when the weighted input a(i) is bounded between -1 and 1, and it saturates to -1 or 1 outside those boundaries. A stable state of the network is one of the hypercube’s vertices (hence the name). The training rule is an extended Hebbian rule based on the pre-synaptic and post-synaptic […]
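
A minimal sketch of the synchronous dynamics described above (the symmetric weight matrix here is random purely for illustration; in a real model it would be obtained with the extended Hebbian rule):

```python
import numpy as np

rng = np.random.default_rng(1000)
N = 8
A = rng.normal(size=(N, N))
W = 0.5 * (A + A.T)                      # symmetric weights (illustrative)

a = rng.uniform(-0.5, 0.5, size=N)       # initial state inside the box
alpha = 0.1                              # feedback gain (assumed value)

for _ in range(100):
    # Synchronous update with a piecewise-linear (saturating) activation:
    # the state always remains inside the hypercube [-1, 1]^N
    a = np.clip(a + alpha * (W @ a), -1.0, 1.0)

print(a)  # a stable state is typically a vertex of the hypercube
```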

ML Algorithms Addendum: Hopfield Networks

09/20/2017 (updated 09/30/2017) | Artificial Intelligence, Computational Neuroscience, Deep Learning, Generic, Machine Learning, Machine Learning Algorithms Addenda, Neural networks, Python | 2 Comments

Hopfield networks (named after the scientist John Hopfield) are a family of recurrent neural networks with bipolar, thresholded neurons. Even if they have been replaced by more efficient models, they represent an excellent example of associative memory, based on the shaping of an energy surface. Conventionally, the synaptic weights are symmetric (wij = wji) with null self-connections (wii = 0). If we have N neurons, the generic input vector must also be N-dimensional and bipolar (-1 and 1 values). The activation function for each neuron is hence a thresholded sign of the weighted input, where the threshold for each neuron is represented by θ (a common value is 0, which implies a strong symmetry). Contrary to an MLP, in this kind of network there’s no separation between input and output layers: each unit can receive its input value, process it, and output the result. According to the […]
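
A minimal sketch of this associative-memory behavior with Hebbian storage and asynchronous recall (the stored patterns and the number of steps are illustrative assumptions):

```python
import numpy as np

# Two bipolar patterns to store (illustrative)
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
N = patterns.shape[1]

# Hebbian rule: sum of outer products, with a null diagonal
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(x, n_steps=10, theta=0.0):
    x = x.copy()
    for _ in range(n_steps):
        # Asynchronous updates: one random neuron at a time
        for i in np.random.permutation(N):
            x[i] = 1 if W[i] @ x >= theta else -1
    return x

corrupted = patterns[0].copy()
corrupted[:2] *= -1                      # flip two bits
print(recall(corrupted))                 # converges back to the stored pattern
```

Recalling a corrupted pattern amounts to descending the energy surface until one of the stored minima is reached.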
