• Blog
  • Podcasts
  • Books
  • Resume / CV
  • Bonaccorso’s Law
  • Essays
  • Contact
  • Testimonials
  • Disclaimer

Giuseppe Bonaccorso

Artificial Intelligence – Machine Learning – Data Science


CIFAR-10 image classification with Keras ConvNet

08/06/2016 Convnet Deep Learning Keras Machine Learning Theano 5 Comments

(Updated on July, 24th, 2017 with some improvements and Keras 2 style, but still a work in progress)

CIFAR-10 is a dataset of 60,000 small images (32 x 32) subdivided into 10 main categories. Check the web page in the reference list for further information and to download the whole set. Considering current screen resolutions, it's fair to say that these images are little more than icons, and some of them are hard to classify even for human beings.

Using Keras, I’ve modeled a deep convolutional network (VGGNet-like) to attempt a classification. I’m still investigating the best architecture (the CIFAR home page has very interesting references to papers and other results), but I think it can be a good starting point. As the output is a softmax layer, it can also be interesting to evaluate mixed results, for example an image with features belonging both to a dog and a plane, and so forth.
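One way to inspect such mixed predictions is to look at the two highest softmax probabilities per image. This is only a sketch (the `top2` helper and the toy probability vector are mine, not from the original script); it assumes the standard CIFAR-10 label order and a `(n_samples, 10)` array of softmax outputs such as the one returned by `model.predict`:

```python
import numpy as np

LABELS = ['airplane', 'automobile', 'bird', 'cat', 'deer',
          'dog', 'frog', 'horse', 'ship', 'truck']

def top2(probabilities):
    """Return the two most likely labels with their softmax scores
    for each row of a (n_samples, 10) probability array."""
    order = np.argsort(probabilities, axis=1)[:, ::-1]  # indices, descending score
    return [[(LABELS[j], float(p[j])) for j in row[:2]]
            for row, p in zip(order, probabilities)]

# Toy example: a prediction split mainly between 'cat' and 'dog'
p = np.array([[0.01, 0.01, 0.02, 0.45, 0.02, 0.40, 0.02, 0.03, 0.02, 0.02]])
print(top2(p))  # [[('cat', 0.45), ('dog', 0.4)]]
```

When the two top scores are close, as in the toy vector above, the image is exactly the kind of "mixed" case discussed here.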

[Figure: VGGNet-like convnet architecture]

The full script, with no particular image preprocessing step (such as data augmentation) except for normalization between 0 and 1, is the following (you can easily try changing layer features and dimensions):
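The exact script lives in the Gist linked below; as a sketch of the kind of VGG-like model described here (layer sizes, optimizer, and early-stopping patience are my assumptions, not necessarily those of the original script), a Keras 2-style version could look like this:

```python
# Sketch of a VGG-like ConvNet for CIFAR-10.
# Layer sizes and hyperparameters are illustrative; see the linked Gist
# for the actual script used in the post.
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.callbacks import EarlyStopping
from keras.utils import to_categorical

def build_model(input_shape=(32, 32, 3), n_classes=10):
    model = Sequential([
        Conv2D(32, (3, 3), padding='same', activation='relu',
               input_shape=input_shape),
        Conv2D(32, (3, 3), activation='relu'),
        MaxPooling2D(pool_size=(2, 2)),
        Dropout(0.25),
        Conv2D(64, (3, 3), padding='same', activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D(pool_size=(2, 2)),
        Dropout(0.25),
        Flatten(),
        Dense(512, activation='relu'),
        Dropout(0.5),
        Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

def train():
    (X_train, y_train), (X_test, y_test) = cifar10.load_data()
    # Normalization between 0 and 1, as in the post
    X_train = X_train.astype('float32') / 255.0
    X_test = X_test.astype('float32') / 255.0
    Y_train = to_categorical(y_train, 10)
    Y_test = to_categorical(y_test, 10)
    model = build_model()
    model.fit(X_train, Y_train,
              batch_size=128, epochs=100,
              validation_data=(X_test, Y_test),
              callbacks=[EarlyStopping(monitor='val_loss', patience=5)])
    return model
```

Calling `train()` downloads CIFAR-10, normalizes the pixels to [0, 1], and fits the model with early stopping on the validation loss, matching the training behavior described below.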

The validation accuracy reaches 0.79 after 66 epochs, when early stopping terminates the process because the validation loss has stopped decreasing (its final value is about 0.6).

In my experiments, the majority of errors are related to cat-dog or dog-cat confusions (not at all surprising, considering that most of the main features are common to both categories).
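A quick way to verify this kind of error pattern is a plain confusion matrix over the test-set predictions. The helper below is a minimal NumPy sketch (the function and the toy labels are mine; in practice `y_true` and `y_pred` would come from the test set and from `model.predict(...).argmax(axis=1)`):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=10):
    """Rows: true class, columns: predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy example: class 3 ('cat') is often predicted as 5 ('dog') and vice versa
y_true = [3, 3, 3, 5, 5, 5]
y_pred = [3, 5, 5, 5, 3, 5]
cm = confusion_matrix(y_true, y_pred)
print(cm[3, 5], cm[5, 3])  # 2 1
```

Large off-diagonal entries at positions (3, 5) and (5, 3) would confirm that cat-dog confusions dominate the errors.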

The code is always available in this GIST: https://gist.github.com/giuseppebonaccorso/e77e505fc7b61983f7b42dc1250f31c8

References:

  • CIFAR homepage: https://www.cs.toronto.edu/~kriz/cifar.html
  • Image classification benchmark: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html

See also:

Lossy image autoencoders with convolution and deconvolution networks in Tensorflow – Giuseppe Bonaccorso

Autoencoders are a very interesting deep learning application because they allow a consistent dimensionality reduction of an entire dataset with a controllable loss level. The Jupyter notebook for this small project is available on the Github repository: https://github.com/giuseppebonaccorso/lossy_image_autoencoder.


You may also be interested in these articles:

Tags: cifar, classification, cnn, convnet, convolution, image, keras, vggnet

Reuters-21578 text classification with Gensim and Keras

Deep learning, God and Zen emptiness

5 thoughts on “CIFAR-10 image classification with Keras ConvNet”

  1. Wilfo
    05/14/2017 at 4:51

    Excellent blog, but there is a curiosity in your articles: they are repeated.

    Reply
    • Giuseppe Bonaccorso
      05/14/2017 at 18:10

      Where do you see them repeated?

      Reply
  2. Noura
    07/23/2017 at 13:30

    Dear Giuseppe Bonaccorso,

    I have a question: why didn’t you try to plot acc vs. val_acc? It seems that there is some overfitting!! Also, 15 epochs are very few 🙂
    Best regards

    Reply
    • Giuseppe Bonaccorso
      07/23/2017 at 16:35

      Dear Noura,

      Thank you for your comment! You are probably right, because the model is quite small. Normally I prefer considering the validation accuracy as a benchmark, and in this case it doesn’t show an exceptional result (even if the convnet is not very complex). I think it would be a good idea to increase the dropout or add another dropout layer between the convolutional blocks. As for the epochs, of course it’s possible to increase them, but I think (though I’m probably wrong :)) that the risk of overfitting increases.
      I’m going to test these options.

      G.

      Reply
  3. Noura
    07/24/2017 at 9:51

    Yes, usually the risk of overfitting increases, but increasing the epochs will also let you know when your accuracy becomes asymptotically stable 🙂
    Did you change anything in the ConvNet later in order to get a higher accuracy?
    Best regards,
    Noura

    Reply

