As 2017 comes to a close, let's have a look at the best features that the GitHub developer team has introduced this year, ranging from protected branches to improved project management and completely new IDEs.

Looking for a last-minute Christmas gift? Packt Publishing is holding a special holiday sale: Act now to get the eBook version of Machine Learning for OpenCV, OpenCV with Python Blueprints, and OpenCV: Computer Vision Projects with Python for only $5 each!
Limited time only.
Some virtual turkey for all the nerds out there: Enjoy the eBook version of Machine Learning for OpenCV, OpenCV with Python Blueprints, and OpenCV: Computer Vision Projects with Python for only $10 each!
OpenCV's machine learning module provides many important estimators, such as support vector machines (SVMs) and random forest classifiers, but it lacks scikit-learn-style utility functions for interacting with data, scoring a classifier, or performing grid search with cross-validation. In this post I will show you how to wrap an OpenCV classifier as a scikit-learn estimator in five simple steps, so that you can still make use of scikit-learn's utility functions when working with OpenCV.
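To give you a taste of where this is going, here is a minimal sketch (not the exact five steps from the post) of wrapping an OpenCV SVM in the scikit-learn estimator interface. It assumes OpenCV 3.x's cv2.ml module, and the iris dataset at the end is just a stand-in for your own data:

```python
import cv2
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin


class OpenCVSVM(BaseEstimator, ClassifierMixin):
    """Exposes cv2.ml.SVM through the scikit-learn estimator interface."""

    def __init__(self, C=1.0, gamma=0.5):
        # Store hyperparameters under their own names so that
        # BaseEstimator.get_params (and thus cloning and grid search) works.
        self.C = C
        self.gamma = gamma

    def fit(self, X, y):
        # Create and configure the underlying OpenCV classifier.
        self.svm_ = cv2.ml.SVM_create()
        self.svm_.setType(cv2.ml.SVM_C_SVC)
        self.svm_.setKernel(cv2.ml.SVM_RBF)
        self.svm_.setC(self.C)
        self.svm_.setGamma(self.gamma)
        # OpenCV expects float32 features and int32 labels, one row per sample.
        self.svm_.train(X.astype(np.float32), cv2.ml.ROW_SAMPLE,
                        y.astype(np.int32))
        return self

    def predict(self, X):
        # cv2.ml's predict returns a (retval, results) tuple.
        _, y_pred = self.svm_.predict(X.astype(np.float32))
        return y_pred.ravel().astype(np.int32)


if __name__ == '__main__':
    # The wrapper now plugs straight into scikit-learn's utilities.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(OpenCVSVM(C=1.0, gamma=0.5), X, y, cv=5)
    print('mean accuracy: {:.3f}'.format(scores.mean()))
```

Because the wrapper inherits from BaseEstimator and ClassifierMixin, scoring, cross-validation, and grid search all come essentially for free.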
Two years ago today, Packt Publishing Ltd. released OpenCV with Python Blueprints, my first technical book on computer vision and machine learning using the OpenCV library. To celebrate this anniversary, I'm giving away a free copy of the book via Amazon Giveaways! Read on to find out how you can participate.
Michael Beyeler
OpenCV with Python Blueprints
Design and develop advanced computer vision projects using OpenCV with Python
Packt Publishing Ltd., London, England
Paperback: 230 pages
ISBN 978-178528269-0
[GitHub] [Discussion Group] [Free Sample]
Machine Learning for OpenCV is one of 5,000 titles you can currently get for only $10 at www.packtpub.com as part of their big Back to School sale. Grab a copy before it's too late!
To celebrate the release of my new book, I am giving away a free copy of Machine Learning for OpenCV (Paperback, a $44 value).
You can enter the Amazon Giveaway here. Sweepstakes ends 10 August, after which the winner will be drawn randomly from all participants (18y+, must reside in US).
The book is currently trending as #1 New Release in Amazon's Computer Vision category, and initial feedback has been very favorable. Get it while it's hot!
Edit: Fixed the link to the Giveaway ^^.
One exciting application of k-means clustering is the compression of image color spaces. Although true-color images come with a 24-bit color depth (allowing 16,777,216 color variations), any particular image typically uses only a small fraction of these colors, and many of its pixels have similar or identical values. In this post I am going to show you how to use k-means clustering via OpenCV and Python to reduce the color palette of an image to a total of 16 colors.
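As a teaser, the core of the technique fits in a few lines with cv2.kmeans (the file names below are placeholders for your own image):

```python
import cv2
import numpy as np

# Load an image and flatten it into a list of BGR pixels (float32 for kmeans).
img = cv2.imread('input.png')
pixels = img.reshape((-1, 3)).astype(np.float32)

# Stop after 10 iterations or once the cluster centers move by less than 1.0.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
n_colors = 16
_, labels, centers = cv2.kmeans(pixels, n_colors, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Replace every pixel with the center of the cluster it was assigned to.
quantized = centers[labels.ravel()].astype(np.uint8).reshape(img.shape)
cv2.imwrite('output_16colors.png', quantized)
```

Every pixel in the output is now one of only 16 colors, namely the cluster centers found by k-means.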
My new book Machine Learning for OpenCV is now available via Packt Publishing Ltd. The book features 382 pages filled with machine learning and image processing goodness, teaching you how to master key concepts of statistical learning using the Python Anaconda distribution, OpenCV, and scikit-learn.
This will be an introductory book for folks who are already familiar with OpenCV, but now want to dive into the world of machine learning.
The goal is to illustrate the fundamental machine learning concepts using practical, hands-on examples.
As always, all source code is available for free on GitHub. The book is packed with examples showing how to implement different techniques in OpenCV, such as classification, regression, k-NN, support vector machines, decision trees, random forests, Bayes classifiers, k-means clustering, and neural networks.
By the end of this book, you will be ready to take on your own machine learning problems, either by building on the existing source code or by developing your own algorithms from scratch!
Get it while it's hot! In fact, if you act fast you can get the book for $10 on Packt's website as part of their Skill up sale! Or get it on Amazon and leave a review to tell me what you think!
The foreword to the book was written by Ariel Rokem, Senior Data Scientist at the University of Washington eScience Institute, a close colleague, collaborator, and mentor of mine. You can find out what he has to say about this book here.
The outline of the book is as follows:
Stay tuned for example chapters and code samples!
For a limited time only, you can get OpenCV with Python Blueprints and any other eBook or video for only $10 on the Packt website. That's a staggering 77% discount! The flash sale will be going on for only a few days, so if you've been toying with the idea of getting some of these books, make sure to act now.
In the latest edition of this monthly digest series you can learn how your brain activity changes under the influence of psychedelic drugs, why brain games won't actually make you smarter, and how gene therapy might be able to treat patients blinded from retinitis pigmentosa.
You might be wondering what I have been up to, since my blog has been quiet for a while. Don't worry, I'm still here!
Truth is, in the little spare time I seem to have these days, I've been working on a new book: Machine Learning for OpenCV, coming out later this year! You can pre-order it on the official website of Packt Publishing Ltd. or Amazon.
This will be an introductory book for folks who are already familiar with OpenCV, but now want to dive into the world of machine learning. The goal is to illustrate the fundamental machine learning concepts using practical, hands-on examples. As always, all source code will be available for free on GitHub.
The outline of the book is as follows:
Stay tuned for more details!
With the recent machine learning boom, more and more algorithms have become available that perform exceptionally well on a number of tasks. But knowing beforehand which algorithm will perform best on your specific problem is often not possible. If you had infinite time at your disposal, you could just go through all of them and try them out. The following post shows you a better way to do this, step by step, by relying on known techniques from model selection and hyper-parameter tuning.
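A standard instance of those techniques is a grid search with cross-validation; here is a minimal scikit-learn sketch (the SVC classifier, parameter grid, and iris data are placeholders for your own problem):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Hold out a test set that the hyper-parameter search never gets to see.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Compare hyper-parameter settings via 5-fold cross-validation
# on the training set only.
param_grid = {'C': [0.1, 1, 10], 'gamma': [0.01, 0.1, 1]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

# Estimate generalization performance of the best setting on held-out data.
print(search.best_params_)
print('test accuracy: {:.3f}'.format(search.score(X_test, y_test)))
```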
Once a year, researchers meet at the University of Washington (UW) in Seattle as part of the Neural Computation and Engineering Connection to discuss what's new in neuroengineering and computational neuroscience. Organized by the UW Institute for Neuroengineering, this year's topics ranged from brain-computer interfaces to rehabilitative robotics and deep learning, with plenary speakers such as Marcia O'Malley (Rice), Maria Geffen (University of Pennsylvania), and Michael Berry (Princeton).
Nowadays, scientists find themselves spending more and more time building software to support their research. Although time spent programming is often perceived first and foremost as time spent not doing research, most scientists have never been taught how to efficiently write software that is both correct and reusable. That's why the guys behind Software Carpentry have come up with a list of best practices to help you improve your scientific code. After all, to quote Ralph Johnson, before software can be reusable, it first has to be usable.
Let's be honest here—who actually likes messing with partition tables? I know I don't, and every time I have to install a new Unix-based OS alongside a pre-installed Windows partition, I get a little nervous. Therefore, I thought it's never too late to write a step-by-step tutorial on how to install Ubuntu 16.04 alongside Windows 10 without falling for the common pitfalls (Secure Boot, partitioning, missing GRUB entry, etc.).