Practical Python and OpenCV + Case Studies
Conferences:
• CVPR - Computer Vision and Pattern Recognition
• ICCV - International Conference on Computer Vision
• ECCV - European Conference on Computer Vision
• BMVC - British Machine Vision Conference
• ICIP - IEEE International Conference on Image Processing
Beginner Books:
• Programming Computer Vision with Python: Tools and algorithms for analyzing images by Jan Erik Solem
• Practical Computer Vision with SimpleCV: The Simple Way to Make Technology See by Kurt Demaagd, Anthony Oliver, Nathan Oostendorp, and Katherine Scott
• OpenCV Computer Vision with Python by Joseph Howse
• Learning OpenCV: Computer Vision with the OpenCV Library by Gary Bradski and Adrian Kaehler
• OpenCV 2 Computer Vision Application Programming Cookbook by Robert Laganière
• Mastering OpenCV with Practical Computer Vision Projects by Daniel Lélis Baggio, Shervin Emami, David Millán Escrivá, Khvedchenia Ievgen, Jason Saragih, and Roy Shilkrot
• SciPy and NumPy: An Overview for Developers by Eli Bressert
Textbooks:
• Computer Vision: A Modern Approach (2nd Edition) by David A. Forsyth and Jean Ponce
• Computer Vision by Linda G. Shapiro and George C. Stockman
• Computer Vision: Algorithms and Applications by Richard Szeliski
• Algorithms for Image Processing and Computer Vision by J. R. Parker
• Computer Vision: Models, Learning, and Inference by Dr Simon J. D. Prince
• Computer and Machine Vision, Fourth Edition: Theory, Algorithms, Practicalities by E. R. Davies
Python Libraries
When I first became interested in computer vision and image search engines over eight
years ago, I had no idea where to start. I didn’t know which language to use, I didn’t
know which libraries to install, and I didn’t know how to use the libraries I did find. I WISH
there had been a list like this one, detailing the best Python libraries to use for image
processing, computer vision, and image search engines.
This list is by no means complete or exhaustive. It’s just my favorite Python libraries that
I use each and every day for computer vision and image search engines. If you think
I’ve left an important one out, please send me an email at adrian@pyimagesearch.com.
NumPy
NumPy is a library for the Python programming language that (among other things)
provides support for large, multi-dimensional arrays. Why is that important? Using
NumPy, we can express images as multi-dimensional arrays. Representing images as
NumPy arrays is not only computationally and resource efficient, but many other image
processing and machine learning libraries use NumPy array representations as well.
Furthermore, by using NumPy’s built-in high-level mathematical functions, we can
quickly perform numerical analysis on an image.
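For instance, here is a minimal sketch (using a fabricated 100 x 100 RGB image rather than a file loaded from disk) of how an image is just a multi-dimensional NumPy array:

# A minimal sketch: an image is just a multi-dimensional NumPy array.
# Here we fabricate a 100 x 100 RGB "image" instead of loading a real file.
import numpy as np

# 100 x 100 pixels, 3 channels (RGB), 8-bit unsigned integers
image = np.zeros((100, 100, 3), dtype=np.uint8)
image[25:75, 25:75] = (255, 0, 0)  # paint a red square in the center

# NumPy's built-in functions let us analyze the image numerically
print(image.shape)             # (100, 100, 3)
print(image.mean())            # average pixel intensity across all channels
print(image.max(axis=(0, 1)))  # per-channel maximum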
SciPy
Going hand-in-hand with NumPy, we also have SciPy. SciPy adds further support for
scientific and technical computing. One of my favorite sub-packages of SciPy is the
spatial package, which includes a wide range of distance functions and a kd-tree
implementation. Why are distance functions important? When we “describe” an image,
we perform feature extraction. Normally, after feature extraction an image is represented
by a vector (a list) of numbers. To compare two images, we rely on distance functions,
such as the Euclidean distance: we simply compute the distance between their feature
vectors. In the case of the Euclidean distance, the smaller the distance, the more
“similar” the two images are.
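As a quick sketch, here is how two (made-up) feature vectors could be compared with SciPy's distance functions:

# Comparing two hypothetical feature vectors with SciPy's spatial package.
import numpy as np
from scipy.spatial import distance

features_a = np.array([0.25, 0.40, 0.15, 0.20])  # made-up image descriptor
features_b = np.array([0.30, 0.35, 0.10, 0.25])  # made-up image descriptor

d = distance.euclidean(features_a, features_b)
print("Euclidean distance: {:.4f}".format(d))    # smaller = more "similar"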
matplotlib
Simply put, matplotlib is a plotting library. If you’ve ever used MATLAB before, you’ll
probably feel very comfortable in the matplotlib environment. When analyzing images,
whether we’re plotting the overall accuracy of a search system or simply viewing the
image itself, matplotlib is a great tool to have in your toolbox.
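A minimal sketch of viewing an image with matplotlib (the file name example.png is just a placeholder):

# Display an image with matplotlib.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

image = mpimg.imread("example.png")  # read the image as a NumPy array
plt.imshow(image)                    # draw the image on the current axes
plt.axis("off")                      # hide the axis ticks
plt.show()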
PIL and Pillow
These two packages are good at what they do: simple image manipulations, such as
resizing, rotation, etc. If you need to do some quick and dirty image manipulations,
definitely check out PIL and Pillow; but if you’re serious about learning about image
processing, computer vision, and image search engines, I would highly recommend that
you spend your time playing with OpenCV and SimpleCV instead.
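A quick-and-dirty Pillow sketch might look like this (again, example.png is a placeholder file name):

# Simple image manipulations with Pillow.
from PIL import Image

image = Image.open("example.png")
smaller = image.resize((200, 200))  # resize to 200 x 200 pixels
rotated = smaller.rotate(45)        # rotate 45 degrees counter-clockwise
rotated.save("example_rotated.png")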
OpenCV
If NumPy’s main goal is large, efficient, multi-dimensional array representations, then,
by far, the main goal of OpenCV is real-time image processing. This library has been
around since 1999, but it wasn’t until the 2.0 release in 2009 that we saw the incredible
NumPy support. The library itself is written in C/C++, but Python bindings are provided
when running the installer. OpenCV is hands down my favorite computer vision library,
but it does have a learning curve. Be prepared to spend a fair amount of time learning
the intricacies of the library and browsing the docs (which have gotten substantially
better now that NumPy support has been added). If you are still testing the computer
vision waters, you might want to check out the SimpleCV library mentioned below,
which has a substantially smaller learning curve.
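As a small taste of OpenCV's NumPy-backed interface, here is a sketch that loads an image, converts it to grayscale, and blurs it (example.png is a placeholder file name):

# Basic OpenCV operations; images are returned as NumPy arrays in BGR order.
import cv2

image = cv2.imread("example.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # 5 x 5 Gaussian blur

cv2.imshow("Blurred", blurred)  # display the result in a window
cv2.waitKey(0)                  # wait for a keypress before exiting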
SimpleCV
The goal of SimpleCV is to get you involved in image processing and computer vision
as soon as possible. And they do a great job at it. The learning curve is substantially
smaller than that of OpenCV, and as their tagline says, “it’s computer vision made
easy”. That all said, because the learning curve is smaller, you don’t have access to as
many of the raw, powerful techniques supplied by OpenCV. If you’re just testing the
waters, definitely try this library out.
mahotas
Mahotas, just like OpenCV and SimpleCV, relies on NumPy arrays. Much of the
functionality implemented in Mahotas can be found in OpenCV and/or SimpleCV, but in
some cases, the Mahotas interface is just easier to use, especially when it comes to
their features package.
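As one small example of the features package, Haralick texture features can be computed directly on a grayscale NumPy array; the image below is a synthetic gradient rather than a real photograph:

# Haralick texture features with mahotas on a synthetic grayscale image.
import numpy as np
import mahotas

# fabricate an 8-bit grayscale image with a simple gradient texture
image = np.tile(np.arange(64, dtype=np.uint8), (64, 1))

haralick = mahotas.features.haralick(image)
print(haralick.shape)  # one row of texture statistics per co-occurrence direction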
scikit-learn
Alright, you got me, Scikit-learn isn’t an image processing or computer vision library —
it’s a machine learning library. That said, you can’t have advanced computer vision
techniques without some sort of machine learning, whether it be clustering, vector
quantization, classification models, etc. Scikit-learn also includes a handful of image
feature extraction functions.
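For instance, here is a minimal clustering sketch using random stand-in feature vectors (a real pipeline would use descriptors extracted from your images):

# Cluster feature vectors with scikit-learn's MiniBatchKMeans.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

features = np.random.rand(500, 32)  # 500 hypothetical 32-dimensional descriptors

kmeans = MiniBatchKMeans(n_clusters=8, random_state=42)
labels = kmeans.fit_predict(features)
print(labels[:10])  # cluster assignment for the first 10 feature vectors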
scikit-image
Scikit-image is fantastic, but you have to know what you are doing to effectively use this
library, and I don’t mean this in a “there is a steep learning curve” type of way. The
learning curve is actually quite low, especially if you check out their gallery. The
algorithms included in scikit-image (I would argue) are closer to the state-of-the-art in
computer vision. New algorithms right from academic papers can be found in
scikit-image, but in order to (effectively) use these algorithms, you need to have developed
some rigor and understanding in the computer vision field. If you already have some
experience in computer vision and image processing, definitely check out scikit-image;
otherwise, I would continue working with OpenCV and SimpleCV to start.
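As one example of what the library offers, here is a sketch that computes a Local Binary Pattern texture representation of a color image (example.png is a placeholder file name):

# Local Binary Patterns with scikit-image.
from skimage import io, color, feature

image = color.rgb2gray(io.imread("example.png"))  # load and convert to grayscale
lbp = feature.local_binary_pattern(image, P=24, R=3, method="uniform")
print(lbp.shape)  # same spatial dimensions as the input image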
ilastik
I’ll be honest. I’ve never used ilastik. But through my experiences at computer vision
conferences, I’ve met a fair number of people who do, so I felt compelled to include it in
this list. Ilastik is mainly for image segmentation and classification and is especially
geared towards the scientific community.
pprocess
Extracting features from images is an inherently parallelizable task. You can reduce the
amount of time it takes to extract features from an entire dataset by using a
multithreading/multiprocessing library. My favorite is pprocess, because it suits the
simple use cases I need it for, but you can use your favorite.
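Since pprocess itself may not be installed, here is the same idea sketched with Python's built-in multiprocessing module instead; extract_features is a hypothetical placeholder for your own feature extraction function:

# Parallel feature extraction sketched with the standard multiprocessing module.
from multiprocessing import Pool

def extract_features(image_path):
    # placeholder: load the image at image_path and return its feature vector
    return [float(len(image_path))]

if __name__ == "__main__":
    image_paths = ["image_%d.png" % i for i in range(100)]
    with Pool(processes=4) as pool:
        all_features = pool.map(extract_features, image_paths)
    print(len(all_features))  # one feature vector per image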
h5py
The h5py library is the de facto standard in Python for storing large numerical datasets.
The best part? It provides support for NumPy arrays. So, if you have a large dataset
represented as a NumPy array that won’t fit into memory, or if you want efficient,
persistent storage of NumPy arrays, then h5py is the way to go. One of my favorite
techniques is to store my extracted features in an h5py dataset and then apply
scikit-learn’s MiniBatchKMeans to cluster the features. The dataset never has to be
loaded off disk all at once, and the memory footprint is extremely small, even for
thousands of feature vectors.
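Here is a sketch of that workflow, assuming features have already been extracted into a NumPy array (the random data below is just a stand-in):

# Persist feature vectors with h5py, then cluster them in mini-batches.
import h5py
import numpy as np
from sklearn.cluster import MiniBatchKMeans

features = np.random.rand(10000, 64).astype("float32")  # stand-in feature vectors

# write the features to disk as an HDF5 dataset
with h5py.File("features.hdf5", "w") as f:
    f.create_dataset("features", data=features)

# re-open the file and cluster chunk by chunk, never loading everything at once
kmeans = MiniBatchKMeans(n_clusters=32, random_state=42)
with h5py.File("features.hdf5", "r") as f:
    dataset = f["features"]
    for start in range(0, dataset.shape[0], 1000):
        kmeans.partial_fit(dataset[start:start + 1000])

print(kmeans.cluster_centers_.shape)  # (32, 64)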