Making Proper Histograms with Numpy and Matplotlib

Often I find myself needing to visualise an array, such as a bunch of pixel or audio channel values. A nice way to do this is via a histogram.

When building histograms you have two options: numpy’s histogram or matplotlib’s hist. As you may expect, numpy is faster when you just need the data rather than the visualisation, while matplotlib makes it easier to get a nice bar chart.

So that I remember, here is a quick post with an example.
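As a quick sanity check on that speed claim, here is a sketch using timeit on random stand-in data (exact timings will vary by machine; the gap comes from plt.hist also drawing the bars):

```python
import timeit

import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no window needed
import matplotlib.pyplot as plt

data = np.random.default_rng(1).integers(0, 256, size=100_000)
bins = np.arange(257)  # one bin per 8-bit value

t_np = timeit.timeit(lambda: np.histogram(data, bins=bins), number=20)
t_plt = timeit.timeit(lambda: plt.hist(data, bins=bins), number=20)

print(f"numpy: {t_np:.3f}s, matplotlib: {t_plt:.3f}s")  # numpy comes out faster
```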

# First import numpy and matplotlib
import numpy as np
import matplotlib.pyplot as plt

I started with a data volume of size 256 x 256 x 8 x 300, corresponding to 300 frames of video at a resolution of 256 by 256 with 8 different image processing operations. The data values were 8-bit, i.e. 0 to 255. I wanted to visualise the distribution of pixel values within this data volume.

Using numpy, you can pass in the whole data volume and it will flatten the arrays within the function. Hence, to get a count for each integer value from 0 to 255 (which needs 257 bin edges) you can run:

values, bins = np.histogram(data_vol, bins=np.arange(257))

You can then use matplotlib’s bar chart to plot this:

plt.bar(bins[:-1], values, width=1)
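Putting those two steps together, here is a self-contained sketch (random values stand in for the video volume, which is an assumption on my part):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Stand-in for the 256 x 256 x 8 x 300 volume: a smaller block of 8-bit values
rng = np.random.default_rng(42)
data_vol = rng.integers(0, 256, size=(64, 64, 2, 10), dtype=np.uint8)

# np.histogram flattens the input itself; 257 edges give one bin per value
values, bins = np.histogram(data_vol, bins=np.arange(257))
print(values.shape)  # (256,)

# bins holds the edges, so plot counts against the left edge of each bin
plt.bar(bins[:-1], values, width=1)
```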

Using matplotlib’s hist function, we need to flatten the data first:

results = plt.hist(data_vol.ravel(), bins=np.arange(257))
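A quick way to convince yourself the two routes agree is to compare the counts plt.hist returns with numpy’s (again on stand-in random data):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data_vol = rng.integers(0, 256, size=(32, 32, 4), dtype=np.uint8)

# Same bin edges for both routes
np_values, np_bins = np.histogram(data_vol, bins=np.arange(257))
plt_values, plt_bins, _ = plt.hist(data_vol.ravel(), bins=np.arange(257))

print(np.array_equal(np_values, plt_values))  # True
print(np.array_equal(np_bins, plt_bins))  # True
```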

The result of both approaches is the same. If we are being more professional, we can also use more of matplotlib’s functionality:

fig, ax = plt.subplots()
results = ax.hist(data_vol.ravel(), bins=np.arange(257))
ax.set_title('Pixel Value Histogram')
ax.set_xlabel('Pixel Value')

Things get a little more tricky when we start changing our bin sizes. A good run through is found here. In this case, the slower Matplotlib function becomes easier:

fig, ax = plt.subplots()
results = ax.hist(
    data_vol.ravel(),
    bins=np.linspace(0, 255, 17)
)
ax.set_title('Pixel Value Histogram (4-bit)')
ax.set_xlabel('Pixel Value')

Using 16 bins (17 edges in np.linspace) provides us with 4-bit quantisation. You can see here that we could represent a large chunk of the data with just three values (if we subtract 128: <0 = -1, 0 = 0 and >0 = 1).
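The quantisation itself can also be done directly with np.digitize; this is a sketch (the sample values are made up) showing how 17 edges give 16 codes:

```python
import numpy as np

values = np.array([0, 7, 100, 128, 200, 255])
edges = np.linspace(0, 255, 17)  # 17 edges -> 16 bins
# Digitize against the interior edges so codes run from 0 to 15
codes = np.digitize(values, edges[1:-1])
print(codes)  # [ 0  0  6  8 12 15]
```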

Playing Around with Retinal-Cortex Mappings

Here is a little notebook where I play around with converting images from a polar representation to a Cartesian representation. This is similar to the way our bodies map information from the retina onto the early visual areas.

Mapping from the visual field (A) to the thalamus (B) to the cortex (C)

These ideas are based on information we have about how the visual field is mapped to the cortex. As can be seen in the above figures, we view the world in a polar sense and this is mapped to a two-dimensional grid of values in the lower cortex.

You can play around with mappings between polar and Cartesian space at this website.

To develop some methods in Python I’ve leaned heavily on this great blogpost by Amnon Owed. It provides some methods in Processing that I have adapted for my purposes.

Amnon suggests using a look-up table to speed up the mapping: we precompute a table that maps each co-ordinate in polar space to the equivalent co-ordinate in Cartesian space, then apply the table to transform the image data.

import math
import numpy as np
import matplotlib.pyplot as plt

def calculateLUT(radius):
    """Precalculate a lookup table with the image maths."""
    LUT = np.zeros((radius, 360, 2), dtype=np.int16)
    # Iterate around angles of field of view
    for angle in range(0, 360):
        # Iterate over diameter
        for r in range(0, radius):
            theta = math.radians(angle)
            # Take angles from the vertical
            col = math.floor(r*math.sin(theta))
            row = math.floor(r*math.cos(theta))
            # rows and cols will be +ve and -ve representing
            # at offset from an origin
            LUT[r, angle] = [row, col]
    return LUT

def convert_image(img, LUT):
    """Convert image from Cartesian to polar co-ordinates.

    img is a numpy 2D array having shape (height, width)
    LUT is a numpy array having shape (radius, 360, 2)
    storing [row, col] co-ords corresponding to [r, angle]
    """
    # Use centre of image as origin
    centre_row = img.shape[0] // 2
    centre_col = img.shape[1] // 2
    # Determine the largest radius
    if centre_row > centre_col:
        radius = centre_col
    else:
        radius = centre_row
    output_image = np.zeros(shape=(radius, 360))
    # Iterate around angles of field of view
    for angle in range(0, 360):
        # Iterate over radius
        for r in range(0, radius):
            # Get mapped x, y
            (row, col) = tuple(LUT[r, angle])
            # Translate origin to centre
            m_row = centre_row - row
            m_col = col+centre_col
            output_image[r, angle] = img[m_row, m_col]
    return output_image

def calculatebackLUT(max_radius):
    """Precalculate a lookup table for mapping from x,y to polar."""
    LUT = np.zeros((max_radius*2, max_radius*2, 2), dtype=np.int16)
    # Iterate around x and y
    for row in range(0, max_radius*2):
        for col in range(0, max_radius*2):
            # Translate to centre
            m_row = max_radius - row
            m_col = col - max_radius
            # Calculate angle w.r.t. y axis
            angle = math.atan2(m_col, m_row)
            # Convert to degrees
            degrees = math.degrees(angle)
            # Calculate radius
            radius = math.sqrt(m_row*m_row+m_col*m_col)
            # print(angle, radius)
            LUT[row, col] = [int(radius), int(degrees)]
    return LUT

def build_mask(img, backLUT, ticks=20):
    """Build a mask showing polar co-ord system."""
    overlay = np.zeros(shape=img.shape, dtype=bool)
    # Adjust origins: backLUT has its origin at (max_radius, max_radius)
    row_adjust = backLUT.shape[0]//2 - img.shape[0]//2
    col_adjust = backLUT.shape[1]//2 - img.shape[1]//2
    for row in range(0, img.shape[0]):
        for col in range(0, img.shape[1]):
            m_row = row + row_adjust
            m_col = col + col_adjust
            (r, theta) = backLUT[m_row, m_col]
            if (r % ticks) == 0 or (theta % ticks) == 0:
                overlay[row, col] = 1
    # Mask everything except the grid lines
    masked = np.ma.masked_where(overlay == 0, overlay)
    return masked

First build the backwards and forwards look-up tables. We’ll set a max radius of 300 pixels, allowing us to map images of 600 by 600.

backLUT = calculatebackLUT(300)
forwardLUT = calculateLUT(300)

Now we’ll try this out with some test images from skimage. We’ll normalise these to a range of 0 to 255.

from import chelsea, astronaut, coffee

img = chelsea()[...,0] / 255.

masked = build_mask(img, backLUT, ticks=50)
out_image = convert_image(img, forwardLUT)
fig, ax = plt.subplots(2, 1, figsize=(6,8))
ax[0].imshow(img,, interpolation='bicubic')
ax[0].imshow(masked,, alpha=0.5)
ax[1].imshow(out_image,, interpolation='bicubic')

img = astronaut()[...,0] / 255.

masked = build_mask(img, backLUT, ticks=50)
out_image = convert_image(img, forwardLUT)
fig, ax = plt.subplots(2, 1, figsize=(6,8))
ax[0].imshow(img,, interpolation='bicubic')
ax[0].imshow(masked,, alpha=0.5)
ax[1].imshow(out_image,, interpolation='bicubic')

img = coffee()[...,0] / 255.

masked = build_mask(img, backLUT, ticks=50)
out_image = convert_image(img, forwardLUT)
fig, ax = plt.subplots(2, 1, figsize=(6,8))
ax[0].imshow(img,, interpolation='bicubic')
ax[0].imshow(masked,, alpha=0.5)
ax[1].imshow(out_image,, interpolation='bicubic')

In these methods, the positive y axis is the reference for the angle, which extends clockwise.

Now, within the brain the visual field is actually divided in two. As such, each hemisphere gets half of the bottom image (0-180 to the right hemisphere and 180-360 to the left hemisphere).
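With the polar image laid out as (radius, angle-in-degrees), that split is just column slicing. Here is a sketch (the array is a stand-in for convert_image’s output):

```python
import numpy as np

# Stand-in polar image: rows are radius, columns are angle in degrees
polar_img = np.zeros((300, 360))

to_right_hemisphere = polar_img[:, 0:180]   # angles 0-180
to_left_hemisphere = polar_img[:, 180:360]  # angles 180-360

# The cortical map is also rotated by 90 degrees; k=-1 rotates clockwise
rotated = np.rot90(to_right_hemisphere, k=-1)
print(to_right_hemisphere.shape, rotated.shape)  # (300, 180) (180, 300)
```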

Also within the brain, the map on the cortex is rotated clockwise by 90 degrees, such that the angle from the horizontal eye line lies along the x-axis. The brain receives information from the fovea at a high resolution and information from the periphery at a lower resolution.

The short Jupyter Notebook can be found here.

Extra: proof this occurs in the human brain!

Face Detection with the Raspberry Pi Camera Board

I have a very basic face detection routine running with the Raspberry Pi camera board.

To do this I used Robidouille’s library functions (see previous post). I then modified the raspicam_cv.c example to use the face detection routine from Learning OpenCV. There were some tweaks so I will post the code below. You also need to modify the makefile to include the OpenCV object detection libraries.


/*
 Modified from code supplied by Emil Valkov (Raspicam libraries)
 and Noah Kuntz (Face detection)
*/

#include <cv.h>
#include <highgui.h>

#include "RaspiCamCV.h"

int main(int argc, const char** argv){

//Initialise Camera object
 RaspiCamCvCapture * capture = raspiCamCvCreateCameraCapture(0); // Index doesn't really matter

 //initialise memory storage for Haar objects
 CvMemStorage* storage = cvCreateMemStorage(0);

 //Set up Haar Cascade - need quoted file in directory of program
 CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad( "haarcascade_frontalface_alt2.xml", 0, 0, 0);

 //Set scale down factor
 double scale = 1.8;

//Set colours for multiple faces
 static CvScalar colors[] = { {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}}, {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}} };

 //Open Window for Viewing
 cvNamedWindow("RaspiCamTest", 1);

 //Loop for frames - while no keypress
 do {
 //Capture a frame
 IplImage* img = raspiCamCvQueryFrame(capture);

 //Clear memory object
 cvClearMemStorage( storage );

 //Initialise grayscale image
 IplImage* gray = cvCreateImage( cvSize(img->width,img->height), 8, 1 );

 //Shrink image
 IplImage* small_img = cvCreateImage(cvSize( cvRound(img->width/scale), cvRound(img->height/scale)), 8, 1 );

 //Convert to gray
 cvCvtColor( img, gray, CV_BGR2GRAY );

 //Resize to small image size
 cvResize( gray, small_img, CV_INTER_LINEAR );

 //Finished with gray image - release memory
 cvReleaseImage( &gray );

 //Vertical flip image as camera is upside down
 cvFlip(small_img, NULL, -1);

 cvEqualizeHist( small_img, small_img );

 // Detect objects - last arg is max size -test parameters to optimise
 //Will detect biggest face with 6th arg as 4
 CvSeq* objects = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 4, 4, cvSize( 40, 50 ), cvSize(small_img->width, small_img->height));

 int i;
 for(i = 0; i < (objects ? objects->total : 0); i++ ){
  CvRect* r = (CvRect*)cvGetSeqElem( objects, i );

  //My compiler doesn't seem to cope with default arguments - all args need to be specified
  //As r is a pointer, members are accessed with '->' rather than '.'
  cvRectangle(small_img, cvPoint(r->x,r->y), cvPoint(r->x+r->width,r->y+r->height), colors[i%8], 2, 8, 0);
 }

 cvShowImage("RaspiCamTest", small_img);
 //cvReleaseImage( &gray );
 cvReleaseImage( &small_img );

 } while (cvWaitKey(10) < 0);

 //Close window
 cvDestroyWindow("RaspiCamTest");

 //Release memory
 raspiCamCvReleaseCapture(&capture);
 cvReleaseMemStorage(&storage);

 return 0;
}


OBJS = objs

CFLAGS_OPENCV = -I/usr/include/opencv
LDFLAGS2_OPENCV = -lopencv_highgui -lopencv_core -lopencv_legacy -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_imgproc -lopencv_objdetect

USERLAND_ROOT = $(HOME)/git/raspberrypi/userland
CFLAGS_PI = \
 -I$(USERLAND_ROOT)/host_applications/linux/libs/bcm_host/include \
 -I$(USERLAND_ROOT)/host_applications/linux/apps/raspicam \
 -I$(USERLAND_ROOT)/interface/vcos/pthreads \
 -I$(USERLAND_ROOT)/interface/vmcs_host/linux \
 -I$(USERLAND_ROOT)/interface/mmal

LDFLAGS_PI = -L$(USERLAND_ROOT)/build/lib -lmmal_core -lmmal -lmmal_util -lvcos -lbcm_host



CFLAGS = $(CFLAGS_OPENCV) $(CFLAGS_PI)

ifeq ($(BUILD_TYPE), debug)
CFLAGS += -g
endif
ifeq ($(BUILD_TYPE), release)
CFLAGS += -O3
endif

LDFLAGS2 = $(LDFLAGS2_OPENCV) $(LDFLAGS_PI) -lX11 -lXext -lrt -lstdc++

RASPICAMCV_OBJS = \
 $(OBJS)/RaspiCamControl.o \
 $(OBJS)/RaspiCLI.o \
 $(OBJS)/RaspiCamCV.o

RASPICAMTEST_OBJS = \
 $(OBJS)/RaspiCamTest.o

TARGETS = libraspicamcv.a raspicamtest

all: $(TARGETS)

$(OBJS)/%.o: %.c
 gcc -c $(CFLAGS) $< -o $@

$(OBJS)/%.o: $(USERLAND_ROOT)/host_applications/linux/apps/raspicam/%.c
 gcc -c $(CFLAGS) $< -o $@

libraspicamcv.a: $(RASPICAMCV_OBJS)
 ar rcs libraspicamcv.a $+

raspicamtest: $(RASPICAMTEST_OBJS) libraspicamcv.a
 gcc $(LDFLAGS) $+ $(LDFLAGS2) -L. -lraspicamcv -o $@

clean:
 rm -f $(OBJS)/* $(TARGETS)

-include $(OBJS)/*.d

Hacker News Update: Raspicam & WeMo

A quick update on my recent discoveries.


I now have a Raspberry Pi Camera Board (Raspicam)!

There is a brilliant combo deal on at the moment allowing you to buy a Raspicam, Model A + 4GB SD card for about £35 (including VAT + shipping!)! That’s £35 for a device that can run OpenCV with a camera capable of 30fps at HD resolutions. I will leave you to think about that for a moment.

The downside is that the software is still not quite there. The Raspicam couples directly to the Raspberry Pi; this means it is not (at the moment) available as a standard USB video device (e.g. /dev/video0 on Linux). Now most Linux software and packages like SimpleCV work based on a standard USB video device. This means as of 24 October 2013 you cannot use SimpleCV with the Raspicam.

However, not to fret! The Internet is on it, and I imagine that we will see better drivers for the Raspicam from the official development communities very soon. In the meantime, the raspicam_cv library used in the previous posts offers a workaround.

WeMo and Python

As you will see from the previous posts, I have been using IFTTT as a make-shift interface between my Raspberry Pi and my WeMo Motion detector and switch. This morning, though, I found a Python module that appears to enable you to control the Switch and listen to motion events via Python. Hurray!

The module is called ouimeaux (there is a French theme this week). Details can be found here: link.

Very soon I hope to adapt my existing code to control my Hue lights based on motion events (e.g. turn on when someone walks in the room, turn off when no motion). Watch this space.