Playing Around with Retinal-Cortex Mappings

Here is a little notebook where I play around with converting images between polar and Cartesian representations. This is similar to the way our bodies map information from the retina onto the early visual areas.

Mapping from the visual field (A) to the thalamus (B) to the cortex (C)

These ideas are based on what we know about how the visual field is mapped to the cortex. As can be seen in the figures above, we view the world in a polar sense, and this is mapped to a two-dimensional grid of values in the early visual cortex.

You can play around with mappings between polar and Cartesian space at this website.

To develop some methods in Python I’ve leaned heavily on this great blog post by Amnon Owed, which provides some methods in Processing that I have adapted for my purposes.

Amnon suggests using a look-up table to speed up the mapping: we precompute, for each co-ordinate in polar space, the equivalent co-ordinate in Cartesian space, and then use the table to transform the image data.

import math
import numpy as np
import matplotlib.pyplot as plt

def calculateLUT(radius):
    """Precalculate a lookup table with the image maths."""
    LUT = np.zeros((radius, 360, 2), dtype=np.int16)
    # Iterate around angles of field of view
    for angle in range(0, 360):
        # Iterate over radius
        for r in range(0, radius):
            theta = math.radians(angle)
            # Take angles from the vertical
            col = math.floor(r*math.sin(theta))
            row = math.floor(r*math.cos(theta))
            # rows and cols will be +ve and -ve, representing
            # an offset from the origin
            LUT[r, angle] = [row, col]
    return LUT

def convert_image(img, LUT):
    """Convert an image from Cartesian to polar co-ordinates.

    img is a numpy 2D array having shape (height, width)
    LUT is a numpy array having shape (radius, 360, 2)
    storing [row, col] co-ords corresponding to [r, angle]
    """
    # Use centre of image as origin
    centre_row = img.shape[0] // 2
    centre_col = img.shape[1] // 2
    # Use the largest radius that fits within the image
    if centre_row > centre_col:
        radius = centre_col
    else:
        radius = centre_row
    output_image = np.zeros(shape=(radius, 360))
    # Iterate around angles of field of view
    for angle in range(0, 360):
        # Iterate over radius
        for r in range(0, radius):
            # Get mapped x, y
            (row, col) = tuple(LUT[r, angle])
            # Translate origin to centre
            m_row = centre_row - row
            m_col = col+centre_col
            output_image[r, angle] = img[m_row, m_col]
    return output_image

def calculatebackLUT(max_radius):
    """Precalculate a lookup table for mapping from x,y to polar."""
    LUT = np.zeros((max_radius*2, max_radius*2, 2), dtype=np.int16)
    # Iterate around x and y
    for row in range(0, max_radius*2):
        for col in range(0, max_radius*2):
            # Translate to centre
            m_row = max_radius - row
            m_col = col - max_radius
            # Calculate angle w.r.t. y axis
            angle = math.atan2(m_col, m_row)
            # Convert to degrees
            degrees = math.degrees(angle)
            # Calculate radius
            radius = math.sqrt(m_row*m_row+m_col*m_col)
            # print(angle, radius)
            LUT[row, col] = [int(radius), int(degrees)]
    return LUT

def build_mask(img, backLUT, ticks=20):
    """Build a mask showing polar co-ord system."""
    overlay = np.zeros(shape=img.shape, dtype=bool)
    # Adjust the origin - backLUT has its origin at (max_radius, max_radius)
    row_adjust = backLUT.shape[0]//2 - img.shape[0]//2
    col_adjust = backLUT.shape[1]//2 - img.shape[1]//2
    for row in range(0, img.shape[0]):
        for col in range(0, img.shape[1]):
            m_row = row + row_adjust
            m_col = col + col_adjust
            (r, theta) = backLUT[m_row, m_col]
            if (r % ticks) == 0 or (theta % ticks) == 0:
                overlay[row, col] = 1
    masked = == 0, overlay)
    return masked

First build the backwards and forwards look-up tables. We’ll set a max radius of 300 pixels, allowing us to map images of 600 by 600.

backLUT = calculatebackLUT(300)
forwardLUT = calculateLUT(300)

Now we’ll try this out with some test images from skimage. We’ll normalise these to a range of 0 to 1.

from import chelsea, astronaut, coffee

img = chelsea()[...,0] / 255.

masked = build_mask(img, backLUT, ticks=50)
out_image = convert_image(img, forwardLUT)
fig, ax = plt.subplots(2, 1, figsize=(6,8))
ax[0].imshow(img,, interpolation='bicubic')

ax[0].imshow(masked,, alpha=0.5)

ax[1].imshow(out_image,, interpolation='bicubic')

img = astronaut()[...,0] / 255.

masked = build_mask(img, backLUT, ticks=50)
out_image = convert_image(img, forwardLUT)
fig, ax = plt.subplots(2, 1, figsize=(6,8))
ax[0].imshow(img,, interpolation='bicubic')
ax[0].imshow(masked,, alpha=0.5)
ax[1].imshow(out_image,, interpolation='bicubic')

img = coffee()[...,0] / 255.

masked = build_mask(img, backLUT, ticks=50)
out_image = convert_image(img, forwardLUT)
fig, ax = plt.subplots(2, 1, figsize=(6,8))
ax[0].imshow(img,, interpolation='bicubic')
ax[0].imshow(masked,, alpha=0.5)
ax[1].imshow(out_image,, interpolation='bicubic')

In these methods, the angle is measured from the positive y-axis and increases clockwise.
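As a quick check of that convention, we can print a few entries from the forward look-up table built above (the radius of 100 is chosen arbitrarily):

# Check the angle convention of the forward look-up table
print(forwardLUT[100, 0])    # [100   0] - angle 0 points straight up (+y)
print(forwardLUT[100, 90])   # [  0 100] - angle 90 points right (+x), i.e. clockwise
print(forwardLUT[100, 180])  # [-100   0] - angle 180 points straight down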

Now, within the brain the visual field is actually divided in two. As such, each hemisphere gets half of the polar (bottom) image: 0-180 degrees goes to the right hemisphere and 180-360 degrees to the left hemisphere.

Also within the brain, the map on the cortex is rotated clockwise by 90 degrees, such that the angle from the horizontal eye line runs along the x-axis. The brain receives information from the fovea at high resolution and information from the periphery at a lower resolution.
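As a rough sketch of those two ideas (this is me extending the code above, not part of the original mapping functions), we can split the polar image into the two hemifields and rotate each clockwise so the angle runs along the x-axis:

def split_hemifields(polar_img):
    """Split a (radius, 360) polar image into right/left hemifield maps.

    Angles 0-179 go to the right hemisphere and 180-359 to the left,
    and each map is rotated clockwise by 90 degrees so the angle axis
    becomes the x-axis.
    """
    right = np.rot90(polar_img[:, 0:180], k=-1)
    left = np.rot90(polar_img[:, 180:360], k=-1)
    return right, left

# e.g. right_map, left_map = split_hemifields(out_image)

The foveal/peripheral resolution difference isn’t modelled here; that would need a non-uniform sampling along r.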

The short Jupyter Notebook can be found here.

Extra: proof this occurs in the human brain!


Face Detection with the Raspberry Pi Camera Board

I have a very basic face detection routine running with the Raspberry Pi camera board.

To do this I used Robidouille’s library functions (see previous post). I then modified the raspicam_cv.c example to use the face detection routine from Learning OpenCV. There were some tweaks, so I will post the code below. You also need to modify the makefile to include the OpenCV object detection libraries.


Modified from code supplied by Emil Valkov (Raspicam libraries) and Noah Kuntz (Face detection)



#include <cv.h>
#include <highgui.h>

#include "RaspiCamCV.h"

int main(int argc, const char** argv){

//Initialise Camera object
 RaspiCamCvCapture * capture = raspiCamCvCreateCameraCapture(0); // Index doesn't really matter

 //initialise memory storage for Haar objects
 CvMemStorage* storage = cvCreateMemStorage(0);

 //Set up Haar Cascade - need quoted file in directory of program
 CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad( "haarcascade_frontalface_alt2.xml", 0, 0, 0);

 //Set scale down factor
 double scale = 1.8;

//Set colours for multiple faces
 static CvScalar colors[] = { {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}}, {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}} };

 //Open Window for Viewing
 cvNamedWindow("RaspiCamTest", 1);

 //Loop for frames - while no keypress
 do {
 //Capture a frame
 IplImage* img = raspiCamCvQueryFrame(capture);

 //Clear memory object
 cvClearMemStorage( storage );

 //Initialise grayscale image
 IplImage* gray = cvCreateImage( cvSize(img->width,img->height), 8, 1 );

 //Shrink image
 IplImage* small_img = cvCreateImage(cvSize( cvRound(img->width/scale), cvRound(img->height/scale)), 8, 1 );

 //Convert to gray
 cvCvtColor( img, gray, CV_BGR2GRAY );

 //Resize to small image size
 cvResize( gray, small_img, CV_INTER_LINEAR );

 //Finished with gray image - release memory
 cvReleaseImage( &gray );

 //Vertical flip image as camera is upside down
 cvFlip(small_img, NULL, -1);

 cvEqualizeHist( small_img, small_img );

 // Detect objects - last arg is max size -test parameters to optimise
 //Will detect biggest face with 6th arg as 4
 CvSeq* objects = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 4, 4, cvSize( 40, 50 ), cvSize(small_img->width, small_img->height));

 int i;
 for(i = 0; i < (objects ? objects->total : 0); i++ ){
 CvRect* r = (CvRect*)cvGetSeqElem( objects, i );

 //My compiler doesn't seem to be able to cope with default arguments - need to specify all args - need to change '.' to '->' as r is a pointer

 //This line appears to be the problem
 cvRectangle(small_img, cvPoint(r->x,r->y), cvPoint(r->x+r->width,r->y+r->height), colors[i%8], 2, 8, 0);
 }

 cvShowImage("RaspiCamTest", small_img);
 //cvReleaseImage( &gray );
 cvReleaseImage( &small_img );

 } while (cvWaitKey(10) < 0);

 //Close window
 cvDestroyWindow("RaspiCamTest");

 //Release memory (raspiCamCvReleaseCapture is the release call in the RaspiCamCV library)
 cvReleaseMemStorage( &storage );
 raspiCamCvReleaseCapture( &capture );

 return 0;
}



OBJS = objs

CFLAGS_OPENCV = -I/usr/include/opencv
LDFLAGS2_OPENCV = -lopencv_highgui -lopencv_core -lopencv_legacy -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_imgproc -lopencv_objdetect

USERLAND_ROOT = $(HOME)/git/raspberrypi/userland
# Include paths for the userland headers (variable name assumed)
CFLAGS_PI = \
 -I$(USERLAND_ROOT)/host_applications/linux/libs/bcm_host/include \
 -I$(USERLAND_ROOT)/host_applications/linux/apps/raspicam \
 -I$(USERLAND_ROOT)/interface/vcos/pthreads \
 -I$(USERLAND_ROOT)/interface/vmcs_host/linux \
 -I$(USERLAND_ROOT)/interface/mmal \

LDFLAGS_PI = -L$(USERLAND_ROOT)/build/lib -lmmal_core -lmmal -l mmal_util -lvcos -lbcm_host



BUILD_TYPE = debug

ifeq ($(BUILD_TYPE), debug)
 CFLAGS = -g $(CFLAGS_OPENCV) $(CFLAGS_PI)
endif
ifeq ($(BUILD_TYPE), release)
 CFLAGS = -O3 $(CFLAGS_OPENCV) $(CFLAGS_PI)
endif

LDFLAGS2 = $(LDFLAGS2_OPENCV) $(LDFLAGS_PI) -lX11 -lXext -lrt -lstdc++

RASPICAMCV_OBJS = \
 $(OBJS)/RaspiCamControl.o \
 $(OBJS)/RaspiCLI.o \
 $(OBJS)/RaspiCamCV.o \

RASPICAMTEST_OBJS = \
 $(OBJS)/RaspiCamTest.o \

TARGETS = libraspicamcv.a raspicamtest

all: $(TARGETS)

$(OBJS)/%.o: %.c
 gcc -c $(CFLAGS) $< -o $@

$(OBJS)/%.o: $(USERLAND_ROOT)/host_applications/linux/apps/raspicam/%.c
 gcc -c $(CFLAGS) $< -o $@

libraspicamcv.a: $(RASPICAMCV_OBJS)
 ar rcs libraspicamcv.a -o $+

raspicamtest: $(RASPICAMTEST_OBJS) libraspicamcv.a
 gcc $(LDFLAGS) $+ $(LDFLAGS2) -L. -lraspicamcv -o $@

clean:
 rm -f $(OBJS)/* $(TARGETS)

-include $(OBJS)/*.d

Hacker News Update: Raspicam & WeMo

A quick update on my recent discoveries.


I now have a Raspberry Pi Camera Board (Raspicam)!

There is a brilliant combo deal on at the moment allowing you to buy a Raspicam, Model A + 4GB SD card for about £35 (including VAT + shipping!)! That’s £35 for a device that can run OpenCV with a camera capable of 30fps at HD resolutions. I will leave you to think about that for a moment.

The downside is that the software is still not quite there. The Raspicam couples directly to the Raspberry Pi, which means it is not (at the moment) available as a standard USB video device (e.g. /dev/video0 on Linux). Most Linux software, and packages like SimpleCV, expect a standard USB video device, so as of 24 October 2013 you cannot use SimpleCV with the Raspicam.

However, not to fret! The Internet is on it. I imagine that we will see better drivers for the Raspicam from the official development communities very soon. While we wait:
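One stop-gap I can sketch here (an untested outline rather than a recommendation): shell out to the raspistill utility to grab a still, then load the file into SimpleCV for processing. The file name and capture settings below are just placeholders.

import subprocess
from SimpleCV import Image

# Grab a still from the Raspicam (-t is the warm-up time in ms, -o the output file)
subprocess.call(["raspistill", "-t", "1000", "-w", "640", "-h", "480", "-o", "snap.jpg"])

# Load the captured frame into SimpleCV and look for faces
img = Image("snap.jpg")
faces = img.findHaarFeatures('face')
print faces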

WeMo and Python

As you will see from the previous posts, I have been using IFTTT as a makeshift interface between my Raspberry Pi and my WeMo Motion detector and switch. This morning, though, I found a Python module that appears to enable you to control the Switch and listen for motion events from Python. Hurray!

The module is called ouimeaux (there is a French theme this week). Details can be found here.
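A minimal sketch of the sort of thing ouimeaux enables, based on my reading of its documentation (the callback signatures and the switch name are assumptions):

from ouimeaux.environment import Environment

def on_switch(switch):
    print "Found switch:",

def on_motion(motion):
    print "Found motion sensor:",

env = Environment(on_switch, on_motion)
env.start()
env.discover(seconds=3)

# Toggle a switch by the name it was given in the WeMo app (placeholder name)
env.get_switch('Lamp').toggle()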

Very soon I hope to adapt my existing code to control my Hue lights based on motion events (e.g. turn on when someone walks in the room, turn off when no motion). Watch this space.
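For the Hue half, the phue library should cover the light control. A minimal sketch (the bridge IP and light name are placeholders, and you need to press the bridge's link button before the first connection):

from phue import Bridge

# Connect to the Hue bridge (placeholder IP address)
bridge = Bridge('192.168.0.10')
bridge.connect()

def on_room_motion(active):
    """Turn the light on when motion is detected, off when it stops."""
    bridge.set_light('Living room', 'on', active)

# e.g. call on_room_motion(True) from whichever motion event handler you wire up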

Face Tracking Robot Arm

Ha – awesome – I have made a face tracking robot arm. The 12-year-old me is so jealous.

Here’s how I did it (on Ubuntu 12.04, but it should be portable to the Raspberry Pi):

I installed SimpleCV.
(I love this – makes it so simple to prototype.)

I built this robot arm.

I installed pyusb.

(I did first try sudo apt-get install python-usb – it was already installed but didn’t work, giving me errors when trying to import usb.core. I found on the web that the solution was to remove python-usb and install from the above site, e.g. download the zip, extract it and run the setup script.)

I stuck a Microsoft Lifecam Cinema on the top of the assembled robot arm.

I adapted the code below from a SimpleCV example and the arm control code.

from SimpleCV import Camera, Display
import usb.core, usb.util, time

# Allocate the name 'RoboArm' to the USB device
RoboArm = usb.core.find(idVendor=0x1267, idProduct=0x0000)

# Check if the arm is detected and warn if not
if RoboArm is None:
    raise ValueError("Arm not found")

# Create a variable for duration (default movement time in seconds)
Duration = 1

# Define a procedure to execute each movement
def MoveArm(Duration, ArmCmd):
    # Start the movement (control transfer values assumed from the
    # widely-circulated USB robot arm example code)
    RoboArm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)
    # Stop the movement after waiting specified duration
    time.sleep(Duration)
    RoboArm.ctrl_transfer(0x40, 6, 0x100, 0, [0, 0, 0], 1000)

cam = Camera()

disp = Display(cam.getImage().size())

# Get centre of field of vision
centre = [x / 2 for x in cam.getImage().size()]

while disp.isNotDone():
    img = cam.getImage()
    # Look for a face
    faces = img.findHaarFeatures('face')
    if faces is not None:
        # Get the largest face
        faces = faces.sortArea()
        bigFace = faces[-1]
        # Draw a green box around the face
        bigFace.draw()
        face_location = bigFace.coordinates()
        print face_location, centre
        offset = (face_location[0] - centre[0])/float(200) #/cam.getImage().size()[0]
        if offset < 0:
            print "clockwise", offset
            MoveArm(abs(offset), [0,2,0]) #Rotate base clockwise
        else:
            print "anticlockwise", offset
            MoveArm(abs(offset), [0,1,0]) #Rotate base anticlockwise
    # Show the current frame in the display window

Digit Recognition for an LCD Screen

While I wait for a £1.75 USB LED light to solve my cupboard illumination problem, I thought I would investigate digit recognition for an LCD screen.

Turns out some clever people before me have considered the same problem from the point of view of allowing the blind to read displays.

I found some good ideas in this publication.

And in this one.

It turns out the problem can be distinguished from classic OCR, and bespoke algorithms provide better results. Both papers have Symbian implementations and so look well suited to implementation on the Raspberry Pi. The second publication looks slightly easier for a novice like me.

This leads me to sketch out the following rough process for a C++ program:

There may be scope for adding in edge detection (as per the first publication), maybe as an extra input for blob detection or filtering.
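As an illustration of the blob-based approach (my own sketch in Python with OpenCV for brevity; the eventual program would be C++, and the threshold choice and blob sizes are guesses):

import cv2

def find_digit_blobs(image_path, min_area=50):
    """Rough LCD digit segmentation: threshold, then extract blobs."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # LCD digits are dark on a light background - invert and threshold
    _, thresh = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Find connected blobs and keep the larger bounding boxes
    res = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = res[-2]  # findContours' return shape varies across OpenCV versions
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) > min_area]
    # Sort left to right so the boxes read in digit order
    return sorted(boxes, key=lambda b: b[0])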

This thesis also has some useful ideas about using OpenCV (although I don’t think Tesseract would work very well, and it hasn’t been ported to the Pi as far as I am aware). However, for now, loading 3GB of code for OpenCV may be overkill for my task.

Setting up a Webcam for Stills in Raspberry Pi

First attempt: a Logitech Quickcam:

This was detected when I plugged it in: running lsusb listed the device.

Then I tried the example described here.

However, I kept getting errors relating to the v4l2 libraries.

I tried a powered hub – a Logik hub from Currys, snipped as described here. The hub works, but the webcam still didn’t.

I tried two different commands for taking stills: uvccapture and mplayer. Uvccapture showed the most promise. Some sample commands can be found here. However, still no luck with the Logitech.

I then cheated. I swapped the Logitech for a Microsoft branded webcam I also had floating around.

This produced results with uvccapture – but only with the options ‘-v -m’:

uvccapture -m -v

This took a picture and saved it as snap.jpg. I also found a way to take multiple snapshots with different filenames:

while :; do uvccapture -d/dev/video1 -o"$(date +%s).jpg" -v -m; sleep 4; done

This took a snap every 5 seconds or so.

I then played around with contrast (-C), brightness (-B) and saturation (-S). A good combination was contrast and brightness high (e.g. 255) and saturation low (e.g. 1).

Not ideal but a good starting point.

One problem is that the Pi is in a cupboard; with the door shut, some illumination is needed, which generates its own problems:

My setup: