Face Detection with the Raspberry Pi Camera Board

I have a very basic face detection routine running with the Raspberry Pi camera board.

To do this I used Robidouille’s library functions (see previous post). I then modified the raspicam_cv.c example to use the face detection routine from Learning OpenCV. There were some tweaks, so I will post the code below. You also need to modify the makefile to include the OpenCV object detection libraries.


/*
Modified from code supplied by Emil Valkov (Raspicam libraries) and Noah Kuntz (Face detection)
License: http://www.opensource.org/licenses/bsd-license.php
*/

#include <cv.h>
#include <highgui.h>

#include "RaspiCamCV.h"

int main(int argc, const char** argv){

    //Initialise camera object
    RaspiCamCvCapture * capture = raspiCamCvCreateCameraCapture(0); // Index doesn't really matter

    //Initialise memory storage for Haar objects
    CvMemStorage* storage = cvCreateMemStorage(0);

    //Set up Haar cascade - the quoted file needs to be in the same directory as the program
    CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad( "haarcascade_frontalface_alt2.xml", 0, 0, 0 );

    //Set scale-down factor
    double scale = 1.8;

    //Set colours for multiple faces
    static CvScalar colors[] = { {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}}, {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}} };

    //Open window for viewing
    cvNamedWindow("RaspiCamTest", 1);

    //Loop for frames - while no keypress
    do {
        //Capture a frame
        IplImage* img = raspiCamCvQueryFrame(capture);

        //Clear memory object
        cvClearMemStorage( storage );

        // IMAGE PREPARATION:
        //Initialise grayscale image
        IplImage* gray = cvCreateImage( cvSize(img->width, img->height), 8, 1 );

        //Shrink image
        IplImage* small_img = cvCreateImage( cvSize( cvRound(img->width/scale), cvRound(img->height/scale) ), 8, 1 );

        //Convert to grayscale
        cvCvtColor( img, gray, CV_BGR2GRAY );

        //Resize to small image size
        cvResize( gray, small_img, CV_INTER_LINEAR );

        //Finished with gray image - release memory
        cvReleaseImage( &gray );

        //Flip in both axes (a 180-degree rotation) as the camera is mounted upside down
        cvFlip( small_img, NULL, -1 );

        //Equalise histogram to improve contrast
        cvEqualizeHist( small_img, small_img );

        //Detect objects - last arg is max size; test parameters to optimise
        //With the 6th arg (flags) set to 4 (CV_HAAR_FIND_BIGGEST_OBJECT) only the biggest face is returned
        CvSeq* objects = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 4, 4, cvSize( 40, 50 ), cvSize(small_img->width, small_img->height) );

        //Loop through found objects and draw boxes around them
        int i;
        for( i = 0; i < (objects ? objects->total : 0); i++ )
        {
            CvRect* r = (CvRect*)cvGetSeqElem( objects, i );

            //My compiler doesn't seem to cope with default arguments - need to specify all args
            //(and change '.' to '->' as r is a pointer)
            cvRectangle( small_img, cvPoint(r->x, r->y), cvPoint(r->x + r->width, r->y + r->height), colors[i % 8], 2, 8, 0 );
        }

        cvShowImage("RaspiCamTest", small_img);
        cvReleaseImage( &small_img );

    } while (cvWaitKey(10) < 0);

    //Close window
    cvDestroyWindow("RaspiCamTest");

    //Release memory
    raspiCamCvReleaseCapture(&capture);

    return 0;
}

Makefile:


OBJS = objs

CFLAGS_OPENCV = -I/usr/include/opencv
LDFLAGS2_OPENCV = -lopencv_highgui -lopencv_core -lopencv_legacy -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_imgproc -lopencv_objdetect

USERLAND_ROOT = $(HOME)/git/raspberrypi/userland
CFLAGS_PI = \
 -I$(USERLAND_ROOT)/host_applications/linux/libs/bcm_host/include \
 -I$(USERLAND_ROOT)/host_applications/linux/apps/raspicam \
 -I$(USERLAND_ROOT) \
 -I$(USERLAND_ROOT)/interface/vcos/pthreads \
 -I$(USERLAND_ROOT)/interface/vmcs_host/linux \
 -I$(USERLAND_ROOT)/interface/mmal \

LDFLAGS_PI = -L$(USERLAND_ROOT)/build/lib -lmmal_core -lmmal -lmmal_util -lvcos -lbcm_host

BUILD_TYPE=debug
#BUILD_TYPE=release

CFLAGS_COMMON = -Wno-multichar -g $(CFLAGS_OPENCV) $(CFLAGS_PI) -MD

ifeq ($(BUILD_TYPE), debug)
 CFLAGS = $(CFLAGS_COMMON)
endif
ifeq ($(BUILD_TYPE), release)
 CFLAGS = $(CFLAGS_COMMON) -O3
endif

LDFLAGS =
LDFLAGS2 = $(LDFLAGS2_OPENCV) $(LDFLAGS_PI) -lX11 -lXext -lrt -lstdc++

RASPICAMCV_OBJS = \
 $(OBJS)/RaspiCamControl.o \
 $(OBJS)/RaspiCLI.o \
 $(OBJS)/RaspiCamCV.o \

RASPICAMTEST_OBJS = \
 $(OBJS)/RaspiCamTest.o \

TARGETS = libraspicamcv.a raspicamtest

all: $(TARGETS)

$(OBJS)/%.o: %.c
	gcc -c $(CFLAGS) $< -o $@

$(OBJS)/%.o: $(USERLAND_ROOT)/host_applications/linux/apps/raspicam/%.c
	gcc -c $(CFLAGS) $< -o $@

libraspicamcv.a: $(RASPICAMCV_OBJS)
	ar rcs libraspicamcv.a $+

raspicamtest: $(RASPICAMTEST_OBJS) libraspicamcv.a
	gcc $(LDFLAGS) $+ $(LDFLAGS2) -L. -lraspicamcv -o $@

clean:
	rm -f $(OBJS)/* $(TARGETS)

-include $(OBJS)/*.d


Hacker News Update: Raspicam & WeMo

A quick update on my recent discoveries.

Raspicam

I now have a Raspberry Pi Camera Board (Raspicam)!

There is a brilliant combo deal on at the moment that lets you buy a Raspicam, a Model A and a 4GB SD card for about £35 (including VAT and shipping)! That’s £35 for a device that can run OpenCV, with a camera capable of 30fps at HD resolutions. I will leave you to think about that for a moment.

The downside is that the software is still not quite there. The Raspicam connects directly to the Raspberry Pi over the camera interface; this means it is not (at the moment) exposed as a standard video device (e.g. /dev/video0 on Linux). Most Linux software, and packages like SimpleCV, expect a standard video device, so as of 24 October 2013 you cannot use SimpleCV with the Raspicam.
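You can see this for yourself: with just the Raspicam attached (and no other webcam plugged in), listing the V4L2 device nodes turns up nothing, as there is no driver yet:

ls /dev/video*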

However, not to fret! The Internet is on it. I imagine that we will see better drivers for the Raspicam from the official development communities very soon. While we wait, here is what else I have been up to.

WeMo and Python

As you will see from previous posts, I have been using IFTTT as a makeshift interface between my Raspberry Pi and my WeMo Motion detector and Switch. This morning, though, I found a Python module that appears to let you control the Switch and listen for motion events directly from Python. Hurray!

The module is called ouimeaux (there is a French theme this week). Details can be found in the project’s documentation.
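From a skim of the README, basic usage looks something like this – a minimal, untested sketch, where ‘WeMo Switch’ stands in for whatever name you gave the device in the WeMo app:

from ouimeaux.environment import Environment

# Callbacks fire as devices are discovered on the network
def on_switch(switch):
    print "Found switch:", switch.name

def on_motion(motion):
    print "Found motion detector:", motion.name

env = Environment(on_switch, on_motion)
env.start()
env.discover(seconds=3)

env.get_switch('WeMo Switch').toggle()  # 'WeMo Switch' is an assumed device name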

Very soon I hope to adapt my existing code to control my Hue lights based on motion events (e.g. turn the lights on when someone walks into the room and off when there is no motion). Watch this space.
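Roughly, I imagine gluing ouimeaux’s statechange signal to the phue library – an untested sketch of the idea, with the bridge IP and light number as placeholders:

from phue import Bridge
from ouimeaux.environment import Environment
from ouimeaux.signals import receiver, statechange

bridge = Bridge('192.168.1.10')  # Placeholder: your Hue bridge's IP address
bridge.connect()  # First run only: press the bridge's link button just before this

@receiver(statechange)
def on_motion(sender, **kwargs):
    # Light 1 on when the WeMo Motion fires, off when it clears
    bridge.set_light(1, 'on', bool(kwargs.get('state')))

env = Environment()
env.start()
env.discover(seconds=3)
env.wait()  # Block and react to events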

Face Tracking Robot Arm

Ha – awesome – I have made a face-tracking robot arm. The 12-year-old me is so jealous.

Here’s how I did it (on Ubuntu 12.04, but it should be portable to the Raspberry Pi):

I installed SimpleCV: http://simplecv.org/ (I love this – it makes prototyping so simple).

I built this robot arm: http://www.maplin.co.uk/robotic-arm-kit-with-usb-pc-interface-266257 .

I installed pyusb: http://sourceforge.net/apps/trac/pyusb/.

(I did first try sudo apt-get install python-usb – it was already installed but didn’t work, giving me errors when trying to import usb.core. I found on the web that the solution was to remove python-usb and install from the site above (i.e. download the zip, extract it and run setup.py).)
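A quick sanity check for the reinstalled pyusb, using the arm’s vendor/product IDs from the code below – it should print a device description rather than None:

import usb.core
print usb.core.find(idVendor=0x1267, idProduct=0x0000)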

I stuck a Microsoft Lifecam Cinema on the top of the assembled robot arm.

I adapted the code below from a SimpleCV example and the arm control code (calling it arm_track.py).


from SimpleCV import Camera, Display
import usb.core, usb.util, time

# Allocate the name 'RoboArm' to the USB device
RoboArm = usb.core.find(idVendor=0x1267, idProduct=0x0000)

# Check the arm is detected and stop if not
if RoboArm is None:
    raise ValueError("Arm not found")

# Create a variable for the default movement duration
Duration = 1

# Define a procedure to execute each movement
def MoveArm(Duration, ArmCmd):
    # Start the movement
    RoboArm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)
    # Stop the movement after waiting the specified duration
    time.sleep(Duration)
    ArmCmd = [0, 0, 0]
    RoboArm.ctrl_transfer(0x40, 6, 0x100, 0, ArmCmd, 1000)

cam = Camera()

disp = Display(cam.getImage().size())

# Get centre of field of vision
centre = []
centre.append(cam.getImage().size()[0]/2)
centre.append(cam.getImage().size()[1]/2)  # y-coordinate (only centre[0] is used below)

while disp.isNotDone():
    img = cam.getImage()
    # Look for a face
    faces = img.findHaarFeatures('face')
    if faces is not None:
        # Get the largest face
        faces = faces.sortArea()
        bigFace = faces[-1]
        # Draw a green box around the face
        bigFace.draw()
        face_location = bigFace.coordinates()
        print face_location, centre
        # Normalise the horizontal offset from centre into a movement duration
        offset = (face_location[0] - centre[0])/float(200)
        if offset < 0:
            print "clockwise", offset
            MoveArm(abs(offset), [0,2,0])  # Rotate base clockwise
            time.sleep(abs(offset))
        else:
            print "anticlockwise", offset
            MoveArm(abs(offset), [0,1,0])  # Rotate base anticlockwise
            time.sleep(abs(offset))

    img.save(disp)

Digit Recognition for an LCD Screen

While I wait for a £1.75 USB LED light to solve my cupboard illumination problem, I thought I would investigate digit recognition for an LCD screen.

It turns out some clever people have considered this problem before me, from the point of view of allowing the blind to read displays.

I found some good ideas in this publication – http://www.ski.org/rerc/HShen/Publications/embedded.pdf

And this one: http://ukpmc.ac.uk/articles/PMC3146550/

It turns out the problem is distinct from classic OCR, and bespoke algorithms give better results. Both papers describe implementations on Symbian phones – i.e. modest embedded hardware – so the techniques look feasible on the Raspberry Pi. The second publication looks slightly easier for a novice like me.

This leads me to sketch out the following rough process for a C++ program:
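Very roughly: threshold the image, segment it into digit-sized blobs, then classify each blob by which of its seven segments are lit. To make that last step concrete, here is a hypothetical Python sketch of my own (not code from either paper), assuming the input is a binarised 2D list cropped to a single digit:

# Relative (x, y) sample point for each segment a-g
SEGMENT_POINTS = {
    'a': (0.5, 0.05), 'b': (0.9, 0.25), 'c': (0.9, 0.75), 'd': (0.5, 0.95),
    'e': (0.1, 0.75), 'f': (0.1, 0.25), 'g': (0.5, 0.5),
}

# Which segments are lit for each digit
DIGITS = {
    frozenset('abcdef'): 0, frozenset('bc'): 1, frozenset('abdeg'): 2,
    frozenset('abcdg'): 3, frozenset('bcfg'): 4, frozenset('acdfg'): 5,
    frozenset('acdefg'): 6, frozenset('abc'): 7,
    frozenset('abcdefg'): 8, frozenset('abcdfg'): 9,
}

def read_digit(digit):
    # 'digit' is a binarised 2D list (1 = lit pixel) cropped to one digit
    h, w = len(digit), len(digit[0])
    lit = frozenset(name for name, (fx, fy) in SEGMENT_POINTS.items()
                    if digit[int(fy * (h - 1))][int(fx * (w - 1))])
    return DIGITS.get(lit)  # None means no clean match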

There may be scope for adding in edge detection (as per the first publication) – maybe as an extra input for blob detection or filtering. Edge detection is available in Linux: http://linux.about.com/library/cmd/blcmdl1_pgmedge.htm .
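For example, netpbm’s pgmedge tool will produce an edge map from a greyscale image:

pgmedge input.pgm > edges.pgm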

This thesis also has some useful ideas about using OpenCV: http://repositories.tdl.org/ttu-ir/bitstream/handle/2346/ETD-TTU-2011-05-1485/LI-THESIS.pdf?sequence=1 (although I don’t think Tesseract would work very well, and it hasn’t been ported to the Pi as far as I am aware). However, for now, loading 3GB of code (for OpenCV) may be overkill for my task.

Setting up a Webcam for Stills in Raspberry Pi

First attempt: a Logitech Quickcam. This was detected when I plugged it in and showed up in the lsusb output.

Then I tried the example described here: http://silicondelight.com/2012/07/grabbing-frames-from-webcam-with-a-raspberry-pi/

However, I kept getting errors relating to the v4l2 libraries.

I tried a powered hub – a Logik hub from Currys, snipped as described here: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=28&t=8926 . The hub worked, but the webcam still didn’t.

I tried two different programs for taking stills: uvccapture and mplayer. uvccapture showed the most promise (some sample commands can be found online). However, still no luck with the Logitech.
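For reference, the mplayer route grabs stills by dumping frames as JPEGs – something like this (untested on this setup, and assuming the webcam is /dev/video1), which writes numbered files like 00000001.jpg to the current directory:

mplayer tv:// -tv driver=v4l2:device=/dev/video1 -frames 1 -vo jpeg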

I then cheated: I swapped the Logitech for a Microsoft-branded webcam I also had floating around.


This produced results with uvccapture – but only with the options ‘-v -m’:

uvccapture -m -v

This took a picture and saved it as snap.jpg. I also found a way to take multiple snapshots with different filenames:

while :; do uvccapture -d/dev/video1 -o"$(date +%s).jpg" -v -m; sleep 4; done

– this took a snap every 5 seconds or so (the date +%s timestamp in the filename stops each shot overwriting the last).

I then played around with contrast (-C), brightness (-B) and saturation (-S). A good combination was contrast and brightness high (e.g. 255) and saturation low (e.g. 1).
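Something like the following (assuming the same /dev/video1 device as above):

uvccapture -d/dev/video1 -B255 -C255 -S1 -v -m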

The result was not ideal, but a good starting point.

One problem is that the Pi is in a cupboard: with the door shut, some illumination is needed, which generates its own problems.


My setup: