Digit Recognition for an LCD Screen

While I wait for a £1.75 USB LED light to solve my cupboard illumination problem, I thought I would investigate digit recognition for an LCD screen.

Turns out some clever people before me have considered the same problem from the point of view of allowing the blind to read displays.

I found some good ideas in this publication – http://www.ski.org/rerc/HShen/Publications/embedded.pdf

And this one: http://ukpmc.ac.uk/articles/PMC3146550/reload=0;jsessionid=MGJB9oCtdMe3ZwipoBs0.0

Turns out the problem can be distinguished from classic OCR, and bespoke algorithms give better results. Both papers have Symbian implementations and so look well suited to implementation on the Raspberry Pi. The second publication looks slightly easier for a novice like me.
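To get a feel for why a bespoke approach can beat classic OCR here: a seven-segment LCD digit is completely determined by which of its seven segments are lit, so classification can reduce to a lookup table. A minimal Python sketch of that idea (my own illustration using the usual a–g segment labels, not code from either paper):

```python
# Seven-segment layout: a (top), b (top-right), c (bottom-right),
# d (bottom), e (bottom-left), f (top-left), g (middle).
SEGMENTS_TO_DIGIT = {
    frozenset("abcdef"): 0,
    frozenset("bc"): 1,
    frozenset("abdeg"): 2,
    frozenset("abcdg"): 3,
    frozenset("bcfg"): 4,
    frozenset("acdfg"): 5,
    frozenset("acdefg"): 6,
    frozenset("abc"): 7,
    frozenset("abcdefg"): 8,
    frozenset("abcdfg"): 9,
}

def decode_digit(lit_segments):
    """Map a collection of lit segments to a digit, or None if unrecognised."""
    return SEGMENTS_TO_DIGIT.get(frozenset(lit_segments))
```

So once the image-processing side can tell which segments are dark, recognition itself is trivial – no general-purpose OCR engine needed.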

This leads me to sketch out the following rough process for a C++ program:

There may be scope for adding in edge detection (as per first publication) – may be as an extra input for blob detection or filtering. Edge detection in Linux : http://linux.about.com/library/cmd/blcmdl1_pgmedge.htm .
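To get a feel for what an edge detector produces, here is a minimal pure-Python sketch that marks a pixel as an edge when the intensity jump to a neighbour exceeds a threshold (pgmedge and the papers use more sophisticated operators; this is just my own illustration, on a grayscale image stored as a list of rows):

```python
def edge_map(pixels, threshold=50):
    """Crude edge detection: flag a pixel when the horizontal or vertical
    intensity difference to its right/lower neighbour exceeds the threshold."""
    h, w = len(pixels), len(pixels[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(pixels[y][x + 1] - pixels[y][x])
            dy = abs(pixels[y + 1][x] - pixels[y][x])
            if max(dx, dy) > threshold:
                edges[y][x] = 1
    return edges
```

The resulting 0/1 map could then feed blob detection or act as a filter, as the first publication suggests.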

This thesis also has some useful ideas about using OpenCV: http://repositories.tdl.org/ttu-ir/bitstream/handle/2346/ETD-TTU-2011-05-1485/LI-THESIS.pdf?sequence=1 (although I don’t think Tesseract would work very well, and it hasn’t been ported to the Pi as far as I am aware). However, for now, loading 3GB of code for OpenCV may be overkill for my task.

Setting up a Webcam for Stills in Raspberry Pi

First attempt: a Logitech Quickcam:

This was detected when I plugged it in. Running lsusb gave me the following output:

Then I tried the example described here: http://silicondelight.com/2012/07/grabbing-frames-from-webcam-with-a-raspberry-pi/

However, I kept getting errors relating to the v4l2 libraries.

I tried a powered hub – a Logik hub from Currys, snipped as described here – http://www.raspberrypi.org/phpBB3/viewtopic.php?f=28&t=8926 . The hub worked, but the webcam still didn’t.

I tried two different commands for taking stills: uvccapture and mplayer. Uvccapture showed the most promise. Some sample commands can be found here. However, still no luck with the Logitech.

I then cheated. I swapped the Logitech for a Microsoft branded webcam I also had floating around.

This produced results with uvccapture – but only with the options ‘-v -m’:

uvccapture -m -v

This took a picture and saved it as snap.jpg. I also found a way to take multiple snapshots with different filenames:

while :; do uvccapture -d/dev/video1 -o"$(date +%s).jpg" -v -m; sleep 4; done

– this took a snap every 5 seconds or so.

I then played around with Contrast (-C), Brightness (-B) and Saturation (-S). A good combination was contrast and brightness high (e.g. 255) and saturation low (e.g. 1) to give me:

Not ideal but a good starting point.
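If the camera options can’t be pushed any further, a similar effect could be applied in software after capture. A minimal pure-Python contrast stretch over grayscale values (my own illustration, unrelated to uvccapture itself):

```python
def stretch_contrast(values, lo=0, hi=255):
    """Linearly rescale grayscale values so the darkest becomes lo and the
    brightest becomes hi."""
    vmin, vmax = min(values), max(values)
    if vmin == vmax:
        return [lo] * len(values)  # flat image: nothing to stretch
    scale = (hi - lo) / (vmax - vmin)
    return [round(lo + (v - vmin) * scale) for v in values]
```

Applied per pixel, this pushes a washed-out grey image towards full black-and-white, which should suit thresholding and blob detection later on.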

One problem is that the Pi is in a cupboard. With the door shut, some illumination is needed. This generates its own problems:

My setup:

OCR Meter Readings using Raspberry Pi?

I have a wireless energy meter and thermostat at home. I could try to hack them, taking them apart and listening to certain key voltages. However, the circuits are likely small and breakable. And I would like to use the units again and not pay for replacements.

So I was wondering whether I could cheat and input data values using OCR from a webcam or camera. The Raspberry Pi would be well placed to do this. My thoughts so far for the process are set out below. I can probably tackle each independently.

  1. Place meters;
  2. Acquire image;
  3. OCR on image;
  4. Output of OCR to DB or file.

1. Place meters

  • Needs to be a set distance from acquisition device;
  • Mark out so can replicate even if need to take meters in and out;
  • Illumination for night time:
    • Low power (LED?)
    • Filter image when LED is lit?

2. Acquire image

  • Frame grab from webcam;
    • Need to get webcam working;
    • Need to learn command to acquire image;
  • Segment image for different data:
    • Set x,y area in image if meters are placed consistently;
      • Is there a command line tool for this?
    • Test with crop in iPhone/iPad;
    • Output image files for different areas – use these as input for OCR.
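On the command-line-tool question, ImageMagick’s convert can crop a fixed region (e.g. convert snap.jpg -crop 100x40+20+30 area1.jpg). The same fixed x,y segmentation is also trivial in code; a minimal Python sketch on an image stored as a list of pixel rows (function name and layout are my own):

```python
def crop_region(pixels, x, y, w, h):
    """Extract a w-by-h rectangle with top-left corner (x, y) from an image
    stored as a list of pixel rows."""
    return [row[x:x + w] for row in pixels[y:y + h]]
```

If the meters are placed consistently, one call per meter display gives the sub-images to feed into OCR.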

3. OCR on image

  • No obvious OCR tool on Raspberry Pi – keep looking;
  • Web services? Does Google/Tesseract have a web service? Use URL?
  • Do common-sense check on output:
    • Values will be integer (input parameter for OCR)
    • Values will have decimal point;
  • Create own OCR tool?
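The common-sense check on the OCR output is easy to prototype: accept a reading only if it parses as a number with a decimal point and falls within a plausible range. A minimal Python sketch (the range limits are made-up placeholders):

```python
import re

def sane_reading(text, lo=0.0, hi=10000.0):
    """Accept an OCR result only if it looks like a decimal number within a
    plausible range; return the value, or None if it fails the check."""
    if not re.fullmatch(r"\d+\.\d+", text.strip()):
        return None  # not digits-with-a-decimal-point
    value = float(text)
    return value if lo <= value <= hi else None
```

Anything rejected here could simply be dropped (or logged) rather than written to the database.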

4. Output of OCR to DB or file

  • MySQL DB?
  • Key field = time stamp (inc. seconds);
  • Other fields for each item of OCR data;
  • Or flat file, e.g. CSV, with {timestamp, data} tuple.
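The flat-file option only takes a few lines; a minimal Python sketch appending {timestamp, data} rows to a CSV (the field names and timestamp format are my own guesses):

```python
import csv
import time

def log_readings(path, readings):
    """Append one CSV row per reading: a timestamp (including seconds),
    the name of the meter value, and the value itself."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        for name, value in readings.items():
            writer.writerow([stamp, name, value])
```

Starting with CSV keeps things simple; the same rows could later be bulk-loaded into a MySQL table keyed on the timestamp.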