Embedded Eye

Give your gizmo the gift of sight

I've been trying to characterize the image sensor and its optical flow output quantitatively (to find the expected accuracy under various lighting conditions and targets), but I seem to have run into a problem with the calibration.

When I first started using the sensor, the pinhole location had apparently been set by Geof in the lab before I received it, and I was able to look at the image stream and see it change appropriately as I moved the sensor and put things in front of it. I started playing with the REST and RESB values to get better contrast, and that worked to some extent. I then wondered if the pinhole location could be slightly off (because parts of the image were stationary noise that never changed), so I followed the recommended calibration method for finding the pinhole, i.e. pointing the sensor at a bright light for a while and then sending the command. I did this with the sensor looking directly at a 40 or 60 W incandescent desk lamp bulb from a foot or two away, and the image was completely washed out. The result was a pinhole image that didn't show any movement whatsoever, only general brightness changes. I tried again after checking the raw image and making sure there was still some contrast; the pinhole location moved again, but it didn't seem to fix anything.

What is your method of finding the pinhole? I have an example here of the raw 16x16 image (keep in mind this is one of the "B" grade sensors that I've been testing; I haven't gotten into the bags of "A" grade sensors yet, since I want to figure this out first). The data is the raw 16x16 output from every 4th pixel of the sensor, with each value subtracted from 0xFF and bit-shifted two to the left so that white corresponds to bright and black to dark, using the full 0-255 range instead of just 0-63.

This is from the perspective of looking through the sensor; I did all the necessary flipping and reordering to make it show up right. The green circle is where the pinhole appears to be: I can move things in front of the sensor and see them in that area. The blue oval indicates another region where it seems to pick up movement, but primarily horizontally; an object in front of it shows up across the full height of the image. The rest of the image changes intensity when pointed at a brighter or darker object, but no movement is visible. The environment here is my lab, which is relatively bright for indoors, lit entirely by fluorescent tubes.

You can see from the image that there are a few bright white pixels around the edge (very low number values), so I'm thinking that the calibration routine might be picking one of them or somewhere in the blue region.

Having this image, though, can I figure out the raw row and column values for the pinhole location? There were enough transformations and flips involved in getting the image displayed live in a GTK+/cairo window that I honestly don't remember what order the pixels come out of the actual sensor. I have to say it looks a lot better with the default bilinear interpolation that cairo uses for scaling up than in this version, which I saved with Matlab (adding a bit of blur and annotation in GIMP).

Note from Geof: This topic is about version 2 of the CYE8 sensor, using firmware revision 5 from December 2010. I changed the topic title slightly to make it easier for people to find in the future. I hope you don't mind.


Replies to This Discussion

Note to everyone: For this run, we have two grades of sensors, "A" and "B". "B" grade sensors are additional spares with identified flaws in the optics that we will correct in future versions. We generally recommend that people get up and running with the "B" sensors first, so that if anything is hooked up backwards, it's the less valuable hardware that gets zapped.


From the above diagram, the correct pinhole location does look like it is in the green circled area. The blue region is where there is some leakage in the optics. The calibration algorithm simply looks for the brightest pixel in the interior of the image and calls that the "center" of the 8x8 image used for the optical flow computation. The calibration method should be pretty straightforward:

1) First power cycle the sensor so that it goes to the default settings. You want RESB and REST to be at their default values.

2) Set up a light- a 60W incandescent bulb like what you are using is perfect. You want the light bulb maybe a half meter to a meter away from the sensor. Turn off all other lights in the room. You want to make it as easy as possible for the algorithm to find the bright spot.

3) Hold the sensor so that it looks directly at the light bulb. If you draw a line from the light bulb to the sensor, it should be perpendicular to the plane in which the sensor board resides.

4) Send the command to "Find Pinhole" (command 79, argument 85). This should be it.

5) Since the pinhole location has changed, you should also calibrate the sensor using the technique in the instructions- place a sheet of paper directly on top of the optics, touching them, and send the "Calibrate" command (command 61, argument 85).

6) I find that the best way to verify the pinhole was set correctly is to grab the pinhole image (ATT=68) and display the resulting 8x8 array. Personally I like to use MATLAB to display a crude video (several frames a second) and move around the sensor a bit. The image from the light bulb should be very clear. Alternatively you can just move your hand or the light bulb around and verify that the optical flow response is good in the desired direction.

7) You can tweak the pinhole location manually using commands 77 and 78. Note though that the 16x16 raw image is actually generated from every fourth row and column pixel of the raw 64x64 array of the Faraya64plus chip. Also if you change the pinhole location you will need to recalibrate using command 61 again.

Let me know if this works.


Thanks, that worked perfectly. The part that I hadn't been doing before was turning off all the other lights, I thought the bright bulb would overpower them but I guess not.

For the calibration instructions, I'd suggest saying explicitly in the manual that the paper should be touching the optics - I had read it to mean a few inches away was fine, as long as the paper fills the sensor's view. One minor clarification, though: does the calibration mask need to be redone every time you change the RESB and REST values?


I've written a little C program (with GTK+ and Cairo) that shows a live image with an accompanying table of OF values and an arrow overlaid on the image for the rolling-average flow. I set it to a nominal 10 fps and it works great for showing me what's going on. If anyone else ends up using this sensor from a Linux-based computer, I'll be happy to share the code; it runs smoothly on the 1 GHz Atom we're using, which has a "video card" so slow that you can see it wipe the screen when scrolling through a text file.


As for your edit notes, I didn't know for sure which vision chip we had - you'd sent the pdf for the Firefly chip, so that's what I thought it was - so thanks for fixing the topic. I don't know if it matters that I'm using the r7 firmware for all these tests, since nothing related to these issues has changed since r5.



Hi Chris,

(Sorry about the delayed response- I really do need to reply faster)

Great! Yes- we need to clarify a few things in the manual regarding calibration and finding the pinhole (and a few other things). Thank You for your feedback! Instead of a blank sheet of paper, you can also hold the sensor right up to an LCD screen at a uniform (e.g. all-white) location, touching or almost touching the screen.

To answer your question- yes- performing calibration is needed if you change the RESB and REST values. It is also needed if you change the amplifier settings, or if you change the on-chip voltage regulator settings.

There is about 2 KB of EEPROM on the sensor's Atmel microcontroller, which could be used to store additional calibration masks, so it is possible to modify the firmware to record and use multiple masks. We would have done this, but my original intent was to keep the firmware as simple as possible to make it easier to understand and hack.

One thing to keep in mind is that the calibration mask is only needed if you are not using the high-pass filters on the pixels. If you do not need to explicitly measure zero motion, you can use the high-pass filters and not worry about calibration. (Chris- this probably doesn't apply to your application, but I am putting this down for future reference for everyone else.) For example, if you are going to put these sensors on a moving ground vehicle and use them to, say, avoid obstacles or drive down a tunnel, the high-pass filters are the way to go.

Thank You for offering to share your program! Feel free to post it here if you wish- you can edit your post and place it on the bottom- just make sure to put the appropriate license in the header. (Ask me if you have questions about that.)

Apologies for sending you the Firefly instructions. The Faraya instructions need some serious updating and we will get that to you as soon as it is done.


To answer your question- yes- performing calibration is needed if you change the RESB and REST values. It is also needed if you change the amplifier settings, or if you change the on-chip voltage regulator settings.

This includes changing the row amp type? If so, wouldn't that be another value to move to EEPROM instead of having it reset to zero on every power cycle? Every other calibration command in the vision chip range stores its value on the chip itself, while the row amp type seems to be saved in RAM and then used to alter which commands are sent while reading the image.


As for the program, I'm still working on it, but once it's in a distributable state I'll post it here. It's sorta turning into a full suite for monitoring and calibrating the sensors, so my professor is thinking that it may be useful to include in a paper with my other work; if so, I'll post it after that is submitted.

Correct- this includes changing the row amp type.

I can see two options for improvements to the firmware:

A) Whatever parameters (biases, row amplifier types, etc.) you send to the chip are also stored in the Atmel's EEPROM, so that at the next power cycle it reloads them.

B) Store several such operating modes, each including those parameters and an accompanying mask; when you select an operating mode, all parameters including the calibration pattern are applied.

Do you have an ISP (in-system programmer)? This would be helpful for when we get new firmware ready.


Sorry for not replying sooner; I've recently been working on the computer-side GUI app more than the firmware.

I do have an ISP and have already used it, when I uploaded r7 with SMBus compatibility for the sensors. What I would suggest (and am basically volunteering to implement) for the firmware is your option A - store all the vision chip calibration values that influence the mask in the EEPROM. The main motivation is that if I'm running these sensors on an embedded system and the I2C wire has a power blip, I don't have to detect this on the computer side and then re-send all of them manually. I might also add the "enable calibration mask" flag to this list, so that after any blip the data is consistent (with the exception of needing to re-send the ATT). I'd like to be able to calibrate a sensor and then not have to run an initialization routine every time I turn it back on, while still getting the same data.

Another thing I've realized recently is that the VC_init() function is hardcoded with values whose comments assume a 5V supply. The SMBus interface on our embedded computer (an ADLS15PC, if anyone wants to know; the sensors will be connected to it) provides a 3.3V supply in the same connector as the data lines, so I'd rather not add an extra connector to grab a 5V line from somewhere else on the mainboard. Because of that, I've been using 3.3V for everything I've done so far, and the CYE8 itself runs fine with an 18/30 mA (idle running/programming) draw (with LOW=0xC6, i.e. ceramic resonator enabled, and maybe some draw from the SPI pull-up resistors).

How much difference do the initialization values make for the images and computation? Relatedly, is there a reference for what these values mean and what to set them to for a different supply voltage?

Hi Chris,

No need to apologize!

Option A is simpler and will give most of us the added functionality that would be useful. Thank You for offering to do this.

Regarding the different parameters- in practice, most of the bias settings are not that critical. The exception is VREF, which is used for level shifting when the amplifiers are in use. (When not using the amplifiers, VREF doesn't matter.) RESB and REST define the top and bottom of the ADC's range, so of course these are also critical, but they affect only the output of the ADC, not the analog pathway of the chip. The other biases don't have a significant effect, but too low a value may result in too much power consumption or limit the range of light levels, while too high a value may slow down the chip. (It is a bit counterintuitive, but a lower value results in a higher voltage.) NBIAS1, NBIAS2, ANALOGOUTBIAS, and YUKNBIAS fall into this category. PRSUPPLY should be left as is.

I have a preliminary set of instructions that should help you out a bit, but they haven't yet been thoroughly examined for errors. Also, it might help if we release a partial schematic of the chip.



© 2022   Created by Geoffrey L. Barrows.