Give your gizmo the gift of sight
HI: I have ordered 2 Stonymen and am now analyzing what I need to make a 3D pair. I have tried downsampling some pictures, and full resolution is a must: below that, there is too great a jump between sections of the picture. So I want about 3 frames a second at all pixels. How fast can I read a pixel, assuming I get good A/Ds and 40+ megainstructions per second? The limit may be the camera. Or will I have to use a gate array?
HI: just looked at Optek site. A 4 degree version is out as well.
HI: replying to myself again. It occurred to me from looking at the Stonyman datasheet that the pixel variations are corrected not by a simple subtraction of a bright sample reference, but by multiplying by a per-pixel 'sensitivity' factor. My micro does multiply ops in one cycle (as fast as subtractions).
Perhaps: subtract the whitelevel reference (from a whitelevel scan), then multiply by a scale factor (obtained afterward from a gray reference scan minus the whitelevel scan).
Is it worth the trouble??? What are the pixel-to-pixel variations in an a/d output???
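The subtract-then-multiply correction above can be sketched in C. This is only a sketch of my reading of the idea; the function name and the Q6 fixed-point gain format are my assumptions, not anything from the Stonyman datasheet:

```c
#include <stdint.h>

/* Two-point calibration for one pixel: subtract the whitelevel
 * reference, then multiply by a per-pixel gain derived from
 * (gray reference scan - whitelevel scan).
 * Gain is assumed stored as Q6 fixed point: 64 = unity gain. */
static uint8_t calibrate_pixel(uint8_t raw, uint8_t whitelevel, uint8_t gain)
{
    int16_t diff = (int16_t)raw - (int16_t)whitelevel; /* subtraction step   */
    int32_t out  = ((int32_t)diff * gain) >> 6;        /* one-cycle multiply */
    if (out < 0)   out = 0;                            /* clamp to a byte    */
    if (out > 255) out = 255;
    return (uint8_t)out;
}
```

On a micro with a single-cycle multiply this costs only a couple of instructions more per pixel than subtraction alone.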
You are correct in your interpretation of a Stonyman pixel: this is their logarithmic nature.
NEXT installment: I am getting close to writing the first test program. Still have to order the boards.
I have worked out the logic of scanning to match my concept of following about 10 objects on two sensors in 3D.
The image must be kept in RAM using a 16KB bank for each image, one for eye1 and one for eye2. The eye RAM is such that the msbit of the address defines the eye. The next two msbits of the address define 4 banks within each eye.
One bank for the image with 1 byte per pixel,
One bank for the whitescale calibration to be subtracted from each pixel,
One bank for the pixel gain (to be multiplied after whitescale subtraction),
One bank for a mask... each pixel has a full byte, with flags specified within the byte. (only bit 0 right now)
The image is kept only where the mask has bit0 = 1 for that pixel. The other bytes where the mask has bit0 = 0 are 'don't care'.
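The bank layout above can be captured as an address-decoding helper. This is my sketch of one plausible encoding (the bit positions and names are my assumptions): with 16KB banks, bit 16 selects the eye, bits 15..14 select the bank within the eye, and the low 14 bits hold the pixel offset (112 x 112 = 12,544 bytes fits in 16KB):

```c
#include <stdint.h>

/* RAM map sketch for the 128Kx8 MRAM:
 *   bit 16      = eye (0 = eye1, 1 = eye2)
 *   bits 15..14 = bank within the eye
 *   bits 13..0  = pixel offset inside the 16KB bank
 */
enum bank { BANK_IMAGE = 0, BANK_WHITE = 1, BANK_GAIN = 2, BANK_MASK = 3 };

static uint32_t eye_addr(uint8_t eye, enum bank b, uint8_t row, uint8_t col)
{
    uint32_t offset = (uint32_t)row * 112u + col;  /* pixel index   */
    return ((uint32_t)(eye & 1) << 16)             /* eye select    */
         | ((uint32_t)b << 14)                     /* bank select   */
         | offset;
}
```

With this encoding, switching from the image byte to its calibration bytes is just a change of two address bits, which keeps the per-pixel calibration loop tight.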
The eye is stepped through every pixel, but the INPHI pulse and A/D conversion are skipped when the mask bit is 0. Both eyes are scanned simultaneously by paralleling the control lines. If a pixel is to be read, the multichannel A/D converter reads eye1, then changes channel to read eye2 about 300 ns later.
This gives the least number of actions on the counters... and the fastest scan, based on the assumption that there are about 10 squares, stars or circles of interest in the image, each with 200 pixels or so.
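The masked scan loop might look like the following. This is a sketch only: the counter-stepping and INPHI functions are stand-ins for the real bit-banged Stonyman control lines, here stubbed out so the skip/read logic can be exercised:

```c
#include <stdint.h>

#define ROWS 112
#define COLS 112

static uint8_t mask[ROWS * COLS];   /* bit0 = 1 -> record this pixel */
static uint8_t img1[ROWS * COLS], img2[ROWS * COLS];

/* Stand-ins for the bit-banged control lines and the A/D converter.
 * A real driver would pulse the shared counters and INPHI here;
 * these stubs just count calls so the loop can be checked. */
static uint32_t steps, reads;
static void step_pixel(void)  { steps++; }   /* ~500 ns when skipping */
static void pulse_inphi(void) { reads++; }   /* amplify/settle        */
static uint8_t adc_read(uint8_t channel) { return channel ? 0x80 : 0x40; }

/* Masked scan: skipping is cheap; both eyes are read back-to-back
 * on the multichannel A/D only where the mask bit is set. */
static void scan_frame(void)
{
    for (uint16_t i = 0; i < ROWS * COLS; i++) {
        if (mask[i] & 1u) {            /* pixel of interest?          */
            pulse_inphi();             /* amplify only kept pixels    */
            img1[i] = adc_read(0);     /* eye1 ...                    */
            img2[i] = adc_read(1);     /* ... then eye2 ~300 ns later */
        }
        step_pixel();                  /* shared counters advance     */
    }
}
```

Because the counters advance regardless, the loop cost is dominated by the ~2000 masked-in pixels, not the 12,544 total.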
The high-level analysis program runs between images and creates/modifies the mask flags, with bit0 defining the pixels to be actually recorded next. Since this is 3D, there needs to be only one mask bank for both eyes. This frees up a 16KB bank as a stack for results... using up my 128Kx8 MRAM.
The scan rate is about 500 ns for each not-read pixel and 4 us to read one pixel in both eyes. For 112x112 pixels with 2000 pixels remembered, this means 10,544 pixels just stepped over at 500 ns and 2000 pixels read and stored with calibration done at 4 us. That's 5.3 ms + 8 ms, say 14 ms, reading only in the 1 ms peaks of an AC cycle to give a 1:8 ratio, so 8 x 14 = 112 ms in real time.
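The arithmetic above can be checked with a small helper (the function name and parameters are mine; the last argument models the 1:8 AC-peak duty):

```c
#include <stdint.h>

/* Frame time estimate from the figures above: skipped pixels cost
 * skip_ns each, read pixels cost read_us each (both eyes), and the
 * whole scan is stretched by ac_ratio because reads happen only in
 * the ~1 ms peak of each AC half-cycle. Returns microseconds. */
static uint32_t frame_time_us(uint32_t total_px, uint32_t read_px,
                              uint32_t skip_ns, uint32_t read_us,
                              uint32_t ac_ratio)
{
    uint32_t skip_us = (total_px - read_px) * skip_ns / 1000u;
    uint32_t scan_us = skip_us + read_px * read_us;
    return scan_us * ac_ratio;     /* real time under AC lighting */
}
```

Plugging in the numbers gives about 106 ms for the 2000-pixel masked scan (close to the 112 ms above), and about 401 ms if every pixel is read, which matches the "nearly 1/2 second" full-scan figure further down.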
10 individual programs will work on determining the movement of the objects within their squares... and the 3D distance by comparing eye views. They will then determine the center of the new square for the next scan, setting the new mask bits.
There has to be special processing if an object becomes no longer interesting. Perhaps inserting a full scan of the right or left quarter by setting the mask bits... of interest only to the individual program which needed restarting.
An overriding program can analyse the results from the combined 10 programs and decide whether to start with a full scan and correct mistakes. This would take nearly 1/2 second for a full scan!
Because I am using 3.3 volts, I have to use the amplifier in the eyes, and there is spare time waiting for the INPHI cycle. I will process background light at this time if I need to. My A/D converter will have this background sensor on channel 3.
NB: I might control the DC light source and run at full non-AC speed, making everything 8 times faster. It's just not universal if I do this. But I will look into controlling the ambient light level using Cree LED's so that the 'linear' mode of the future can be accommodated without keeping more calibration formulae.
So that's it for now. I have bitbanged the scanning/mask/storing code and these times are close based on 62.5ns per instruction. However, the analysing programs are not included in the timing.
Re the OPV382: I have used that before. I think the issue is not "not enough particles to reflect" but that the OPV382 is only 1 mW or something like that. Also, its beam is 4 degrees, while most laser diodes are in the mrad range... The OPV382 uses VCSEL technology, unlike the common laser pointer. A laser pointer is about 5 mW or more with a half angle of less than 1 degree, giving a narrow beam at long distance.
I am still replying to myself. RE the lighting... I have added two XLamp LED's to my board, with a D/A converter driving a power FET to create a DC-controlled light. The two XLamps are red to keep the voltage low, and are almost too bright at 8 inches or so. I have some holographic film which spreads them out to help with the blinding glare. The system will run off 4-6 volt batteries.
I have played with software on the PC for the lip reading app. See the attached photo of the program output.
Still waiting for the Centeye chips.