Give your gizmo the gift of sight
Sometimes, experimentation provides a lot of good answers.
After reading the data sheet for the Stonyman imaging chip, I set out to operate one using a PIC 18F2550 microcontroller. I developed a simple circuit that turns the analog output from the imaging chip directly into a real-time composite video signal suitable for display on a television set. My video signal is not totally compliant with the NTSC standard, but I have been displaying it on an older analog TV. Older analog sets are often much more forgiving about this sort of thing. In my case, I used a small 5-inch black-and-white model. It had a composite video input jack on the back and only cost me a buck at an auction.
The PIC microcontroller has a crystal-controlled clock that lets it execute most of its instructions at a rate of 12 million per second, or one every 83.333 nanoseconds. To gain the maximum degree of control over the program timing, I wrote the program in assembly language and kept a careful count of program cycles.
I included in my design a simple user interface consisting of 8 LEDs and 3 push buttons that allowed me to adjust the CONFIG register and the three bias registers in the imaging chip. I was able to watch the television display and play with the chip settings to try to produce the best picture.
The data sheet says that the pulses used to reset the pointer, increment the pointer, reset a value, and increment a value need only be about 100 nanoseconds long. Indeed, I have been using pulses as short as 83 nanoseconds and they work just fine.
A single scan line across a television image takes 63.5 microseconds. Within that time there must be a 4.7-microsecond sync pulse, plus a bit of black border at both the right and left edges of the image. The actual period in which the image data must be presented is closer to 40 or 50 microseconds. Therefore, when I began, based on the timing suggested by the data sheet, I was scanning only a small number of pixels from each row of the imaging chip, allowing about a microsecond per pixel if the on-chip pixel amplifier was not being used, or two microseconds if the amplifier was in use.
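As a sanity check on that budget, here is a small Python sketch of the arithmetic. The 45-microsecond active window is my assumed midpoint of the 40-to-50-microsecond estimate, and the function name is mine, not anything from the data sheet:

```python
# Sketch of the NTSC scan-line timing budget described above.
# The 45 us active window is an assumed midpoint of the article's
# "40 or 50 microseconds" estimate.

LINE_US = 63.5          # one NTSC scan line
SYNC_US = 4.7           # horizontal sync pulse
ACTIVE_US = 45.0        # assumed usable image window per line

INSTR_NS = 1e9 / 12e6   # one PIC instruction at 12 MIPS -> 83.333 ns

def pixels_per_line(us_per_pixel):
    """How many pixels fit in the active window at a given pixel period."""
    return int(ACTIVE_US // us_per_pixel)

print(round(INSTR_NS, 3))        # 83.333 ns per instruction
print(pixels_per_line(1.0))      # ~45 pixels without the amplifier
print(pixels_per_line(2.0))      # ~22 pixels with the amplifier
```

At two microseconds per pixel the budget works out to roughly 22 pixels per line, which matches the roughly 20-pixel-wide image described below.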
I soon discovered that the variation in the analog output signal seemed quite small when the amplifier was not in use, probably amounting to a small fraction of a volt. Therefore, I was soon using the on-chip amplifier which really did make a lot of difference.
When using the on-chip pixel amplifier, a pulse must be issued to the imaging chip on the PH digital signal pin to cause it to cycle. The suggested timing indicates that the PH signal should be taken high for one or more microseconds and then taken low for a similar time period before the analog output signal is valid. So that is where I began with PH being high for a microsecond and then low for a microsecond.
I was soon watching an image only about 20 pixels across but 112 pixels high. It was stretched to the full width of the TV screen but filled less than half its height. The aspect ratio was way off, with the displayed image stretched horizontally, but I could see that it was essentially working. Naturally, I thought I would try pushing the on-chip pixel amplifier's speed in an attempt to get more of each row's pixels onto a scan line of the TV.
So I began reducing the time periods that PH was held high and then low. Shorter and shorter the time went but the video image did not seem to suffer much. I expect that if one really measured the analog output signal accurately, there would be some degradation at the higher speeds but I was surprised that it appeared to still run well even when the PH signal stayed high for only 83 nanoseconds, the shortest pulse time I could create with my microcontroller without resorting to specialized circuitry.
During the scan line, I was eventually using a program loop that looked something like this:
Loop: Set the Increment Value signal
Clear the Increment Value signal
Set the PH signal
Clear the PH signal
Step a counter variable to see if all pixels were presented
Branch back to Loop if not done
This loop contains a total of 6 instructions. Therefore, at an execution rate of 12 instructions per microsecond, the complete loop takes one half of a microsecond (500 nanoseconds). This allowed my program to put about 80 pixels across the screen.
Then I went looking for ways to speed things up still further. Regarding the four logic signals to the imaging chip (Reset Pointer, Increment Pointer, Reset Value, and Increment Value), the data sheet says this: "Two rules should be followed when pulsing these four signals. First, always return the signal back to digital low before doing anything else. Second, do not pulse more than one signal at a time. No more than one of these signals should be a digital high at any one time or the chip may behave in strange ways." But this says nothing about the relative timing of the pulse on the PH line, except for the possible implication in "always return the signal back to digital low before doing anything else."
Despite my expectation that it would probably not work and mostly due to the ease of giving it a try, I tried using the same pulse for Increment Value and for the PH signal. To my surprise, at least when looking at the image on the TV, it seemed to work just fine.
That allowed my program loop to turn into this:
Loop: Set the Increment Value signal and the PH signal
Clear the Increment Value signal and the PH signal
Step a counter variable to see if all pixels were presented
Branch back to Loop if not done
Now executing only four instructions per pixel brought the time per pixel down to a third of a microsecond (333 nanoseconds) which allowed me to put all 112 pixels across one scan line. I did not bother with cycling the pixel amplifier for column zero in each row since that would have required some extra circuitry to pulse the PH line without also pulsing the Increment Value line. Therefore, column zero just did not function properly in the displayed result.
Now the image had become as narrow on the TV screen as it needed to be, but it was still shorter than it was wide. So I set up the program to increment the row number register in the imaging chip only every other scan line of the image. In other words, the pixels from each row were scanned out twice to the TV before proceeding to the next row in the imaging chip. This doubled the height of the displayed image and brought the aspect ratio nearly to normal.
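That row-doubling scheme amounts to a simple mapping from displayed TV line to chip row. A minimal Python sketch (the function name is mine) looks like this:

```python
# Sketch of the row-doubling scheme: each imaging-chip row is scanned
# out on two consecutive TV lines, doubling the image height.

CHIP_ROWS = 112

def chip_row_for_line(tv_line):
    """Map a displayed TV line (0-based) to the chip row it shows."""
    return tv_line // 2

lines = [chip_row_for_line(n) for n in range(2 * CHIP_ROWS)]
print(lines[:6])    # [0, 0, 1, 1, 2, 2]
print(len(lines))   # 224 displayed lines from 112 chip rows
```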
By the way, one of the best lessons learned in this little project concerned FPN (Fixed Pattern Noise). The data sheet describes it and explains that a design should read it out as a noise-pattern image that is subsequently subtracted from the analog readings taken from the imaging chip. My little video circuit knew nothing about removing the FPN, so I was watching the raw analog output signal with the FPN still included. Folks, I am here to tell you that the magnitude of the FPN is pretty significant. I could recognize the resulting video images on my little TV screen, but it was like watching a live image through an old screen door that had accumulated a LOT of dirt. When I held my hand still in front of the imaging chip and looked at the image on the TV set, it was hard to pick out where my hand was due to the FPN. But if I moved my hand around, it became very obvious. In a way, my brain was helping to eliminate the FPN when things were in motion.
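My circuit displayed the raw signal, but the data sheet's subtraction scheme is easy to sketch in Python. The pixel values and the re-centering offset here are hypothetical 8-bit numbers of my own choosing; the real chip produces analog voltages:

```python
# Sketch of the data sheet's FPN-removal approach: capture a calibration
# frame once (e.g. under uniform illumination), then subtract it
# pixel-by-pixel from every live frame. All values are hypothetical.

def remove_fpn(frame, fpn_frame, offset=128):
    """Subtract the fixed-pattern-noise frame, re-centering around offset."""
    return [[max(0, min(255, live - fpn + offset))
             for live, fpn in zip(row_live, row_fpn)]
            for row_live, row_fpn in zip(frame, fpn_frame)]

fpn  = [[10, 40], [25, 5]]          # per-pixel fixed offsets
live = [[110, 140], [125, 105]]     # raw frame = true value 100 + FPN
print(remove_fpn(live, fpn))        # [[228, 228], [228, 228]]
```

After subtraction, the per-pixel offsets vanish and only the true scene content (here, a uniform value) remains.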
I have two Stonyman image chips with optics. Once I had discovered a set of values for the bias registers that gave an optimal image with one image chip, I tried swapping the other image chip into its place to see if different bias values would be needed. My experience was that the bias settings were very repeatable and that the same settings were optimum for both chips. I could readily see, however, that the FPN patterns of the two chips were dramatically different.
I also tried various binning combinations just to see what the effect would be on the video. The effects were pretty much as I expected, reducing definition in the axis being binned.
The schematic diagram of the entire system is presented below. Just to clarify a few things, whenever the /SYNC signal goes low, it pulls the video output to its lowest voltage level, a condition the TV interprets as either a vertical or horizontal sync signal depending on how long it stays low. Whenever the /BLACK signal goes low, it pulls the video output to a low level but not as low as does the /SYNC signal. This is interpreted by the TV as an indication that the displayed video should be black. This black level is used for a black border above, below, to the right, and to the left of the displayed image. When neither the /SYNC nor the /BLACK signal is low, the analog output from the imaging chip is passed to the composite video output.
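The output priority described for the schematic can be sketched as a small Python function. The voltage levels here are assumed nominal 1 V peak-to-peak composite values (sync tip 0.0 V, black about 0.3 V, peak white 1.0 V), not measurements from my actual circuit:

```python
# Sketch of the video-output priority: active-low /SYNC wins, then
# active-low /BLACK, otherwise the imaging chip's analog output passes
# through. Voltage levels are assumed nominal composite values.

SYNC_V, BLACK_V = 0.0, 0.3

def composite_out(n_sync, n_black, pixel_v):
    """Select the composite output level from /SYNC, /BLACK, and the pixel."""
    if not n_sync:          # /SYNC low -> lowest level, read as sync
        return SYNC_V
    if not n_black:         # /BLACK low -> black border level
        return BLACK_V
    return pixel_v          # pass the imaging chip's analog output

print(composite_out(False, True, 0.7))   # 0.0 (sync pulse)
print(composite_out(True, False, 0.7))   # 0.3 (black border)
print(composite_out(True, True, 0.7))    # 0.7 (live pixel)
```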
In case anyone wonders, a composite video signal gets much more complex when a color image is to be displayed. Since the imaging chip is strictly monochrome, I did not have to contend with any of that complexity.
Just a very minor correction. In both of the program loops described, the last instruction (the one that branches back to Loop if not done) actually takes two instruction times, not one. Therefore, the first loop takes seven instruction times or 583 nanoseconds instead of 500. Similarly, the second loop takes five instruction times or 416 nanoseconds instead of 333.
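Those corrected figures are easy to verify. Here is a quick Python check of the cycle arithmetic (the function name is mine):

```python
# Check of the corrected loop timings: the branch takes two instruction
# cycles, every other instruction takes one, at 83.333 ns per cycle.

INSTR_NS = 1e9 / 12e6    # 83.333 ns per instruction cycle at 12 MIPS

def loop_ns(single_cycle_instrs, branches=1):
    """Loop period: one-cycle instructions plus two-cycle branches."""
    return (single_cycle_instrs + 2 * branches) * INSTR_NS

print(int(loop_ns(5)))   # first loop: 5 + 2 = 7 cycles -> 583 ns
print(int(loop_ns(3)))   # second loop: 3 + 2 = 5 cycles -> 416 ns
```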
I have just discovered this image sensor module from your article. I tried Google to find a distributor; www.proto-pic.co.uk says it is now discontinued. Could you help me with a referral, please?
I will also need help using it with a laser module for range finding, using a PIC18F4550 and a C program.
Thanks in anticipation.
Look at the Centeye.com website. You should be able to order the image sensor there.
I have never used a laser module for range finding. Perhaps someone else can help you with that.
Concerning the C programming language, I have never liked using it. Almost everything I write for PIC chips is in assembly language.
For this project that generates a video signal, the exact timing was extremely important so that the pixels were output at just the right time as well as the vertical and horizontal sync pulses. Doing this with C would have been especially difficult because the exact number of execution cycles for any given C statement is not known and may even vary according to the data that is being handled. In assembly language, it is possible to write code with very precisely controlled timing and at the maximum speed that the processor can deliver.