Give your gizmo the gift of sight
I am using a Stonyman breakout board with a mounted lens together with an Arduino Uno R3.
To connect the Stonyman to the Arduino board I use an Arduino Proto Shield with an external ADC (MCP3201).
For testing I use the program “ARDUEYE_STONYMAN_EXAMPLE_V1”.
Now my question:
When I use the command “m” or “M” I get values in the range of 600 to 900. Exposed to light I get lower values; if I cover the lens I get bigger values.
Is this correct? Is the range OK? I mean, at 5 V the 12-bit ADC reads 4095, so the values I get back seem low.
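For reference, the raw 12-bit counts can be converted to voltage with a quick sketch (assuming a 5 V ADC reference, as in the setup described above; the reported 600 to 900 counts come out to roughly 0.73 V to 1.10 V):

```python
# Convert raw MCP3201 readings (12-bit, 0..4095) to volts.
# Assumes a 5 V reference voltage, as in the setup described above.

VREF = 5.0
ADC_MAX = 4095  # full-scale code for a 12-bit ADC

def counts_to_volts(counts):
    """Map a raw ADC code to the corresponding input voltage."""
    return counts * VREF / ADC_MAX

# The reported range of 600..900 counts:
low = counts_to_volts(600)   # ~0.73 V
high = counts_to_volts(900)  # ~1.10 V
print(f"{low:.2f} V to {high:.2f} V")
```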
I have tested the circuit without the Stonyman, and the ADC is working correctly.
Thanks a lot for helping.
Yes, the range you are experiencing is correct. Logarithmic pixels do not have a large variation, hence the recommendation to use a better ADC.
Thanks for your help. It is quite important for me because I am using your chip for my master's thesis at the University of Hagen in Germany.
The question came up because when I followed the example on ardueye – generate a mask, take a picture, and then subtract the mask from the picture – I never got a nice picture like the one shown in the example.
Just to make sure I understand what you are saying:
The voltage I can expect from the Stonyman is between 0.7V and 1.1V.
Is this what you are saying?
Thanks a lot for your answer.
No problem! Let's see if I can help you get better results. The voltage range output by the Stonyman, in raw mode, depends on all sorts of factors like the particular biases used, but 0.7 V to 1.1 V is a reasonable range. Can you please tell me more about what you are trying to do, including what resolution you are using and how you are displaying the image (Processing? MATLAB?)?
I am using MATLAB.
The output of the Stonyman vision chip will be the input of a neural network used for detecting objects – some kind of insect eye. What kind of neural network I will use is part of my work; I have to evaluate which architecture is best for this kind of task. In the end the neural network will be implemented on an FPGA. At the moment I have started to work with the “Zedboard” (Zedboard.org), but that is still some months away.
First I have to develop models in MATLAB, and I want to use real data from the Stonyman chip.
I am not saying that I need a better resolution. At the moment I am just getting started, and I was wondering about the output of the vision chip. The image I got was not very clear, but if you say it is OK... Is it OK to send or post the mask data and the image data so you can have a look at them?
Do you have any experience with neural networks together with your vision chips?
Please go ahead and post anything you want to share. We have MATLAB here so you can even just upload a .mat file if you wish. As for the image "not being clear", do you mean out of focus?
I have lots of experience with neural networks, but that was from when I was in graduate school (1993 through 1999), so I'm not up on the most recent stuff. But I am familiar with the insect eye – that is what inspired this work.
I just followed the tutorial “StonymanLens” (http://www.ardueye.com/pmwiki.php?n=Main.StonymanLens) on the website ardueye.com.
I have attached the mask, which I created as shown in the tutorial, and the image data from my setup. The file “Stonyman_Image” shows the image after the command “imagesc(255-(test_image-mask));”. I have also attached a real photo so you can see what the Stonyman is looking at.
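For anyone following along, the mask subtraction in the tutorial amounts to a fixed-pattern-noise calibration. A rough NumPy sketch of the MATLAB step (the arrays here are synthetic placeholders standing in for the acquired frames):

```python
import numpy as np

# Placeholder data standing in for the acquired frames; in practice
# test_image and mask come from the Stonyman via the Arduino sketch.
rng = np.random.default_rng(0)
mask = rng.integers(600, 900, size=(28, 28))    # fixed-pattern noise frame
scene = rng.integers(-20, 20, size=(28, 28))    # scene-dependent variation
test_image = mask + scene                       # raw frame = FPN + scene

# Equivalent of MATLAB's imagesc(255 - (test_image - mask)):
# subtracting the mask removes the per-pixel offsets, leaving the scene.
calibrated = 255 - (test_image - mask)
print(calibrated.min(), calibrated.max())  # values near 255 after calibration
```

If the calibrated image still shows a strong pattern, the mask and the image were likely taken under different chip settings.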
Thanks for your answer and opinion.
Thanks! That helps! It looks like you have binning on, i.e. 4x4 blocks of pixels are shorted together. Is that intentional? If so, then you only need to acquire every fourth row and column; I would sample a pixel in the middle of each block rather than at the edge. Otherwise, try the same thing but set HSW and VSW to zero.
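With 4x4 binning on, every pixel in a block reads the same value, so the subsampling suggested above can be sketched like this (the frame here is simulated; the array size and the middle-of-block offset of 2 are illustrative):

```python
import numpy as np

# Simulate a frame where 4x4 binning shorts each block of pixels together:
# every pixel within a 4x4 block carries the same value.
blocks = np.arange(7 * 7).reshape(7, 7)              # one value per binned block
frame = np.kron(blocks, np.ones((4, 4), dtype=int))  # expand to a 28x28 frame

# Sample one pixel from the middle of each 4x4 block (offset 2)
# rather than the edge, as suggested above.
subsampled = frame[2::4, 2::4]
print(subsampled.shape)  # (7, 7): one sample per binned block
```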
The diagonal banding you see is probably due to light flickering from the AC mains. (That is 100Hz in Europe and 120Hz in North America.)
Thank you!!!!!!! This is the solution.
It was the binning. I thought that, because it is a new chip, it would be off by default.
You just made my day.
Basically all of the registers should be treated as “unknown” at power-up and reset. I think, though, that the sample code you used initializes to 4x4 binning so that the image can be stored on the Arduino.
May I please see the final image? I'm just curious how it came out on your setup.