Embedded Eye

Currently I am trying to figure out how to relate the change in accumulated optical flow values to the velocity of a moving object.

Geof told me before that if I am sampling every 0.1 seconds, the accumulated optical flow is the sum of about 40 individual optical flow measurements. How do we get the number "40"?

I tried to find the frames/sec used by the sensor, so I used the command 2, 60. The results are as follows:

1-------232
1-------232
1-------232
0-------43
1-------232
1-------232
1-------203
1-------232

1-------203
1-------232
1-------232
1-------232
1-------232
1-------232
1-------232
0-------43
1-------232

According to the instructions, this array contains an estimate of the current operating frame rate of the sensor, in frames per second. The estimated frame rate is 256*Data1 (first column) + Data2 (second column). My question is why Data1 and Data2 each show different values.
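To make the two-byte formula concrete, here is a small sketch (the function name is my own, not from the sensor documentation) that decodes the readings listed above:

```python
# Hypothetical decode of the two-byte frame rate report described above.
def frame_rate(data1, data2):
    """Estimated frame rate in frames per second: 256*Data1 + Data2."""
    return 256 * data1 + data2

print(frame_rate(1, 232))  # -> 488 (the typical reading above)
print(frame_rate(1, 203))  # -> 459
print(frame_rate(0, 43))   # -> 43 (likely an erroneous sample)
```

So the pairs 1,232 and 1,203 correspond to roughly 460-490 fps, while the occasional 0,43 pair would mean 43 fps, which looks like an outlier.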


### Replies to This Discussion

One thing to keep in mind is that the accumulated optical flow (optical flow array, ATT=65, elements 5 and 6) is simply the sum of the individual optical flow measurements (divided by 64). Note- I think Qing has firmware version 1 or 6, and this statement applies to both firmware versions. This value is generally independent of the frame rate: if you double the frame rate, then the individual optical flow measurements are half as large, so the sum is the same.
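A toy numeric illustration of that frame-rate independence (not sensor code; the velocity and frame rates are made-up values):

```python
# Toy illustration: accumulated flow is the sum of per-frame measurements,
# so doubling the frame rate halves each measurement but leaves the sum
# unchanged.
true_velocity = 8.0   # pixels per second, assumed constant
duration = 0.5        # seconds of accumulation

def accumulate(frame_rate):
    per_frame_flow = true_velocity / frame_rate   # pixels moved per frame
    n_frames = int(frame_rate * duration)
    return sum(per_frame_flow for _ in range(n_frames))

print(accumulate(256))  # -> 4.0 pixels
print(accumulate(512))  # -> 4.0 pixels (same sum at double the rate)
```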

Therefore all you need to do is to compute the accumulation at two independent points in time, say time A and time B. The average optical flow over that duration will be ((accumulation B) - (accumulation A)) / (B-A). This is independent of frame rate.
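The computation above can be sketched as follows, assuming `acc_a` and `acc_b` are the accumulated optical flow values (already divided by 64 on the sensor) read at times `t_a` and `t_b` in seconds (function and argument names are mine):

```python
# Average optical flow between two readings of the accumulator.
def average_flow(acc_a, t_a, acc_b, t_b):
    """Average optical flow over the interval, independent of frame rate."""
    return (acc_b - acc_a) / (t_b - t_a)

print(average_flow(10.0, 0.0, 22.0, 2.0))  # -> 6.0 (flow units per second)
```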

We are not 100% sure of the frame rate computation on the sensor itself. (We estimate frame rates by having an LED blink every 100 frames and measuring the blink rate.) So it is possible that you are getting something erroneous, especially when the values are 0,43. The frame rate is computed by monitoring a timer and measuring how much time elapsed between frames. It is possible for one cycle to take more or less time than another cycle- if you are communicating with the sensor, that can slow things down- and the variations you show above (other than 0,43) are reasonable.

Geof,

Currently I am trying to mount the sensor on a robot, set the robot to a constant velocity, and collect data from the sensor.

I use the command 60, 2 to test method 2: odometry with displacement-based update. I try to keep the robot at a constant distance from a board, let it move forward, and send data to my computer. The problem is that when it starts, sometimes the cumulative optical flow won't change for a while; sometimes it goes down when it mostly should go up. When the robot stops, the results change drastically and then become stable.

Is there any method I can use to improve the performance? I have already tried skipping frames.

A couple of random ideas from me...

- How about filtering the results? I.e. instead of working off the raw values coming out of the camera, how about each time you take the rolling average of the past 3 values? All the same info is used, but the results should be smoother.

- I guess there's no chance that your robot is rolling or pitching when it's starting or stopping? I.e. when it stops, it doesn't suddenly lean forward, does it? If it did, this would make the camera show a sudden movement. It's a bit unlikely, because if this were the case, I'd expect you'd see the same behaviour when you start moving.
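The first idea, a rolling average of the past 3 values, could be sketched like this (the class name and window size are my own choices for illustration):

```python
from collections import deque

# Sketch of the suggested rolling-average filter: keep the last three
# raw readings and report their mean.
class RollingAverage:
    def __init__(self, window=3):
        self.values = deque(maxlen=window)  # old values drop out automatically

    def update(self, raw):
        self.values.append(raw)
        return sum(self.values) / len(self.values)

filt = RollingAverage()
for reading in [4.0, 8.0, 6.0, 10.0]:
    print(filt.update(reading))
# -> 4.0, 6.0, 6.0, 8.0
```

Note the filter adds a little lag (about one to two samples here), which is the usual trade-off for the smoothing.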

Hi Randy,

Actually the firmware already computes a running average of optical flow values. When you read out the optical flow values, the first two are instantaneous optical flow, the second two are cumulative, and the third two are the running average. You can also set the running-average time constant with command 69. In general it is better to use one of these latter measurements, because if you are grabbing optical flow every, say, 50 msec and the sensor is running at 400 Hz, then you are only getting one out of every 20 optical flow measurements.
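The six-value layout Geof describes could be unpacked on the host side roughly as follows (the function and field names are assumptions for illustration, not from the firmware documentation):

```python
# Hypothetical unpacking of the six optical flow values: first pair
# instantaneous, second pair cumulative, third pair running average.
def parse_optical_flow(values):
    inst_x, inst_y, accum_x, accum_y, avg_x, avg_y = values
    return {
        "instantaneous": (inst_x, inst_y),
        "cumulative": (accum_x, accum_y),
        "running_average": (avg_x, avg_y),
    }

flow = parse_optical_flow([2, -1, 120, 35, 3, 0])
print(flow["running_average"])  # -> (3, 0)
```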

Geof

Hi Qingwen,

When going into method 2, try also enabling the calibration mask by sending command [62,1]. It could be that you don't have the calibration mask enabled, and the fixed-pattern noise washes out the image you are picking up.

Geof