Embedded Eye

Give your gizmo the gift of sight

Geoffrey L. Barrows
  • Male
  • Washington, DC
  • United States

Geoffrey L. Barrows's Friends

  • Enmanuel Parache
  • Hugo Capucho
  • Scott Gardner
  • Sanjeev Koppal
  • Alicia Gibb
  • Qingwen Liu
  • Randy Mackay
  • Scott Schuff
  • Chris Shake
  • Craig Neely
  • Jason Mayes
  • Tom
  • Jared Napora
  • Mark Murphy
  • Eugenio Culurciello

Geoffrey L. Barrows's Discussions

How to hook up a Stonyman Breakout directly to an Arduino (without the Rocket Shield)
1 Reply

If you don't have an ArduEye Rocket Shield, you can still hook up a Stonyman breakout board to an Arduino. All you need to do is make these connections: G connects to GROUND, V connects to 5V, RP… (a hedged hookup sketch follows this entry)

Started this discussion. Last reply by lucas johnson Jul 24, 2014.
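
Since the pin list above is cut off, the following minimal Arduino sketch shows one way the hookup can be driven. It assumes the breakout exposes the usual Stonyman control lines (RP/IP reset and increment the register pointer, RV/IV reset and increment the selected register's value), and the Arduino pin choices and register indices below are assumptions to check against the Stonyman data sheet.

// A minimal sketch, not the official ArduEye library code. Pin numbers are
// arbitrary choices; the register indices for column/row select are
// assumptions to verify against the Stonyman data sheet.
const int RP = 2;   // resets the Stonyman register pointer
const int IP = 3;   // increments the register pointer
const int RV = 4;   // resets the selected register's value
const int IV = 5;   // increments the selected register's value
const int AO = A0;  // Stonyman analog output into the Arduino ADC

void pulse(int pin) {
  digitalWrite(pin, HIGH);
  digitalWrite(pin, LOW);
}

// Point at register 'reg' and load it with 'val' by pulse counting
void setRegister(byte reg, byte val) {
  pulse(RP);
  for (byte i = 0; i < reg; i++) pulse(IP);
  pulse(RV);
  for (byte i = 0; i < val; i++) pulse(IV);
}

void setup() {
  pinMode(RP, OUTPUT); pinMode(IP, OUTPUT);
  pinMode(RV, OUTPUT); pinMode(IV, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  setRegister(0, 56);              // column select (assumed register index)
  setRegister(1, 56);              // row select (assumed register index)
  Serial.println(analogRead(AO));  // read one pixel near the array center
  delay(100);
}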

ArduEye Aphid Standalone Example #1: Generate PWM based on light position

INTRODUCTION: As an experiment, I've decided to make a few ArduEye Aphid examples written specifically for an Aphid, and written for "very easy" use. Specifically, these examples will meet the… (a sketch of the light-position-to-PWM idea follows this entry)

Started Mar 22, 2013
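
For a concrete picture of what "generate PWM based on light position" can mean, here is a minimal sketch under stated assumptions: readPixel() is a placeholder for the Aphid's real acquisition code, the Stonyman array is taken as 112 pixels wide, and the brightest column of one row is mapped onto the PWM duty cycle.

// Placeholder acquisition routine; substitute the real Stonyman readout here
// (for example, the register-pulse method sketched earlier on this page).
int readPixel(byte row, byte col) {
  return analogRead(A0);
}

const int PWM_PIN = 9;  // any PWM-capable Arduino pin

void setup() {
  pinMode(PWM_PIN, OUTPUT);
}

void loop() {
  // Scan the middle row and find the brightest column
  int bestCol = 0;
  int bestVal = -1;
  for (int col = 0; col < 112; col++) {
    int v = readPixel(56, col);
    if (v > bestVal) { bestVal = v; bestCol = col; }
  }
  // Map the light's horizontal position (0..111) onto PWM duty (0..255)
  analogWrite(PWM_PIN, map(bestCol, 0, 111, 0, 255));
}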

ArduEye Aphid Initial Tutorial (Part 3): Exploring the ArduEye Examples

Introduction: On the ArduEye Wiki site there are three "Utility Sketches" that demonstrate different aspects of the ArduEye libraries. All three of…

Started Mar 14, 2013

ArduEye Aphid Initial Tutorial (Part 2): Sample Firmware Shipped with Sensor

Note: It is recommended that you look through the "Getting Started Guide" on the ArduEye site (www.ardueye.com). In particular, look through the sections on…

Started Mar 13, 2013

 

Geoffrey L. Barrows's Page

Latest Activity

Geoffrey L. Barrows replied to Singe's discussion Any activity in this forum or on Centeye?
"Hi Singe, Thank You for asking. Centeye is very much still in business- we are focusing on ultra-light, high performance vision sensors for nano drones, and are actually making good progress. We put on hold the ArduEye project to focus on this.…"
Jul 17, 2016

Profile Information

Hometown
Washington, DC
About me
Founder of Centeye, Inc.
Technical interest
optics, image sensor chips, vision hardware, image processing, ground robotics, airborne robotics, sensor networks, art or music
Website
http://www.centeye.com

Geoffrey L. Barrows's Blog

Updated Stonyman data sheet

We have updated the Stonyman data sheet to include software examples (in pseudocode). Version 1.0 is here: Stonyman_Hawksbill_ChipInstructions_Rev10_20130312.pdf

Posted on March 12, 2013 at 11:57am

First batch of ArduEye Aphids now available

We have a small batch of ArduEye Aphids that are now available for serious beta users. An Aphid is essentially a clone of an Arduino Pro Mini but with a Stonyman vision chip and an external 12-bit 100ksps ADC added. For more details and for ordering, please visit the ArduEye Aphid product page…


Posted on March 6, 2013 at 9:30am
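
The post doesn't say which ADC the Aphid uses, so the routine below is only a sketch assuming an MCP3201-class SPI converter (a common 12-bit, 100 ksps part) on a hypothetical chip-select pin; the bit unpacking follows the MCP3201 frame (two sample clocks and a null bit, then twelve data bits MSB-first) and should be verified against the actual hardware.

#include <SPI.h>

const int ADC_CS = 10;  // hypothetical chip-select pin for the external ADC

void setup() {
  pinMode(ADC_CS, OUTPUT);
  digitalWrite(ADC_CS, HIGH);
  SPI.begin();
  Serial.begin(115200);
}

// Read one 12-bit sample from an MCP3201-style SPI ADC
uint16_t readAdc12() {
  SPI.beginTransaction(SPISettings(1000000, MSBFIRST, SPI_MODE0));
  digitalWrite(ADC_CS, LOW);
  uint16_t hi = SPI.transfer(0);  // sample clocks, null bit, top data bits
  uint16_t lo = SPI.transfer(0);  // remaining data bits
  digitalWrite(ADC_CS, HIGH);
  SPI.endTransaction();
  return (((hi << 8) | lo) >> 1) & 0x0FFF;  // align and mask to 12 bits
}

void loop() {
  Serial.println(readAdc12());
  delay(10);
}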

Internet of Things (IoT) car traffic counter camera using an ArduEye and COSM

Eager to make a little foray into the "Internet of Things", I decided to experiment with using an ArduEye as an "Eye for the IoT". My house is on a fairly busy street, and I have a good vantage point on it from my attic home office. A car traffic counter seemed like a good choice for a first project.

I programmed an ArduEye Aphid to detect cars using a very…


Posted on February 19, 2013 at 5:00pm — 3 Comments
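
The sentence describing the detection method is cut off above, so the following is one plausible reading rather than the post's actual algorithm: difference a small strip of pixels spanning the road between frames, and count one car on each rising edge of "many pixels changed at once". readPixel(), the strip placement, and both thresholds are placeholders, and the COSM upload side is omitted.

const int STRIP = 16;   // number of pixels in the strip across the road (assumed)
int prev[STRIP];        // previous frame's strip values
bool carPresent = false;
long carCount = 0;

// Placeholder: substitute the Aphid's real pixel acquisition here
int readPixel(byte row, byte col) {
  return analogRead(A0);
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  int changed = 0;
  for (int i = 0; i < STRIP; i++) {
    int v = readPixel(56, 40 + i);         // strip placement is arbitrary here
    if (abs(v - prev[i]) > 30) changed++;  // per-pixel change threshold (tunable)
    prev[i] = v;
  }
  bool motion = (changed > STRIP / 4);     // "car" = enough pixels changed at once
  if (motion && !carPresent) {
    carCount++;                            // count each car once, on the rising edge
    Serial.println(carCount);              // a real version would push this to COSM
  }
  carPresent = motion;
  delay(50);
}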

BCIT students make eye tracking computer input device, for ALS patients, using Centeye chip

This is the type of news that makes everything we do worthwhile! This past January, I was contacted by Alex Sayer, Alan Kwok, and Benny Chick, students at the British Columbia Institute of Technology (BCIT) in Vancouver, who wanted to use some of our Tam2 chips for a class project in which they would provide a human-computer interface (HCI) for people suffering from ALS (Lou Gehrig's disease) who cannot operate a computer with their hands. Their idea: use a low-resolution image sensor,…


Posted on August 13, 2012 at 5:56pm

Comment Wall (4 comments)

At 6:59am on February 28, 2011, Hari said…

Dear sir,

I'm a doctoral student interested in working with these chips for 3-dimensional position (linear motion) tracking with good precision (<3 mm), good accuracy, and a fast update rate (>50 Hz). If it is possible for you to design the chips to meet the above requirements, we will place an order as soon as possible. I would like to find an alternative to the Wii remote.

Thanks

Harinath

At 8:44am on February 17, 2012, Nicola Massari said…

Hi Geoffrey,

 

I am really sorry about my late answer! Actually, your question is complex and needs more time to answer. I personally think that, in general, vision sensors are complex detectors, and using them usually requires strong knowledge, and in particular strong experience, in visual processing for the specific application. I am not particularly expert in processing, and I often trust my intuition when designing my sensors; sometimes I realize that this is not the correct way to proceed. On the other hand, the potential of vision sensors is really high. Just think about what the human eye can do, and how many tasks we can perform only thanks to our eyes!

 

Thank you for your interest!

 

Regards

 

Nicola

 

At 10:42pm on March 22, 2012, Paul Atkinson said…

Hi Geoff,

Thanks for the comment recently. I received the Tam2 chip yesterday. I have seen some info about how to use the five pins on the rox1 board: Vdd (onboard), Gnd (onboard), ANA (onboard), and IO6 & IO7, which represent CLKIN and CLB (RESET) respectively, if that's the correct order (or vice versa for the two I/O pins). I'd just like to know whether those are the five pins used for the Tam2 chip, and also whether the ANAOUTBIAS and COLOOUTBIAS pins are available on the rox1 board, with a 1-10k resistor tied from them to Gnd for biasing. Thanks again, with lots of appreciation.

P.S. Are these lenses the pinhole type? If possible, do you know the focal length?

At 7:19pm on May 31, 2013, Ron Patey said…

Hello Geoffrey. Yes, the reason for the resistor was to equalize the gain, not necessarily the offset. In this way, all the control lines to each eye can be paralleled. I was expecting to have to subtract from a white level reading kept in my permanent MRAM. This handles the offset.

I didn't expect to have to change the gain in the scanning software, but doing that handles 'noise' beautifully, especially vertical ghosts... they disappear! I keep a 'black' level in permanent MRAM for each pixel as well as the white level reference. The stored 'black' level is calculated as a fraction to speed things up. The black reference scan uses a darkish target under the same light level as the white reference scan. (Too dark and you make the range too large.)

The fraction is calculated to make the black reference produce a black level 32 pixel counts below the white level. The 8-bit multiplication of this 8-bit fraction with the sampled pixel creates a 16-bit result (PRODH/PRODL) in a single CPU cycle, and the PRODH byte is the gain-corrected result of a normal scan. Simple and fast.
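
A portable C++ reading of that trick follows (the original runs on a PIC, where PRODH:PRODL holds the hardware multiplier's 16-bit product); the 8192/span formula for the fraction is inferred from "black lands 32 counts below white" and is an assumption, not Ron's actual code.

#include <cstdint>

// Gain-correct one raw sample: multiply by the stored 8-bit fraction and keep
// the high byte of the 16-bit product -- (frac * sample) >> 8, i.e. PRODH.
uint8_t gainCorrect(uint8_t sample, uint8_t frac) {
  uint16_t product = uint16_t(frac) * sample;  // PRODH:PRODL on the PIC
  return uint8_t(product >> 8);                // PRODH = corrected pixel
}

// Per-pixel fraction from the white/black reference scans, chosen so the
// black reference lands 32 counts below the white level (inferred):
//   (frac / 256) * (white - black) == 32  =>  frac == 8192 / (white - black)
uint8_t calcFraction(uint8_t white, uint8_t black) {
  uint16_t span = (white > black) ? uint16_t(white - black) : 1;  // guard dead pixels
  uint16_t frac = 8192 / span;
  return uint8_t(frac > 255 ? 255 : frac);  // clamp into 8 bits
}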

After that, each picture is brightness-adjusted to handle differing lighting conditions. This reduces the 32-level pixel readings to 16 levels (4 bits), which matches my LCD resolution. I can turn my very bright lighting LEDs off or leave them on in my living room with only a small difference on the LCD screen.

Amazingly, the fluorescent and LED AC light patterns are not really noticeable... I see a moiré fringe every now and then. I think your photosensors do integration just with high-impedance RC filtering. I will have to check that sometime.

But I still use the LED strip, 350 mA, behind the eyes because it makes for great reference scans.

So, I have adjusted for offset, gain, and brightness... the resistor keeps the signals in the right range, important only because I am paralleling the control signals to each eye. I am extremely happy with the results... I wish I had a faster CPU, but I have simulated finding the eyes, nose, and mouth on my laptop using vertical darkness gradients, and this will mean scanning only 1/5 or less of the pixels at high speed once their location has been identified.

The gradients take the pixels 2 above and 2 below the pixel in question, and if the real pixel is darker, I multiply the differences by a constant number. The value of the pixel is not used, just the difference between it and the others. If the pixel is good, it is kept; if not, it is set to full white. I then get an average of the non-white pixel values and show as full black only the ones below (darker than) the average. The result is really fun. The constant number can change, the actual value of the pixel or a portion of it can be included, and the results are still interesting. Gradient therapy. I will also try varying the 'constant' value in a feed-forward fashion to keep the displayed pixels at just enough to be used for decisions.
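
The description leaves some room for interpretation, so the following C++ sketch is one reading of the gradient pass, with the keep/discard rule and the roles of K and the averaging treated as guesses: a pixel is kept if it is darker than the pixels two rows above and below, its score is K times the summed differences, and only scores stronger (darker) than the average of the kept pixels are shown as full black.

#include <cstddef>
#include <cstdint>
#include <vector>

const int K = 4;  // the "constant number"; a tunable guess

// img is a row-major w-by-h 8-bit greyscale image; returns a black/white map
std::vector<uint8_t> verticalEdges(const std::vector<uint8_t>& img, int w, int h) {
  std::vector<int> score(img.size(), -1);  // -1 means "set to full white"
  long sum = 0, count = 0;
  // Pass 1: compare each pixel with the pixels 2 rows above and 2 rows below;
  // the pixel's own value is not kept, only the scaled differences.
  for (int y = 2; y < h - 2; y++) {
    for (int x = 0; x < w; x++) {
      int p  = img[y * w + x];
      int up = img[(y - 2) * w + x];
      int dn = img[(y + 2) * w + x];
      if (p < up && p < dn) {  // darker than both vertical neighbours
        int s = K * ((up - p) + (dn - p));
        score[y * w + x] = s;
        sum += s;
        count++;
      }
    }
  }
  // Pass 2: average the kept scores; only responses darker than the average
  // are shown as full black, everything else as full white.
  int avg = count ? int(sum / count) : 0;
  std::vector<uint8_t> out(img.size(), 255);
  for (std::size_t i = 0; i < img.size(); i++) {
    if (score[i] > avg) out[i] = 0;
  }
  return out;
}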

I might also change the range of pixels surrounding the pixel in question. I tried this on my laptop and it didn't matter much. I'm looking for edges of horizontal things and the vertical gradient method does this well.

I also tried shifting the whole image left and right to create the minimum difference between the two eye images. WOW, fun. It works just as well for the gradient screen as for the original full screen. Processing this is slow... so gradients would be better.
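
A compact version of that shift-and-compare idea (the function name and the sum-of-absolute-differences metric are mine; any difference measure would do): slide one eye's image horizontally across the other and keep the shift that minimizes the total difference.

#include <cstdint>
#include <cstdlib>
#include <vector>

// left and right are row-major w-by-h 8-bit images from the two eyes;
// returns the horizontal shift (in pixels) that best aligns them.
int bestShift(const std::vector<uint8_t>& left,
              const std::vector<uint8_t>& right,
              int w, int h, int maxShift) {
  long bestSad = -1;
  int best = 0;
  for (int s = -maxShift; s <= maxShift; s++) {
    long sad = 0;  // sum of absolute differences at this shift
    for (int y = 0; y < h; y++)
      for (int x = maxShift; x < w - maxShift; x++)
        sad += std::abs(int(left[y * w + x]) - int(right[y * w + x + s]));
    if (bestSad < 0 || sad < bestSad) {
      bestSad = sad;
      best = s;
    }
  }
  return best;
}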

 

 
