Monday, August 30, 2010

Project Diary of Accessing the Color CMOS Image Sensor OV7725 Using the NEXYS2 FPGA Board



"In theory, there is no difference between theory and practice. In practice, there is.
Fearless Experimentation!"



About
This is an FPGA project using the NEXYS2 board that I experimented with in the summer of 2010 for about two months. This post is written in a daily-log fashion to record what happened during the project. Special thanks to Dr. Jesse Jenkins @ Xilinx for his advice on my FPGA learning & this project.


This effort was for me to learn the FPGA design flow as well as CMOS sensor control. The system was built simply, using a low-cost FPGA board and a raw CMOS camera module. The focus was to have all control logic implemented in the FPGA using 8-bit picoBlaze processors and their assembly code. There was no Linux, nor any other embedded OS or device drivers.




Day 1 (June/30/2010) - Getting Started 

I had no clue what FPGA was about. I only knew that project cycles could be much shorter in the FPGA world. One day I was in a private golf lesson – this is it! I am going to do a project using video capture as a training aid for analyzing my golf swing & use it for my learning of FPGAs!

In theory:
http://www.youtube.com/watch?v=Z2o1SYXaOHE&p=B3025594607F8780&playnext=1&index=23
http://www.youtube.com/watch?v=JkZdlYg9UuY

In Practice:
(read on….)




First Week (July/10/2010) – About VGA

I received my FPGA board and used the weekend to study how VGA works. It is not that hard. 

Conceptually, there are two synchronization signals, one for vertical scanning (vsync) and one for horizontal scanning (hsync). Between the horizontal sync pulses, the color data of the pixels needs to be sent out, at the proper timing, on 3 analog signals (R, G, B) for displaying colors. By driving the color signals at different strengths, different colors can be shown. For example, the Nexys board uses (R[2:0], G[2:0], B[1:0]), which provides the capability of displaying 2^8 = 256 colors at the same time. I wrote a 1-page report about it; more can be found there.
(Keywords: VGA timing, RGB color model, YUV/YCbCr, Nexys Ref. Manual)
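To make the timing concrete, here is a small Python sketch of the standard 640x480 @ 60 Hz parameters. In the FPGA this is just two counters; the porch/sync widths below are the standard values for this mode, not measured from my design:

```python
# Standard 640x480@60 Hz VGA timing, modeled in Python. In the FPGA this is
# two counters: h counts pixel clocks within a line, v counts lines.

H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48   # pixel clocks
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33    # lines

H_TOTAL = H_VISIBLE + H_FRONT + H_SYNC + H_BACK        # 800 clocks per line
V_TOTAL = V_VISIBLE + V_FRONT + V_SYNC + V_BACK        # 525 lines per frame

def vga_state(h, v):
    """Return (hsync, vsync, visible) for counters h and v (active-low syncs)."""
    hsync = not (H_VISIBLE + H_FRONT <= h < H_VISIBLE + H_FRONT + H_SYNC)
    vsync = not (V_VISIBLE + V_FRONT <= v < V_VISIBLE + V_FRONT + V_SYNC)
    visible = h < H_VISIBLE and v < V_VISIBLE
    return hsync, vsync, visible

PIXEL_CLOCK = 25_175_000                          # Hz, standard for this mode
frame_rate = PIXEL_CLOCK / (H_TOTAL * V_TOTAL)    # ~59.94 Hz

# Nexys color depth: R[2:0], G[2:0], B[1:0] -> 2^8 simultaneous colors
NEXYS_COLORS = 2 ** (3 + 3 + 2)                   # 256
```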

I also spent a little time on storage methods in case they were needed.
(Keywords: SD “Secure Digital” card, File System.)




Second Week (July/15/2010) – About CMOS Sensor

I spent a lot of time searching for CMOS sensor components and their design specs in order to determine whether, technically, a module could fit the needs of the video system. Some findings after a week's search:
  • Most CMOS sensors are capable of outputting 30 fps (frames per second). One of the reasons could be that human eyes cannot tell the difference between images when the switching rate is higher than 24 fps.
  • There are only a few leading companies that have products capable of >30 fps. OmniVision seems to be the one taking over the leadership, with products that support 60 fps. Since I am going for a video system that can do slow motion, I do need high fps. OmniVision's products seem to be a good match for me.
  • After a week's search, I decided to go for OmniVision's CameraCube (shown in picture).
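A back-of-envelope calculation of what those frame rates mean in raw data (illustrative arithmetic, not sensor specs):

```python
# Back-of-envelope pixel rates for different resolution/fps choices.
def pixel_rate(width, height, fps):
    """Raw pixels per second the sensor must push out (ignoring blanking)."""
    return width * height * fps

vga_30  = pixel_rate(640, 480, 30)   # 9,216,000 pixels/s
vga_60  = pixel_rate(640, 480, 60)   # 18,432,000 pixels/s - double the data to move
qvga_60 = pixel_rate(320, 240, 60)   # 4,608,000 pixels/s - QVGA as a fallback

# Slow motion: capture at 60 fps, play back at 30 fps -> 2x slowdown
slowdown = 60 / 30
```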




Third Week (July/22/2010) – What I need is actually a “Camera Board”

One mistake I made: not only was the technical data hard to get, but I also hadn't thought through how I would connect all the modules together. The CameraCube is very small (I didn't notice until I received the real module in my hands), and special sockets are needed.

I explored the possibility of building my own PCB but later decided to just buy a “Camera Board” instead to save time & effort. However, it is very interesting that people have come up with many different ways to manufacture their own PCBs. There may be some innovation to be done there, but that's for future projects.

It is an interesting discovery that there seems to be a large community out there of hobbyists who like to build their own robots. Commercial products/components are available to get started, e.g. www.sparkfun.com, www.furtureelectronics.com, www.digikeys.com, etc.

One interesting thing about machine vision is that there are a few projects led by either university labs (e.g. www.cs.cmu.edu/~cmucam) or companies like www.surveyor.com. There are quite a few interesting projects done at Surveyor.

In the end, I decided to order the camera board Surveyor provides. (It seemed to be the best/latest product out there, available at a reasonable price at the time of this project.) It was also interesting to learn about another type of product, “CCD camera boards”, which are a lot more mature and popular, but that's something to look into later.






Fourth Week (July/28/2010) – Not working!! Introduction of my “picoScope”

After studying the camera spec, coding the controller needed in the FPGA to access the camera, and building the physical prototype, the outcome was no surprise – not working!!

One of the major issues is that for the FPGA board to communicate with the OV7725, I have to follow its “SCCB” protocol, which I found out later is actually very similar to the I2C protocol Philips had developed.
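For reference, an SCCB register write is a 3-phase cycle: device slave address, register (sub-)address, then data – almost identical to an I2C register write, except the 9th bit of each byte is a "don't care" rather than an ACK. The sketch below models the bus waveform in Python instead of driving real GPIO; the 0x42 write address and the COM7 (0x12) reset value are from my reading of the OV7725 datasheet, so treat them as assumptions:

```python
# Sketch of an SCCB 3-phase write cycle (device ID + sub-address + data).
# Instead of real GPIO, this model records the (sda, scl) waveform so the
# transaction can be inspected, much like viewing it on a logic analyzer.

class SCCBMaster:
    def __init__(self):
        self.trace = []          # recorded (sda, scl) samples

    def _bus(self, sda, scl):
        self.trace.append((sda, scl))

    def _start(self):            # SDA falls while SCL is high
        self._bus(1, 1); self._bus(0, 1); self._bus(0, 0)

    def _stop(self):             # SDA rises while SCL is high
        self._bus(0, 0); self._bus(0, 1); self._bus(1, 1)

    def _byte(self, value):      # MSB first; 9th bit is "don't care" in SCCB
        for i in range(7, -1, -1):
            bit = (value >> i) & 1
            self._bus(bit, 0); self._bus(bit, 1); self._bus(bit, 0)
        self._bus(1, 0); self._bus(1, 1); self._bus(1, 0)  # release SDA, 9th bit

    def write_register(self, dev_id, sub_addr, data):
        """3-phase SCCB write: slave ID (write), register address, value."""
        self._start()
        self._byte(dev_id)       # phase 1: device slave address (write)
        self._byte(sub_addr)     # phase 2: register (sub) address
        self._byte(data)         # phase 3: data to write
        self._stop()

# e.g. write address 0x42, COM7 register 0x12, value 0x80 (soft reset) -
# values assumed from the OV7725 datasheet
m = SCCBMaster()
m.write_register(0x42, 0x12, 0x80)
```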

I realized I didn't know enough about the FPGA debug tools yet, and since it is very important for me to debug issues systematically, I decided to build my own soft (chip-internal) logic analyzer. The purpose is to figure out what the waveforms look like at the boundary of the FPGA pins.




Fifth Week (August/04/2010) – my picoScope (Part II)

Designing a logic analyzer wasn't as simple as I thought at the beginning; there is clock-domain crossing involved, as well as the question of how to produce different clock frequencies in a Xilinx FPGA.

After a week's debug efforts, the first working version was finally available. The logic analyzer, which I call “picoScope”, is equipped with multiple sampling-clock options, selected through on-board switches, and an input trigger that decides when to start sampling signals. As the first version, it has 16 channels and can capture 32 cycles. I later realized that it can also be used to sample external signals, which can be routed in through the PMOD ports on the board. Once the sampling is done, the picoBlaze, with the assembly code programmed inside, converts the data and outputs it to the VGA display as shown. The amount of data to store is key; for chip-internal debug, being able to view 32 cycles over 16 signals is perhaps enough. The picoScope uses BRAM to store the data.
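The capture logic can be sketched in a few lines – a Python model of the idea, not the actual HDL:

```python
# Model of the picoScope capture: wait for a trigger, then record 16 channels
# for 32 sampling-clock cycles into a fixed-size buffer (BRAM in hardware).

DEPTH, CHANNELS = 32, 16

def capture(samples, trigger_channel=0):
    """samples: iterable of 16-bit words, one per sampling-clock cycle.
    Recording starts at the first cycle where the trigger channel is high."""
    buf, armed = [], True
    for word in samples:
        if armed and not (word >> trigger_channel) & 1:
            continue             # still waiting for the trigger condition
        armed = False
        buf.append(word & 0xFFFF)
        if len(buf) == DEPTH:    # buffer (BRAM) is full
            break
    return buf

# 10 idle cycles, then channel 0 goes high alongside a counting pattern
stream = [0] * 10 + [(n << 1) | 1 for n in range(100)]
data = capture(stream)
```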

At a later time, an idea came to my mind: the data could actually be compressed, or stored in vector form, and with that, more data could be stored in the same size of storage elements. Below is another picture from when I was playing with different display settings:
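One way to read the “vector form” idea is run-length encoding: digital signals usually hold their value for many cycles, so storing (value, count) pairs can stretch the same BRAM much further. A quick sketch:

```python
# Run-length encoding of a capture: digital traces often sit at one value for
# many cycles, so (value, count) pairs can be far smaller than raw samples.

def rle_encode(samples):
    runs = []
    for word in samples:
        if runs and runs[-1][0] == word:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([word, 1])    # start a new run
    return runs

def rle_decode(runs):
    out = []
    for word, count in runs:
        out.extend([word] * count)
    return out

# A 32-cycle trace with one short pulse compresses to just 3 runs
trace = [0x0000] * 20 + [0x0001] * 5 + [0x0000] * 7
runs = rle_encode(trace)
```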




Sixth Week (August/11/2010) – Debug: PC-based USB Logic Analyzer


From the picoScope, I realized the signals on the FPGA ports were alive. I had to be able to know what was going on on the physical bus I connected between the board and the OV7725 module.

After a few rounds of study of what products were available on the market, I decided to buy a low-cost PC-based USB logic analyzer for the price of $149 (Saleae Logic Analyzer). It worked out pretty well.




Seventh Week (August/18/2010) – Breakthrough: the first Image! (Understanding the SCCB (I2C) protocol & an echo from Mars!)

With the help of the logic analyzer, I got a better understanding of the activities on the I2C bus. After debugging the software and hardware for a week, a few problems were fixed and finally the first image came up, as shown.

Problems found:
  • The I2C bus has some physical specs, which require VDD pull-up circuitry that I didn't implement.
  • The value I programmed into a main control register was wrong. As a result, the registers were being reset and all programmed values were lost.

The image above was the first image I saw. There was a bug in how the vertical synchronization signal was resetting the address of the BRAM, resulting in unstable image display.

I started to notice that some of the video signals are very sensitive, and results can be very different even if I use the same firmware and physical components.






Eighth Week (August/25/2010) – Storage: Resolution vs. Colors - On-Board SRAM Access, Color Image & Issues of Video Signals



In the first-cut design, the system only displayed a single color channel using RGB010 (meaning only 1 bit is stored per pixel for display purposes). There are two directions I can go, given the BRAM available on the chip.

The picture on the left was the experiment where I used the memory to store more data for the green channel to enhance the resolution (3 bits for the green channel).

The picture on the right was the experiment that used the memory to store more color bits (RGB111).
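The tradeoff in numbers – a rough bit budget. (The 360 Kbit BRAM figure for the XC3S500E on the Nexys2 is my recollection of the datasheet, so treat it as an assumption.)

```python
# A rough bit budget: each extra bit per pixel costs width*height bits of BRAM.
def frame_bits(width, height, bits_per_pixel):
    return width * height * bits_per_pixel

qvga_rgb010 = frame_bits(320, 240, 1)   # 1-bit green only:    76,800 bits
qvga_g3     = frame_bits(320, 240, 3)   # 3-bit green channel: 230,400 bits
qvga_rgb111 = frame_bits(320, 240, 3)   # R,G,B 1 bit each:    230,400 bits
vga_rgb111  = frame_bits(640, 480, 3)   # 921,600 bits - does not fit in BRAM

# Both directions cost 3 bits/pixel, which is why they compete for the same
# memory. Assumed BRAM capacity of the XC3S500E: ~360 Kbit.
BRAM_BITS = 360 * 1024
```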

For the system to display a good-quality image, there are a few things that need to be resolved:
  1. Debug the flaky & noisy image, possibly caused by the clock shape degrading in transmission from the OV7725 to the FPGA board.
  2. The configuration of the OV7725 – there are so many parameters to tune, e.g. exposure time, color strength.
  3. More memory to store more data.


Among the three, (3) is the most important one. I did look into the on-board memory and realized that it is very easy to use when programmed as asynchronous SRAM.

However, it was later proven, both from implementation and paper analysis, that the output speed is not enough (limited to 12.5 MHz), which cannot keep up with the VGA display frequency.
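The arithmetic of the shortfall, using the nominal ~25 MHz VGA pixel clock:

```python
# Why asynchronous SRAM access falls short: VGA 640x480@60 Hz needs roughly
# one pixel per ~25 MHz pixel clock during the visible region, but the
# asynchronous reads topped out at 12.5 MHz.

PIXEL_CLOCK_HZ = 25_000_000   # nominal VGA pixel clock for this mode
ASYNC_SRAM_HZ  = 12_500_000   # async-read rate observed in this project

shortfall = PIXEL_CLOCK_HZ / ASYNC_SRAM_HZ   # 2.0 -> reads are 2x too slow,
                                             # hence the interest in burst mode
```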

A possible solution is to explore the “Burst Mode” of the SRAM chip. However, time is up (a two-month window of spare time aside from my day job) and I shall start wrapping up a summary report for this project.



Ninth Week (September/01/2010) – Wrapping up: RGB111 Using BRAM & Future Work


As of September/01/2010, the best result for this project is using the FPGA BRAM to store 3 bits of data per pixel for display at QVGA resolution (the design was capable of handling VGA 640x480, but memory space limited it to 320x240x3 bits).

For future work, the memory system needs to be designed carefully to meet the requirements of VGA display. The video signals also need to be debugged and handled/protected carefully so that the image output is stable.

As for possible applications, multiple camera modules could be used to record images in parallel, so that more pictures can be recorded for displaying slow motion if each CMOS sensor is limited to a certain output rate (fps). An algorithm may need to be explored to synchronize the pictures. A possible direction is to place the cameras in a circle around the same center point.

Another interesting project would be using two cameras at a proper distance (which needs some careful calculation and careful positioning of the camera modules); by locking the color channels, a 3D video system could possibly be built.