Thursday, June 23, 2011

Ten Weeks of Learning iPhone Programming


About
This is an effort of mine to learn mobile-app programming. This time I picked iOS first (iPhone programming, Xcode 4, Objective-C). As a result of this effort, I wrote an iPhone game called "HappyBall & MadBall"; the theme of the game is delivering happiness to others.


The following is my 3-minute presentation for an iPhone programming class that ended in June 2011. It was a very intensive programming experience, given that I hadn't done major C programming in a while, didn't know Objective-C to begin with, and had yet another video project going on at the same time. Condensing 10 weeks of effort into 3 minutes was insane. I worked on the video until the last minute - it took me 6 hours of concepting, laying out story lines, shooting, and editing, and it was finished right before the presentation - I didn't even get time to review it a second time. It was done under great time pressure, but it was fun. The video has since been refined in a few details after the presentation.






The App - "HappyBall & MadBall"
Here are some screenshots (these show only part of the game; the app needs to run on a device to enable the camera view; the pictures below were captured from the iOS Simulator). I will write more details about the app later, perhaps after I make it to the App Store...

This App is about real-time image processing for a simple ball game that delivers "Happy Energy" from one ball to another, with real-time obstacles derived from objects in reality.
The app uses a "tab bar view controller" (an iOS term) to switch between two views. The second view in this case is for Settings, where users can turn the camera on/off and choose whether to enable the "Object Vision" feature. Object Vision is a term I made up so people can grasp the idea more easily; it refers to image processing that performs edge detection on the camera images. People use terms like "Virtual Reality" and "Augmented Reality" for similar ideas. This one is maybe "Reality in Virtual".


These are the two main characters in the game I created:


HappyBall - the one you give touches to in order to keep its energy up. It bounces in the view and reacts to gravity (i.e. the iPhone's accelerometer). Every time HappyBall hits MadBall, it delivers a point ("Happy Energy") to MadBall. Because it shares its happy energy with others, its own energy drops, i.e. the touch count decreases a little bit. Only your touches will boost it back up!




This is MadBall. It is very mad at the beginning, until HappyBall forwards it some "Happy Energy". Once it receives enough points from HappyBall, it starts changing colors and eventually turns into a happy color (at point levels 10, 20, 30 and 50)!
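For readers who like to see the rules in code form, here is a tiny illustrative sketch in plain C (the actual app is written in Objective-C; all names here are made up for this post) of the energy-transfer and color-level rules described above:

    /* Illustrative sketch only - the real app is Objective-C; names are invented. */
    typedef struct {
        int happy_energy;   /* HappyBall: raised by taps, drained by sharing */
        int mad_points;     /* MadBall: "Happy Energy" points received       */
    } GameState;

    void on_tap(GameState *g)       { g->happy_energy++; }   /* your touch boosts HappyBall */

    void on_collision(GameState *g)                          /* HappyBall hits MadBall      */
    {
        g->mad_points++;                                      /* deliver one point           */
        if (g->happy_energy > 0) g->happy_energy--;           /* sharing costs a little      */
    }

    /* MadBall's color steps toward "happy" at point levels 10, 20, 30 and 50. */
    int madball_color_level(const GameState *g)
    {
        if (g->mad_points >= 50) return 4;
        if (g->mad_points >= 30) return 3;
        if (g->mad_points >= 20) return 2;
        if (g->mad_points >= 10) return 1;
        return 0;
    }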


In short, the aim was a game simple enough for users at any level to pick up easily. Studies show that most people spend their time on "simple games" on their mobile devices (take a look at the top iPhone/iPad apps here). I also aimed this app at children - I hope they can be just like HappyBall and deliver happiness to others around them! It was satisfying to see my daughter get the idea right away the first time she played it. She came back to me saying "I won!" with excitement after spending 10 minutes getting MadBall to turn happy. During the design process she actually advised me not to lower HappyBall's energy - that made her sad, and she wanted HappyBall to be happy all the time!



This is a map of the Happy Energy delivery cycle:



Summary:
My idea in learning iPhone programming was to be able to have mobile devices interact with other hardware devices ("myPAC", as described at the end of the video clip). I think this project gave me a good jump-start and also helped me pick up a few ideas along the way. iPhone programming actually isn't as easy as it seems - I would say it takes a good iPhone programmer years to build up the key skills and learn the tricks. It's no different from any other kind of engineering work. However, things here are more tangible, and there is a lot of fun in the process. The challenging side is that these apps are about presentation, and a lot of copyright may be involved - images, sound effects, etc. A multi-talented programmer can make things a lot easier. The mechanism for taking an app from an idea to a product on the market seems pretty well organized. I haven't tried it yet; maybe that's something I will try soon - submitting my app to the App Store and getting a picture of what that is like.




Tuesday, June 14, 2011

Running Ubuntu Linux on Beagle Board xM rev C with DSP Bridge Enabled for GStreamer




About

This post is about video streaming on a hardware box running embedded Linux. After some effort and help from friends (special thanks to Siva.V), the DSP on the Beagle Board finally seems to be working. Below I log the steps for how it was done. Overall, this effort is for me to learn how to set up an embedded system for video streaming using:
  • Hardware: Beagle Board xM (rev C) (SD ID: xMTEST beta 3-30)
  • Software: Ubuntu Linux (2.6.39 kernel) with GStreamer 0.10 and DSP tools for the on-board DSP

Why This Log

Many websites list details of how their projects were done, but some of the information can be outdated. I have spent a significant amount of time trying out different recipes; here I am listing one recipe that seems to work for me as of June 2011. There are many ways to achieve the same goal, e.g. building everything from open-source code. Here I focus only on how to put the pieces together - this is a faster way to get the overall idea first and leave the details to explore later. Most steps leverage existing packages from other people's efforts; you can certainly explore further how to build them from source and gain better insight.

Hopefully this helps the people who are interested in a similar project and need some pointers to get started. 

The Big Picture

First, the overall idea of the steps involved in this recipe:

  • STEP1: Get the HW/SW needed for this project.
  • STEP2: Build a bootable SD for the Beagle Board
  • STEP3: Build the Linux Kernel
  • STEP4: Build the DSP & GStreamer Tools Needed
  • STEP5: Misc Stuff (VNC/Networking)
  • STEP6: Try out some videos
  • STEP7: What Next


STEP1: Get the HW/SW needed for this project

If you are reading this post, I assume you have some idea of the Beagle Board & Linux. If not, the websites referenced below are good starting points. I will be using the reference numbers [1]-[10] throughout this post.



  [1] Beagle Board website
  [2] Beginner's wiki
  [3] BeagleBoard Ubuntu
  [4] BeagleBoard DSP from source
  [5] Felipe Contreras' gst-dsp
  [6] Wiki: gst-dsp
  [7] Wiki: gst-omapfb
  [8] Wiki: gst-dsptools
  [9] Ubuntu VNC
  [10] Big Buck Bunny

This picture shows the environment setup. The laptop is the host machine, which connects to the Beagle Board through a USB-to-serial adapter and uses the minicom program to communicate with the board for initial installation and configuration. Once the embedded Linux is up and running, the host machine can use VNC to launch programs on the Beagle Board, e.g. GStreamer.
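For reference, opening the serial console from the host typically looks like this (assuming the USB-to-serial adapter shows up as /dev/ttyUSB0 and the board's console runs at 115200 baud - adjust for your setup):

    sudo minicom -D /dev/ttyUSB0 -b 115200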


STEP2: Build a Bootable SD for the Beagle Board
Our goal here is to build a bootable SD card for the Beagle Board with 2 disk partitions. In wiki [4], Section 4, "SD card boot", lists great details and pointers on how this can be done. However, if you want to save time, you can jump ahead and load a working image directly from [3]. I followed section 5.1 in [3] for Natty 11.04.

You can play with it to get a taste, but all we need is its first partition; the uImage there will be overwritten later.
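If you would rather build the card yourself per [4] instead of using the pre-built image, the general shape is a small FAT boot partition plus an ext3 root partition. A rough sketch, assuming the card shows up on the host as /dev/sdX (double-check with dmesg before formatting anything!):

    sudo mkfs.vfat -F 16 -n boot /dev/sdX1    # partition 1: boot files (MLO, u-boot, uImage)
    sudo mkfs.ext3 -L rootfs /dev/sdX2        # partition 2: Ubuntu root file system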


STEP3: Build the Linux Kernel & Root File System

The goal here is to build a uImage file to be loaded into partition 1 of your SD card. Follow the instructions from [3]'s section 9.4 (DSP). This basically uses RobertCNelson's stable kernel git tree. It seems he has integrated everything needed to enable the BeagleBoard DSP from Felipe Contreras' projects - his website is [5].

A few things to clarify here. Once you clone Robert's stable kernel git, you will find 3 scripts in the directory. You need to run all of them. Their purposes are:

  • build_kernel.sh - this basically builds the uImage. Refer to [4] section 6.3, "Deploy the kernel"; you will need to copy the uImage to partition 1 of your SD card.
  • build_deb.sh - this builds the "*.deb" package used by Ubuntu rootstock to deploy the userspace file system, i.e. partition 2 of your SD card. You will find the *.deb file in the deploy directory after you run the script. Refer to section 7 of [4], "Userspace File System", to see how this is done. The idea is that you enable a web server (lighttpd) on your host, copy the *.deb to /var/www, and build a tarball in a clean directory so you can later copy it to partition 2 and untar it as your file system. Wiki [4] sections 6.2 and 7 describe this process in enough detail. When you deploy your userspace file system, you can use the command in [4] section 7.1.1, "LXDE root file system"; this gives you a GUI environment so you can later log in remotely over VNC and play back video from there.
  • create_dsp_package.sh - see the next section (STEP4).
At this point, you should be deploying the Linux kernel and file system to the SD card (a sketch of the copy steps follows below) and able to boot Ubuntu on the Beagle Board. You may want to enable VNC and networking before installing the DSP pieces - please refer to STEP5: Misc Stuff.
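To make the deploy step concrete, here is roughly what the copy looks like on the host. This is only a sketch: it assumes the two SD partitions are mounted at /media/boot and /media/rootfs, and the rootstock tarball is called ubuntu-rootfs.tar (both names are placeholders for illustration):

    # partition 1 (boot): copy the uImage produced by build_kernel.sh
    sudo cp deploy/<your-uImage> /media/boot/uImage

    # partition 2 (rootfs): unpack the rootstock tarball built per [4] section 7
    cd /media/rootfs
    sudo tar xfp ~/ubuntu-rootfs.tar
    sync
    sudo umount /media/boot /media/rootfs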


STEP4: Build the DSP & GStreamer Tools Needed

The create_dsp_package.sh script builds the DSP tools and DSP BIOS needed to enable the DSP hardware on the Beagle Board. You need to build this and install it in your Beagle Board Linux later. See [3]'s DSP section. There are essentially 3 steps involved in the script:
  1. build gst-dsp - for background on what this does, see [6].
  2. build gst-omapfb - for background, see [7].
  3. build dsp-tools - for background, see [8].


Once the script completes, the DSP modules will be loaded automatically on the next boot. You should see the dspbridge and mailbox modules in the "lsmod" output. Also check /opt/dsp to see what was installed.


STEP5: Misc Stuff (VNC/Networking)


VNC

In order to log in to your Beagle Board remotely and play around just like on your desktop, a good solution is VNC. [9] is a good post on this. There is one thing for you to decide - whether you want the VNC server to start automatically after boot. If not, all you need is:
 sudo apt-get install tightvncserver
to install the VNC server on your Beagle Board Linux, then type "vncserver" after logging in. On the host you should be able to find a VNC viewer for remote login, e.g. "xtightvncviewer". Step 2 and the following steps on webpage [9] are needed if you want VNC loaded automatically; this is convenient because afterwards all you need to do with the Beagle Board is power it up - you don't even need minicom to log in.
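A minimal manual session might look like the following (the display number and geometry are just examples):

    # on the Beagle Board, after logging in
    vncserver :1 -geometry 1024x768 -depth 16

    # on the host, connect to display :1 on the board
    xtightvncviewer <board-ip-address>:1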

Enable Networking

It is likely that networking is not enabled when you first log in to the Beagle Board through minicom. See section 8.4.1 of [3]. My recommendation is simply to copy the /etc/network/interfaces file from your original SD card to the new SD card you are working on.
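If you prefer to write the file by hand instead, a minimal /etc/network/interfaces using DHCP on the wired port looks something like this (assuming the interface is eth0):

    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet dhcp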


STEP6: Try out some videos
If everything is set up right, you are ready to try out some videos. A few steps:


Test  DSP


  •   Perform “lsmod” to see if DSP modules are loaded (check bridgedriver and mailbox_mach).
  •   Have two terminals and run “sudo dsp-load” and “sudo dsp-test” separately to see if the loading picks up.

Play Some Videos:


Only a few video formats are supported in this package (MP4, JPEG, WMV, H.264, etc.). You can easily find videos in those formats, or download one from [10]. I recommend the iPhone trailer - a small clip. Run the command to launch GStreamer:
  sudo gst-launch playbin2 uri="file://$PWD/trailer_iphone.m4v"

Voila! You see your first video on your Beagle Board.
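playbin2 picks the decoder automatically; if you want to see the DSP path spelled out explicitly, you can also try a manual pipeline. This is only a sketch - the element names come from gst-dsp and gst-omapfb and may differ depending on the versions you built:

  sudo gst-launch filesrc location=trailer_iphone.m4v ! qtdemux ! dspvdec ! omapfbsink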



STEP7: What Next


See the next post on performance tuning... enough typing for today.


Thursday, June 9, 2011

A Few Thoughts on Apple's 2011 WWDC (originally written in Traditional Chinese)



I have had a bit more time recently, so I wrote down some thoughts about Apple's 2011 WWDC
(WWDC: Apple Worldwide Developers Conference),
sharing some personal views - thinking about what impact this has, what it means for the tech industry, and using it as a reference for some investment ideas. (What follows is purely personal opinion; I take no responsibility for any investment losses :) ) If friends or seniors have different views, please don't hesitate to correct me or discuss.


Preface - A Tech World in the Middle of a Revolution
It's not that I particularly want to embrace Apple's rise, but as Apple's CEO Steve Jobs himself put it - Apple today is growing like a weed, impossible to hold back, and his job is simply not to screw up the opportunity. Honestly, this is very similar to Sun Microsystems' situation ten years ago - Sun's CEO Scott once said the fish were so plentiful they were jumping into the boat by themselves...

I have been looking into Apple's work a bit lately, and their products have real depth. From my hands-on experience you can see that they care a great deal about details and quality, put a lot of work into planning, and spend a lot of effort defining rules and doing quality control. This is not the kind of quality a small company can ship, and I don't see it from many of the companies Silicon Valley calls big, either. Apple kept a low profile for more than ten years, accumulating considerable technical strength, and now it is all bursting out. To me this is the most instructive part: simmer a pot of soup slowly and patiently, and eventually it pays off.

When I came to Silicon Valley in 1999, Apple was a company on the verge of collapse. In 2003 the stock was only around $10; today it is about $350 - its market cap has already surpassed Microsoft's, and it commands capital, talent, and technical strength capable of shaking the world.


Apple's 2011 WWDC, Riding the Crest of the Wave
Take a close look at the new developments announced yesterday (06/06/2011) at WWDC. They will clearly have a major impact on the PC industry. Three big announcements (the focus this time was software, unlike past events with new hardware):

  •       OS X Lion
  •       iOS 5
  •       iCloud


OS X Lion
The new operating system, OS X Lion. The key point is that Apple is bringing the human-machine interface lessons learned from the iPhone/iPad to desktop and notebook computers. What is the impact?

  • If I imagine myself as a manager at Microsoft, seeing this would really make me panic. Apple, having once introduced the mouse, is now rewriting how the mouse is used. I am not sure what the best Chinese term for "gesture" technology is (basically, controlling the interface by drawing around with a few fingers, which makes operating and browsing software much more fluid), but within a few years it will foreseeably have considerable influence. Isn't that exactly the kind of impact the mouse and the iPhone interface had before? (Worth noting for tech-stock investing.)
  • Windows/Linux will have to respond, or more users may switch to the Apple camp. Apple's new OS sells for only about NT$1,000 (US $29.99). Who wouldn't buy it to try - it's fresh and easy to use. This low-price strategy is aimed squarely at the PC Windows market (and Linux along with it), getting more people onto Apple hardware so Apple's profits keep growing and the snowball keeps rolling. A really clever move... (Hardware is currently their biggest source of revenue.)
  • From the notebook angle, Apple says their laptop revenue is far greater than their desktop revenue. With the trend Apple is driving, will this stall notebook growth in the Windows camp? (Well, if OS X keeps getting better, comes with a substantial number of apps, and OpenOffice is free for documents, it seems Windows PC users could convert to Mac users, right? Windows is deeply entrenched after all these years, but if Apple's OS X and iOS apps develop for a few more years, and you also look at how Ubuntu Linux has progressed recently, Microsoft really needs to tread carefully...) Consider a practical question: if you had to choose an operating system for a 20-person office - (1) Windows PC, (2) Mac OS, (3) Linux OS - hasn't the Mac OS option started to look like a real opportunity? And if Apple locks down the handheld and personal-computer markets, will it push into the server market next?
  • Companies that specialize in mice will have to invest in touchpad R&D; for them this may be a matter of life and death. If they don't, they will stumble here eventually. If Windows/Linux also move in this direction, the mouse may gradually fade away - which is also an investment opportunity. If all three OS camps move toward gesture interfaces, companies that make touch interfaces (computer touch pads with an embedded co-processor for gesture recognition) may see good business over the next few years; conversely, companies that stay out of this area and cling to the mouse can probably be shorted before long.
  • The new OS X makes heavy use of Wi-Fi for wireless transfer, so companies specializing in Wi-Fi are worth investing in. I have also been studying several wireless technologies lately, and Wi-Fi is still the best option - good on offense and defense, with very wide market penetration. (Others - Zigbee, ANT, Bluetooth - have different markets and uses; I won't go into their characteristics here, but they can all be seen as variations on the Wi-Fi theme.) Qualcomm acquired a Wi-Fi company just last year. What is the impact of this Wi-Fi push? Apple added an AirDrop feature that lets users send files between computers wirelessly. In other words, the USB-disk way of doing things will fade quickly, or at least see limited growth. Don't keep investing in the companies that make USB disks; they will hit a growth ceiling soon. (Think about it: 4G is right around the corner, optical links in the lab can already move a whole DVD in a second, and iCloud is maturing - it is hard to imagine the USB disk drive market continuing to expand, even though some demand will remain.)


iOS 5
The new iOS 5. This is basically the new version of the iPhone/iPad OS. Honestly, once I understood iOS in more depth, I realized it is very hard for Android to compete with Apple. Apple's integration skill and innovative thinking are second to none. The Apple vs. Android camps today are like the old Windows vs. Linux war. Only computer enthusiasts can really handle Linux; it may be free, but out of ten computer users, how many can actually use it? Without integration, even simple versioning issues scare away a lot of non-engineering users. Only big players can really afford to do integration, and when it comes to broad, top-to-bottom integration, the Android camp falls short.

Embedded is probably Linux's best path forward. But with Apple sweeping up smart devices large and small, a big slice of the embedded market has probably been carved away. I am starting to feel that building embedded systems too close to Apple's devices may be like keeping company with a tiger.

From my own experience with iOS, I can see Apple's skill in integration and in setting specifications. Before long this will leave Android's pursuit behind. Apple takes a strategy somewhere between Windows and Linux: mostly in-house system integration, with parts opened to the outside. That said, iOS is advancing so quickly that it must be somewhat troublesome for developers chasing Apple (it changes too fast - some people are still learning iOS 3 while everyone is already talking about iOS 5), but remembering my own passion for programming when I was young, I don't think it is a big challenge for the new generation of young engineers. With the leaps in the number of iPhone apps, the gap in app influence between Apple and Android may well widen.

Apple is charging ahead with this semi-open approach, leading a crowd of developers. I think the Android camp needs a corresponding strategy, or how will it match Apple in the future? As time goes on, the long road will reveal the horse's strength.
   
iCloud
It looks like Apple has now spelled out its view of cloud technology and how it intends to do it. This is a big experiment. If it succeeds, it pushes Apple up another level and further distances it from its competitors. (What concrete cloud technologies do the Wintel, Linux, or Android camps have right now?) Apple's cloud, simply put, is a set of remote services that tie together all of a user's hardware, deepen the convenience of the user experience, and simplify the process of buying things.

Apple says it is free, but the real aim is to make the App Store even more convenient. I keep thinking Apple is formidable here too - the App Store sales model alone could expand to dominate software distribution. If one day Apple runs out of iPhone/iPad hardware to build, the App Store still leaves plenty of room to push harder: online commerce, home delivery, selling soda, selling e-magazines... that would truly be a hardware/software/network/search winner-take-all mindset...


Appendix:
   (A) A shifting landscape - one side's loss is the other's gain:
Apple dominates global chip consumption
http://udn.com/NEWS/WORLD/WOR2/6388396.shtml
Texas Instruments issues a warning
http://udn.com/NEWS/WORLD/WOR2/6388399.shtml


   (B) Apple's next-generation office building.
Yahoo English coverage
http://news.yahoo.com/s/yblog_localsfo/20110608/ts_yblog_localsfo/steve-jobs-makes-surprise-presentation-on-new-hi-tech-apple-headquarters?bouchon=807,ca
Chinese-language news came out quickly, too
http://udn.com/NEWS/WORLD/WOR4/6387685.shtml


Wednesday, April 20, 2011

A Wireless Control Project Using Zigbee for Temperature Control





About
This is an effort for me to explore wireless communication for embedded systems. The project targets wireless control using Zigbee modems, with temperature feedback control as the theme. Special thanks to Raymond.Y and Ajay.K, who were my partners on this project in April 2011; also thanks to Avnish.A for his advice on this project and his knowledge of wireless communication protocols.


Zigbee is a low-power wireless communication protocol targeting uses such as home/factory automation. I have a separate post comparing different wireless protocols.




Project Specifications
  • Automatically detect and configure Zigbee nodes
    • Analog IO at rate of one sample per 10 secs.
    • End nodes must sleep for maximum battery life
  • Maintain database of nodes
  • Report to the console every minute
    • Data reported by ND
    • Latest temperature for nodes
    • Flag nodes with temperature change in past minute
    • Flag nodes with temperature above threshold




Hardware Design
This is a brief video walk through of the hardware design:




Software Design
Display (every 20 seconds) :
  • MY  MY address
  • SH  SH address
  • SL  SL address
  • TEMP  Most recent temperature in deg Celsius
  • CHG  1: change in 1 deg at any time since last print
  • Alert  1: temperature > 35 at any time since last print
  • IR  1: node acknowledged sample rate program
  • D0  1: node acknowledged ADC program
  • Retries number of retry attempts for failed IR or D0
  • UPD  1: node gave a temp update since last print
  • ACT  1: node is alive and active
System Internal Loop :
Packet Processing Flow Chart:
Database Structure for a Single Node (a C-style sketch follows this list):
  • SOURCE_ADDRESS
    • 10 byte address  MY:SH:SL
  • TEMP
    • Temperature in Celsius
  • NUM_RETRIES
    • Number of retries to program node
  • Flags
    • ACK_IR
      • Node successfully programmed Sample Rate
    • ACK_D0
      • Node successfully programmed ADC
    • UPD
      • Node updated in last minute (since last console print)
    • ACT
      • Node responded to ATND in last minute
    • CHG
      • Temperature changed by greater than 1 deg since last reading
    • ALERT
      • Temperature above threshold
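Putting the fields above together, the per-node record can be sketched in C roughly like this (field names follow the list above; the exact types and widths are illustrative, not the project's actual source):

    /* Sketch of one node's database record; sizes/types are illustrative. */
    typedef struct {
        unsigned char source_address[10];  /* MY:SH:SL, 10-byte address               */
        float         temp;                /* latest temperature in Celsius           */
        int           num_retries;         /* retries to program the node             */
        struct {
            unsigned ack_ir : 1;           /* node acknowledged sample-rate (IR) program */
            unsigned ack_d0 : 1;           /* node acknowledged ADC (D0) program         */
            unsigned upd    : 1;           /* temp update since last console print       */
            unsigned act    : 1;           /* responded to ATND in the last minute       */
            unsigned chg    : 1;           /* changed by > 1 deg since last reading      */
            unsigned alert  : 1;           /* temperature above threshold                */
        } flags;
    } NodeRecord;

    /* The coordinator keeps a static array of these (1024 nodes, per the
       scalability notes below). */
    #define MAX_NODES 1024
    static NodeRecord node_db[MAX_NODES];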


System Scalability

  • For a large number of nodes, ATND network maintenance may take away from sensor bandwidth
    • Trade-off of how often you want to keep network list current
  • Currently static array of 1024 nodes
  • Number of nodes limited by update cycle time
    • Update cycle must have enough time for all nodes to get back to coordinator
    • Coordinator assumes nodes that don’t respond in that time are inactive
  • Star network limited to bandwidth of coordinator
    • More nodes could be supported if routers had intelligence to monitor their own children and update coordinator only periodically.


Challenges

  • Hardware does not always maintain settings after power off and power on
    • D0 value reset to 0
  • Frame sizes change for ATIR depending on the range of the sample rate.
    • 1 byte for 255ms
    • 2 bytes for greater than 255ms (we needed this for 10 secs)
  • Printf of hex values interpreted as signed values prepends ffffffff
    • Corrected by changing the packet data to unsigned
  • Nodes sometimes take a very long time to respond to ATND


DEMO



Monday, August 30, 2010

Project Diary of Accessing the Color CMOS Image Sensor OV7725 Using a NEXYS FPGA Board



"In theory, there is no difference between theory and practice. In practice, there is.
Fearless Experimentation!"



About
This is an FPGA project using the NEXYS2 board that I experimented with in the summer of 2010, over about two months. The post is written in a daily-log fashion to record what happened during the project. Special thanks to Dr. Jesse Jenkins @ Xilinx for his advice on my FPGA learning & this project.


This effort was for me to learn the FPGA design flow as well as CMOS sensor control. The system was built simply, using a low-cost FPGA board and a raw CMOS camera module. The focus was to implement all control logic in the FPGA using 8-bit picoBlaze processors and their assembly code. No Linux, nor any other embedded OS or device drivers, was used.




Day 1 (June/30/2010) - Getting Started 

I had no clue what FPGA was about. I only knew that the project cycle could be much shorter in the FPGA world. One day I was in a private golf lesson - this is it! I am going to do a project using video capture as a training aid for analyzing my golf swing & use it for learning FPGAs!

In theory:
http://www.youtube.com/watch?v=Z2o1SYXaOHE&p=B3025594607F8780&playnext=1&index=23
http://www.youtube.com/watch?v=JkZdlYg9UuY

In Practice:
(read on….)




First Week (July/10/2010) – About VGA

I received my FPGA board and used the weekend to study how VGA works. It is not that hard. 

Conceptually, there are two synchronization signals: one for vertical scanning (vsync) and one for horizontal scanning (hsync). Between the horizontal sync pulses, pixel color data needs to be sent out, at the proper timing, on 3 analog signals (R, G, B). By driving the color signals at different strengths, different colors can be shown. For example, the Nexys board uses (R[2:0], G[2:0], B[1:0]), which provides the capability of displaying 2^8 = 256 colors at the same time. I wrote a 1-page report about it; more can be found there.
(Keywords: VGA timing, RGB color model, YUV/YCbCr, Nexys Ref. Manual)
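For a sense of the timing involved: the standard 640x480 @ 60 Hz VGA mode uses roughly 800 pixel-clock periods per line (640 visible plus the horizontal front porch, sync pulse, and back porch) and about 525 lines per frame (480 visible plus vertical blanking), so the pixel clock works out to roughly 800 x 525 x 60 ≈ 25.2 MHz (the standard figure is 25.175 MHz).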

I also spent a little time on the storage method in case needed. 
(Keywords: SD “Secure Digital” card, File System.)




Second Week (July/15/2010) – About CMOS Sensor

I spent a lot of time searching for CMOS sensor components and their design specs, in order to determine whether a module could technically fit the needs of the video system. Some findings after a week's search:
  • Most CMOS sensors are capable of outputting 30 fps (frames per second). One of the reasons could be that human eyes cannot tell the difference between frames when the switching rate is higher than about 24 fps.
  • Only a few leading companies have products capable of >30 fps. OmniVision seems to be taking the lead, with products that support 60 fps. Since I am going for a video system that can do slow motion, I do need a high frame rate, so OmniVision's products seem to be a good match for me.
  • After a week's search, I decided to go for OmniVision's CameraCube (shown in the picture).




Third Week (July, 22) – What I need is actually “Camera Board”

One thing I overlooked was not only that the technical data is hard to get, but also how I would connect all the modules together. The CameraCube is very small (I didn't notice until I received the real module in hand), and special sockets are needed.

I explored the possibility of building my own PCB, but later decided to just buy a "camera board" instead to save time & effort. However, it is very interesting that people have come up with many different ways to manufacture their own PCBs. There may be some innovation to be done there, but that's for future projects.

It was an interesting discovery that there seems to be a large community out there of hobbyists who like to build their own robots. Commercial products/components are available to get started, e.g. www.sparkfun.com, www.futureelectronics.com, www.digikey.com, etc.

One interesting thing about machine vision is that there are a few projects led either by university labs (e.g. www.cs.cmu.edu/~cmucam) or by companies like www.surveyor.com. There are quite a few interesting projects done at Surveyor.

In the end I decided to order the camera board Surveyor provides (it seemed to be the best/latest product out there, available at a reasonable price at the time of this project). It was also interesting to learn about another type of product, the "CCD camera board", which is a lot more mature and popular, but that's something to look into later.






Fourth Week (July/28/2010) – Not working!! Introduction of my “picoScope”

After studying the camera spec, coding the controller needed in the FPGA to access the camera, and building the physical prototype, the outcome was no surprise - not working!!

One of the major issues is that for the FPGA board to communicate with the OV7725, I have to follow OmniVision's "SCCB" protocol, which I found out later is actually very similar to the I2C protocol that Philips developed.

I realized I don’t know enough of the FPGA debug tools yet and meanwhile, it is very important for me to debug issues systematically, I decided to build my own soft (chip-internal) logic analyzer. The purpose is to figure out what’s the waveforms are like at boundary of FPGA pins.




Fifth Week (August/04/2010) – my picoScope (Part II)

Designing a logic analyzer wasn't as simple as I thought at the beginning; there is clock-domain crossing involved, as well as the question of how to produce different clock frequencies in a Xilinx FPGA.

After a week’s debug efforts, finally the first working vision is available. The logic analyzer as I called it “picoScope” is equipped with multiple sampling clock options, through on-board switches, and a input trigger when to start sampling signals. As the first version, it has 16 channels and can capture 32 cycles. I later also realized that it can also be used to sample external signal which can be routed through the PMOD ports on the board. One the sampling is done, the picoBlaze and the assembly codes programmed inside, it converts the data and output to VGA display as shown. The amount of data to store is a key. For chip internal debug, able to view 32 cycles over 16 signals is perhaps enough. The picoScope use BRAM to store the data.

Later, it occurred to me that the data could actually be compressed or stored in vector form, so that more data could be kept in the same amount of storage. Below is another picture in which I was playing with different display settings:




Sixth Week (August/11/2010) – Debug: PC-based USB Logic Analyzer


From the picoScope, I realized the signals on the FPGA ports were alive. I then needed to be able to see what was going on with the physical bus connecting the board and the OV7725 module.

After a few rounds of studying what products are available on the market, I decided to buy a low-cost PC-based USB logic analyzer for $149 (the Saleae Logic analyzer). It worked out pretty well.




Seventh Week (August, 18) – Breakthrough: the First Image! (Understanding the SCCB (I2C) protocol & an echo from Mars!)

With the help of the logic analyzer, I got a better understanding of the activity on the I2C bus. After debugging the software and hardware for a week, a few problems were fixed and the first image finally came up, as shown.

Problems found:
  • The I2C bus has a physical spec that requires pull-up circuitry to VDD, which I hadn't implemented.
  • The value I programmed into a main control register was wrong. As a result, the registers were being reset and all the programmed values were lost.

The image above was the first image I saw. There was a bug in how the vertical synchronization signal was resetting the BRAM address, resulting in an unstable image display.

I started to notice that some of the video signals are very sensitive, and results can be very different even with the same firmware and physical components.






Eighth Week (August/25/2010) – Storage: Resolution vs. Colors - On-board SRAM Access, Color Image & Issues with Video Signals



In the first-cut design, the system only displayed a single color, using RGB010 (meaning only 1 bit per pixel is stored for display purposes). There are two directions I can go, given the BRAM available on the chip.

The picture on the left was the experiment in which I used the memory to store more data for the green channel to enhance the resolution (3 bits for the green channel).

The picture on the right was the experiment that used the memory to store more color bits (RGB111).

For the system to display a good-quality image, there are a few things that need to be resolved:
  1. Debug the flaky & noisy image, possibly caused by the clock waveform degrading on its way from the OV7725 to the FPGA board.
  2. The configuration of the OV7725 - there are so many parameters to tune, e.g. exposure time, color strength.
  3. More memory to store more data.


Among the three, (3) is the most important. I did look into the on-board memory and realized that it is very easy to use if it is programmed as asynchronous SRAM.

However, it was later proven, both from implementation and from paper analysis, that the output speed is not enough (limited to 12.5 MHz), which cannot keep up with the roughly 25 MHz pixel clock needed for VGA display.

A possible solution is to explore the "burst mode" of the SRAM chip. However, time is up (a two-month window of spare time aside from my day job), and I shall start wrapping up a summary report for this project.



Ninth Week (August, 25) – Wrapping Up: RGB111 Using BRAM & Future Work


As of September/01/2010, the best result for this project uses the FPGA BRAM to store 3 bits of pixel data for display at QVGA resolution (the design is capable of handling VGA 640x480, but the memory space limits it to 320x240x3 bits).
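To put numbers on the memory limitation: 320 x 240 pixels x 3 bits is 230,400 bits (about 28 KB) of BRAM, while full VGA at 640 x 480 x 3 bits would need 921,600 bits (about 113 KB) - roughly four times as much.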

For future work, the memory system needs to be designed carefully to meet the requirements of VGA display. The video signals also need to be debugged and handled/protected carefully so that the image output is stable.

As for possible applications, multiple camera modules could be used to record images in parallel so that more frames can be captured for slow-motion playback, if a single CMOS sensor is limited to a certain output rate (fps). Algorithms may need to be explored to synchronize the pictures. A possible direction is to place the cameras in a circle around the same center point.

Another interesting project would be to use two cameras a proper distance apart (this needs some careful calculation and careful positioning of the camera modules); by locking the color channels, a 3D video system could possibly be built.