
Movidius Neural Compute Stick and Raspberry Pi 3 — quick start guide


I’ve been wanting to try out the Movidius Neural Compute Stick (NCS) for a while now and decided to combine it with a Raspberry Pi 3 — the idea being of course to move the setup to a mobile-capable chassis (wheeled or winged!) in the near future.
They are selling like hot cakes here in the UK, or else the manufacturing batches are small. RS Online got their latest batch in last week and they are sold out already; that makes twice they've sold out in the last couple of months.
Hardware-wise, £120+VAT gets you an NCS, a Raspberry Pi 3 and a v2 camera module for the Pi. The OS and SDK are free, of course, so that's a relatively small outlay for a fun deep learning project.
£120 for a mobile deep learning dev kit.
I found the Movidius documentation to be either missing or incomplete: a lot of links indexed by Google now 404, the forums are pretty quiet, and the YouTube videos and other resources imply that a host PC running full Ubuntu is required. That's not the case; it is possible to do a full install straight onto your Raspberry Pi 3, as we outline below. The steps are as follows (all of these steps were executed on the Pi, we did not use a host PC at all):
1. Install Raspbian onto your Pi. We installed Stretch with Desktop straight from the Raspberry Pi website.
2. Next, clone the ncsdk repo from the Movidius organisation on GitHub:
git clone https://github.com/movidius/ncsdk.git
Quick note: this won’t be the only repo you need to get up and running quickly (see below).
3. Make sure you run install.sh using bash, as it has "bashisms": so "sudo bash install.sh". Now we need to wait, as the Pi CPU is pretty underpowered for what comes next.
4. scikit-image takes ~30 mins to compile — be patient, have a coffee and a biscuit.
5. Caffe takes ~1 hour to compile, and it looks like it will run out of RAM but doesn't (it uses ~800 MB and a Pi 3 has 1 GB, so we're OK here). Be even more patient, partaking of coffee and biscuits as necessary.
6. Once install.sh has finished, test your installation by running hello_ncs.py (https://github.com/movidius/ncsdk/blob/master/examples/apps/hello_ncs_py/hello_ncs.py), e.g.
python hello_ncs.py 
You should see output like:
Hello NCS! Device opened normally.
Goodbye NCS! Device closed normally.
NCS device working.
We now have a working Neural Compute Stick!
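For the curious, hello_ncs.py is only a few lines. Here is a minimal sketch of what it does with the NCSDK v1 Python API (the mvnc module is installed by install.sh); treat it as an illustration rather than a copy of the script itself:

from mvnc import mvncapi as mvnc

# Enumerate attached NCS devices; an empty list means the stick isn't visible.
devices = mvnc.EnumerateDevices()
if not devices:
    raise SystemExit("No NCS device found. Is the stick plugged in?")

# Open the first device, then close it again. That's the whole smoke test.
device = mvnc.Device(devices[0])
device.OpenDevice()
print("Hello NCS! Device opened normally.")
device.CloseDevice()
print("Goodbye NCS! Device closed normally.")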
7. Now run
make all
in the examples subdir to download and build some models to test (the pre-trained weights are downloaded from dl.caffe.berkeleyvision.org). This builds three models: GoogLeNet, AlexNet and SqueezeNet. These models are trained on ImageNet, so they can recognise objects from the 1,000 categories that make up the ImageNet corpus. You could in fact build just one of these models; we wanted to compare relative performance, though, so we built all three. This step downloads the pre-trained weight matrices for each model and also compiles each Caffe model into the internal NCS graph format, so the time taken will depend on the speed of your network connection.
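Once make all has finished, each model directory contains a compiled graph file that you can load directly from Python. Here is a hedged sketch of a single still-image classification with the NCSDK v1 API; the graph path, the use of Pillow and the mean values are our assumptions, so check the Makefile and prototxt for your model's exact locations and preprocessing:

import numpy as np
from PIL import Image
from mvnc import mvncapi as mvnc

# Open the stick as in hello_ncs.py (assumes exactly one NCS is attached).
device = mvnc.Device(mvnc.EnumerateDevices()[0])
device.OpenDevice()

# Load the compiled GoogLeNet graph produced by `make all` (path assumed).
with open('caffe/GoogLeNet/graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

# GoogLeNet wants a 224x224x3 input; the NCS works in float16.
img = Image.open('image.jpg').resize((224, 224))
tensor = np.asarray(img, dtype=np.float32)
tensor = tensor - np.array([104.0, 117.0, 123.0])  # ImageNet channel means (assumed)

# Push the tensor through the stick and read the 1,000-way output back.
graph.LoadTensor(tensor.astype(np.float16), 'user object')
output, _ = graph.GetResult()
print('Top ImageNet category index:', np.argmax(output))

graph.DeallocateGraph()
device.CloseDevice()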
8. Now we need our second Movidius repo. Clone the ncappzoo GitHub repo to get stream_infer.py (the same script referred to in this video); otherwise you'll need OpenCV, which will take around 4 hours(!) to build on the Pi, and you'll also need to write Python code to access the video stream. That's not a problem, but we don't need to do it in this quick start guide. So it's:
git clone https://github.com/movidius/ncappzoo
9. Make sure you run
sudo modprobe bcm2835-v4l2
to load the v2 camera module as the /dev/video0 device; otherwise the Python script will simply hang as it tries to access the camera via this handle.
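A quick pre-flight check saves some head-scratching here. This little snippet is not part of the Movidius repos, just a convenience:

import os

# stream_infer.py hangs silently if the camera device node is missing.
if not os.path.exists('/dev/video0'):
    raise SystemExit("No /dev/video0. Run 'sudo modprobe bcm2835-v4l2' first.")
print('Camera device present')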
10. As per the YouTube video I linked to above, you'll need to
sudo apt install gir1.2-gstreamer-1.0 gir1.2-gst-plugins-base-1.0 python3-gst-1.0
No, I have no idea why these packages have such friendly and easy-to-remember names either. This gives us the windowing system API that stream_infer.py wants to use.
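You can confirm the bindings landed with a one-off check from Python; this mirrors the sort of imports stream_infer.py makes via PyGObject:

import gi

# Ask for the GStreamer 1.0 introspection data installed by the packages above.
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
print(Gst.version_string())  # e.g. "GStreamer 1.10.4"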
11. OK, now we can finally see how the NCS runs our target model! Run stream_infer.py from the ncappzoo/apps/stream_infer directory. We loaded the GoogLeNet model; make sure you use the image dimensions your model requires (224x224x3 for GoogLeNet) or the code will error with a reshaping problem (ValueError: cannot reshape array of size XXXX into shape (224, 224, 3)).
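The arithmetic behind that error is worth spelling out: a 224x224x3 input means the frame buffer must contain exactly 150,528 values. A hypothetical mismatch looks like this:

import numpy as np

# 224 * 224 * 3 = 150528 values reshape cleanly for GoogLeNet...
np.zeros(224 * 224 * 3, dtype=np.float16).reshape((224, 224, 3))

# ...but a frame sized for another model (e.g. 227x227 for AlexNet) will not:
# np.zeros(227 * 227 * 3).reshape((224, 224, 3))  raises the ValueError above.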
Here’s a running example: safety pin is a category from ImageNet, and the NCS running GoogLeNet classifies it with ease (95%). Cool.
PS The NCS runs pretty hot!
PPS TODOs — generate a PR back into the repo for two things:
(a) Get the scripts to use make -j4 to run at least a bit more quickly when compiling on the Pi 3
(b) Get the scripts to be a bit more arch-aware for the Pi 3 platform.
