
Multiple cameras with the Raspberry Pi and OpenCV

https://www.pyimagesearch.com/2016/01/18/multiple-cameras-with-the-raspberry-pi-and-opencv/

[GIF animation: multiple_cameras_animated]
I’ll keep the introduction to today’s post short, since I think the title of this post and GIF animation above speak for themselves.
Inside this post, I’ll demonstrate how to attach multiple cameras to your Raspberry Pi…and access all of them using a single Python script.
Regardless of whether your setup includes:
  • Multiple USB webcams.
  • Or the Raspberry Pi camera module + additional USB cameras…
…the code detailed in this post will allow you to access all of your video streams — and perform motion detection on each of them!
Best of all, our implementation of multiple camera access with the Raspberry Pi and OpenCV is capable of running in real-time (or near real-time, depending on the number of cameras you have attached), making it perfect for creating your own multi-camera home surveillance system.
Keep reading to learn more.

Multiple cameras with the Raspberry Pi and OpenCV

When building a Raspberry Pi setup to leverage multiple cameras, you have two options:
  • Simply use multiple USB web cams.
  • Or use one Raspberry Pi camera module and at least one USB web camera.
The Raspberry Pi board has only one camera port, so you will not be able to use multiple Raspberry Pi camera boards (unless you want to perform some extensive hacks to your Pi). So in order to attach multiple cameras to your Pi, you’ll need to leverage at least one USB camera (if not more).
That said, in order to build my own multi-camera Raspberry Pi setup, I ended up using:
  1. A Raspberry Pi camera module + camera housing (optional). We can interface with the camera using the picamera Python package or (preferably) the threaded VideoStream class defined in a previous blog post.
  2. A Logitech C920 webcam that is plug-and-play compatible with the Raspberry Pi. We can access this camera using either the cv2.VideoCapture function built-in to OpenCV or the VideoStream class from this lesson.
You can see an example of my setup below:
Figure 1: My multiple camera Raspberry Pi setup.
Here we can see my Raspberry Pi 2, along with the Raspberry Pi camera module (sitting on top of the Pi 2) and my Logitech C920 webcam.
The Raspberry Pi camera module is pointing towards my apartment door to monitor anyone that is entering and leaving, while the USB webcam is pointed towards the kitchen, observing any activity that may be going on:
Figure 2: The Raspberry Pi camera module and USB camera are both hooked up to my Raspberry Pi, but are monitoring different areas of the room.
Ignore the electrical tape and cardboard on the USB camera — this was from a previous experiment which should (hopefully) be published on the PyImageSearch blog soon.
Finally, you can see an example of both video feeds displayed to my Raspberry Pi in the image below:
Figure 3: An example screenshot of monitoring both video feeds from the multiple camera Raspberry Pi setup.
In the remainder of this blog post, we’ll define a simple motion detection class that can detect if a person/object is moving in the field of view of a given camera. We’ll then write a Python driver script that instantiates our two video streams and performs motion detection in both of them.
As we’ll see, by using the threaded video stream capture classes (where one thread per camera is dedicated to performing I/O operations, allowing the main program thread to continue unblocked), we can easily get our motion detectors for multiple cameras to run in real-time on the Raspberry Pi 2.
Let’s go ahead and get started by defining the simple motion detector class.

Defining our simple motion detector

In this section, we’ll build a simple Python class that can be used to detect motion in the field of view of a given camera.
For efficiency, this class will assume there is only one object moving in the camera view at a time — in future blog posts, we’ll look at more advanced motion detection and background subtraction methods to track multiple objects.
In fact, we have already (partially) reviewed this motion detection method in our previous lesson, home surveillance and motion detection with the Raspberry Pi, Python, OpenCV, and Dropbox — we are now formalizing this implementation into a reusable class rather than just inline code.
Let’s get started by opening a new file, naming it basicmotiondetector.py, and adding in the following code:
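Below is a minimal sketch of the class and its constructor, consistent with the walkthrough that follows. Note that the Line references in the prose point into the original listing, which this sketch only approximates, and the deltaThresh and minArea defaults shown here are illustrative:

```python
# import the necessary packages
import cv2

class BasicMotionDetector:
    def __init__(self, accumWeight=0.5, deltaThresh=5, minArea=5000):
        # store the accumulation weight, the threshold applied to
        # the delta image, and the minimum area required for a
        # region to be labeled as "motion"
        self.accumWeight = accumWeight
        self.deltaThresh = deltaThresh
        self.minArea = minArea

        # initialize the running, weighted average of the previous
        # frames (the background model)
        self.avg = None
```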
Line 6 defines the constructor to our BasicMotionDetector class. The constructor accepts three optional keyword arguments, which include:
  • accumWeight: The floating point value used for taking the weighted average between the current frame and the previous set of frames. A larger accumWeight will result in the background model having less “memory” and quickly “forgetting” what previous frames looked like. Using a high value of accumWeight is useful if you expect lots of motion in a short amount of time. Conversely, smaller values of accumWeight give more weight to the background model than the current frame, allowing you to detect larger changes in the foreground. We’ll use a default value of 0.5 in this example; just keep in mind that this is a tunable parameter that you should consider working with.
  • deltaThresh: After computing the difference between the current frame and the background model, we’ll need to apply thresholding to find regions in a frame that contain motion — this deltaThresh value is used for the thresholding. Smaller values of deltaThresh will detect more motion, while larger values will detect less motion.
  • minArea: After applying thresholding, we’ll be left with a binary image that we extract contours from. In order to handle noise and ignore small regions of motion, we can use the minArea parameter. Any region with an area greater than minArea is labeled as “motion”; otherwise, it is ignored.
Finally, Line 17 initializes avg, which is simply the running, weighted average of the previous frames the BasicMotionDetector has seen.
Let’s move on to our update method:
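A sketch of the first half of update, again following the prose below; the cv2.accumulateWeighted / cv2.absdiff combination is the standard way to maintain the running average and compute the delta image:

```python
    def update(self, image):
        # initialize the list of contour locations containing motion
        locs = []

        # if the running average is None, initialize it to the
        # current frame and return (no motion can be detected yet)
        if self.avg is None:
            self.avg = image.astype("float")
            return locs

        # accumulate the weighted average between the current frame
        # and previous frames, then compute the absolute difference
        # between the current frame and the running average -- the
        # "delta" image containing regions of motion
        cv2.accumulateWeighted(image, self.avg, self.accumWeight)
        frameDelta = cv2.absdiff(image, cv2.convertScaleAbs(self.avg))
```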
The update function requires a single parameter — the image we want to detect motion in.
Line 21 initializes locs, the list of contours that correspond to motion locations in the image. However, if avg has not been initialized (Lines 24-26), we set avg to the current frame and return from the method.
Otherwise, avg has already been initialized, so we accumulate the running, weighted average between the previous frames and the current frame, using the accumWeight value supplied to the constructor (Line 32). Taking the absolute value difference between the current frame and the running average yields regions of the image that contain motion — we call this our delta image.
However, in order to actually detect regions in our delta image that contain motion, we first need to apply thresholding and contour detection:
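A sketch of the thresholding and contour-extraction step; the length check on the cv2.findContours return value is one common idiom for handling the differing return signatures across OpenCV versions:

```python
        # threshold the delta image and dilate the result to fill
        # in any holes
        thresh = cv2.threshold(frameDelta, self.deltaThresh, 255,
            cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)

        # find contours in the thresholded image; the return
        # signature of cv2.findContours differs across OpenCV
        # versions, so check the length of the returned tuple
        cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)
        cnts = cnts[0] if len(cnts) == 2 else cnts[1]

        # loop over the contours, keeping only those whose area
        # exceeds the minimum
        for c in cnts:
            if cv2.contourArea(c) > self.minArea:
                locs.append(c)

        # return the locations of motion in the image
        return locs
```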
Calling cv2.threshold using the supplied value of deltaThresh allows us to binarize the delta image, which we then find contours in (Lines 37-45).
Note: Take special care when examining Lines 43-45. As we know, the cv2.findContours method return signature changed between OpenCV 2.4 and 3. This code block allows us to use cv2.findContours in both OpenCV 2.4 and 3 without having to change a line of code (or worry about versioning issues).
Finally, Lines 48-52 loop over the detected contours, check to see if their area is greater than the supplied minArea, and if so, update the locs list.
The list of contours containing motion is then returned to the calling method on Line 55.
Note: Again, for a more detailed review of the motion detection algorithm, please see the home surveillance tutorial.

Accessing multiple cameras on the Raspberry Pi

Now that our BasicMotionDetector class has been defined, we are ready to create the multi_cam_motion.py driver script to access the multiple cameras with the Raspberry Pi — and apply motion detection to each of the video streams.
Let’s go ahead and get started defining our driver script:
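Below is a sketch of the script’s setup, consistent with the walkthrough that follows; the detector variable names (camMotion, piMotion) and the two-second warm-up pause are illustrative assumptions:

```python
# import the necessary packages
from pyimagesearch.basicmotiondetector import BasicMotionDetector
from imutils.video import VideoStream
import numpy as np
import datetime
import imutils
import time
import cv2

# initialize the USB webcam stream and the Raspberry Pi camera
# stream, then pause to let the camera sensors warm up
print("[INFO] starting cameras...")
webcam = VideoStream(src=0).start()
picam = VideoStream(usePiCamera=True).start()
time.sleep(2.0)

# initialize one motion detector per camera, along with the total
# number of frames read so far
camMotion = BasicMotionDetector()
piMotion = BasicMotionDetector()
total = 0
```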
We start off on Lines 2-9 by importing our required Python packages. Notice how we have placed the BasicMotionDetector class inside the pyimagesearch module for organizational purposes. We also import VideoStream, our threaded video stream class that is capable of accessing both the Raspberry Pi camera module and built-in/USB web cameras.
The VideoStream class is part of the imutils package, so if you do not already have it installed, just execute the following command:
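```
$ pip install imutils
```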
Line 13 initializes our USB webcam VideoStream class, while Line 14 initializes our Raspberry Pi camera module VideoStream class (by specifying usePiCamera=True).
If you do not want to use the Raspberry Pi camera module and instead want to leverage two USB cameras, simply change Lines 13 and 14 to:
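```python
webcam1 = VideoStream(src=0).start()
webcam2 = VideoStream(src=1).start()
```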
Where the src parameter controls the index of the camera on your machine. Also note that you’ll have to replace webcam and picam with webcam1 and webcam2, respectively, throughout the rest of this script as well.
Finally, Lines 19 and 20 instantiate two BasicMotionDetectors, one for the USB camera and a second for the Raspberry Pi camera module.
We are now ready to perform motion detection in both video feeds:
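A sketch of the frame-polling loop described below; the (21, 21) Gaussian kernel size is an illustrative choice:

```python
# loop over frames from the video streams
while True:
    # initialize the list of frames that have been processed
    frames = []

    # loop over each video stream and its corresponding
    # motion detector
    for (stream, motion) in zip((webcam, picam), (camMotion, piMotion)):
        # read the next frame from the stream and resize it to
        # a fixed width of 400 pixels
        frame = stream.read()
        frame = imutils.resize(frame, width=400)

        # convert the frame to grayscale and blur it to reduce
        # high frequency noise, then update the motion detector
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)
        locs = motion.update(gray)

        # allow the detector to accumulate 32 frames of
        # background before reporting any motion
        if total < 32:
            frames.append(frame)
            continue
```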
On Line 24 we start an infinite loop that is used to constantly poll frames from our (two) camera sensors. We initialize the list of processed frames on Line 26.
Then, Line 29 defines a for loop that loops over each video stream and its corresponding motion detector. We use the stream to read a frame from our camera sensor and then resize the frame to have a fixed width of 400 pixels.
Further pre-processing is performed on Lines 37 and 38 by converting the frame to grayscale and applying a Gaussian smoothing operation to reduce high frequency noise. Finally, the processed frame is passed to our motion detector, where the actual motion detection is performed (Line 39).
However, it’s important to let our motion detector “run” for a bit so that it can obtain an accurate running average of what our background “looks like”. We’ll allow 32 frames to be used in the average background computation before applying any motion detection (Lines 43-45).
After we have allowed 32 frames to be passed into our BasicMotionDetectors, we can check to see if any motion was detected:
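A sketch of the motion check and bounding box computation described below:

```python
        # otherwise, check to see if motion was detected
        if len(locs) > 0:
            # initialize the minimum and maximum (x, y)-coordinates
            (minX, minY) = (np.inf, np.inf)
            (maxX, maxY) = (-np.inf, -np.inf)

            # loop over the motion locations, expanding the bounding
            # box so that it encompasses every contour
            for l in locs:
                (x, y, w, h) = cv2.boundingRect(l)
                (minX, maxX) = (min(minX, x), max(maxX, x + w))
                (minY, maxY) = (min(minY, y), max(maxY, y + h))

            # draw the bounding box surrounding the motion region
            cv2.rectangle(frame, (int(minX), int(minY)),
                (int(maxX), int(maxY)), (0, 0, 255), 3)

        # update the list of processed frames
        frames.append(frame)
```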
Line 48 checks to see if motion was detected in the frame of the current video stream.
Provided that motion was detected, we initialize the minimum and maximum (x, y)-coordinates associated with the contours (i.e., locs). We then loop over the contours individually and use them to determine the smallest bounding box that encompasses all contours (Lines 51-59).
The bounding box is then drawn surrounding the motion region on Lines 62 and 63, followed by updating our list of frames on Line 66.
Again, the code detailed in this blog post assumes that there is only one object/person moving at a time in the given frame, hence this approach will obtain the desired result. However, if there are multiple moving objects, then we’ll need to use more advanced background subtraction and tracking methods — future blog posts on PyImageSearch will cover how to perform multi-object tracking.
The last step is to display our frames to our screen:
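A sketch of the display and cleanup logic; the window names ("Webcam" and "Picamera") are illustrative:

```python
    # increment the total number of frames read, then grab and
    # format the current timestamp
    total += 1
    timestamp = datetime.datetime.now()
    ts = timestamp.strftime("%A %d %B %Y %I:%M:%S%p")

    # loop over the processed frames, draw the timestamp on each,
    # and display them in separate windows
    for (frame, name) in zip(frames, ("Webcam", "Picamera")):
        cv2.putText(frame, ts, (10, frame.shape[0] - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
        cv2.imshow(name, frame)

    # if the `q` key was pressed, break from the frame reading loop
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# do a bit of cleanup: close the windows and stop the video streams
print("[INFO] cleaning up...")
cv2.destroyAllWindows()
webcam.stop()
picam.stop()
```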
Lines 70-72 increment the total number of frames processed, followed by grabbing and formatting the current timestamp.
We then loop over each of the frames we have processed for motion on Line 75 and display them to our screen.
Finally, Lines 82-86 check to see if the q key is pressed, indicating that we should break from the frame reading loop. Lines 89-92 then perform a bit of cleanup.

Motion detection on the Raspberry Pi with multiple cameras

To see our multiple camera motion detector run on the Raspberry Pi, just execute the following command:
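```
$ python multi_cam_motion.py
```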
I have included a series of “highlight frames” in the following GIF that demonstrate our multi-camera motion detector in action:
Figure 4: An example of applying motion detection to multiple cameras using the Raspberry Pi, OpenCV, and Python.
Notice how I start in the kitchen, open a cabinet, reach for a mug, and head to the sink to fill the mug up with water — this series of actions and motion are detected on the first camera.
Finally, I head to the trash can to throw out a paper towel before exiting the frame view of the second camera.
A full video demo of multiple camera access using the Raspberry Pi can be seen below:

Summary

In this blog post, we learned how to access multiple cameras using the Raspberry Pi 2, OpenCV, and Python.
When accessing multiple cameras on the Raspberry Pi, you have two choices when constructing your setup:
  1. Either use multiple USB webcams.
  2. Or use a single Raspberry Pi camera module and at least one USB webcam.
Since the Raspberry Pi board has only one camera input, you cannot leverage multiple Pi camera boards — at least not without extensive hacks to your Pi.
In order to provide an interesting implementation of multiple camera access with the Raspberry Pi, we created a simple motion detection class that can be used to detect motion in the frame views of each camera connected to the Pi.
While basic, this motion detector demonstrated that multiple camera access can be executed in real-time on the Raspberry Pi — especially with the help of our threaded PiVideoStream and VideoStream classes implemented in blog posts a few weeks ago.
If you are interested in learning more about using the Raspberry Pi for computer vision, along with other tips, tricks, and hacks related to OpenCV, be sure to sign up for the PyImageSearch Newsletter using the form at the bottom of this post.
See you next week!
