Streaming Live Motion Detection using OpenCV

  • Ripul Agrawal
  • Jul 14, 2020
  • Artificial Intelligence

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.” - Alan Turing

 

Introduction

 

In the last blog, we featured OpenCV along with the functions it provides and its areas of application. OpenCV is a Computer Vision library, available in both C++ and Python, for working with images and videos.

 

In addition, it provides real-time support using cameras or web cameras in many applications, as can be seen in YOLO, face recognition, and many more.

 

Some of the common functions OpenCV provides are,

  • Reading, writing, and displaying images,

  • Different color formats for images, i.e. BGR, RGB, grayscale, HSV, etc.,

  • Data augmentation, i.e. rotation, shrinking, resizing, magnification, etc.,

  • Image thresholding,

  • Edge detection, and

  • Contour detection, which helps in detecting different shapes in an image.

 

Common Areas of application are,

  • Image Processing,

  • Face Recognition,  

  • Object Detection,

  • Gesture Recognition, 

  • Creating a high-resolution image from different images,

  • Tracking moving objects,

  • Tracking 3D models,

  • Tracking camera movements,

  • Tracking human motions, and

  • Motion Detection.

 

 

Comparison with Other Languages

 

No doubt there are other tools available for computer vision, but few compete with OpenCV. It is open-source, and there is no restriction on its use: you can use it without a license, while some tools, like Matlab, require one. To use Matlab on a large scale, you have to purchase a license, which makes it less economical.

 

OpenCV also fits best in terms of execution, as it is much faster; its speed can reach up to 80 times that of Matlab in some cases. So anyone who wants to explore the field of computer vision or image processing can easily use OpenCV, whereas Matlab is more feasible for researchers who can afford its license.

 

 

Understanding Motion Detection

 

The core concept is to detect motion in a frame, i.e., whether any change occurs in the frame, using OpenCV. This can be done for either recorded video or a live stream, i.e., a web camera or any other mounted camera.

 

In this article, we will detect motion from the web camera using OpenCV, and a bounding box will be rendered around the motion detected: if any new object is introduced into the frame, a bounding box will appear surrounding that object.

 

A video is multiple frames stacked together. For motion detection, we calculate the difference between two consecutive frames, and if it is higher than a set threshold, motion has been detected there.

 

Code your own Motion Detector

 

  • Import the Libraries

Import the required libraries, i.e. cv2 (OpenCV) and NumPy. A video is comprised of images, and OpenCV will be used to preprocess them: starting the web camera, reading the video as a stack of frames, handling the color format of each frame, applying a Gaussian blur, calculating the difference between frames, image thresholding, dilation, contour detection, and drawing bounding boxes over the motion detected.


Code Snippet 1: importing the libraries


  • Assignment of Static Frame

To perform motion detection, we will compare every frame with the first frame, which will be termed the static frame. Each frame will be compared with the static frame, and based on the difference, motion in the frames will be detected.

Initially, the static frame will be None; it will be initialized with the first frame once the web camera opens.


Code Snippet 2: initializing the static frame to None before starting the camera
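A minimal sketch of this initialization:

```python
# The reference ("static") frame starts out empty; it will be filled with
# the first frame the camera delivers once streaming begins.
static_frame = None
```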


  • Create video capture Object

To capture a live stream from the web camera using OpenCV, the first step is to create a VideoCapture object, passing 0 as an argument to select the camera to use. Set the window size to a particular dimension.

Next, we will run a while loop to capture every frame from the web camera, running until the camera is closed via a key press (waitKey).


Code Snippet 3: creating the VideoCapture object for live streaming from the web camera


  • Read the frames and convert into grayscale

Now read the frames using the object initialized above, and convert each one into grayscale format.


Code Snippet 4: reading frames from the live stream and converting each to grayscale


  • Image Blurring

The next step is to remove noise from each frame, essentially removing high frequencies from the image, using OpenCV's GaussianBlur function. Pass the grayscale image as input, along with the height and width of the low-pass filter kernel and the standard deviation in both directions. After blurring, motion between consecutive frames will be detected more easily.


Code Snippet 5: Gaussian blurring of the grayscale frame to remove noise


  • Update Static Frame

Now it's time to update the static frame with the first frame from the live stream, since there are no changes at this initial stage. This frame will be used as the reference for calculating the absolute difference against each subsequent frame.


Code Snippet 6: updating the static frame, initially set to None, with the very first frame
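Inside the capture loop, this update is a one-time check, roughly:

```python
import numpy as np

static_frame = None
blur = np.zeros((480, 640), dtype=np.uint8)  # stand-in for the blurred frame

# On the first pass, adopt the frame as the reference; in the real loop you
# would then `continue`, since there is nothing to compare against yet.
if static_frame is None:
    static_frame = blur
```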


  • Calculate the difference in the frames and detect motion

Calculate the absolute difference between the static frame and each frame from the live stream. Where the difference is greater than 30, apply thresholding to the grayscale image so that wherever motion is detected the region becomes white, and then dilate the thresholded image.


Code Snippet 7: absolute difference between the static frame and each new frame, followed by thresholding


  • Find Contours

The regions of motion have now been turned white by thresholding and can be treated as contours, so use findContours to detect the coordinates of the moving objects.

Beyond detecting the motion, once we have the coordinates of the moving objects we plot a rectangle (bounding box) around them and put the text “Motion Detected”.


Code Snippet 8: contour detection in the thresholded frame and bounding boxes over the motion regions


  • Display frames in live stream

Now that motion has been detected in the video, the next step is to display the live stream along with the motion-detection results.

Finally, add code to release the camera and destroy all the windows.


Code Snippet 9: displaying frames, closing the camera on a key press, and destroying all windows


 

Conclusion

 

OpenCV, a computer vision library, supports many functions and has many applications, including facial recognition, object detection, tracking human motions, tracking objects, tracking camera movements, motion detection, etc. Compared to other tools, it is easily accessible to everyone since it is open-source, and its speed is another advantage.

 

Many companies are using OpenCV for real-time applications, as it can be integrated with different languages and with hardware devices such as the Raspberry Pi.

 

Out of its vast applications, this blog covers one, i.e. motion detection based on background subtraction from a live stream: after image smoothing, each new frame is subtracted from the very first frame, and if any region's difference exceeds a threshold, that signifies motion in the frame. Contours are then used to plot bounding boxes over the regions where motion was detected, with text placed over them.
