Implementation of Motion Detection Algorithm

paschalidoud edited this page Dec 1, 2014 · 5 revisions

# Functionality

In this package, two basic classes are implemented. The first one, called MotionDetection, is responsible for the communication between motion_node and data_fusion. The second one, named MotionDetector, performs the actual motion detection. The idea behind our implementation is very simple and is based on the assumption that moving objects belong to the foreground, whereas the background consists only of static objects.

The implemented algorithm begins by segmenting each frame into background and foreground, and then subtracts the current frame from the background in order to isolate the moving objects. Afterwards we apply suitable filters to enhance the difference, and finally we calculate the position of the motion. According to the number of moving pixels, we classify the motion into one of three types, each with a corresponding probability. For that purpose two thresholds are used: the first one corresponds to the minimum number of moving pixels required to consider that motion has occurred, while the second one sets an upper limit above which motion is considered certain. For the background/foreground segmentation we use a Gaussian Mixture-based algorithm implemented in OpenCV.
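The subtraction-and-count step can be sketched as follows. This is a minimal illustration using a plain per-pixel difference against a static background in place of OpenCV's Gaussian Mixture subtractor; the function name and the difference threshold are hypothetical, not part of the package:

```python
def subtract_background(background, frame, diff_threshold=30):
    """Mark pixels that differ from the background as moving (255).

    background and frame are grayscale images given as nested lists of
    equal size; returns the binary mask and the number of moving pixels.
    """
    mask = [[255 if abs(f - b) > diff_threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
    moving_pixels = sum(v == 255 for row in mask for v in row)
    return mask, moving_pixels

# Toy 8x8 frames: a static background plus one moving 3x3 block.
background = [[0] * 8 for _ in range(8)]
frame = [row[:] for row in background]
for r in range(2, 5):
    for c in range(2, 5):
        frame[r][c] = 200  # the "moving object"

mask, moving_pixels = subtract_background(background, frame)
print(moving_pixels)  # 9
```

The moving-pixel count is what the two thresholds of the next section are compared against.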

# Motion types

We consider the following three types of motion:

  • If the number of moving pixels is lower than the lower threshold, then the motion is most likely caused by noise. For this reason the attached probability for this type is 0.
  • If the number of moving pixels is between the lower and the upper threshold, then the attached probability is set to 0.5.
  • If the number of moving pixels is greater than the upper threshold, then the attached probability is set to 1.
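The three cases above amount to a simple mapping from a pixel count to a probability. A sketch (function name and threshold values are illustrative, not the project's actual ones):

```python
def classify_motion(moving_pixels, low_threshold, high_threshold):
    """Map the number of moving pixels to a motion probability.

    Below the lower threshold the motion is attributed to noise (0.0);
    between the two thresholds it is a possible motion (0.5); above
    the upper threshold it is considered certain (1.0).
    """
    if moving_pixels < low_threshold:
        return 0.0
    if moving_pixels > high_threshold:
        return 1.0
    return 0.5

print(classify_motion(10, 100, 1000))    # 0.0 (noise)
print(classify_motion(500, 100, 1000))   # 0.5
print(classify_motion(5000, 100, 1000))  # 1.0
```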

# Required variables

In this section we analyze all the variables required by the motion node, which are included in the class MotionParameters.

  • MotionParameters::history : Number of frames from which the background model is calculated. The smaller this number is, the more sensitive the background model is to noise.
  • MotionParameters::varThreshold : Threshold on the squared Mahalanobis distance between the pixel and the model to decide whether a pixel is well described by the background model. This parameter does not affect the background update.
  • MotionParameters::ShadowDetection : Parameter defining whether shadow detection should be enabled.
  • MotionParameters::nmixtures : Number of Gaussian mixtures.

Each of the above variables can be changed at runtime via rqt_reconfigure.
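Grouped together, the parameters form a small configuration object. A hypothetical Python mirror of the C++ MotionParameters class is sketched below; the default values are purely illustrative, not the project's actual settings:

```python
from dataclasses import dataclass

@dataclass
class MotionParameters:
    # Number of frames from which the background model is built;
    # a small history makes the model more sensitive to noise.
    history: int = 500
    # Threshold on the squared Mahalanobis distance between a pixel and
    # the model, deciding whether the background model describes it well.
    var_threshold: float = 16.0
    # Whether shadow detection is enabled.
    shadow_detection: bool = True
    # Number of Gaussian mixture components per pixel.
    nmixtures: int = 5

# Tuning one parameter, as rqt_reconfigure would at runtime:
params = MotionParameters(history=100)
print(params.history, params.shadow_detection)  # 100 True
```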

# Position of moving object

After the segmentation into background and foreground, we create a new binary image in which every pixel of a moving object has value 255; in other words, every moving object is colored white. We then only need to search for connected regions of white pixels. The disadvantage of this approach is that noise may be included. For this reason we have also implemented a DBSCAN clustering algorithm to cluster neighboring white pixels into the same region.
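The clustering step can be sketched as below. This is a minimal DBSCAN over the white pixels of a binary mask, using Chebyshev distance so that eps=1 corresponds to 8-connectivity; the function name and the parameter defaults are illustrative, not the package's actual interface:

```python
from collections import deque

def dbscan_white_pixels(mask, eps=1, min_pts=3):
    """Cluster the white (255) pixels of a binary image with a small DBSCAN.

    Pixels that are neither core points nor reachable from one are treated
    as noise and discarded. Returns a list of clusters, each a list of
    (row, col) pixel coordinates.
    """
    points = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v == 255]
    point_set = set(points)

    def neighbors(p):
        r, c = p
        return [(r + dr, c + dc)
                for dr in range(-eps, eps + 1)
                for dc in range(-eps, eps + 1)
                if (dr or dc) and (r + dr, c + dc) in point_set]

    labels = {}  # pixel -> cluster id; noise pixels never get a label
    next_id = 0
    for p in points:
        if p in labels or len(neighbors(p)) + 1 < min_pts:
            continue  # already clustered, or not a core point
        labels[p] = next_id
        queue = deque([p])
        while queue:  # grow the cluster outward from core points
            q = queue.popleft()
            for n in neighbors(q):
                if n not in labels:
                    labels[n] = next_id
                    if len(neighbors(n)) + 1 >= min_pts:
                        queue.append(n)
        next_id += 1

    clusters = [[] for _ in range(next_id)]
    for p, cid in labels.items():
        clusters[cid].append(p)
    return clusters

# A 3x3 white block (one moving object) plus one isolated noise pixel.
mask = [[0] * 8 for _ in range(8)]
for r in range(2, 5):
    for c in range(2, 5):
        mask[r][c] = 255
mask[7][7] = 255

clusters = dbscan_white_pixels(mask)
print(len(clusters), len(clusters[0]))  # 1 9
```

The isolated pixel has no neighbors within eps, so it is rejected as noise rather than reported as a moving object, which is exactly why clustering is preferred over plain connected-region search here.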