Motion Objects Segmentation and Shadow Suppressing without Background Learning

DOI: 10.1155/2014/615198

Abstract: An approach to segmenting motion objects and suppressing shadows without background learning has been developed. Since the wavelet transform indicates the positions of sharp variation, it is adopted to extract the most meaningful features from only two successive video frames. Because the saturation component is lower in shadow regions and is independent of brightness, the HSV color space is selected, rather than other color models, to extract the foreground motion region and suppress shadows. A local adaptive thresholding approach is proposed to extract initial binary motion masks from the wavelet transform results. A foreground reclassification step then produces an optimal segmentation by fusing mode filtering, connectivity analysis, and spatial-temporal correlation. Comparative studies with several existing methods indicate the superior performance of the proposal in extracting motion objects and suppressing shadows in cluttered scenes with dynamic variation and crowded environments.

1. Introduction

Robust segmentation of foreground motion objects plays a crucial role in many computer vision applications, such as visual surveillance, intelligent traffic monitoring, athletic performance analysis, and perceptual user interfaces. Many methods have been proposed for motion object segmentation. One of the most popular approaches is background subtraction, which compares each new frame with a learned model of the scene taken by a static camera [1–3]. The initial background modeling is critical for this method. Gaussian mixture models (GMM) [4, 5] have been adopted to construct the background model from observations over a preceding period. Two problems arise when a GMM is used to model the background [1]: selecting the number of components and performing the initialization. When a pixel requires more Gaussian components than the predefined number, or when pixels are covered by motion objects during initialization, background pixel errors easily result. Nonparametric methods have been developed to tackle this problem [6–8]. However, the size of a temporal window must be specified; moreover, spatial dependencies are not exploited and shadow pixels are usually misclassified. Algorithms based on spatiotemporal characteristics have been developed [9, 10] to overcome the above shortcomings, but fragmentation arises where foreground objects spatially overlap background regions of similar color. Some attempts have been made to
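As a concrete illustration of the saturation cue described in the abstract (shadows darken a surface while leaving its saturation low), the following Python/OpenCV sketch marks shadow-candidate pixels by comparing the HSV components of two frames. It is a minimal sketch, not the paper's exact classification rule: the function name and the thresholds v_ratio_low, v_ratio_high, and s_max are illustrative assumptions.

```python
import cv2
import numpy as np


def shadow_candidates(reference_bgr, current_bgr,
                      v_ratio_low=0.5, v_ratio_high=0.95, s_max=60):
    """Return a boolean mask of likely shadow pixels (illustrative sketch).

    A pixel is treated as a shadow candidate when its brightness (V) drops
    relative to the reference frame by a bounded ratio while its saturation
    (S) stays low, reflecting the observation that a cast shadow darkens a
    surface without changing its chromatic content much.
    """
    ref_hsv = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    cur_hsv = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)

    # Brightness ratio between current and reference frames (small offset
    # avoids division by zero on fully dark pixels).
    v_ratio = (cur_hsv[..., 2] + 1.0) / (ref_hsv[..., 2] + 1.0)

    darker = (v_ratio > v_ratio_low) & (v_ratio < v_ratio_high)  # dimmed, but not a new object
    low_sat = cur_hsv[..., 1] < s_max                            # low saturation in shadow

    return darker & low_sat


# Example usage with two successive frames (file names are hypothetical):
# prev = cv2.imread("frame_000.png")
# cur = cv2.imread("frame_001.png")
# shadow_mask = shadow_candidates(prev, cur)
```

Pixels flagged by such a mask would be removed from the initial motion mask rather than reported as foreground; the thresholds would need tuning per scene, since illumination strength determines how far V drops under a shadow.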