%0 Journal Article
%T An Integrated New Scheme for Digital Video Stabilization
%A W. Xu
%A X. Lai
%A D. Xu
%A N. A. Tsoligkas
%J Advances in Multimedia
%D 2013
%I Hindawi Publishing Corporation
%R 10.1155/2013/651650
%X In many digital video applications, video sequences suffer from jerky movements between successive frames. In this paper, an integrated general-purpose stabilization method is proposed, which extracts information from successive frames and removes the translational and rotational motions that cause undesirable effects. The proposed scheme starts by computing the optical flow between consecutive video frames with the Horn-Schunck algorithm; an affine motion model is then fitted to the resulting flow field to estimate object or camera motion. The estimated motion vectors are then used by a model-fitting filter to stabilize and smooth the video sequence. Experimental results demonstrate that the proposed scheme is efficient owing to its simplicity and provides good visual quality in terms of global transformation fidelity, measured by the peak signal-to-noise ratio (PSNR).
1. Introduction
Video captured by cameras often suffers from unwanted jittering motions. In general, this problem is dealt with by compensating for image motion. Most video stabilization algorithms presented in the recent literature try to remove image motion by totally or partially compensating for the motions caused by camera rotations or vibrations [1–9], so that the resulting background remains motionless. The methods described in [1, 2] used a pyramid structure to compute motion vectors with an affine motion model representing rotational and translational camera motions. Hansen et al. [3] described an image stabilization scheme that uses a multiresolution, iterative process to calculate the affine motion parameters between levels of Laplacian pyramid images; the parameters obtained through this refinement process achieve the desired accuracy. The method presented in [4] used a probabilistic model with a Kalman filter to reduce motion noise and obtain stabilized camera motion. Chang et al. [5] used the optical flow between consecutive frames, based on a modification of the method in [6], to estimate camera motion by fitting a simplified affine motion model. Tsubaki et al. [7] developed a method that uses two threshold parameters to describe the velocity and the frequency of oscillations in unstable video sequences. More recently, Zhang et al. [8] proposed a method based on a 3D perspective camera model, which works well in situations where significant depth variations exist in the scene and the camera undergoes large translational movement. The technique developed in [9] adopted a spatially and
%U http://www.hindawi.com/journals/am/2013/651650/
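The abstract outlines a pipeline of dense optical flow estimation (Horn-Schunck) followed by a least-squares fit of an affine motion model to the flow field. The sketch below illustrates those two stages only; it is not the authors' implementation, and the parameter values (alpha, n_iters) and the simple derivative kernels are assumptions made for illustration.

```python
# Minimal sketch of Horn-Schunck optical flow plus an affine least-squares fit.
# alpha (smoothness weight) and n_iters are illustrative, not values from the paper.
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(f1, f2, alpha=1.0, n_iters=100):
    """Estimate a dense flow field (u, v) between two grayscale frames."""
    f1 = f1.astype(np.float64) / 255.0
    f2 = f2.astype(np.float64) / 255.0
    # Spatial and temporal derivatives from simple 2x2 difference kernels.
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(f1, kx) + convolve(f2, kx)
    Iy = convolve(f1, ky) + convolve(f2, ky)
    It = convolve(f2, kt) - convolve(f1, kt)
    # Neighbourhood-average kernel used in the Horn-Schunck iteration.
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for _ in range(n_iters):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

def fit_affine(u, v):
    """Least-squares fit of a 6-parameter affine motion model to the flow field."""
    h, w = u.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    px, _, _, _ = np.linalg.lstsq(A, u.ravel(), rcond=None)  # u = a1*x + a2*y + a3
    py, _, _, _ = np.linalg.lstsq(A, v.ravel(), rcond=None)  # v = a4*x + a5*y + a6
    return px, py
```

Completing the stabilization loop would require accumulating the per-frame affine parameters, smoothing them over time (the paper's model-fitting filter; a moving average or a Kalman filter is a common stand-in), and warping each frame by the difference between the raw and smoothed motion, for example with cv2.warpAffine.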