In many digital video applications, video sequences suffer from jerky movements between successive frames. In this paper, an integrated general-purpose stabilization method is proposed that extracts information from successive frames and removes the translational and rotational motions responsible for the undesirable effects. The proposed scheme starts by computing the optical flow between consecutive video frames with the Horn-Schunck algorithm; an affine motion model is then fitted to the resulting flow field to estimate object or camera motion. The estimated motion vectors are passed to a model-fitting filter that stabilizes and smooths the video sequence. Experimental results demonstrate that the proposed scheme is efficient, owing to its simplicity, and provides good visual quality in terms of global transformation fidelity as measured by the peak signal-to-noise ratio (PSNR).

1. Introduction

Video captured by cameras often suffers from unwanted jittering motions. This problem is usually dealt with by compensating for the image motion. Most video stabilization algorithms in the recent literature remove image motion by totally or partially compensating for the motions caused by camera rotation or vibration [1–9], so that the resulting background remains motionless. The motion models described in [1, 2] use a pyramid structure to compute motion vectors, with an affine motion model representing rotational and translational camera motion. Hansen et al. [3] described an image stabilization scheme that uses a multiresolution, iterative process to calculate the affine motion parameters between levels of a Laplacian pyramid; the parameters are refined iteratively until the desired accuracy is reached. The method presented in [4] used a probabilistic model with a Kalman filter to reduce motion noise and obtain stabilized camera motion. Chang et al. [5]
used the optical flow between consecutive frames, computed with a modification of the method in [6], to estimate the camera motion by fitting a simplified affine motion model. Tsubaki et al. [7] developed a method that uses two threshold parameters to describe the velocity and the frequency of the oscillations in unstable video sequences. More recently, Zhang et al. proposed a method based on a 3D perspective camera model, which works well in scenes with significant depth variation where the camera undergoes large translational movement. The technique developed in  adopted a spatially and
[3] M. Hansen, P. Anandan, K. Dana, G. van der Wal, and P. Burt, “Real-time scene stabilization and mosaic construction,” in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 54–62, December 1994.
[4] A. Litvin, J. Konrad, and W. C. Karl, “Probabilistic video stabilization using Kalman filtering and mosaicking,” in Proceedings of the IS&T/SPIE Symposium on Electronic Imaging, Image and Video Communications and Processing, Proceedings of SPIE, pp. 663–674, January 2003.
[5] H.-C. Chang, S.-H. Lai, and K.-R. Lu, “A robust and efficient video stabilization algorithm,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '04), pp. 29–32, Taipei, Taiwan, June 2004.
[6] B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI '81), pp. 674–679, Vancouver, Canada, 1981.
[7] I. Tsubaki, T. Morita, T. Saito, and K. Aizawa, “An adaptive video stabilization method for reducing visually induced motion sickness,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), pp. 497–500, Genoa, Italy, September 2005.
[8] C. Morimoto and R. Chellappa, “Evaluation of image stabilization algorithms,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '98), vol. 5, pp. 2789–2792, Seattle, Wash, USA, May 1998.
[9] A. Çelebi, O. Akbulut, O. Urhan, and S. Ertürk, “Truncated gray-coded bit-plane matching based motion estimation and its hardware architecture,” IEEE Transactions on Consumer Electronics, vol. 55, no. 3, pp. 1530–1536, 2009.
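The estimation step common to the affine-model approaches surveyed above — fitting a 2D affine motion model to an optical-flow field by least squares — can be sketched as follows. This is a minimal illustration in Python/NumPy under the standard formulation flow ≈ A·p + t, not any cited paper's implementation; the function name and the synthetic flow field are ours.

```python
import numpy as np

def fit_affine_from_flow(points, flow):
    """Least-squares fit of a 2D affine motion model to an optical-flow field.

    points : (N, 2) array of pixel coordinates (x, y)
    flow   : (N, 2) array of flow vectors (u, v) sampled at those coordinates
    Returns (A, t) with A a 2x2 linear part and t a 2-vector translation,
    such that flow is approximately points @ A.T + t.
    """
    x, y = points[:, 0], points[:, 1]
    ones = np.ones_like(x)
    # Design matrix: each row [x, y, 1] models u = a11*x + a12*y + tx
    # (and, via the second right-hand-side column, v = a21*x + a22*y + ty).
    D = np.stack([x, y, ones], axis=1)                 # (N, 3)
    params, *_ = np.linalg.lstsq(D, flow, rcond=None)  # (3, 2) solution
    A = params[:2].T                                   # 2x2 linear part
    t = params[2]                                      # translation
    return A, t

# Synthetic example: flow induced by a small rotation plus a translation.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(200, 2))
theta = 0.02
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A_true = R - np.eye(2)          # displacement field of a rotation about the origin
t_true = np.array([1.5, -0.5])
flow = pts @ A_true.T + t_true

A_est, t_est = fit_affine_from_flow(pts, flow)
```

The recovered pair (A, t) separates the inter-frame rotation and translation; a stabilization scheme of the kind described above would then smooth these parameters over time and warp each frame by the residual motion.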