%0 Journal Article
%T Vision Measurement Scheme Using Single Camera Rotation
%A Shidu Dong
%J ISRN Machine Vision
%D 2013
%R 10.1155/2013/874084
%X We propose a vision measurement scheme for estimating the distance or size of an object in a static scene, which requires a single camera with a 3-axis accelerometer sensor rotating around a fixed axis. First, we formulate the rotation matrix and translation vector from one camera coordinate system to another in terms of the rotation angle, which can be computed from the sensor readouts. Second, using the camera calibration data and a coordinate system transformation, we propose a method for calculating the orientation and position of the rotation axis relative to the camera coordinate system. Finally, given the rotation angle and two images of the object in the static scene, one taken before and the other after the camera rotation, the 3D coordinates of points on the object can be determined. Experimental results show the validity of our method.

1. Introduction

Nowadays, digital cameras and camera-equipped mobile phones are very popular, and it would be appealing and convenient to use them to estimate the distance or size of an object. For this purpose, stereo images with disparity must be acquired [1, 2]. One obvious method for stereo image acquisition is to use two cameras with different viewing angles. Given two images of an object from the two cameras, the relative orientation and position of the two viewpoints, and the correspondence between image points in the two views, the 3D coordinates of points on the object can be determined [3, 4]. In general, however, a mobile phone or professional camera has only one camera and cannot acquire two images from different views simultaneously.

Fortunately, many methods have been proposed for stereo vision with a single camera. They can be broadly divided into three categories. First, additional optical devices are introduced to obtain virtual images from different viewpoints, such as two planar mirrors [5], a biprism [6], convex mirrors [1, 7], or double-lobed hyperbolic mirrors [8]; however, these optical devices are expensive and space consuming. Second, 3D information about an object is inferred directly from a still image using geometric scene constraints such as planarity of points and parallelism of lines and planes [9–11], or prior knowledge about the scene obtained by supervised learning [12]; nevertheless, these methods require constrained scenes or extra computation for training the depth models. Third, 3D information is extracted from sequential images with respect to camera movement, an approach often adopted in robotics. Due to the
%U http://www.hindawi.com/journals/isrn.machine.vision/2013/874084/
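
Note: the abstract above outlines two geometric steps: deriving the camera-to-camera rigid transform induced by rotating the camera by a known angle about a fixed axis, and triangulating object points from the images taken before and after the rotation. The following is a minimal sketch of that geometry, not the paper's implementation; the axis direction w, the axis point q, the intrinsic matrix K, and the angle theta are assumed inputs here (in the paper the axis is obtained through a calibration procedure and the angle from the accelerometer readouts), and all names are illustrative.

# Minimal sketch (assumptions as stated above): rotating the camera by theta about
# the axis (w, q), expressed in the camera-1 frame, induces a rigid transform
# X_c2 = R @ X_c1 + t on static scene points; a point is then recovered by
# linear (DLT) triangulation from its two pixel observations.
import numpy as np

def rodrigues(w, theta):
    """Rotation matrix for angle theta about the unit axis w (Rodrigues' formula)."""
    w = np.asarray(w, dtype=float)
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

def relative_pose_from_axis(w, q, theta):
    """Transform mapping camera-1 coordinates to camera-2 coordinates when the
    camera rotates by theta about the axis through q with direction w."""
    q = np.asarray(q, dtype=float)
    R = rodrigues(w, theta).T          # camera motion by +theta ~ scene motion by -theta
    t = q - R @ q                      # axis points stay fixed: R @ q + t == q
    return R, t

def triangulate(K, R, t, u1, u2):
    """DLT triangulation of one point from its pixel positions u1 (before) and
    u2 (after the rotation). Returns the point in camera-1 coordinates."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: project a known point into both views, then recover it.
w = np.array([0.0, 1.0, 0.0])               # assumed axis direction
q = np.array([0.05, 0.0, 0.0])              # assumed point on the axis
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])              # assumed intrinsics from calibration
theta = np.deg2rad(15.0)                     # rotation angle (from the sensor in the paper)
X = np.array([0.3, -0.1, 2.0])               # ground-truth point in camera-1 coordinates
R, t = relative_pose_from_axis(w, q, theta)
x1 = K @ X
x2 = K @ (R @ X + t)
u1, u2 = x1[:2] / x1[2], x2[:2] / x2[2]
print(triangulate(K, R, t, u1, u2))          # ~ [0.3, -0.1, 2.0]

The synthetic check only verifies internal consistency of this sketch; the paper's contribution is estimating the axis (w, q) relative to the camera coordinate system and the angle theta from the accelerometer, which are simply assumed known here.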