Smart Localization Using a New Sensor Association Framework for Outdoor Augmented Reality Systems

DOI: 10.1155/2012/634758


Abstract:

Augmented Reality (AR) aims at enhancing the real world by adding fictitious elements that are not naturally perceptible, such as computer-generated images, virtual objects, text, symbols, graphics, sounds, and smells. The quality of the real/virtual registration depends mainly on the accuracy of the 3D camera pose estimation. In this paper, we present an original real-time localization system for outdoor AR which combines three heterogeneous sensors: a camera, a GPS receiver, and an inertial sensor. The proposed system is subdivided into two modules: the main module is vision based and estimates the user's location using a markerless tracking method; when the visual tracking fails, the system switches automatically to the secondary localization module, composed of the GPS and the inertial sensor.

1. Introduction

The idea of combining several kinds of sensors is not new. The first multisensor systems appeared in robotics, where, for example, Vieville et al. [1] proposed to combine a camera with an inertial sensor to automatically correct the path of an autonomous mobile robot. This idea has been taken up in recent years by the Mixed Reality community. Several works proposed to fuse vision and inertial sensor data using a Kalman filter [2–6] or a particle filter [7, 8]. The strategy consists in merging the data from all sensors to localize the camera following a prediction/correction model. The data provided by inertial sensors (gyroscopes, magnetometers, etc.) are generally used to predict the 3D motion of the camera, which is then adjusted and refined using vision-based techniques. The Kalman filter is generally implemented to perform the data fusion. It is a recursive filter that estimates the state of a linear dynamic system from a series of noisy measurements. Recursive estimation means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state, so no history of observations and/or estimates is required. In [2], You et al. developed a hybrid sensor combining a vision system with three gyroscopes to estimate the orientation of the camera in an outdoor environment; their visual tracking refines the obtained estimation. The system described by Ababsa [5] combines edge-based tracking with inertial measurements (angular velocity, linear acceleration, magnetic fields). The visual tracking is used for accurate 3D localization, while the inertial sensor compensates for errors due to sudden motion and occlusion. The measurements of
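To make the prediction/correction model concrete, the following is a minimal sketch of a linear Kalman filter loop of the kind cited above, not the authors' actual implementation. The constant-velocity motion model, the position-only measurement, and all noise covariances are illustrative assumptions; in a visual-inertial system the prediction step would be driven by inertial data and the correction step by the camera-based pose.

```python
import numpy as np

class KalmanFilter:
    """Recursive predict/correct estimator for a linear dynamic system."""

    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H = F, H          # state transition and observation models
        self.Q, self.R = Q, R          # process and measurement noise covariances
        self.x, self.P = x0, P0        # current state estimate and its covariance

    def predict(self):
        # Prediction: propagate the previous estimate through the motion model
        # (in a hybrid tracker, this is where inertial data would enter).
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # Correction: refine the prediction with the current measurement only;
        # no history of past observations or estimates is needed.
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R        # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x

# Toy example: 1D position/velocity state observed through noisy position readings.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])                  # constant-velocity dynamics (assumed)
H = np.array([[1.0, 0.0]])                             # only position is measured (assumed)
Q = 0.01 * np.eye(2)                                   # process noise (assumed)
R = np.array([[0.5]])                                  # measurement noise (assumed)
kf = KalmanFilter(F, H, Q, R, x0=np.zeros(2), P0=np.eye(2))

for z in [0.9, 2.1, 2.9, 4.2]:                         # simulated noisy measurements
    kf.predict()
    print(kf.update(np.array([z])))
```

Each iteration performs exactly the two steps described in the text: a prediction from the previous state alone, followed by a correction using the current measurement.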

References

[1]  T. Vieville, F. Romann, B. Hotz et al., “Autonomous navigation of a mobile robot using inertial and visual cues,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 360–367, July 1993.
[2]  S. You, U. Neumann, and R. Azuma, “Orientation tracking for outdoor augmented reality registration,” IEEE Computer Graphics and Applications, vol. 19, no. 6, pp. 36–42, 1999.
[3]  M. Ribo, P. Lang, H. Ganster, M. Brandner, C. Stock, and A. Pinz, “Hybrid tracking for outdoor augmented reality applications,” IEEE Computer Graphics and Applications, vol. 22, no. 6, pp. 54–63, 2002.
[4]  J. D. Hol, T. B. Schön, F. Gustafsson, and P. J. Slycke, “Sensor fusion for augmented reality,” in Proceedings of the 9th International Conference on Information Fusion, pp. 1–6, Florence, Italy, July 2006.
[5]  F. Ababsa, “Advanced 3D localization by fusing measurements from GPS, inertial and vision sensors,” in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC ’09), pp. 871–875, San Antonio, Tex, USA, October 2009.
[6]  G. Bleser and D. Stricker, “Advanced tracking through efficient image processing and visual-inertial sensor fusion,” in Proceedings of IEEE International Conference on Virtual Reality (VR '08), pp. 137–144, March 2008.
[7]  F. Ababsa, J. Y. Didier, M. Mallem, and D. Roussel, “Head motion prediction in augmented reality systems using Monte Carlo particle filters,” in Proceedings of the 13th International Conference on Artificial Reality and Telexistence (ICAT ’03), pp. 83–88, Tokyo, Japan, 2003.
[8]  F. E. Ababsa and M. Mallem, “Hybrid three-dimensional camera pose estimation using particle filter sensor fusion,” Advanced Robotics, vol. 21, no. 1-2, pp. 165–181, 2007.
[9]  G. Reitmayr and T. W. Drummond, “Initialisation for visual tracking in urban environments,” in Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR '07), Nara, Japan, November 2007.
[10]  Z. Hu, K. Uchimura, H. Lu, and F. Lamosa, “Fusion of vision, 3D gyro and GPS for camera dynamic registration,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), pp. 351–354, Washington, DC, USA, August 2004.
[11]  M. Aron, G. Simon, and M. O. Berger, “Use of inertial sensors to support video tracking,” Computer Animation and Virtual Worlds, vol. 18, no. 1, pp. 57–68, 2007.
[12]  M. Maidi, F. Ababsa, and M. Mallem, “Vision-inertial tracking system for robust fiducials registration in augmented reality,” in Proceedings of IEEE Symposium Computational Intelligence for Multimedia Signal and Vision Processing (CIMSVP '09), pp. 83–90, Nashville, Tenn, USA, April 2009.
[13]  D. G. Lowe, “Fitting parameterized three-dimensional models to images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 5, pp. 441–450, 1991.
[14]  R. M. Haralick, H. Joo, C. N. Lee, X. Zhuang, V. G. Vaidya, and M. B. Kim, “Pose estimation from corresponding point data,” IEEE Transactions on Systems, Man and Cybernetics, vol. 19, no. 6, pp. 1426–1446, 1989.
[15]  G. Bleser and D. Stricker, “Advanced tracking through efficient image processing and visual-inertial sensor fusion,” Computers and Graphics, vol. 33, no. 1, pp. 59–72, 2009.
[16]  D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[17]  C. Harris, Tracking with Rigid Models, Active Vision, MIT Press, Cambridge, Mass, USA, 1993.
[18]  M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[19]  C. Tomasi and T. Kanade, “Detection and tracking of point features,” Carnegie Mellon University Technical report CMU-CS-91-132, 1991.
[20]  O. D. Faugeras and G. Toscani, “Camera calibration for 3D computer vision,” in Proceedings of the International Workshop on Industrial Applications of Machine Vision and Machine Intelligence, pp. 240–247, 1987.
[21]  C. Williams, “Prediction with Gaussian processes: from linear regression to linear prediction and beyond,” Tech. Rep., Neural Computing Research Group, 1997.
[22]  S. DiVerdi and T. Höllerer, “GroundCam: a tracking modality for mobile mixed reality,” in Proceedings of IEEE International Conference on Virtual Reality (VR '07), pp. 75–82, March 2007.
[23]  F. Ababsa and M. Mallem, “Robust camera pose estimation combining 2D/3D points and lines tracking,” in Proceedings of IEEE International Symposium on Industrial Electronics (ISIE '08), pp. 774–779, Cambridge, UK, July 2008.
[24]  F. Ababsa and M. Mallem, “A robust circular fiducial detection technique and real-time 3D camera tracking,” Journal of Multimedia, vol. 3, no. 4, pp. 34–41, 2008.
[25]  J. Y. Didier, F. Ababsa, and M. Mallem, “Hybrid camera pose estimation combining square fiducials localisation technique and orthogonal iteration algorithm,” International Journal of Image and Graphics, vol. 8, no. 1, pp. 169–188, 2008.
