Research on the Construction of a Visual Localization Dataset for Indoor Environments with Complex Illumination
Abstract:
In indoor environments with complex illumination, robot visual localization is difficult to verify and evaluate because no publicly available, unified benchmark exists for validating visual localization algorithms. To bridge the gap between data owners and data users, and to address the high acquisition cost and slow update cycle of visual data under complex illumination, it is essential to build a continuously updatable robot visual localization dataset. Focusing on indoor environments with complex illumination, this paper explores a standard organizational structure and construction workflow for localization datasets, selects hardware and builds the acquisition system according to the characteristics of the environment, and constructs a visual localization dataset with complex lighting characteristics; several visual localization algorithms are then evaluated on this dataset. Ground-truth poses with centimeter-level accuracy are provided for all visual sequences, so the dataset can be used to test the ability of visual localization algorithms to cope with illumination changes. The proposed construction workflow offers transferable experience for building datasets with other scene characteristics, and the dataset serves visual localization verification tasks, linking real-world scenes to the improvement of visual localization algorithms and providing additional evaluation metrics for robot visual localization systems in complex illumination environments.
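As a concrete illustration of how centimeter-level ground truth can be used to score a localization algorithm, the sketch below computes the absolute trajectory error (ATE) between an estimated trajectory and the ground-truth trajectory after a rigid alignment. This is only a minimal example under assumed conventions: the abstract does not specify the dataset's evaluation protocol, and the function names, the alignment step, and the toy data are illustrative assumptions rather than the paper's official method.

```python
# Minimal sketch (assumed, not the dataset's official protocol):
# absolute trajectory error (ATE) between an estimated trajectory and
# centimeter-level ground truth, after least-squares rigid alignment.
import numpy as np

def rigid_alignment(est, gt):
    """Least-squares rotation R and translation t mapping est -> gt (both Nx3)."""
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    cov = (gt - mu_gt).T @ (est - mu_est) / est.shape[0]
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # avoid a reflection
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    return R, t

def ate_rmse(est, gt):
    """Root-mean-square translational error after alignment, in meters."""
    R, t = rigid_alignment(est, gt)
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

if __name__ == "__main__":
    # Toy example: a noisy copy of a short ground-truth path (~1 cm noise).
    gt = np.cumsum(np.random.rand(100, 3) * 0.05, axis=0)
    est = gt + np.random.normal(scale=0.01, size=gt.shape)
    print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```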