Research on a YOLOv5-Based Dynamic Visual SLAM Algorithm for Mobile Robots
Abstract:
Mobile robots achieve precise self-localization in unknown environments through Simultaneous Localization and Mapping (SLAM). Most current visual SLAM systems assume that the environment is static, but in practical applications the presence of many dynamic objects severely degrades localization and mapping accuracy. To address this, this paper proposes a robust dynamic visual SLAM system built on ORB-SLAM3 that integrates the YOLOv5 deep-learning detector to reduce the influence of dynamic objects. The proposed algorithm is evaluated on the public TUM dataset and in real-world scenes, and the results show that it is more robust than ORB-SLAM3.
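To make the core idea concrete, the following is a minimal sketch of how a YOLOv5-assisted dynamic SLAM front end might reject features on moving objects: keypoints that fall inside a bounding box of a dynamic class (e.g. "person") are discarded before tracking. This is an illustrative assumption about the pipeline, not the paper's actual implementation; all names and the class list are hypothetical.

```python
# Hypothetical sketch: filter out ORB keypoints that lie inside
# YOLOv5 bounding boxes of dynamic-object classes, so that only
# presumably static points feed pose estimation.

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # illustrative choice

def filter_dynamic_keypoints(keypoints, detections):
    """Keep only keypoints outside every dynamic-object bounding box.

    keypoints  -- list of (x, y) pixel coordinates
    detections -- list of (class_name, x1, y1, x2, y2) boxes from the detector
    """
    dynamic_boxes = [
        (x1, y1, x2, y2)
        for cls, x1, y1, x2, y2 in detections
        if cls in DYNAMIC_CLASSES
    ]

    def is_static(point):
        x, y = point
        # A point is static if it lies outside all dynamic boxes.
        return not any(
            x1 <= x <= x2 and y1 <= y <= y2
            for x1, y1, x2, y2 in dynamic_boxes
        )

    return [pt for pt in keypoints if is_static(pt)]

if __name__ == "__main__":
    kps = [(10, 10), (50, 50), (200, 120)]
    dets = [("person", 40, 40, 80, 80),      # dynamic: points inside are dropped
            ("chair", 0, 0, 300, 300)]       # static class: ignored
    print(filter_dynamic_keypoints(kps, dets))  # -> [(10, 10), (200, 120)]
```

In a full system this filter would run per frame between detection and the tracking thread; more refined variants use segmentation masks or epipolar-geometry checks instead of whole boxes, since a bounding box also discards static background points around the object.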