%0 Journal Article
%T Fusion Method Based on Camera and Lidar for SLAM (基于相机与激光雷达传感器融合的SLAM方法)
%A 王睿忠
%A 汤玉春
%J Software Engineering and Applications
%P 366-380
%@ 2325-2278
%D 2023
%I Hans Publishing
%R 10.12677/SEA.2023.122037
%X To address the positioning and mapping errors that dynamic objects introduce into a SLAM system, a method that fuses camera and lidar sensor data is proposed. The lidar point cloud is clustered and segmented with the unsupervised DBSCAN method, and the segmented objects are matched across adjacent frames using an improved 3D point-cloud cluster score and the Hungarian algorithm, so that the same object can be identified from frame to frame and its motion state determined. By unifying the camera and lidar coordinate frames, feature points falling within dynamic objects are filtered out, which improves the accuracy of back-end pose estimation. Experimental results show that with the fused SLAM system the robot's rotation and translation errors drop noticeably, and both localization accuracy and mapping quality are clearly improved.
%K Point Cloud Clustering
%K SLAM
%K Robot
%K Dynamic Object
%K Camera
%K Lidar
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=64934
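The abstract sketches a pipeline of DBSCAN clustering on the lidar point cloud followed by Hungarian matching of clusters across adjacent frames. Below is a minimal illustrative sketch of that idea, assuming scikit-learn's DBSCAN and SciPy's linear_sum_assignment; the simple centroid-distance cost stands in for the paper's improved 3D cluster score, which is not detailed in this record.

# Hypothetical sketch (not the paper's code): cluster a lidar scan with DBSCAN
# and associate clusters across adjacent frames with the Hungarian algorithm.
# The centroid-distance cost is a stand-in for the improved 3D cluster score.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.optimize import linear_sum_assignment

def cluster_centroids(points, eps=0.5, min_samples=10):
    """Cluster an (N, 3) lidar point cloud; return one centroid per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return np.array([points[labels == k].mean(axis=0)
                     for k in sorted(set(labels)) if k != -1])

def match_clusters(prev_centroids, curr_centroids, max_dist=1.0):
    """Match clusters between frames; pairs with too large a cost are dropped."""
    cost = np.linalg.norm(prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_dist]

# A matched cluster whose centroid displacement exceeds what the robot's own
# motion explains would be flagged as a dynamic object.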
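The filtering step relies on unifying camera and lidar coordinates so that visual features landing on dynamic objects can be discarded before back-end pose estimation. The following sketch assumes a hypothetical 4x4 extrinsic matrix T_cam_lidar and 3x3 pinhole intrinsic matrix K, neither of which comes from the paper.

# Hypothetical sketch (not the paper's code): project dynamic-object lidar
# points into the image and drop visual feature points that land on them.
# T_cam_lidar (4x4 extrinsics) and K (3x3 intrinsics) are placeholders.
import numpy as np

def project_to_image(points_lidar, T_cam_lidar, K):
    """Transform (N, 3) lidar points to the camera frame and project to pixels."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]   # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]          # (u, v) pixel coordinates

def filter_dynamic_features(features_uv, dynamic_uv, radius=15.0):
    """Keep only features farther than `radius` pixels from any dynamic point."""
    if len(dynamic_uv) == 0:
        return np.asarray(features_uv)
    keep = [f for f in np.asarray(features_uv)
            if np.linalg.norm(dynamic_uv - f, axis=1).min() > radius]
    return np.array(keep)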