Abstract: To improve the localization accuracy of visual SLAM algorithms in weak-texture dynamic scenes, an optimized visual SLAM method is proposed that fuses point and line features and adds a target detection thread for image detection. Built on the open-source ORB-SLAM2 framework, the method adds a line feature extraction function in the feature extraction stage, enabling simultaneous extraction of point and line features and thus richer feature information. It also introduces a target detection thread that runs the MobileNet-SSD model on the MNN inference engine to detect dynamic objects, allowing the algorithm to identify dynamic features in input images. Dynamic feature points are then removed with a geometric method based on epipolar constraints, and by combining the point-line features with the dynamic detection results, the SLAM algorithm achieves fast and accurate localization in weak-texture dynamic scenes. A series of comparative experiments verifies the algorithm's performance in the corresponding scenarios and confirms the improvement in localization accuracy.
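The epipolar-constraint rejection step mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes an already-estimated fundamental matrix F (here a toy F corresponding to a pure x-translation with identity intrinsics), and the helper names `epipolar_distance` and `flag_dynamic` are hypothetical. The idea is that a matched point whose distance to its epipolar line exceeds a threshold is inconsistent with the camera motion and is flagged as dynamic.

```python
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Distance from each point in image 2 to the epipolar line F @ x1."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])   # homogeneous coordinates, image 1
    x2 = np.hstack([pts2, ones])   # homogeneous coordinates, image 2
    lines = x1 @ F.T               # epipolar lines in image 2, one per row
    num = np.abs(np.sum(x2 * lines, axis=1))           # |x2 . l|
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)  # line normalization
    return num / den

def flag_dynamic(F, pts1, pts2, thresh=0.05):
    """Flag matches whose epipolar residual exceeds the threshold."""
    return epipolar_distance(F, pts1, pts2) > thresh

# Toy fundamental matrix: camera translating along x with identity
# intrinsics gives F = [t]_x for t = (1, 0, 0) (illustration only).
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

# Under this motion a static point keeps its y-coordinate between
# frames; a point on a moving object shifts it.
pts1 = np.array([[0.2, 0.3], [0.5, 0.5]])
pts2 = np.array([[0.1, 0.3],   # static: consistent with camera motion
                 [0.4, 0.8]])  # dynamic: violates the epipolar constraint

print(flag_dynamic(F, pts1, pts2))  # → [False  True]
```

In the method described above, the detection thread's bounding boxes would restrict this test to candidate regions; here the geometric check alone separates the static match from the dynamic one.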
[1] ZHU Yeqing, JIN Rui, ZHAO Liangyu. Multi-source information fusion SLAM algorithm for large-scale weak-texture scenes [J]. Journal of Astronautics, 2021, 42(10): 1271-1282.
[2] ZHAO Liangyu, JIN Rui, ZHU Yeqing, et al. Stereo visual-inertial SLAM algorithm based on point-line feature fusion [J]. Acta Aeronautica et Astronautica Sinica, 2022, 43(3): 325117.
[3] FANG Qi, WANG Xiaohua, SU Jie. Simultaneous localization and mapping algorithm with point-line feature fusion based on a grouping strategy [J]. Laser & Optoelectronics Progress, 2021, 58(14): 405-413.
[4] GOMEZ-OJEDA R, BRIALES J, GONZALEZ-JIMENEZ J. PL-SVO: semi-direct monocular visual odometry by combining points and line segments [C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems. Daejeon, Korea. Piscataway, NJ: IEEE, 2016: 4211-4216.
[5] LI Haifeng, HU Zunhe, CHEN Xinwei. PLP-SLAM: a visual SLAM method based on point, line, and plane feature fusion [J]. Robot, 2017, 39(2): 214-220, 229.
[6] PUMAROLA A, VAKHITOV A, AGUDO A, et al. PL-SLAM: real-time monocular visual SLAM with points and lines [C]//2017 IEEE International Conference on Robotics and Automation. Singapore. Piscataway, NJ: IEEE, 2017: 4503-4508.
[7] HE Y J, ZHAO J, GUO Y, et al. PL-VIO: tightly-coupled monocular visual-inertial odometry using point and line features [J]. Sensors, 2018, 18(4): 1159.
[8] LI Y Y, YUNUS R, BRASCH N, et al. RGB-D SLAM with structural regularities [C]//2021 IEEE International Conference on Robotics and Automation. Xi'an, China. Piscataway, NJ: IEEE, 2021: 11581-11587.
[9] WANG Kesai, YAO Xifan, HUANG Yu, et al. Review of visual SLAM research in dynamic environments [J]. Robot, 2021, 43(6): 715-732.
[10] YU C, LIU Z X, LIU X J, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments [C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. Madrid, Spain. Piscataway, NJ: IEEE, 2019: 1168-1174.
[11] BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes [J]. IEEE Robotics and Automation Letters, 2018, 3(4): 4076-4083.
[12] WU W X, GUO L, GAO H L, et al. YOLO-SLAM: a semantic SLAM system towards dynamic environment with geometric constraint [J]. Neural Computing and Applications, 2022, 34(8): 6011-6026.
Basic information:
DOI: 10.13314/j.cnki.jhbsi.2026.01.001
CLC number: TP391.41
Citation:
[1] WANG Xiaodong, YANG Weigao. Research on optimization of visual SLAM algorithms in weak-texture dynamic scenes [J]. Journal of Hebei Software Institute, 2026, 28(01): 6-11. DOI: 10.13314/j.cnki.jhbsi.2026.01.001.
Funding:
2023 Guangdong Province Key-Field Special Project for Ordinary Universities ("New-Generation Information Technology"): "Research on Image-Based 3D Reconstruction Technology for Agricultural Scenes" (2023ZDZX1078); 2024 Guangdong Province Key-Field Special Project for Ordinary Universities ("High-End Equipment Manufacturing"): "Design of a Vision-Guided Macro-Micro Manipulation Robot" (2024ZDZX3094); Guangzhou City Polytechnic institution-level teaching and research project "AI-Driven Intelligent Detection and Control Team" (KYTD2023004)