SLAM comprises two main tasks: localization and mapping. It is a crucial problem in mobile robotics and autonomous driving: to move accurately, a robot must have a map of its environment, and building that map in turn requires knowing the robot's position.
This series of articles is divided into four parts:
Part 1 introduces Lidar SLAM, including Lidar sensors, open-source Lidar SLAM systems, deep learning in Lidar SLAM, and the challenges and future directions.
Part 2 focuses on Visual SLAM, including camera sensors and open-source visual SLAM systems of different densities.
Part 3 covers visual-inertial odometry SLAM, deep learning in visual SLAM, and future directions.
Part 4 introduces the fusion of Lidar and vision.
Lidar and Visual SLAM Systems
When Lidar and visual SLAM are combined, the calibration between the two sensors is critical to the system.
Multi-Sensor Calibration
Camera & IMU: Kalibr [1] is a toolbox that solves the following sensor calibration problems (a brief sketch of how such a calibration result is used follows this list):
Multi-camera calibration.
Visual-inertial calibration (camera-IMU).
Rolling-shutter camera calibration.
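To make the output of such a calibration concrete, here is a minimal sketch (not Kalibr's API; the transform, translation, and time-offset values are made-up placeholders) of how a camera-IMU extrinsic transform and a temporal offset are typically applied once estimated: the extrinsic maps points between the IMU and camera frames, and the time offset shifts camera timestamps onto the IMU clock.

```python
import numpy as np

# Hypothetical calibration output: rigid transform from the IMU frame to the
# camera frame (4x4 homogeneous matrix) and a camera-to-IMU time offset (s).
# The numbers are illustrative only, not results from any real calibration.
T_cam_imu = np.eye(4)
T_cam_imu[:3, :3] = np.array([[0.0, -1.0,  0.0],
                              [0.0,  0.0, -1.0],
                              [1.0,  0.0,  0.0]])
T_cam_imu[:3, 3] = [0.05, 0.01, -0.02]
time_offset = 0.004

def imu_point_in_camera(p_imu):
    """Express a 3D point given in the IMU frame in the camera frame."""
    p_h = np.append(p_imu, 1.0)
    return (T_cam_imu @ p_h)[:3]

def camera_stamp_on_imu_clock(t_cam):
    """Shift a camera timestamp onto the IMU clock using the temporal calibration."""
    return t_cam + time_offset

print(imu_point_in_camera(np.array([1.0, 0.0, 0.0])))
print(camera_stamp_on_imu_clock(1565432.100))
```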
VINS fuses visual and IMU data and supports online spatial (extrinsic) calibration as well as online temporal calibration.
MSCKF-VIO also includes camera-IMU calibration.
mc-VINS [2] can calibrate the extrinsic parameters and time offsets between all cameras and the IMU.
IMU-TK [3][4] can additionally calibrate the internal (intrinsic) parameters of the IMU.
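As a rough illustration of what IMU intrinsic calibration corrects, the sketch below applies the common linear error model a = M (a_raw - b), with a scale/misalignment matrix M and a bias b; the numbers are placeholders, not values estimated by IMU-TK.

```python
import numpy as np

# Placeholder accelerometer intrinsics: misalignment/scale matrix and bias.
M_acc = np.array([[1.002, 0.001, -0.003],
                  [0.000, 0.998,  0.002],
                  [0.000, 0.000,  1.001]])
b_acc = np.array([0.05, -0.02, 0.10])  # m/s^2

def correct_accel(a_raw):
    """Apply the linear IMU error model: a_corrected = M (a_raw - b)."""
    return M_acc @ (a_raw - b_acc)

print(correct_accel(np.array([0.10, -0.05, 9.91])))
```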
The paper [5] proposes an end-to-end network for monocular VIO that fuses camera and IMU data.
Camera & Depth camera: BAD SLAM [6] proposes a calibrated benchmark using synchronized global-shutter RGB and depth cameras.
Camera & Camera: MCPTAM [7] is a SLAM system that uses multiple cameras. It can also calibrate intrinsic and extrinsic parameters.
MultiCol-SLAM [8] is a multi-fisheye camera SLAM system. In addition, the latest version of SVO can also support multiple cameras.
Lidar & IMU: LIO-mapping [9] introduces a tightly coupled lidar-IMU fusion method. Aligning a lidar with an IMU amounts to finding the extrinsic calibration between the 3D lidar and a six-degree-of-freedom pose sensor. Extrinsic calibration of lidar is addressed in [10][11], and the PhD thesis [12] describes work on lidar calibration.
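One standard way to pose this trajectory-to-trajectory extrinsic problem is as a hand-eye calibration, A_i X = X B_i, where A_i and B_i are the relative motions measured by the two sensors over the same time intervals and X is the unknown extrinsic transform. The sketch below is a simplified illustration (not the method of [9]-[12]) that recovers only the rotation part by aligning the rotation axes of paired relative motions with an SVD (Kabsch) step; it assumes the relative rotations are not degenerate (i.e. not near identity).

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotation_axis(rot):
    """Unit rotation axis of a 3x3 rotation matrix (undefined for identity)."""
    v = R.from_matrix(rot).as_rotvec()
    return v / np.linalg.norm(v)

def handeye_rotation(rel_rots_a, rel_rots_b):
    """Estimate R_x in R_a R_x = R_x R_b by aligning rotation axes (Kabsch).

    rel_rots_a, rel_rots_b: lists of 3x3 relative rotations from the two
    sensors (e.g. lidar odometry and IMU/pose sensor) over matching intervals.
    """
    alpha = np.stack([rotation_axis(r) for r in rel_rots_a])  # target axes
    beta = np.stack([rotation_axis(r) for r in rel_rots_b])   # source axes
    H = beta.T @ alpha
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(V @ U.T))])
    return V @ D @ U.T
```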
Camera & Lidar: The paper [13] introduces a probabilistic monitoring algorithm and a continuous calibration optimizer that enable camera-lidar calibration to run online and automatically.
Lidar-Camera [14] uses 3D-3D point correspondences for extrinsic calibration between a lidar and a camera.
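The core of a 3D-3D correspondence approach is estimating the rigid transform that best aligns matched points observed by both sensors. A minimal, self-contained version of that step (standard SVD-based alignment, not the full pipeline of [14]) looks like this:

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) such that dst is approx. R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. target corners
    measured once in the lidar frame and once in the camera frame.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(V @ U.T))])
    R_est = V @ D @ U.T
    t_est = dst.mean(axis=0) - R_est @ src.mean(axis=0)
    return R_est, t_est

# Quick check with synthetic correspondences.
rng = np.random.default_rng(0)
pts_lidar = rng.uniform(-1.0, 1.0, size=(8, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
pts_cam = pts_lidar @ R_true.T + np.array([0.10, -0.05, 0.30])
print(rigid_transform_3d(pts_lidar, pts_cam))
```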
RegNet [15] is the first deep convolutional neural network (CNN) to infer the 6-degree-of-freedom (DoF) extrinsic calibration between multi-modal sensors, demonstrated with a scanning lidar and a monocular camera.
LIMO [16] proposes a lidar-based depth extraction algorithm for camera feature tracks and uses it for motion estimation. CalibNet [17] is a self-supervised deep network that can automatically estimate the 6-DoF rigid-body transformation between a 3D lidar and a 2D camera in real time. Autoware can also be used for lidar-camera calibration.
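Once the 6-DoF lidar-to-camera transform is available (from CalibNet, Autoware, or a target-based method), it is normally used together with the camera intrinsics to project lidar points into the image, e.g. to attach depth to pixels or features. A minimal sketch with made-up intrinsics and extrinsics:

```python
import numpy as np

# Illustrative pinhole intrinsics and lidar-to-camera extrinsics (placeholders).
K = np.array([[720.0,   0.0, 640.0],
              [  0.0, 720.0, 360.0],
              [  0.0,   0.0,   1.0]])
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, :3] = np.array([[0.0, -1.0,  0.0],
                                [0.0,  0.0, -1.0],
                                [1.0,  0.0,  0.0]])
T_cam_lidar[:3, 3] = [0.0, -0.08, -0.12]

def project_lidar_to_image(points_lidar):
    """Project (N, 3) lidar points to pixels, keeping only points in front of the camera."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]      # 3 x N in the camera frame
    in_front = pts_cam[2] > 0.1
    uvw = K @ pts_cam[:, in_front]
    uv = (uvw[:2] / uvw[2]).T                  # pixel coordinates
    return uv, pts_cam[2, in_front]            # pixels and their depths

pts = np.array([[5.0, -1.0, 0.3], [8.0, 2.0, -0.2], [12.0, 0.0, 1.0]])
print(project_lidar_to_image(pts))
```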
Lidar and Vision Fusion
DFuseNet [21] proposes a CNN designed to upsample a series of sparse range measurements based on contextual cues gathered from a high-resolution intensity image.
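DFuseNet learns this upsampling; to make the task itself concrete, here is a purely classical, image-agnostic stand-in (not [21]'s method) that densifies a sparse depth map by interpolation. Learned approaches additionally use RGB context to keep object boundaries sharp.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_sparse_depth(sparse_depth):
    """Fill an (H, W) depth map where 0 marks missing measurements.

    Linear interpolation inside the convex hull of the samples, nearest
    neighbour outside; a naive baseline with no image guidance.
    """
    h, w = sparse_depth.shape
    vs, us = np.nonzero(sparse_depth)
    values = sparse_depth[vs, us]
    grid_v, grid_u = np.mgrid[0:h, 0:w]
    dense = griddata((vs, us), values, (grid_v, grid_u), method='linear')
    nearest = griddata((vs, us), values, (grid_v, grid_u), method='nearest')
    dense[np.isnan(dense)] = nearest[np.isnan(dense)]
    return dense

# Toy example: a 6x8 map with four lidar samples.
sparse = np.zeros((6, 8))
sparse[1, 1], sparse[1, 6], sparse[4, 2], sparse[4, 7] = 5.0, 7.0, 5.5, 8.0
print(densify_sparse_depth(sparse).round(2))
```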
LIC-Fusion [22] fuses IMU measurements, sparse visual features, and features extracted from LiDAR point clouds.
Task layer: The paper [23] presents a perception scheme based on the fusion of a stereo camera and lidar.
[24] fuses millimeter-wave radar, lidar, and camera to detect and classify moving objects.
The paper [25] enhances VO with depth information, provided either by a depth camera or by lidar depth associated with the camera.
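The practical payoff of attaching lidar or depth-camera depth to visual features, as in [25], is that frame-to-frame motion can be estimated as a 3D-2D problem instead of a scale-ambiguous 2D-2D one. A rough sketch using OpenCV's PnP solver (feature matching and depth association are assumed to have been done already):

```python
import numpy as np
import cv2

def motion_from_depth_enhanced_features(pts3d_prev, pts2d_curr, K):
    """Estimate camera motion from 3D feature positions in the previous frame
    (depth taken from lidar or a depth camera) and their 2D matches in the
    current frame, using PnP with RANSAC.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d_prev.astype(np.float32), pts2d_curr.astype(np.float32),
        K.astype(np.float64), None,
        iterationsCount=100, reprojectionError=2.0)
    if not ok:
        return None
    R_mat, _ = cv2.Rodrigues(rvec)
    return R_mat, tvec, inliers
```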
V-LOAM [26] proposes a general framework that combines visual odometry and lidar odometry. It improves the performance of real-time motion estimation and point-cloud registration by coupling visual odometry with scan-matching-based lidar odometry.
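The lidar-odometry half of such a framework rests on scan matching. Below is a compact point-to-point ICP sketch (a stand-in for the more elaborate edge/plane feature matching used in LOAM-style systems), assuming Open3D is available:

```python
import numpy as np
import open3d as o3d

def scan_match_icp(source_points, target_points, init=np.eye(4), max_dist=1.0):
    """Point-to-point ICP between two lidar scans given as (N, 3) arrays.

    Returns the 4x4 transform aligning the source scan onto the target,
    i.e. an estimate of the relative motion between the two scan poses.
    """
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_points))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```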
A VI-SLAM system of this kind combines an accurate laser odometry estimator with a place-recognition algorithm that uses vision for loop detection. The SLAM system in [27] uses an RGB-D camera for the tracking part together with a low-cost 2D lidar, building a stable indoor SLAM system through mode switching and data fusion.
VIL-SLAM [28] combines tightly coupled stereo VIO with lidar mapping and lidar-enhanced visual loop closure. [29] combines monocular camera images with laser distance measurements so that visual SLAM does not accumulate drift from scale uncertainty. In deep learning, many methods can detect and recognize the fused data of camera and lidar, such as PointFusion [30], RoarNet [31], AVOD [32], and FuseNet [33]. [34] exploits both lidar and camera in an end-to-end learnable architecture to achieve very accurate localization.
Challenges and Future of Fusion SLAM
References
[1] Joern Rehder, Janosch Nikolic, Thomas Schneider, Timo Hinzmann, and Roland Siegwart. Extending kalibr: Calibrating the extrinsics of multiple imus and of individual axes. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 4304–4311. IEEE, 2016.
[2] Kevin Eckenhoff, Patrick Geneva, Jesse Bloecker, and Guoquan Huang. Multi-camera visual-inertial navigation with online intrinsic and extrinsic calibration. 2019 International Conference on Robotics and Automation (ICRA), pages 3158–3164, 2019.
[3] A. Tedaldi, A. Pretto, and E. Menegatti. A robust and easy to implement method for imu calibration without external equipments. In Proc. of: IEEE International Conference on Robotics and Automation (ICRA), pages 3042–3049, 2014.
[4] A. Pretto and G. Grisetti. Calibration and performance evaluation of low-cost imus. In Proc. of: 20th IMEKO TC4 International Symposium, pages 429–434, 2014.
[5] Changhao Chen, Stefano Rosa, Yishu Miao, Chris Xiaoxuan Lu, Wei Wu, Andrew Markham, and Niki Trigoni. Selective sensor fusion for neural visual-inertial odometry. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10542–10551, 2019.
[6] Thomas Schops, Torsten Sattler, and Marc Pollefeys. Bad slam: Bundle adjusted direct rgb-d slam. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
[7] Adam Harmat, Michael Trentini, and Inna Sharf. Multi-camera tracking and mapping for unmanned aerial vehicles in unstructured environments. Journal of Intelligent & Robotic Systems, 78(2):291– 317, 2015.
[8] Steffen Urban and Stefan Hinz. MultiCol-SLAM - a modular real-time multi-camera slam system. arXiv preprint arXiv:1610.07336, 2016.
[9] Haoyang Ye, Yuying Chen, and Ming Liu. Tightly coupled 3d lidar inertial odometry and mapping. arXiv preprint arXiv:1904.06993, 2019.
[10] Deyu Yin, Jingbin Liu, Teng Wu, Keke Liu, Juha Hyyppä, and Ruizhi Chen. Extrinsic calibration of 2d laser rangefinders using an existing cuboid-shaped corridor as the reference. Sensors, 18(12):4371, 2018.
[11] Shoubin Chen, Jingbin Liu, Teng Wu, Wenchao Huang, Keke Liu, Deyu Yin, Xinlian Liang, Juha Hyyppä, and Ruizhi Chen. Extrinsic calibration of 2d laser rangefinders based on a mobile sphere. Remote Sensing, 10(8):1176, 2018.
[12] Jesse Sol Levinson. Automatic laser calibration, mapping, and localization for autonomous vehicles. Stanford University, 2011.
[13] Jesse Levinson and Sebastian Thrun. Automatic online calibration of cameras and lasers. In Robotics: Science and Systems, volume 2, 2013.
[14] A. Dhall, K. Chelani, V. Radhakrishnan, and K. M. Krishna. LiDAR-Camera Calibration using 3D-3D Point correspondences. ArXiv e-prints, May 2017.
[15] Nick Schneider, Florian Piewak, Christoph Stiller, and Uwe Franke. Regnet: Multimodal sensor registration using deep neural networks. In 2017 IEEE intelligent vehicles symposium (IV), pages 1803–1810. IEEE, 2017.
[16] Johannes Graeter, Alexander Wilczynski, and Martin Lauer. Limo: Lidar-monocular visual odometry. 2018.
[17] Ganesh Iyer, J Krishna Murthy, K Madhava Krishna, et al. Calibnet: self-supervised extrinsic calibration using 3d spatial transformer networks. arXiv preprint arXiv:1803.08181, 2018.
[18] Jason Ku, Ali Harakeh, and Steven L Waslander. In defense of classical image processing: Fast depth completion on the cpu. In 2018 15th Conference on Computer and Robot Vision (CRV), pages 16–22. IEEE, 2018.
[19] Fangchang Ma and Sertac Karaman. Sparse-to-dense: Depth prediction from sparse depth samples and a single image. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 1–8. IEEE, 2018.
[20] Jonas Uhrig, Nick Schneider, Lukas Schneider, Uwe Franke, Thomas Brox, and Andreas Geiger. Sparsity invariant cnns. In 2017 International Conference on 3D Vision (3DV), pages 11–20. IEEE, 2017.
[21] Shreyas S Shivakumar, Ty Nguyen, Steven W Chen, and Camillo J Taylor. Dfusenet: Deep fusion of rgb and sparse depth information for image guided dense depth completion. arXiv preprint arXiv:1902.00761, 2019.
[22] Xingxing Zuo, Patrick Geneva, Woosik Lee, Yong Liu, and Guoquan Huang. Lic-fusion: Lidar-inertial-camera odometry. arXiv preprint arXiv:1909.04102, 2019.
[23] Olivier Aycard, Qadeer Baig, Siviu Bota, Fawzi Nashashibi, Sergiu Nedevschi, Cosmin Pantilie, Michel Parent, Paulo Resende, and Trung-Dung Vu. Intersection safety using lidar and stereo vision sensors. In 2011 IEEE Intelligent Vehicles Symposium (IV), pages 863–869. IEEE, 2011.
[24] Ricardo Omar Chavez-Garcia and Olivier Aycard. Multiple sensor fusion and classification for moving object detection and tracking. IEEE Transactions on Intelligent Transportation Systems, 17(2):525–534, 2015.
[25] Ji Zhang, Michael Kaess, and Sanjiv Singh. Real-time depth enhanced monocular odometry. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 4973–4980. IEEE, 2014.
[26] Ji Zhang and Sanjiv Singh. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 2174–2181. IEEE, 2015.
[27] Yoshua Nava. Visual-LiDAR SLAM with loop closure. Master's thesis, KTH Royal Institute of Technology, 2018.
[28] Weizhao Shao, Srinivasan Vijayarangan, Cong Li, and George Kantor. Stereo visual inertial lidar simultaneous localization and mapping. arXiv preprint arXiv:1902.10741, 2019.
[29] Franz Andert, Nikolaus Ammann, and Bolko Maass. Lidar-aided camera feature tracking and visual slam for spacecraft low-orbit navigation and planetary landing. In Advances in Aerospace Guidance, Navigation and Control, pages 605–623. Springer, 2015.
[30] Danfei Xu, Dragomir Anguelov, and Ashesh Jain. Pointfusion: Deep sensor fusion for 3d bounding box estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 244–253, 2018.
[31] Kiwoo Shin, Youngwook Paul Kwon, and Masayoshi Tomizuka. Roarnet: A robust 3d object detection based on region approximation refinement. arXiv preprint arXiv:1811.03818, 2018.
[32] Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven Waslander. Joint 3d proposal generation and object detection from view aggregation. IROS, 2018.
[33] Caner Hazirbas, Lingni Ma, Csaba Domokos, and Daniel Cremers. Fusenet: Incorporating depth into semantic segmentation via fusion-based cnn architecture. In Asian conference on computer vision, pages 213–228. Springer, 2016.
[34] Ming Liang, Bin Yang, Shenlong Wang, and Raquel Urtasun. Deep continuous fusion for multi-sensor 3d object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 641–656, 2018.