
The Latest Open-Source Visual SLAM Solutions

一、Geometric SLAM (20 items)

This category covers traditional SLAM based on feature points, direct methods, or semi-direct methods. Traditional as it may be, nine new open-source solutions still appeared in 2019.

1. PTAM

2. S-PTAM (stereo PTAM)

3. MonoSLAM

4. ORB-SLAM2

Items 5, 6, 7, and 8 below are all from the TUM Computer Vision Group (official homepage).

5. DSO

6. LDSO

7. LSD-SLAM

8. DVO-SLAM

9. SVO

10. DSM

11. openvslam

12. se2lam (visual odometry for estimating ground-vehicle pose)

13. GraphSfM (graph-based parallel large-scale SfM)

14. LCSD_SLAM (loosely-coupled semi-direct monocular SLAM)

15. RESLAM (edge-based SLAM)

16. scale_optimization (extending monocular DSO to stereo)

17. BAD-SLAM (direct RGB-D SLAM)

18. GSLAM (a general framework integrating ORB-SLAM2, DSO, and SVO)

19. ARM-VO (monocular VO running on ARM processors)

20. cvo-rgbd (direct RGB-D VO)

二、Semantic / Deep SLAM (12 items)

At present, work combining SLAM with deep learning falls mainly into two categories: on one hand, semantic information is brought into stages such as mapping and pose estimation; on the other hand, some step of the SLAM pipeline (e.g., VO or loop closure) is replaced end-to-end. I personally pay less attention to the latter, so feel free to share such work in the issues.

21. MaskFusion

22. SemanticFusion

23. semantic_3d_mapping

24. Kimera (an open-source library for real-time metric-semantic localization and mapping)

25. NeuroSLAM (brain-inspired SLAM)

26. gradSLAM (dense SLAM with automatic differentiation)

27. ORB-SLAM2 with object detection/segmentation for semantic mapping

28. SIVO (semantics-aided feature selection)

29. FILD (incremental loop closure detection using proximity graphs)

30. object-detection-sptam (object detection combined with stereo SLAM)

31. Map Slammer (monocular depth estimation + SLAM)

32. NOLBO (probabilistic SLAM with a variational observation model)

三、Multi-Landmark / Object SLAM (12 items)

Strictly speaking, multi-landmark SLAM with points, lines, and planes, as well as object-level SLAM, could be filed under Geometric SLAM and Semantic SLAM. But I am particularly interested in this direction (it is also my graduate research topic), so it gets its own category. Open-source solutions here are relatively few, but they are very interesting.

33. PL-SVO (point-line SVO)

34. stvo-pl (stereo point-line VO)

35. PL-SLAM (point-line SLAM)

36. PL-VIO

37. lld-slam (learnable line segment descriptors for SLAM)

There is much more work combining points and lines, for example from Prof. Danping Zou (Shanghai Jiao Tong University): Zou D, Wu Y, Pei L, et al. StructVIO: visual-inertial odometry with structural regularity of man-made environments[J]. IEEE Transactions on Robotics, 35(4): 999-1013; and from Zhejiang University: Zuo X, Xie X, Liu Y, et al. Robust visual SLAM with point and line features[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 1775-1782.

38. PlaneSLAM

39. Eigen-Factors (plane estimation for point cloud alignment)

40. PlaneLoc

41. Pop-up SLAM

42. Object SLAM

43. voxblox-plusplus (object-level volumetric mapping)

44. Cube SLAM

四、VIO / VISLAM (10 items)

Here I only follow the visual-inertial side of sensor fusion; other sensors such as LiDAR and GPS get less attention (SLAM is complicated enough -_-!). There is also relatively little new work on visual-inertial fusion, and the classic solutions are basically sufficient.

45. msckf_vio

46. rovio

47. R-VIO

48. okvis

49. VIORB

50. VINS-mono

51. VINS-RGBD

52. Open-VINS

53. versavis (a versatile visual-inertial sensor suite)

54. CPI (closed-form preintegration for visual-inertial fusion)

五、Dynamic SLAM (5 items)

Dynamic SLAM is also a topic well worth studying, though it is hard to categorize cleanly: much of this work could equally be filed under semantic SLAM or 3D reconstruction. Relatively few solutions are collected here; feel free to add more via an issue.

55. DynamicSemanticMapping (dynamic semantic mapping)

56. DS-SLAM (dynamic semantic SLAM)

57. Co-Fusion (real-time segmentation and tracking of multiple objects)

58. DynamicFusion

59. ReFusion (3D reconstruction in dynamic scenes using residuals)

六、Mapping (18 items)

Work on mapping goes in two directions: using geometric information for dense reconstruction, and, in many recent works, using semantic information to achieve impressive semantic reconstruction. 3D reconstruction is a large topic in its own right with plenty of open-source code, so the collection below is probably incomplete.
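On the geometric side, many of the RGB-D pipelines below (KinectFusion, InfiniTAM, BundleFusion, and relatives) fuse each depth frame into a truncated signed distance function (TSDF) volume with a running weighted average. The following C++ sketch only illustrates that per-voxel update rule; the `Voxel` struct, function name, and parameter values are illustrative assumptions, not code from any of the listed projects.

```cpp
// Illustrative sketch of the weighted TSDF update used by KinectFusion-style
// dense reconstruction (not taken from any specific repository above).
#include <algorithm>

struct Voxel {
  float tsdf = 1.0f;    // truncated signed distance, normalized to [-1, 1]
  float weight = 0.0f;  // accumulated observation weight
};

// Fuse one new signed-distance observation into a voxel.
// `sdf` is the signed distance from the voxel center to the observed surface
// along the camera ray; `truncation` is the truncation band (meters).
void FuseObservation(Voxel& v, float sdf, float truncation,
                     float obs_weight = 1.0f, float max_weight = 128.0f) {
  if (sdf < -truncation) return;  // voxel far behind the surface: skip it
  // Truncate and normalize the signed distance to [-1, 1].
  const float d = std::clamp(sdf / truncation, -1.0f, 1.0f);
  // Running weighted average: D' = (W*D + w*d) / (W + w), W' = min(W + w, Wmax).
  v.tsdf = (v.weight * v.tsdf + obs_weight * d) / (v.weight + obs_weight);
  v.weight = std::min(v.weight + obs_weight, max_weight);
}
```

In a full system this update runs for every voxel near the observed surface after each new depth frame has been registered against the volume.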

60. InfiniTAM (cross-platform, CPU real-time reconstruction)

61. BundleFusion

62. KinectFusion

63. ElasticFusion

64. Kintinuous

65. ElasticReconstruction

66. FlashFusion

67. RTAB-Map (LiDAR and visual dense reconstruction)

68. RobustPCLReconstruction (outdoor dense reconstruction)

69. plane-opt-rgbd (indoor planar reconstruction)

70. DenseSurfelMapping (dense surfel mapping)

71. surfelmeshing (surfel-based mesh reconstruction)

72. DPPTAM (monocular dense reconstruction)

73. VI-MEAN (monocular visual-inertial dense reconstruction)

74. REMODE (monocular probabilistic dense reconstruction)

75. DeepFactors (real-time probabilistic dense monocular SLAM)

76. probabilistic_mapping (monocular probabilistic dense reconstruction)

77. ORB-SLAM2 monocular semi-dense mapping

七、Optimization (6 items)

Personally I feel optimization may be the hardest part of SLAM +_+. We generally just use off-the-shelf factor-graph and graph-optimization libraries directly, so innovating here is not easy; see Shanchuan's getting-started guide.
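As a concrete illustration of what "directly using an off-the-shelf library" looks like, here is a minimal sketch using Ceres Solver (one of the backend libraries listed under item 78 below). The residual is a toy one-parameter cost rather than a real SLAM factor, and the build setup is assumed; the sketch only shows the Problem / cost functor / Solve workflow.

```cpp
// Minimal Ceres Solver sketch: solve argmin_x 0.5 * (10 - x)^2.
#include <ceres/ceres.h>

// Residual functor: r(x) = 10 - x, auto-differentiated by Ceres.
struct QuadraticCost {
  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    residual[0] = T(10.0) - x[0];
    return true;
  }
};

int main() {
  double x = 0.5;  // initial estimate of the single optimization variable
  ceres::Problem problem;
  problem.AddResidualBlock(
      new ceres::AutoDiffCostFunction<QuadraticCost, 1, 1>(new QuadraticCost),
      /*loss_function=*/nullptr, &x);

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::DENSE_QR;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  // After Solve, x is close to 10.0.
  return 0;
}
```

A real SLAM backend replaces the toy functor with reprojection or IMU-preintegration factors over many pose and landmark blocks, but the Problem/Solve structure stays the same.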

78. Backend optimization libraries

79. ICE-BA

80. minisam (a factor-graph least-squares optimization framework)

81. SA-SHAGO (graph optimization with geometric primitives)

82. MH-iSAM2 (a SLAM optimizer)

83. MOLA (a modular optimization framework for localization and mapping)

Papers, code, and related notes for each of the solutions above, in the same order:

  • Paper: Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//Mixed and Augmented Reality, 2007. ISMAR 2007. 6th IEEE and ACM International Symposium on. IEEE: 225-234.
  • Code: https://github.com/Oxford-PTAM/PTAM-GPL
  • Project page: https://www.robots.ox.ac.uk/~gk/PTAM/
  • Other work by the authors: https://www.robots.ox.ac.uk/~gk/publications.html
  • Paper: Taihú Pire, Thomas Fischer, Gastón Castro, Pablo De Cristóforis, Javier Civera and Julio Jacobo Berlles. S-PTAM: Stereo Parallel Tracking and Mapping. Robotics and Autonomous Systems.
  • Code: https://github.com/lrse/sptam
  • Other papers by the authors: Castro G, Nitsche M A, Pire T, et al. Efficient on-board Stereo SLAM through constrained-covisibility strategies[J]. Robotics and Autonomous Systems.
  • Paper: Davison A J, Reid I D, Molton N D, et al. MonoSLAM: Real-time single camera SLAM[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6): 1052-1067.
  • Code: https://github.com/hanmekim/SceneLib2
  • Paper: Mur-Artal R, Tardós J D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras[J]. IEEE Transactions on Robotics, 33(5): 1255-1262.
  • Code: https://github.com/raulmur/ORB_SLAM2
  • Other papers by the authors:

  • Mur-Artal R, Tardós J D. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM[C]//Robotics: Science and Systems, 2015.
  • Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2(2): 796-803.
  • Elvira R, Tardós J D, Montiel J M M. ORBSLAM-Atlas: a robust and accurate multi-map system[J]. arXiv preprint arXiv:1908.11585.
  • Paper: Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(3): 611-625.
  • Code: https://github.com/JakobEngel/dso
  • Wang R, Schworer M, Cremers D. Stereo DSO: Large-scale direct sparse visual odometry with stereo cameras[C]//Proceedings of the IEEE International Conference on Computer Vision: 3903-3911.
  • Von Stumberg L, Usenko V, Cremers D. Direct sparse visual-inertial odometry using dynamic marginalization[C]//2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE: 2510-2517.
  • Xiang Gao's work adding loop closure on top of DSO
  • Paper: Gao X, Wang R, Demmel N, et al. LDSO: Direct sparse odometry with loop closure[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 2198-2204.
  • Code: https://github.com/tum-vision/LDSO
  • Paper: Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European Conference on Computer Vision. Springer, Cham: 834-849.
  • Code: https://github.com/tum-vision/lsd_slam
  • Paper: Kerl C, Sturm J, Cremers D. Dense visual SLAM for RGB-D cameras[C]//2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE: 2100-2106.
  • Code: https://github.com/tum-vision/dvo_slam
  • Code: https://github.com/tum-vision/dvo
  • Other papers:

  • Kerl C, Sturm J, Cremers D. Robust odometry estimation for RGB-D cameras[C]//2013 IEEE International Conference on Robotics and Automation. IEEE: 3748-3754.
  • Steinbrücker F, Sturm J, Cremers D. Real-time visual odometry from dense RGB-D images[C]//2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops). IEEE: 719-722.
  • Robotics and Perception Group, University of Zurich
  • Paper: Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE: 15-22.
  • Code: https://github.com/uzh-rpg/rpg_svo
  • Forster C, Zhang Z, Gassner M, et al. SVO: Semidirect visual odometry for monocular and multicamera systems[J]. IEEE Transactions on Robotics, 33(2): 249-265.
  • Paper: Zubizarreta J, Aguinaga I, Montiel J M M. Direct sparse mapping[J]. arXiv preprint arXiv:1904.06577.
  • Code: https://github.com/jzubizarreta/dsm; video
  • Paper: Sumikura S, Shibuya M, Sakurada K. OpenVSLAM: A Versatile Visual SLAM Framework[C]//Proceedings of the 27th ACM International Conference on Multimedia: 2292-2295.
  • Code: https://github.com/xdspacelab/openvslam; documentation
  • Paper: Zheng F, Liu Y H. Visual-Odometric Localization and Mapping for Ground Vehicles Using SE(2)-XYZ Constraints[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE: 3556-3562.
  • Code: https://github.com/izhengfan/se2lam
  • Another work by the author:

  • Paper: Zheng F, Tang H, Liu Y H. Odometry-vision-based ground vehicle motion estimation with SE(2)-constrained SE(3) poses[J]. IEEE Transactions on Cybernetics, 49(7): 2652-2663.
  • Code: https://github.com/izhengfan/se2clam
  • Paper: Chen Y, Shen S, Chen Y, et al. Graph-Based Parallel Large Scale Structure from Motion[J]. arXiv preprint arXiv:1912.10659.
  • Code: https://github.com/AIBluefisher/GraphSfM
  • Paper: Lee S H, Civera J. Loosely-Coupled semi-direct monocular SLAM[J]. IEEE Robotics and Automation Letters, 4(2): 399-406.
  • Code: https://github.com/sunghoon031/LCSD_SLAM; Google Scholar; demo video
  • Another paper by the author on scale estimation (code also open source): Lee S H, de Croon G. Stability-based scale estimation for monocular SLAM[J]. IEEE Robotics and Automation Letters, 3(2): 780-787.
  • Paper: Schenk F, Fraundorfer F. RESLAM: A real-time robust edge-based SLAM system[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE: 154-160.
  • Code: https://github.com/fabianschenk/RESLAM; project page
  • Paper: Mo J, Sattar J. Extending Monocular Visual Odometry to Stereo Camera System by Scale Optimization[C]. International Conference on Intelligent Robots and Systems (IROS).
  • Code: https://github.com/jiawei-mo/scale_optimization
  • Paper: Schops T, Sattler T, Pollefeys M. BAD SLAM: Bundle Adjusted Direct RGB-D SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 134-144.
  • Code: https://github.com/ETH3D/badslam
  • Paper: Zhao Y, Xu S, Bu S, et al. GSLAM: A general SLAM framework and benchmark[C]//Proceedings of the IEEE International Conference on Computer Vision: 1110-1120.
  • Code: https://github.com/zdzhaoyong/GSLAM
  • Paper: Nejad Z Z, Ahmadabadian A H. ARM-VO: an efficient monocular visual odometry for ground vehicles on ARM CPUs[J]. Machine Vision and Applications: 1-10.
  • Code: https://github.com/zanazakaryaie/ARM-VO
  • Paper: Ghaffari M, Clark W, Bloch A, et al. Continuous Direct Sparse Visual Odometry from RGB-D Images[J]. arXiv preprint arXiv:1904.02266.
  • Code: https://github.com/MaaniGhaffari/cvo-rgbd
  • Paper: Runz M, Buffier M, Agapito L. MaskFusion: Real-time recognition, tracking and reconstruction of multiple moving objects[C]//2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE: 10-20.
  • Code: https://github.com/martinruenz/maskfusion
  • Paper: McCormac J, Handa A, Davison A, et al. SemanticFusion: Dense 3D semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE: 4628-4635.
  • Code: https://github.com/seaun163/semanticfusion
  • Paper: Yang S, Huang Y, Scherer S. Semantic 3D occupancy mapping through efficient high order CRFs[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 590-597.
  • Code: https://github.com/shichaoy/semantic_3d_mapping
  • Paper: Rosinol A, Abate M, Chang Y, et al. Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping[J]. arXiv preprint arXiv:1910.02490.
  • Code: https://github.com/MIT-SPARK/Kimera; demo video
  • Paper: Yu F, Shang J, Hu Y, et al. NeuroSLAM: a brain-inspired SLAM system for 3D environments[J]. Biological Cybernetics: 1-31.
  • Code: https://github.com/cognav/NeuroSLAM
  • The fourth author is the author of RatSLAM; the paper also compares more than ten brain-inspired SLAM approaches.
  • Paper: Jatavallabhula K M, Iyer G, Paull L. gradSLAM: Dense SLAM meets Automatic Differentiation[J]. arXiv preprint arXiv:1910.10672.
  • Code (expected in April 2020): https://github.com/montrealrobotics/gradSLAM; project page, demo video
  • Code: https://github.com/floatlazer/semantic_slam
  • Code: https://github.com/qixuxiang/orb-slam2_with_semantic_labelling
  • Code: https://github.com/Ewenwan/ORB_SLAM2_SSD_Semantic
  • Paper: Ganti P, Waslander S. Network Uncertainty Informed Semantic Feature Selection for Visual SLAM[C]//2019 16th Conference on Computer and Robot Vision (CRV). IEEE: 121-128.
  • Code: https://github.com/navganti/SIVO
  • Paper: Shan An, Guangfu Che, Fangru Zhou, Xianglong Liu, Xin Ma, Yu Chen. Fast and Incremental Loop Closure Detection using Proximity Graphs, pp. 378-385, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  • Code: https://github.com/AnshanTJU/FILD
  • Paper: Pire T, Corti J, Grinblat G. Online Object Detection and Localization on Stereo Visual SLAM System[J]. Journal of Intelligent & Robotic Systems: 1-10.
  • Code: https://github.com/CIFASIS/object-detection-sptam
  • Paper: Torres-Camara J M, Escalona F, Gomez-Donoso F, et al. Map Slammer: Densifying Scattered KSLAM 3D Maps with Estimated Depth[C]//Iberian Robotics Conference. Springer, Cham: 563-574.
  • Code: https://github.com/jmtc7/mapSlammer
  • Paper: Yu H, Lee B. Not Only Look But Observe: Variational Observation Model of Scene-Level 3D Multi-Object Understanding for Probabilistic SLAM[J]. arXiv preprint arXiv:1907.09760.
  • Code: https://github.com/bogus2000/NOLBO
  • Paper: Gomez-Ojeda R, Briales J, Gonzalez-Jimenez J. PL-SVO: Semi-direct Monocular Visual Odometry by combining points and line segments[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE: 4211-4216.
  • Code: https://github.com/rubengooj/pl-svo
  • Paper: Gomez-Ojeda R, Gonzalez-Jimenez J. Robust stereo visual odometry through a probabilistic combination of points and line segments[C]//2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE: 2521-2526.
  • Code: https://github.com/rubengooj/stvo-pl
  • Paper: Gomez-Ojeda R, Zuñiga-Noël D, Moreno F A, et al. PL-SLAM: a Stereo SLAM System through the Combination of Points and Line Segments[J]. arXiv preprint arXiv:1705.09479.
  • Code: https://github.com/rubengooj/pl-slam
  • Gomez-Ojeda R, Moreno F A, Zuñiga-Noël D, et al. PL-SLAM: a stereo SLAM system through the combination of points and line segments[J]. IEEE Transactions on Robotics, 35(3): 734-746.
  • Paper: He Y, Zhao J, Guo Y, et al. PL-VIO: Tightly-coupled monocular visual–inertial odometry using point and line features[J]. Sensors, 18(4): 1159.
  • Code: https://github.com/HeYijia/PL-VIO
  • Code: https://github.com/Jichao-Peng/VINS-Mono-Optimization
  • Paper: Vakhitov A, Lempitsky V. Learnable line segment descriptor for visual SLAM[J]. IEEE Access, 7: 39923-39934.
  • Code: https://github.com/alexandervakhitov/lld-slam; video
  • Paper: Wietrzykowski J. On the representation of planes for efficient graph-based SLAM with high-level features[J]. Journal of Automation Mobile Robotics and Intelligent Systems, 10.
  • Code: https://github.com/LRMPUT/PlaneSLAM
  • Another open-source project by the author (no corresponding paper found): https://github.com/LRMPUT/PUTSLAM
  • Paper: Ferrer G. Eigen-Factors: Plane Estimation for Multi-Frame and Time-Continuous Point Cloud Alignment[C]//2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 1278-1284.
  • Code: https://gitlab.com/gferrer/eigen-factors-iros2019; demo video
  • Paper: Wietrzykowski J, Skrzypczyński P. PlaneLoc: Probabilistic global localization in 3-D using local planar features[J]. Robotics and Autonomous Systems, 113: 160-173.
  • Code: https://github.com/LRMPUT/PlaneLoc
  • Paper: Yang S, Song Y, Kaess M, et al. Pop-up SLAM: Semantic monocular plane SLAM for low-texture environments[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 1222-1229.
  • Code: https://github.com/shichaoy/pop_up_slam
  • Paper: Mu B, Liu S Y, Paull L, et al. SLAM with objects using a nonparametric pose graph[C]//2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 4602-4609.
  • Code: https://github.com/BeipengMu/objectSLAM; video
  • Paper: Grinvald M, Furrer F, Novkovic T, et al. Volumetric instance-aware semantic mapping and 3D object discovery[J]. IEEE Robotics and Automation Letters, 4(3): 3037-3044.
  • Code: https://github.com/ethz-asl/voxblox-plusplus
  • Paper: Yang S, Scherer S. CubeSLAM: Monocular 3-D object SLAM[J]. IEEE Transactions on Robotics, 35(4): 925-938.
  • Code: https://github.com/shichaoy/cube_slam
  • Yes, this is the work that pulled me into the field: after reading the paper (then a preprint) in November 2018, I started studying object-level SLAM. Some of my own notes and a summary of Cube SLAM: link.
  • There is also a lot of interesting object-level SLAM work that has not been open-sourced:

  • Ok K, Liu K, Frey K, et al. Robust Object-based SLAM for High-speed Autonomous Navigation[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE: 669-675.
  • Li J, Meger D, Dudek G. Semantic Mapping for View-Invariant Relocalization[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE: 7108-7115.
  • Nicholson L, Milford M, Sünderhauf N. QuadricSLAM: Dual quadrics from object detections as landmarks in object-oriented SLAM[J]. IEEE Robotics and Automation Letters, 4(1): 1-8.
  • Paper: Sun K, Mohta K, Pfrommer B, et al. Robust stereo visual inertial odometry for fast autonomous flight[J]. IEEE Robotics and Automation Letters, 3(2): 965-972.
  • Code: https://github.com/KumarRobotics/msckf_vio; video
  • Paper: Bloesch M, Omari S, Hutter M, et al. Robust visual inertial odometry using a direct EKF-based approach[C]//2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 298-304.
  • Code: https://github.com/ethz-asl/rovio; video
  • Paper: Huai Z, Huang G. Robocentric visual-inertial odometry[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 6319-6326.
  • Code: https://github.com/rpng/R-VIO; video
  • Paper: Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual–inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research, 34(3): 314-334.
  • Code: https://github.com/ethz-asl/okvis
  • Paper: Mur-Artal R, Tardós J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters, 2(2): 796-803.
  • Code: https://github.com/jingpang/LearnVIORB (VIORB itself was never open-sourced; this is a re-implementation by Jing Wang)
  • Paper: Qin T, Li P, Shen S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics, 34(4): 1004-1020.
  • Code: https://github.com/HKUST-Aerial-Robotics/VINS-Mono
  • Stereo version: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion
  • Mobile version: https://github.com/HKUST-Aerial-Robotics/VINS-Mobile
  • Paper: Shan Z, Li R, Schwertfeger S. RGBD-Inertial Trajectory Estimation and Mapping for Ground Robots[J]. Sensors, 19(10): 2251.
  • Code: https://github.com/STAR-Center/VINS-RGBD; video
  • Paper: Geneva P, Eckenhoff K, Lee W, et al. OpenVINS: A research platform for visual-inertial estimation[C]//IROS 2019 Workshop on Visual-Inertial Navigation: Challenges and Applications, Macau, China.
  • Code: https://github.com/rpng/open_vins
  • Paper: Tschopp F, Riner M, Fehr M, et al. VersaVIS—An Open Versatile Multi-Camera Visual-Inertial Sensor Suite[J]. Sensors, 20(5): 1439.
  • Code: https://github.com/ethz-asl/versavis
  • Paper: Eckenhoff K, Geneva P, Huang G. Closed-form preintegration methods for graph-based visual–inertial navigation[J]. The International Journal of Robotics Research, 2018.
  • Code: https://github.com/rpng/cpi; video
  • Paper: Kochanov D, Ošep A, Stückler J, et al. Scene flow propagation for semantic mapping and object discovery in dynamic street scenes[C]//Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on. IEEE: 1785-1792.
  • Code: https://github.com/ganlumomo/DynamicSemanticMapping; wiki
  • Paper: Yu C, Liu Z, Liu X J, et al. DS-SLAM: A semantic visual SLAM towards dynamic environments[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 1168-1174.
  • Code: https://github.com/ivipsourcecode/DS-SLAM
  • Paper: Rünz M, Agapito L. Co-Fusion: Real-time segmentation, tracking and fusion of multiple objects[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4471-4478.
  • Code: https://github.com/martinruenz/co-fusion; video
  • Paper: Newcombe R A, Fox D, Seitz S M. DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 343-352.
  • Code: https://github.com/mihaibujanca/dynamicfusion
  • Paper: Palazzolo E, Behley J, Lottes P, et al. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals[J]. arXiv preprint arXiv:1905.02082.
  • Code: https://github.com/PRBonn/refusion; video
  • Paper: Prisacariu V A, Kähler O, Golodetz S, et al. InfiniTAM v3: A framework for large-scale 3D reconstruction with loop closure[J]. arXiv preprint arXiv:1708.00783.
  • Code: https://github.com/victorprad/InfiniTAM; project page
  • Paper: Dai A, Nießner M, Zollhöfer M, et al. BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration[J]. ACM Transactions on Graphics (TOG), 36(4): 76a.
  • Code: https://github.com/niessner/BundleFusion; project page
  • Paper: Newcombe R A, Izadi S, Hilliges O, et al. KinectFusion: Real-time dense surface mapping and tracking[C]//2011 10th IEEE International Symposium on Mixed and Augmented Reality. IEEE: 127-136.
  • Code: https://github.com/chrdiller/KinectFusionApp
  • Paper: Whelan T, Salas-Moreno R F, Glocker B, et al. ElasticFusion: Real-time dense SLAM and light source estimation[J]. The International Journal of Robotics Research, 35(14): 1697-1716.
  • Code: https://github.com/mp3guy/ElasticFusion
  • Work from the same team as ElasticFusion (Imperial College London, Stefan Leutenegger, Google Scholar)
  • Paper: Whelan T, Kaess M, Johannsson H, et al. Real-time large-scale dense RGB-D SLAM with volumetric fusion[J]. The International Journal of Robotics Research, 34(4-5): 598-626.
  • Code: https://github.com/mp3guy/Kintinuous
  • Paper: Choi S, Zhou Q Y, Koltun V. Robust reconstruction of indoor scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 5556-5565.
  • Code: https://github.com/qianyizh/ElasticReconstruction; author's homepage
  • Paper: Han L, Fang L. FlashFusion: Real-time Globally Consistent Dense 3D Reconstruction using CPU Computing[C]. RSS, 2018.
  • Code (never released): https://github.com/lhanaf/FlashFusion; project page
  • Paper: Labbé M, Michaud F. RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation[J]. Journal of Field Robotics, 36(2): 416-446.
  • Code: https://github.com/introlab/rtabmap; video; project page
  • Paper: Lan Z, Yew Z J, Lee G H. Robust Point Cloud Based Reconstruction of Large-Scale Outdoor Scenes[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 9690-9698.
  • Code: https://github.com/ziquan111/RobustPCLReconstruction; video
  • Paper: Wang C, Guo X. Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops: 49-53.
  • Code: https://github.com/chaowang15/plane-opt-rgbd
  • Paper: Wang K, Gao F, Shen S. Real-time scalable dense surfel mapping[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE: 6919-6925.
  • Code: https://github.com/HKUST-Aerial-Robotics/DenseSurfelMapping
  • Paper: Schöps T, Sattler T, Pollefeys M. SurfelMeshing: Online surfel-based mesh reconstruction[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence.
  • Code: https://github.com/puzzlepaint/surfelmeshing
  • Paper: Concha Belenguer A, Civera Sancho J. DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence[C]//Proc. IEEE/RSJ Int. Conf. Intell. Rob. Syst. (ART-2015-92153).
  • Code: https://github.com/alejocb/dpptam
  • Superpixel-based monocular SLAM: Using Superpixels in Monocular SLAM, ICRA 2014; Google Scholar
  • Paper: Yang Z, Gao F, Shen S. Real-time monocular dense mapping on aerial robots using visual-inertial fusion[C]//2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017: 4552-4559.
  • Code: https://github.com/dvorak0/VI-MEAN; video
  • Paper: Pizzoli M, Forster C, Scaramuzza D. REMODE: Probabilistic, monocular dense reconstruction in real time[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 2609-2616.
  • Code: https://github.com/uzh-rpg/rpg_open_remode
  • Code: https://github.com/ayushgaud/ORB_SLAM2
  • Dyson Robotics Lab, Imperial College London
  • Paper: Czarnowski J, Laidlow T, Clark R, et al. DeepFactors: Real-Time Probabilistic Dense Monocular SLAM[J]. arXiv preprint arXiv:2001.05049.
  • Code (not yet released): https://github.com/jczarnowski/DeepFactors
  • Other papers: Bloesch M, Czarnowski J, Clark R, et al. CodeSLAM—learning a compact, optimisable representation for dense visual SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 2560-2568.
  • Prof. Shaojie Shen's group at HKUST
  • Paper: Ling Y, Wang K, Shen S. Probabilistic dense reconstruction from a moving camera[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE: 6364-6371.
  • Code: https://github.com/ygling2008/probabilistic_mapping
  • The code for another dense-reconstruction paper was never released (GitHub): Ling Y, Shen S. Real-time dense mapping for online processing and navigation[J]. Journal of Field Robotics, 36(5): 1004-1036.
  • Paper: Mur-Artal R, Tardós J D. Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM[C]//Robotics: Science and Systems, 2015.
  • Code (not officially open-sourced; a re-implementation by Yijia He): https://github.com/HeYijia/ORB_SLAM2
  • Semi-dense mapping with line segments added:

  • Paper: He S, Qin X, Zhang Z, et al. Incremental 3D line segment extraction from semi-dense SLAM[C]//2018 24th International Conference on Pattern Recognition (ICPR). IEEE: 1658-1663.
  • Code: https://github.com/shidahe/semidense-lines
  • A follow-up by the author that uses this to guide remote grasping: https://github.com/atlas-jj/ORB-SLAM-free-space-carving
  • GTSAM: https://github.com/borglab/gtsam; official site
  • g2o: https://github.com/RainerKuemmerle/g2o
  • Ceres Solver: https://ceres-solver.org/
  • Paper: Liu H, Chen M, Zhang G, et al. ICE-BA: Incremental, consistent and efficient bundle adjustment for visual-inertial SLAM[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 1974-1982.
  • Code: https://github.com/baidu/ICE-BA
  • Paper: Dong J, Lv Z. miniSAM: A Flexible Factor Graph Non-linear Least Squares Optimization Framework[J]. arXiv preprint arXiv:1909.00903.
  • Code: https://github.com/dongjing3309/minisam; documentation
  • Paper: Aloise I, Della Corte B, Nardi F, et al. Systematic Handling of Heterogeneous Geometric Primitives in Graph-SLAM Optimization[J]. IEEE Robotics and Automation Letters, 4(3): 2738-2745.
  • Code: https://srrg.gitlab.io/sashago-website/index.html#
  • Paper: Hsiao M, Kaess M. MH-iSAM2: Multi-hypothesis iSAM using Bayes Tree and Hypo-tree[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE: 1274-1280.
  • Code: https://bitbucket.org/rpl_cmu/mh-isam2_lib/src/master/
  • Paper: Blanco-Claraco J L. A Modular Optimization Framework for Localization and Mapping[J]. Proc. of Robotics: Science and Systems (RSS), Freiburg im Breisgau, Germany.
  • Code: https://github.com/MOLAorg/mola; video
