Summary of previous articles:
Multi-sensor fusion SLAM:
Open-source framework tests
1. Tixiao Shan's latest work LVI-SAM (LIO-SAM + VINS-Mono), a visual-LiDAR-inertial odometry SLAM framework: environment setup and running it
2. LiDAR SLAM framework study: introduction to the A-LOAM framework
8. LiDAR SLAM framework study: LeGO-LOAM framework introduction and demo
11. LiDAR-inertial SLAM framework study: LIO-SAM framework introduction and demo
12. LiDAR SLAM framework study: installing livox-loam and running a dataset
22. Running the official dataset with R3LIVE from the MARS Lab, University of Hong Kong
Running the official dataset with FAST-LIO2 from the MARS Lab, University of Hong Kong
27. Running the EuRoC dataset with the open-source VIO-SLAM framework VINS-Mono
Real-vehicle tests
7. LiDAR SLAM framework study: A-LOAM with a RoboSense 16-beam LiDAR, indoor mapping
9. LiDAR SLAM framework study: LeGO-LOAM with a RoboSense 16-beam LiDAR, outdoor mapping recorded and saved, compared with other frameworks
13. LiDAR SLAM framework study: livox-loam with a Livox Mid-70 LiDAR, running outdoors in real time
16. LiDAR-inertial SLAM framework study: configuring our own sensors and running LIO-SAM outdoors in real time
18. Rough comparison of several SLAM frameworks (A-LOAM, LeGO-LOAM, LIO-SAM, livox-loam) in outdoor tests
Running a self-recorded dataset (Mid-70 and SBG Ellipse-N INS) with FAST-LIO2 from the MARS Lab, University of Hong Kong
Calibration
14. LiDAR-inertial SLAM framework study: IMU intrinsic calibration
15. LiDAR-inertial SLAM framework study: LiDAR-IMU extrinsic calibration (1)
20. LiDAR-visual-inertial SLAM framework study: camera intrinsic calibration
21. LiDAR-visual-inertial SLAM framework study: camera-LiDAR extrinsic calibration (1)
Studying open-source frameworks
3. A-LOAM code walkthrough, part 1: project files (excluding the main source files)
4. A-LOAM code walkthrough, part 2: scanRegistration.cpp, front-end LiDAR processing and feature extraction
5. A-LOAM code walkthrough, part 3: laserOdometry.cpp, front-end LiDAR odometry and coarse pose estimation
6. A-LOAM code walkthrough, part 4: laserMapping.cpp, back-end mapping and frame pose estimation (optimization)
10. LeGO-LOAM framework: algorithm principles and improvements
17. LiDAR-inertial SLAM framework study: the IMU and IMU pre-integration
19. LIO-SAM project code walkthrough: code structure and file descriptions
23. LIO-SAM project code walkthrough: background knowledge
From this post onward, the series enters the Localization and Navigation part.
25. Differences between Mapping and Localization in SLAM, and some thoughts
1. Simulating 2D path planning: gmapping, amcl, map_server, move_base
2. Implementing 2D path planning: gmapping, amcl, map_server, move_base
1. Global localization: testing the open-source LIO-SAM_based_relocalization framework on a self-recorded dataset
2. Global localization: testing the open-source livox-relocalization framework on a self-recorded dataset
3. Global localization: LIO-SAM mapping and localization with RTK as a global constraint (1)
Occupancy grid maps:
28. Generating 2D and 3D occupancy grid maps from 3D point clouds in real time and offline
This post covers running ORB-SLAM2 with an Intel RealSense D435i on ROS Melodic. First, get the librealsense SDK:
git clone https://github.com/IntelRealSense/librealsense
cd librealsense
It is recommended to use a slightly older SDK release; I used librealsense-2.36.0 (librealsense-2.45.0 also works).
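If you cloned the repository rather than downloading a release archive, a specific release can be selected by checking out its tag before building (a sketch; the tag name is an assumption based on the release number):
git checkout v2.45.0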
sudo apt-get install libudev-dev pkg-config libgtk-3-dev
sudo apt-get install libusb-1.0-0-dev pkg-config
sudo apt-get install libglfw3-dev
sudo apt-get install libssl-dev
In the librealsense folder, install the udev rules:
sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && udevadm trigger
If you later uninstall and reinstall, the 99-realsense-libusb.rules file needs to be deleted first.
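A sketch of that cleanup, using the path from the copy command above:
sudo rm /etc/udev/rules.d/99-realsense-libusb.rules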
mkdir build
cd build
cmake ../ -DBUILD_EXAMPLES=true
make
sudo make install
Check that the camera works with the realsense-viewer tool:
realsense-viewer
Next, set up the realsense-ros ROS wrapper:
sudo apt-get install ros-melodic-rgbd-launch
git clone https://github.com/IntelRealSense/realsense-ros.git
git clone https://github.com/pal-robotics/ddynamic_reconfigure.git
cd ~/catkin_ws && catkin_make
After the build finishes, test with the following command:
roslaunch realsense2_camera demo_pointcloud.launch
As with the SDK, slightly older releases are recommended; I used ddynamic_reconfigure-0.2.2 with realsense-ros-2.3.2 (ddynamic_reconfigure-0.3.2 with realsense-ros-2.3.2 also works).
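If you cloned the master branches above, the specific releases can be selected with git checkout before running catkin_make (a sketch; the tag names and the assumption that the repos sit in ~/catkin_ws/src are mine):
cd ~/catkin_ws/src/realsense-ros && git checkout 2.3.2
cd ~/catkin_ws/src/ddynamic_reconfigure && git checkout 0.2.2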
Start the camera node:
roslaunch realsense2_camera rs_rgbd.launch
Error reported:
/opt/ros/melodic/lib/nodelet/nodelet: symbol lookup error: /home/qjs/code/D435_ws/devel/lib//librealsense2_camera.so: undefined symbol: _ZN20ddynamic_reconfigure19DDynamicReconfigure16registerVariableIiEEvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEET_RKN5boost8functionIFvSA_EEES9_SA_SA_S9_ [camera/realsense2_camera_manager-2] process has died [pid 27469, exit code 127, cmd /opt/ros/melodic/lib/nodelet/nodelet manager __name:=realsense2_camera_manager __log:=/home/qjs/.ros/log/1eca8a82-e8c4-11ec-807e-70b5e831e2ce/camera-realsense2_camer_manager-2.log].
log file: /home/qjs/.ros/log/1eca8a82-e8c4-11ec-807e-70b5e831e2ce/camera-realsense2_camera_manager-2*.log
sudo find / -name librealsense2_camera.so
Delete librealsense2_camera.so in one of the reported locations so that only a single copy remains.
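For example, if the find command reports one copy inside the catkin workspace and another under a system prefix, remove one of them (the path below is purely illustrative; use the paths from your own find output):
sudo rm /usr/local/lib/librealsense2_camera.so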
Warning reported:
10/06 21:55:35,555 WARNING [140685336372992] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11
This warning can be ignored; the camera still works.
Next, install the dependencies for Pangolin and ORB-SLAM2:
sudo apt-get install cmake gcc g++ git vim
sudo apt-get install libglew-dev
sudo apt-get install libboost-dev libboost-thread-dev
sudo apt-get install libboost-filesystem-dev
sudo apt-get install libpython2.7-dev
sudo apt-get install build-essential
For these two libraries (Pangolin and Eigen), versions 0.5 and 3.2 respectively are recommended; they are less likely to cause build errors.
cd Pangolin
mkdir build
cd build
cmake ..
make
sudo make install
cd eigen
mkdir build
cd build
cmake ..
make
sudo make install
Download the modified code and build it:
mkdir build
cd build
cmake ..
make -j8
sudo make install
Delete the three build folders: ORB_SLAM2_modified/build, ORB_SLAM2_modified/Thirdparty/DBoW2/build, and ORB_SLAM2_modified/Thirdparty/g2o/build.
Open a terminal and rebuild DBoW2:
cd ORB_SLAM2_modified/Thirdparty/DBoW2
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
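The Thirdparty/g2o/build folder was deleted above as well, so g2o presumably needs the same rebuild before the main project (a sketch, assuming the standard ORB_SLAM2 Thirdparty layout):
cd ../../g2o
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make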
Then open a terminal in the ORB_SLAM2_modified folder again and rebuild the main project:
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j8
Dataset test
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM1.yaml datasets/rgbd_dataset_freiburg1_xyz datasets/rgbd_dataset_freiburg1_xyz/association.txt
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM1.yaml datasets/rgbd_dataset_freiburg1_room datasets/rgbd_dataset_freiburg1_room/association.txt
The saved point cloud is written to the project's root folder as vslam.pcd.
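rgbd_tum needs the association.txt that pairs RGB and depth timestamps; if the downloaded TUM sequence does not include one, it can be generated with the associate.py script from the TUM RGB-D benchmark tools (a sketch; the script location is an assumption):
python associate.py datasets/rgbd_dataset_freiburg1_xyz/rgb.txt datasets/rgbd_dataset_freiburg1_xyz/depth.txt > datasets/rgbd_dataset_freiburg1_xyz/association.txt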

Note the location of your ORB_SLAM2_modified folder, then open ~/.bashrc:
gedit ~/.bashrc
Append the following line at the end (replace the path with your own) and save:
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/your/path/ORB_SLAM2_modified/Examples/ROS
Then apply it in the terminal:
source ~/.bashrc
Configure the ROS environment as well:
sudo gedit /opt/ros/melodic/setup.bash
Add the same line:
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/your/path/ORB_SLAM2_modified/Examples/ROS
Then in the terminal run:
source /opt/ros/melodic/setup.bash
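A quick way to confirm the path was picked up:
echo $ROS_PACKAGE_PATH
The ORB_SLAM2_modified/Examples/ROS entry should appear in the output.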
Delete the contents of ORB_SLAM2_modified/Examples/ROS/ORB_SLAM2/build, then build:
chmod +x build_ros.sh
./build_ros.sh
If ROS reports the error ModuleNotFoundError: No module named 'rospkg':
pip install rospkg
Boost library problem:
Add -lboost_system in ~/catkin_ws/src/ORB_SLAM2/Examples/ROS/ORB_SLAM2/CMakeLists.txt, as sketched below.
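A sketch of where the flag typically goes, assuming the stock ORB_SLAM2 ROS CMakeLists.txt layout (the exact variable names in your copy may differ):
set(LIBS
  ${OpenCV_LIBS}
  ${EIGEN3_LIBS}
  ${Pangolin_LIBRARIES}
  ${PROJECT_SOURCE_DIR}/../../../Thirdparty/DBoW2/lib/libDBoW2.so
  ${PROJECT_SOURCE_DIR}/../../../Thirdparty/g2o/lib/libg2o.so
  ${PROJECT_SOURCE_DIR}/../../../lib/libORB_SLAM2.so
  -lboost_system
)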
If an error about the g2o library appears, it is most likely a conflict with the g2o shipped with ROS; uninstall that package:
sudo apt-get remove ros-melodic-libg2o
Launch the camera again and read the color camera intrinsics:
roslaunch realsense2_camera rs_rgbd.launch
rostopic echo /camera/color/camera_info
header:
  seq: 1469
  stamp:
    secs: 1654853975
    nsecs: 751267910
  frame_id: "camera_color_optical_frame"
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K: [611.2393798828125, 0.0, 319.9852600097656, 0.0, 610.677490234375, 242.4569854736328, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [611.2393798828125, 0.0, 319.9852600097656, 0.0, 0.0, 610.677490234375, 242.4569854736328, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
The K printed in the terminal contains the intrinsic parameters, K = [fx, 0, cx, 0, fy, cy, 0, 0, 1], so fx = 611.239, fy = 610.677, cx = 319.985, cy = 242.457. The D435i's IR stereo baseline is 50 mm.
Update the corresponding parameters to get a new D435i.yaml:
%YAML:1.0
#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 611.239380
Camera.fy: 610.677490
Camera.cx: 319.985260
Camera.cy: 242.456985
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
Camera.p3: 0.0
Camera.width: 640
Camera.height: 480
# Camera frames per second
Camera.fps: 30.0
# IR projector baseline times fx (approx.)
# bf = baseline (in meters) * fx; the D435i baseline is 50 mm
Camera.bf: 50.0
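# Note: strictly applying the formula above gives 0.05 * 611.24 ≈ 30.56;
# 50.0 is the value this configuration uses.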
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1
# Close/Far threshold. Baseline times.
ThDepth: 40.0
# Depthmap values factor
DepthMapFactor: 1000.0
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------
# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000
# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2
# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8
# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7
#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
PointCloudMapping.Resolution: 0.01
meank: 50
thresh: 2.0
Run (from the ORB_SLAM2_modified directory, so that the vocabulary and yaml paths resolve):
roslaunch realsense2_camera rs_rgbd.launch
rosrun ORB_SLAM2 RGBD Vocabulary/ORBvoc.txt Examples/RGB-D/D435i.yaml
Record a bag:
rosbag record -o 20220611.bag /camera/color/image_raw /camera/aligned_depth_to_color/image_raw
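To replay the recorded data later instead of running the live camera (rosbag record -o appends a timestamp to the file name, so substitute the actual bag name):
rosbag play <recorded_bag_name>.bag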
Reference: https://blog.csdn.net/weixin_44946842/article/details/124055318