Table of Contents
- LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
- Abstract
- I. Introduction
- II. Related Work
- III. Lidar Inertial Odometry Via Smoothing and Mapping
  - A. System Overview
  - B. IMU Preintegration Factor
  - C. Lidar Odometry Factor
  - D. GPS Factor
  - E. Loop Closure Factor
- IV. Experiment
- V. Conclusion and Discussion
LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
Code: https://github.com/TixiaoShan/LIO-SAM Video: https://www.youtube.com/watch?v=A0H8CoORZJU
Abstract
We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. LIO-SAM formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system. The estimated motion from inertial measurement unit (IMU) pre-integration de-skews point clouds and produces an initial guess for lidar odometry optimization. The obtained lidar odometry solution is used to estimate the bias of the IMU. To ensure high performance in real-time, we marginalize old lidar scans for pose optimization, rather than matching lidar scans to a global map. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes, and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes." The proposed method is extensively evaluated on datasets gathered from three platforms over various scales and environments. (Translator's note: "smoothing and mapping" here refers to optimization with the GTSAM library.)
I. Introduction
State estimation, localization and mapping are fundamental prerequisites for a successful intelligent mobile robot, required for feedback control, obstacle avoidance, and planning, among many other capabilities. Using vision-based and lidar-based sensing, great efforts have been devoted to achieving high-performance real-time simultaneous localization and mapping (SLAM) that can support a mobile robot’s six degree-of-freedom state estimation. Vision-based methods typically use a monocular or stereo camera and triangulate features across successive images to determine the camera motion. Although vision-based methods are especially suitable for place recognition, their sensitivity to initialization, illumination, and range makes them unreliable when they alone are used to support an autonomous navigation system. On the other hand, lidar-based methods are largely invariant to illumination change. Especially with the recent availability of long-range, high-resolution 3D lidar, such as the Velodyne VLS-128 and Ouster OS1-128, lidar becomes more suitable to directly capture the fine details of an environment in 3D space. Therefore, this paper focuses on lidar-based state estimation and mapping methods.
Many lidar-based state estimation and mapping methods have been proposed in the last two decades. Among them, the lidar odometry and mapping (LOAM) method proposed in [1] for low-drift and real-time state estimation and mapping is among the most widely used. LOAM, which uses a lidar and an inertial measurement unit (IMU), achieves state-of-the-art performance and has been ranked as the top lidar-based method since its release on the KITTI odometry benchmark site [2]. Despite its success, LOAM presents some limitations: by saving its data in a global voxel map, it is often difficult to perform loop closure detection and incorporate other absolute measurements, e.g., GPS, for pose correction. Its online optimization process becomes less efficient when this voxel map becomes dense in a feature-rich environment. LOAM also suffers from drift in large-scale tests, as it is a scan-matching based method at its core.
In this paper, we propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, to address the aforementioned problems. We assume a nonlinear motion model for point cloud de-skew, estimating the sensor motion during a lidar scan using raw IMU measurements. In addition to de-skewing point clouds, the estimated motion serves as an initial guess for lidar odometry optimization. The obtained lidar odometry solution is then used to estimate the bias of the IMU in the factor graph. By introducing a global factor graph for robot trajectory estimation, we can efficiently perform sensor fusion using lidar and IMU measurements, incorporate place recognition among robot poses, and introduce absolute measurements, such as GPS positioning and compass heading, when they are available. This collection of factors from various sources is used for joint optimization of the graph. Additionally, we marginalize old lidar scans for pose optimization, rather than matching scans to a global map like LOAM. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes, and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes." The main contributions of our work can be summarized as follows:
• A tightly-coupled lidar inertial odometry framework built atop a factor graph, that is suitable for multi-sensor fusion and global optimization.
• An efficient, local sliding window-based scan-matching approach that enables real-time performance by registering selectively chosen new keyframes to a fixed-size set of prior sub-keyframes.
• The proposed framework is extensively validated with tests across various scales, vehicles, and environments.
II. Related Work
III. Lidar Inertial Odometry Via Smoothing and Mapping
A. System Overview
We first define frames and notation that we use throughout the paper. We denote the world frame as $\mathbf{W}$ and the robot body frame as $\mathbf{B}$. We also assume the IMU frame coincides with the robot body frame for convenience. The robot state $\mathbf{x}$ can be written as:

$$\mathbf{x}=\left[\mathbf{R}^{\mathsf{T}},\ \mathbf{p}^{\mathsf{T}},\ \mathbf{v}^{\mathsf{T}},\ \mathbf{b}^{\mathsf{T}}\right]^{\mathsf{T}} \tag{1}$$

where $\mathbf{R} \in SO(3)$ is the rotation matrix, $\mathbf{p} \in \mathbb{R}^{3}$ is the position vector, $\mathbf{v}$ is the speed, and $\mathbf{b}$ is the IMU bias. The transformation $\mathbf{T} \in SE(3)$ from $\mathbf{B}$ to $\mathbf{W}$ is represented as $\mathbf{T}=[\mathbf{R} \mid \mathbf{p}]$.
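As a concrete illustration, this state decomposition maps naturally onto GTSAM's navigation types, which the open-source LIO-SAM implementation builds on. A minimal sketch (not the authors' code), assuming a Z-up world frame:

```cpp
#include <gtsam/geometry/Pose3.h>
#include <gtsam/navigation/NavState.h>
#include <gtsam/navigation/ImuBias.h>

int main() {
  // T = [R | p] in SE(3): the body-to-world transform.
  gtsam::Pose3 T(gtsam::Rot3(), gtsam::Point3(0.0, 0.0, 0.0));

  // NavState bundles (R, p, v); the IMU bias b is kept as a separate
  // variable, matching the state x = [R^T, p^T, v^T, b^T]^T of Eq. (1).
  gtsam::NavState state(T, gtsam::Velocity3(0.0, 0.0, 0.0));
  gtsam::imuBias::ConstantBias bias;  // 3-axis accel + 3-axis gyro bias

  // Transform a point from body frame B to world frame W: p_W = R * p_B + p.
  gtsam::Point3 pW = T.transformFrom(gtsam::Point3(1.0, 0.0, 0.0));
  (void)pW; (void)state; (void)bias;
  return 0;
}
```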
An overview of the proposed system is shown in Figure 1. The system receives sensor data from a 3D lidar, an IMU and optionally a GPS. We seek to estimate the state of the robot and its trajectory using the observations of these sensors. This state estimation problem can be formulated as a maximum a posteriori (MAP) problem. We use a factor graph to model this problem, as it is better suited to perform inference when compared with Bayes nets. With the assumption of a Gaussian noise model, the MAP inference for our problem is equivalent to solving a nonlinear least-squares problem [18]. Note that without loss of generality, the proposed system can also incorporate measurements from other sensors, such as elevation from an altimeter or heading from a compass.

Fig. 1: The system structure of LIO-SAM. The system receives input from a 3D lidar, an IMU and optionally a GPS. Four types of factors are introduced to construct the factor graph: (a) IMU preintegration factor, (b) lidar odometry factor, (c) GPS factor, and (d) loop closure factor. The generation of these factors is discussed in Section III.
We introduce four types of factors along with one variable type for factor graph construction. This variable, representing the robot's state at a specific time, is attributed to the nodes of the graph. The four types of factors are: (a) IMU preintegration factors, (b) lidar odometry factors, (c) GPS factors, and (d) loop closure factors. A new robot state node x is added to the graph when the change in robot pose exceeds a user-defined threshold. The factor graph is optimized upon the insertion of a new node using incremental smoothing and mapping with the Bayes tree (iSAM2) [19]. The process for generating these factors is described in the following sections.
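To make the graph construction concrete, here is a minimal GTSAM sketch (again, not the authors' code) of adding a new state node linked by a between factor and running an incremental iSAM2 update; the noise sigmas and the `odomIncrement` relative pose are hypothetical placeholders standing in for the scan-matching result:

```cpp
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using gtsam::symbol_shorthand::X;  // X(k): robot pose node at keyframe k

int main() {
  gtsam::ISAM2 isam;
  gtsam::NonlinearFactorGraph graph;
  gtsam::Values initial;

  // Anchor the first node with a prior factor.
  auto priorNoise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector(6) << 1e-2, 1e-2, 1e-2, 1e-2, 1e-2, 1e-2).finished());
  graph.add(gtsam::PriorFactor<gtsam::Pose3>(X(0), gtsam::Pose3(), priorNoise));
  initial.insert(X(0), gtsam::Pose3());
  isam.update(graph, initial);
  graph.resize(0);
  initial.clear();

  // A new node X(1) is added once the pose change exceeds a threshold;
  // 'odomIncrement' stands in for the relative pose from scan-matching.
  gtsam::Pose3 odomIncrement(gtsam::Rot3(), gtsam::Point3(1.0, 0.0, 0.0));
  auto odomNoise = gtsam::noiseModel::Diagonal::Sigmas(
      (gtsam::Vector(6) << 1e-2, 1e-2, 1e-2, 1e-1, 1e-1, 1e-1).finished());
  graph.add(gtsam::BetweenFactor<gtsam::Pose3>(X(0), X(1), odomIncrement, odomNoise));
  initial.insert(X(1), odomIncrement);

  isam.update(graph, initial);  // incremental optimization on node insertion
  gtsam::Values estimate = isam.calculateEstimate();
  (void)estimate;
  return 0;
}
```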
B. IMU Preintegration Factor
The measurements of angular velocity and acceleration from an IMU are defined using Eqs. 2 and 3:

$$\hat{\boldsymbol{\omega}}_{t}=\boldsymbol{\omega}_{t}+\mathbf{b}_{t}^{\boldsymbol{\omega}}+\mathbf{n}_{t}^{\boldsymbol{\omega}} \tag{2}$$

$$\hat{\mathbf{a}}_{t}=\mathbf{R}_{t}^{\mathbf{BW}}\left(\mathbf{a}_{t}-\mathbf{g}\right)+\mathbf{b}_{t}^{\mathbf{a}}+\mathbf{n}_{t}^{\mathbf{a}} \tag{3}$$

where $\hat{\boldsymbol{\omega}}_{t}$ and $\hat{\mathbf{a}}_{t}$ are the raw IMU measurements in $\mathbf{B}$ at time $t$. $\hat{\boldsymbol{\omega}}_{t}$ and $\hat{\mathbf{a}}_{t}$ are affected by a slowly varying bias $\mathbf{b}_{t}$ and white noise $\mathbf{n}_{t}$. $\mathbf{R}_{t}^{\mathbf{BW}}$ is the rotation matrix from $\mathbf{W}$ to $\mathbf{B}$. $\mathbf{g}$ is the constant gravity vector in $\mathbf{W}$.
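These raw measurements are exactly what a preintegration routine consumes between two keyframes. A minimal sketch using GTSAM's IMU preintegration (the gravity magnitude, noise covariances, sample values, and 0.005 s period below are placeholder assumptions, not the paper's calibration):

```cpp
#include <gtsam/navigation/ImuFactor.h>
#include <gtsam/navigation/PreintegrationParams.h>
#include <gtsam/inference/Symbol.h>

using gtsam::symbol_shorthand::B;  // bias nodes
using gtsam::symbol_shorthand::V;  // velocity nodes
using gtsam::symbol_shorthand::X;  // pose nodes

int main() {
  // Z-up world frame with |g| = 9.81 m/s^2 (placeholder magnitude).
  auto params = gtsam::PreintegrationParams::MakeSharedU(9.81);
  params->accelerometerCovariance = gtsam::I_3x3 * 1e-3;  // placeholder noise
  params->gyroscopeCovariance     = gtsam::I_3x3 * 1e-4;
  params->integrationCovariance   = gtsam::I_3x3 * 1e-8;

  gtsam::imuBias::ConstantBias priorBias;  // current bias estimate b_t
  gtsam::PreintegratedImuMeasurements pim(params, priorBias);

  // Feed each raw sample (a_hat, w_hat) per Eqs. (2)-(3); dt is the IMU period.
  gtsam::Vector3 accRaw(0.0, 0.0, 9.81), gyrRaw(0.0, 0.0, 0.01);
  pim.integrateMeasurement(accRaw, gyrRaw, /*dt=*/0.005);

  // One preintegration factor constrains consecutive states and the bias.
  gtsam::ImuFactor imuFactor(X(0), V(0), X(1), V(1), B(0), pim);
  (void)imuFactor;
  return 0;
}
```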
We can now use the measurements from the IMU to infer the motion of the robot. The velocity, position and rotation of the robot at time $t+\Delta t$ can be computed as follows:

$$\mathbf{v}_{t+\Delta t}=\mathbf{v}_{t}+\mathbf{g} \Delta t+\mathbf{R}_{t}\left(\hat{\mathbf{a}}_{t}-\mathbf{b}_{t}^{\mathbf{a}}-\mathbf{n}_{t}^{\mathbf{a}}\right) \Delta t \tag{4}$$

$$\mathbf{p}_{t+\Delta t}=\mathbf{p}_{t}+\mathbf{v}_{t} \Delta t+\frac{1}{2} \mathbf{g} \Delta t^{2}+\frac{1}{2} \mathbf{R}_{t}\left(\hat{\mathbf{a}}_{t}-\mathbf{b}_{t}^{\mathbf{a}}-\mathbf{n}_{t}^{\mathbf{a}}\right) \Delta t^{2} \tag{5}$$

$$\mathbf{R}_{t+\Delta t}=\mathbf{R}_{t} \exp \left(\left(\hat{\boldsymbol{\omega}}_{t}-\mathbf{b}_{t}^{\boldsymbol{\omega}}-\mathbf{n}_{t}^{\boldsymbol{\omega}}\right) \Delta t\right) \tag{6}$$
A comparison with VINS: LIO-SAM also integrates in the world frame here. (VINS treats the measured acceleration as co-directional with gravity and so uses a +, while LIO-SAM assumes the opposite direction and so uses a −; the sign itself does not matter and simply follows the chosen convention.) VINS writes the same relations in integral form; the notation differs, but the meaning matches LIO-SAM's equations. At most, one uses midpoint integration and the other Euler integration:

$$\mathbf{p}_{b_{k+1}}^{w}=\mathbf{p}_{b_{k}}^{w}+\mathbf{v}_{b_{k}}^{w} \Delta t_{k}+\iint_{t \in\left[t_{k}, t_{k+1}\right]}\left(\mathbf{R}_{t}^{w}\left(\hat{\mathbf{a}}_{t}-\mathbf{b}_{a_{t}}-\mathbf{n}_{a}\right)-\mathbf{g}^{w}\right) d t^{2}$$

$$\mathbf{v}_{b_{k+1}}^{w}=\mathbf{v}_{b_{k}}^{w}+\int_{t \in\left[t_{k}, t_{k+1}\right]}\left(\mathbf{R}_{t}^{w}\left(\hat{\mathbf{a}}_{t}-\mathbf{b}_{a_{t}}-\mathbf{n}_{a}\right)-\mathbf{g}^{w}\right) d t$$
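As a worked illustration of Eqs. (4)-(6), a minimal Euler-integration sketch with Eigen (noise terms dropped; the gravity sign and Z-up convention, as well as the state layout, are assumptions for illustration only):

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Hypothetical state container mirroring x = [R, p, v, b] of Eq. (1).
struct ImuState {
  Eigen::Matrix3d R = Eigen::Matrix3d::Identity();  // body-to-world rotation
  Eigen::Vector3d p = Eigen::Vector3d::Zero();      // position in W
  Eigen::Vector3d v = Eigen::Vector3d::Zero();      // velocity in W
  Eigen::Vector3d ba = Eigen::Vector3d::Zero();     // accelerometer bias
  Eigen::Vector3d bw = Eigen::Vector3d::Zero();     // gyroscope bias
};

// One Euler step of Eqs. (4)-(6); accRaw/gyrRaw are a_hat/w_hat in B.
ImuState propagate(const ImuState& s, const Eigen::Vector3d& accRaw,
                   const Eigen::Vector3d& gyrRaw, double dt) {
  const Eigen::Vector3d g(0.0, 0.0, -9.81);  // gravity in W (Z-up assumption)
  const Eigen::Vector3d a = accRaw - s.ba;   // bias-corrected, noise ignored
  const Eigen::Vector3d w = gyrRaw - s.bw;

  ImuState n = s;
  n.v = s.v + g * dt + s.R * a * dt;                                  // Eq. (4)
  n.p = s.p + s.v * dt + 0.5 * g * dt * dt + 0.5 * s.R * a * dt * dt; // Eq. (5)

  // Eq. (6): R_{t+dt} = R_t * exp((w*dt)^), via Eigen's angle-axis exp map.
  const Eigen::Vector3d phi = w * dt;
  const double angle = phi.norm();
  const Eigen::Matrix3d dR =
      (angle < 1e-12) ? Eigen::Matrix3d::Identity()
                      : Eigen::AngleAxisd(angle, phi / angle).toRotationMatrix();
  n.R = s.R * dR;
  return n;
}
```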