The following sections summarize the Grid-based FastSLAM algorithm, which is one instance of FastSLAM. This algorithm estimates the trajectory of a mobile robot while simultaneously creating a grid map of the environment. Grid-based FastSLAM is a combination of a particle filter, such as Adaptive Monte Carlo Localization (amcl), and a mapping algorithm, such as occupancy grid mapping.
SLAM Fundamentals
SLAM stands for Simultaneous Localization and Mapping, sometimes referred to as Concurrent Localization and Mapping (CLAM). The SLAM algorithm combines localization and mapping, where a robot has access only to its own movement and sensory data. The robot must build a map while simultaneously localizing itself relative to that map.
The map and the robot pose will be uncertain, and the errors in the robot’s pose estimate and map will be correlated. The accuracy of the map depends on the accuracy of the localization and vice versa.
This is a chicken-and-egg problem: the map is needed for localization, and the robot's pose is needed for mapping. This makes SLAM a real challenge, but it is essential for mobile robotics.
Mobile robots must be able to move in environments they have never seen before. One example is a vacuum cleaner, where the map can also change due to moving furniture. Self-driving vehicles likewise require SLAM to update their maps while localizing themselves in them.
There are generally five categories of SLAM algorithms:
- Extended Kalman Filter SLAM (EKF)
- Sparse Extended Information Filter (SEIF)
- Extended Information Form (EIF)
- FastSLAM
- GraphSLAM
This post describes the FastSLAM approach, which uses a particle filter and a low-dimensional Extended Kalman Filter. This algorithm will be adapted to grid maps, which results in Grid-based FastSLAM. GraphSLAM, on the other hand, uses constraints to represent relationships between robot poses and the environment. With this, the algorithm tries to resolve all the constraints to create the most likely map given the data. An implementation of GraphSLAM is called Real-Time Appearance-Based Mapping (RTAB-Map).
Localization
In localization problems a map is known beforehand and the robot pose is estimated using its sensor measurements $z_{1:t}$, control inputs $u_{1:t}$ and its previous poses $x_{1:t-1}$. With this data, the new belief $p(x_{1:t}|x_{1:t-1}, z_{1:t}, u_{1:t})$ can be computed as a probability distribution.
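In recursive form, this is the well-known Bayes filter update (stated here for reference, with the known map $m$ included), which alternates a prediction using the motion model and a correction using the measurement model:
\[\begin{align} bel(x_t) = \eta \, p(z_t|x_t, m) \int p(x_t|x_{t-1}, u_t) \, bel(x_{t-1}) \, dx_{t-1} \end{align}\]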
The localization estimation can be done with an Extended Kalman Filter or Monte Carlo Localization. With the Monte Carlo particle filter approach (MCL), each particle consists of the robot pose $(x, y, \theta)$ and its importance weight $w$. With motion and sensor updates, followed by resampling, it is possible to estimate the robot's pose.
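As a rough sketch, one MCL iteration could look like the following Python snippet; `sample_motion_model` and `measurement_likelihood` are hypothetical placeholders for the robot-specific motion and sensor models.

```python
import numpy as np

def mcl_step(particles, u, z, sample_motion_model, measurement_likelihood):
    """One Monte Carlo Localization iteration (sketch).

    particles: (M, 3) array of poses (x, y, theta)
    u, z:      current control input and sensor measurement
    """
    M = len(particles)
    # Motion update: propagate each particle through a sampled motion model.
    particles = np.array([sample_motion_model(p, u) for p in particles])
    # Sensor update: weight each particle by the likelihood of the measurement.
    weights = np.array([measurement_likelihood(z, p) for p in particles])
    weights /= weights.sum()
    # Resampling: draw a new particle set in proportion to the weights.
    idx = np.random.choice(M, size=M, p=weights)
    return particles[idx]
```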
Mapping
In mapping problems the robot pose $x_{1:t}$ is known and the map $m_{t}$ at time $t$, either static or dynamic, is unknown. Therefore the mapping problem is to find the posterior belief of the map $p(m_t|x_{1:t}, z_{1:t})$ given the robot poses and its measurements $z_{1:t}$.
The main challenge in mapping is the number of state variables. In localization, only the robot's pose, with its $x$ and $y$ position, is estimated. A map, on the other hand, lies in a continuous space, which can lead to infinitely many variables being needed to describe it. Additional uncertainty is introduced by the sensor data. Other challenges are the space and the geometries that should be mapped, for example repetitive environments such as walkways with no doors, or corridors that all look alike.
The mapping algorithm that is described in this post is occupancy grid mapping. The algorithm can map any arbitrary environment by dividing it into a finite number of grid cells.
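A common way to implement the per-cell update, used here only as an illustrative assumption since the exact update rule is not shown above, is to store each cell's occupancy in log-odds form and update the cells observed by each range beam:

```python
import numpy as np

# Assumed inverse sensor model constants in log-odds form (illustrative values).
L_OCC, L_FREE, L_PRIOR = 0.85, -0.4, 0.0

def update_cells(log_odds_map, free_cells, hit_cell):
    """Update the cells observed by one range beam.

    log_odds_map: 2D array of per-cell log-odds values
    free_cells:   list of (row, col) indices traversed by the beam
    hit_cell:     (row, col) index where the beam ended
    """
    for r, c in free_cells:
        log_odds_map[r, c] += L_FREE - L_PRIOR
    r, c = hit_cell
    log_odds_map[r, c] += L_OCC - L_PRIOR
    return log_odds_map

def occupancy(log_odds_map):
    """Recover occupancy probabilities from the log-odds representation."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds_map))
```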
SLAM Characteristics
SLAM exists in two forms which are Online SLAM and Full SLAM. In both forms, the algorithm estimates a map of its environment. However, Online SLAM estimates only single poses of the robot at specific time instances. Given the measurements $z_{1:t}$ and the control inputs $u_{1:t}$ the problem is to find the posterior belief of the robot pose $x_{t}$ at time $t$ and the map $m_{t}$.
\[\begin{align} p(x_{t}, m|z_{1:t}, u_{1:t}) \end{align}\]Full SLAM on the other hand, estimates a full trajectory $x_{1:t}$ of the robot instead of just a single pose $x_t$ at a particular time step.
\[\begin{align} p(x_{1:t}, m|z_{1:t}, u_{1:t}) \end{align}\]Both problems are related to each other. The Online SLAM problem is the result of integrating out the individual robot poses of the Full SLAM problem one at a time.
\[\begin{align} \underbrace{p(x_t, m|z_{1:t}, u_{1:t})}_{\text{Online SLAM}} = \int \int \dots \int \underbrace{p(x_{1:t}, m|z_{1:t}, u_{1:t})}_{\text{Full SLAM}} dx_1 dx_2 \dots dx_{t-1} \end{align}\]Another characteristic of SLAM is that it is both a continuous and a discrete problem. Robot poses and object or landmark locations are the continuous aspects of the SLAM problem. While sensing the environment continuously, a discrete relation between previously detected objects and newly detected ones needs to be made. This relation is known as correspondence and helps the robot detect whether it has been in the same location before. With SLAM, a mobile robot establishes a discrete relation between newly and previously detected objects.
Correspondences should be included in the estimation problem, meaning that the posterior includes the correspondence in both the online and the full SLAM problem.
\[\begin{align} &\text{Online SLAM: } p(x_t, m|z_{1:t}, u_{1:t}) \Rightarrow p(x_t, m, c_t|z_{1:t}, u_{1:t}) \\ &\text{Full SLAM: } p(x_{1:t}, m|z_{1:t}, u_{1:t}) \Rightarrow p(x_{1:t}, m, c_{1:t}|z_{1:t}, u_{1:t}) \end{align}\]The advantage of adding the correspondences to both problems is that the robot can better understand where it is located by establishing relations between objects. The relation between the online SLAM and full SLAM problems is defined as
\[\begin{align} \underbrace{p(x_t, m, c_t|z_{1:t}, u_{1:t})}_{\text{Online SLAM}} = \int \int \dots \int \sum_{c_1} \sum_{c_2} \dots \sum_{c_{t-1}} \underbrace{p(x_{1:t}, m, c_{1:t}|z_{1:t}, u_{1:t})}_{\text{Full SLAM}} dx_1 dx_2 \dots dx_{t-1} \end{align}\]where it is now required to sum over the correspondence values and integrate over the robot poses from the Full SLAM problem.
Challenges
The continuous portion consists of the robot poses and object locations and is highly dimensional. The discrete correspondences between detected objects are also highly dimensional. These aspects require an approximation even when known correspondences are assumed.
There are two instances of FastSLAM that require known correspondences: FastSLAM 1.0 and FastSLAM 2.0. With these approaches, each particle holds a guess of the robot trajectory, and by doing so the SLAM problem is reduced to mapping with known poses.
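Conceptually, such a particle could be represented as follows (a sketch, with names chosen for illustration): a pose hypothesis, an importance weight, and one small EKF (mean and covariance) per landmark.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class LandmarkEKF:
    mean: np.ndarray   # estimated 2D landmark position
    cov: np.ndarray    # 2x2 covariance of that estimate

@dataclass
class Particle:
    pose: np.ndarray                                # robot pose hypothesis (x, y, theta)
    weight: float = 1.0                             # importance weight
    landmarks: dict = field(default_factory=dict)   # correspondence id -> LandmarkEKF
```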
To do SLAM without known correspondences, meaning without known landmark positions, the algorithm in the following section can be used.
Grid-based FastSLAM Algorithm
FastSLAM solves the Full SLAM problem with known correspondences using a particle filter approach known as the Rao-Blackwellized particle filter. This approach estimates a posterior over the trajectory using a particle filter.
With this trajectory the robot poses are now known and the mapping problem is then solved with a low dimensional Extended Kalman Filter. This filter models independent features of the map with local Gaussians.
Using a grid map, the environment can be modeled and FastSLAM can be extended without predefining any landmark positions. This makes it possible to solve the SLAM problem in an arbitrary environment.
With the Grid-based FastSLAM algorithm, each particle holds a guess of the robot trajectory using an MCL particle filter. Additionally, each particle maintains its own map by utilizing the occupancy grid mapping algorithm.
The steps of the algorithm consist of sampling the motion $p(x_t|x_{t-1}^{[k]}, u_t)$, estimating the map $p(m_t|z_t, x_t^{[k]}, m_{t-1}^{[k]})$, and computing the importance weight $p(z_t|x_t^{[k]}, m^{[k]})$.
The algorithm takes the previous belief $X_{t-1}$, i.e. the previous set of particle poses, the actuation commands $u_t$, and the sensor measurements $z_t$ as input. Initially, $M \in \mathbb{N}$ particles are generated randomly, which defines the initial belief $\bar{X}_t$.
The first for loop represents the motion, sensor and map update steps. Here, the pose of each particle is estimated and the likelihoods of the measurements and the map are updated.
To update the measurement model likelihood, the importance weight technique is used in the `measurement_model_map` function. In the `update_occupancy_grid` function, each particle updates its map using the occupancy grid mapping algorithm.
The newly estimated $k$-th particle pose, map and likelihood of the measurement are all added to the hypothetical belief $\bar{X}_t$.
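Putting the first loop together, one per-particle update could be sketched like this, reusing the `measurement_model_map` and `update_occupancy_grid` functions named above; `sample_motion_model` is a placeholder for the motion model.

```python
def update_particle(pose_prev, map_prev, u_t, z_t,
                    sample_motion_model, measurement_model_map, update_occupancy_grid):
    """One first-loop update for the k-th particle (sketch)."""
    # Sample a new pose from the motion model p(x_t | x_{t-1}^{[k]}, u_t).
    pose = sample_motion_model(pose_prev, u_t)
    # Importance weight from the map-based measurement model p(z_t | x_t^{[k]}, m^{[k]}).
    weight = measurement_model_map(z_t, pose, map_prev)
    # Update this particle's own grid map with the new scan via occupancy grid mapping.
    grid = update_occupancy_grid(z_t, pose, map_prev)
    # The triple (pose, map, weight) is what gets appended to the hypothetical belief.
    return pose, grid, weight
```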
In the second for loop the resampling process of the particles takes place. The resampling is implemented using a resampling wheel technique. Here, particles whose measurements are close to the robot's real-world measurement values are redrawn more frequently in upcoming iterations. The drawn particle poses and maps are added to the system belief $X_t$, which is returned by the algorithm to start a new iteration with the next motion and sensor updates.
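The resampling wheel itself is not spelled out above; a common sketch of it (assuming a list of particles with non-negative weights) looks like this:

```python
import random

def resampling_wheel(particles, weights):
    """Draw a new particle set in proportion to the importance weights (sketch)."""
    M = len(particles)
    resampled = []
    index = random.randrange(M)
    beta = 0.0
    max_w = max(weights)
    for _ in range(M):
        # Advance the wheel by a random step of at most twice the maximum weight.
        beta += random.uniform(0.0, 2.0 * max_w)
        # Step past particles until the accumulated weight covers beta.
        while beta > weights[index]:
            beta -= weights[index]
            index = (index + 1) % M
        resampled.append(particles[index])
    return resampled
```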
ROS gmapping
The gmapping ROS package uses the Grid-based FastSLAM algorithm. This package contains the single `slam_gmapping` node, which subscribes to the `tf` and `scan` topics. Using these inputs, it generates a 2D occupancy grid map, which it publishes on the `map` topic, along with an estimate of the robot pose entropy on the `entropy` topic. Additional map data is provided through the `map_metadata` topic. Another way to access the map is to use the `dynamic_map` service provided by the node.
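As a small usage illustration, a minimal Python node could consume the published occupancy grid like this; the topic name `map` and the `nav_msgs/OccupancyGrid` message type are the standard gmapping outputs.

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import OccupancyGrid

def map_callback(grid):
    # Log basic map metadata: size in cells and resolution in meters per cell.
    info = grid.info
    rospy.loginfo("map: %dx%d cells at %.3f m/cell", info.width, info.height, info.resolution)

if __name__ == "__main__":
    rospy.init_node("map_listener")
    rospy.Subscriber("map", OccupancyGrid, map_callback)
    rospy.spin()
```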
To demonstrate gmapping, a TurtleBot will be deployed in the Willow Garage environment inside Gazebo. Moving the TurtleBot around using the `teleop` package and running the `slam_gmapping` node will generate a map.
Links
Refer to Wikipedia for a list of SLAM methods, where you can also find resources for the FastSLAM instances.
Further details about MCL can be found in the paper by Sebastian Thrun et al. The gmapping algorithm can be found here.
Reference
This post is a summary of the lesson on FastSLAM from the Robotics Nanodegree of Udacity.