Intro To LIDAR SLAM

I’m two years into my PhD in robotics and things are going well. I’m working on robotic perception at the NASA Jet Propulsion Laboratory over the summer, and I recently had a paper accepted to the Conference on Field and Service Robotics. There’s just one problem: I still haven’t won the bet that led me to return to grad school in the first place; I haven’t built a robotic system for autonomous indoor mapping. Today we’ll take a big step toward that goal by starting development of a 2D LIDAR-based SLAM system.

In this post I’ll propose a strategy for solving the localization and mapping portion of my problem. I’ll also introduce the Intel Research Center (IRC) dataset as a means for evaluating my SLAM efforts. I’ll demonstrate my developments first on the IRC dataset, but the eventual goal will be to use them on a real robot.

The Strategy

The goal of this series is to develop a LIDAR-based 2D SLAM system. Of course, numerous open-source packages already exist for LIDAR SLAM but, as always, my goal is to understand SLAM on a fundamental level. That’s why I’m building everything from scratch and taking a detailed look at the underlying math.

I start by introducing the dataset below. In subsequent posts I present tutorials on basic methods for LIDAR odometry, first via matching between LIDAR scans and then via matching scans to a persistent map representation. I then develop a method for LIDAR SLAM that fuses information from scan matching and wheel odometry. Lastly, I present a full SLAM method that estimates both a globally consistent pose graph and map. This last step will require all the previously presented methods, as well as methods we haven’t yet discussed, such as place recognition and loop closure. Links to all the posts in the series will be added here incrementally as work on this project progresses.

Why LIDAR?

LIDAR is an interesting and versatile sensor. In many ways, 2D LIDAR measurements are very similar to the measurements from the UTIAS dataset used in my EKF SLAM tutorial. As with the UTIAS dataset, the measurement model is simply the range and bearing to the measured landmark or obstacle. No complicated projection or distortion models are required to extract this information, so LIDAR is a pretty gentle introduction to the use of raw sensor data.
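To make that concrete, here is a minimal sketch of the range-bearing measurement model, the same basic form used in the EKF SLAM tutorial. The function name and pose convention are my own, for illustration only:

```python
import numpy as np

def range_bearing(robot_pose, point):
    """Range-bearing measurement model: what a single beam reports
    about a point, given the robot's pose (x, y, theta) in the
    global frame."""
    x, y, theta = robot_pose
    dx, dy = point[0] - x, point[1] - y
    r = np.hypot(dx, dy)                          # range to the point
    phi = np.arctan2(dy, dx) - theta              # bearing relative to heading
    phi = (phi + np.pi) % (2.0 * np.pi) - np.pi   # wrap to [-pi, pi)
    return r, phi
```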

Unlike the UTIAS dataset, however, our sensor measurements are not associated with particular landmarks in the environment. Instead, a LIDAR measurement is associated with whatever surface happened to get in the way of the laser. This makes data association more difficult. Luckily, LIDAR makes up for this ambiguity by giving us much more information per timestep. At a given timestep in the UTIAS dataset we get just a few range and bearing measurements to nearby landmarks. With LIDAR, on the other hand, we get range and bearing measurements to everything in the robot’s vicinity. So instead of just estimating the locations of discrete landmarks, we can estimate a continuous map of the whole environment. This is exactly what we need for an indoor mapping robot.
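To see how much information a single scan carries, here is a small sketch (with assumed argument names) that unpacks a full scan of ranges and beam angles into Cartesian points in the sensor frame:

```python
import numpy as np

def scan_to_points(ranges, angles):
    """Convert one 2D LIDAR scan (N ranges paired with N beam angles)
    into an Nx2 array of Cartesian points in the sensor frame."""
    ranges = np.asarray(ranges)
    angles = np.asarray(angles)
    return np.column_stack((ranges * np.cos(angles),
                            ranges * np.sin(angles)))
```

Every scan yields hundreds of such points, which is what makes estimating a dense map of the environment feasible.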

A SICK LMS sensor, similar to the one used to create the IRC Dataset.

Lastly, LIDAR sensors have a (mostly deserved) reputation for being extremely expensive and therefore out of reach for the robotics hobbyist. For instance, the 2D scanning LIDAR used to create the IRC dataset costs thousands of US dollars. How many basement tinkerers have that kind of money to spend on their hobby?

The RPLIDAR A3, an inexpensive 2D scanning LIDAR sensor.

Luckily for us, cheaper LIDAR sensors such as the RPLIDAR A3 have recently been introduced. These have lower range and lower precision than their more expensive cousins, but they are totally sufficient for hobby projects, and they cost hundreds of dollars rather than thousands.

Of course, publicly available datasets like the IRC dataset make it possible to do robotics work without spending any money. So let’s start there!

The IRC Dataset

The IRC dataset is an indoor localization and mapping dataset recorded at the Intel Research Center in Seattle in 2003. It is available on the MRPT website, but I’d recommend getting it from my GitHub repo instead. The raw data is the same in either case, but my repo has a few helpful scripts for loading, aligning, and visualizing the data. The original dataset has some ambiguities: for instance, it doesn’t specify the angular spacing between the LIDAR beams or the transformation between the odometry coordinate frame and the LIDAR coordinate frame. My DataLoader.py script resolves these problems for you!

The DataLoader reads the dataset from its text files. It converts the odometry measurements into transformation matrices and converts the raw LIDAR measurements into Cartesian points in the robot’s coordinate frame. Finally, it packages these measurements into Measurement objects. This is a much easier way to interact with the data than reading the text files line by line.
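For intuition, the odometry conversion amounts to packing each relative pose increment into a homogeneous transformation matrix. The sketch below shows the idea under that assumption; it is not necessarily the exact code in DataLoader.py:

```python
import numpy as np

def odometry_to_transform(dx, dy, dtheta):
    """Pack a relative 2D pose increment into a 3x3 homogeneous
    transformation matrix, so that successive motions compose by
    matrix multiplication."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c,  -s,  dx],
                     [s,   c,  dy],
                     [0.0, 0.0, 1.0]])
```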

You can see the LIDAR data in the robot’s frame of reference by running my script, VisualizeLaser.py. A short segment of that script’s visualization is shown below:

Raw LIDAR measurements shown in the robot’s frame of reference.

Also, to make sure I got the sensor coordinate frame transformation correct, I made VisualizeMeasurements.py. It plots the pose of the robot in the global coordinate frame, calculated by integrating the odometry measurements, together with the LIDAR measurements transformed into the global frame. As shown below, the robot moves while the LIDAR measurements stay (more or less) static. This indicates we have the robot-to-LIDAR transformation correct. The visualization also shows the LIDAR measurements don’t stay completely static, which indicates the odometry is pretty noisy and accumulates drift quickly. We also notice the odometry noise is biased pretty heavily in rotation. This will be significant when we try to use the odometry later.
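That check boils down to something like the following sketch (the actual script may differ): chain the relative odometry transforms into global poses, then push each scan’s points through the resulting pose.

```python
import numpy as np

def integrate_odometry(relative_transforms):
    """Compose successive 3x3 relative transforms into global poses,
    starting from the identity (the robot's initial pose)."""
    pose = np.eye(3)
    poses = [pose]
    for T in relative_transforms:
        pose = pose @ T   # T_global_new = T_global_old @ T_relative
        poses.append(pose)
    return poses

def points_to_global(pose, points):
    """Map Nx2 robot-frame points into the global frame. If the scan
    is in the LIDAR frame, compose the robot-to-LIDAR transform into
    the pose first."""
    R, t = pose[:2, :2], pose[:2, 2]
    return points @ R.T + t
```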

Visualization of the odometry and LIDAR measurements together.