LIDAR Odometry with ICP

This post is the second in a series of tutorials on SLAM using scanning 2D LIDAR and wheel odometry. The other posts in the series can be found in the links below. The links will be updated as work on the series progresses.

In the previous post I introduced the Intel Research Center (IRC) Dataset and we used it in some basic visualizations. In particular, I used the robot’s odometry to get a rough estimate of its pose over time by simply concatenating the relative transformations between timesteps, T_i, 1 \leq i \leq n:

    \begin{equation*} \hat{T}_n = T_1 T_2 \dots T_n \end{equation*}

I then used the transform \hat{T}_n to project the laser scan at each timestep into the global coordinate frame: p_g = \hat{T}_n p_l where p_g is the set of homogeneous scan points in the global frame, p_l is the set of homogeneous points in the robot’s local frame, and \hat{T}_n is the estimated transform between the global and local coordinate frames. This process is visualized in VisualizeMeasurements.py in my github repo:

Visualization of the odometry and lidar measurements together.
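
For reference, the pose concatenation and scan projection described above boil down to just a few lines. Below is a minimal sketch, assuming 3x3 homogeneous transforms for the planar case and scans stored as N x 3 arrays of homogeneous points (the function names here are illustrative, not the ones used in the repo):

import numpy as np

def integrate_odometry(relative_transforms):
	# compose the chain of relative transforms T_1 T_2 ... T_n to get
	# the estimated pose of the robot in the global coordinate frame
	T = np.identity(3)
	for T_i in relative_transforms:
		T = np.dot(T, T_i)
	return T

def project_scan_to_global(T, scan_local):
	# scan_local is an N x 3 array of homogeneous 2D points [x, y, 1];
	# transform each point (stored as a row) into the global frame
	return np.dot(scan_local, T.T)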

Watching this visualization even over a short time, it’s obvious that the robot’s odometry is very noisy and accumulates drift quickly. With perfect odometry, the objects measured by the LIDAR would stay static as the robot moves past them. This is clearly not the case. Interestingly, the odometry seems to be fairly reliable for translational motion, but it drifts quickly in rotation. This will be significant later.

How Do We Fix The Drift Problem?

It’s clear to us the robot’s wheel odometry isn’t sufficient to estimate its motion. We know this because we can overlay the robot’s LIDAR scans in our minds and get a sense of how the robot’s estimated motion is deviating from its true motion. If we can do this in our minds, could we tell the robot how to do it? Can the robot use its LIDAR scans to estimate its own motion?

Hopefully you’ve guessed the answer is yes, through a process called scan matching. Basically the goal is to take a new scan from the robot’s LIDAR and find the transformation that best aligns the new scan with either previous scans or some sort of abstracted map. There are many ways to implement this idea and for this tutorial I’m going to demonstrate the simplest method: using the Iterative Closest Point (ICP) algorithm to align the newest LIDAR scan with the previous scan.

Iterative Closest Point In Pictures

The ICP algorithm involves 3 steps: association, transformation, and error evaluation. These are repeated until the scans are aligned satisfactorily. I’ll first demonstrate the process pictorially with an example from the IRC dataset and delve into the math below. Imagine we want to find the transformation that aligns the two scans pictured below. The previous scan, referred to as the target, is in cyan while the new scan, also called the source, is in magenta. The goal is to find the rigid transformation (rotation and translation) that best aligns the source to the target.

Two scans we wish to align using ICP

Our first step in estimating this transformation is to decide which points in the source scan correspond to the same physical features as points in the target scan. The simplest way to do this is through a nearest neighbor search: each point in the source scan is associated with the nearest point in the target scan. In the image below I’ve found the nearest target-scan neighbor of each point in the source scan. Associated points are connected with blue lines:

The association step

We can immediately see some mistakes in the nearest neighbor search, but in general the associations pictured will pull the source points in the right direction. The next step in the process is transformation. We find the transformation that, when applied to the source points, minimizes the mean-squared distance between the associated points: \tilde{T}=\underset{T}{\text{argmin}}\frac{1}{N}\sum_{i=1}^N \|t_i - Ts_i\|^2 where \tilde{T} is the final estimated transform and t_i and s_i are target points and source points, respectively. The result of this estimation is pictured below:

Result of the first transformation step

After this we evaluate the error in the alignment as e=\frac{1}{N}\sum_{i=1}^N \|t_i - Ts_i\|^2 and decide if we need to repeat the above process. In this case our scans still aren’t aligned very well, so we redo the associations with the transformed source points and repeat the process.

Nearest neighbor associations for the transformed source points

After five iterations the algorithm finds a pretty good alignment between our scans:

Scan alignment after the fifth iteration

The Math Details

ICP is actually pretty straightforward, mathematically. The association step can just be a brute-force search for the nearest pairs of points between the source and target scans. If you’re working with large scans, though, it’s a good idea to use a k-d tree to speed up the search.
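
As a rough sketch of that association step, using scipy’s cKDTree for the nearest-neighbor queries (the rejection threshold and the names below are illustrative assumptions, not the repo’s implementation):

import numpy as np
from scipy.spatial import cKDTree

def find_correspondences(source, target, max_dist=0.5):
	# build a k-d tree over the 2D target points for fast lookups
	tree = cKDTree(target[:, :2])
	# find the nearest target point for every source point
	dists, indices = tree.query(source[:, :2])
	# keep only pairs closer than max_dist to reject bad associations
	mask = dists < max_dist
	return source[mask], target[indices[mask]]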

The real trick to ICP is in the transformation step. Basically, we find the cross-covariance matrix M between the two point sets:

    \begin{equation*} M = PQ^T  \end{equation*}

Where P is a matrix whose i^\text{th} column is s_i - \mu_s, the i^\text{th} source point expressed relative to the centroid of the source point set \mu_s. Similarly, Q is a matrix whose i^\text{th} column is t_i - \mu_t. Once we have the cross-covariance matrix M, we find the rotation R between the two point clouds using singular value decomposition:

    \begin{align*} M &= UWV^T \\ R &= VU^T \end{align*}

If you’re wondering how to break the matrix M down into U, W, and V^T, know that most linear algebra packages (including MATLAB and NumPy) have functions for SVD. That’s about as far as you need to get into it. If you’re interested though, the Wikipedia page has some good details.

We’ve found the rotation R between the point sets; now we just need the translation t. Luckily it’s still pretty simple: it’s the difference between the target centroid and the rotated source centroid, t = \mu_t - R\mu_s.
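
In NumPy the whole transformation step comes down to a few lines. Here’s a minimal sketch, assuming source and target are arrays of corresponding 2D points stored in their rows:

import numpy as np

def estimate_rigid_transform(source, target):
	# centroids of the two point sets
	mu_s = source.mean(axis=0)
	mu_t = target.mean(axis=0)
	# cross-covariance matrix M = PQ^T
	M = np.dot((source - mu_s).T, target - mu_t)
	# rotation from the singular value decomposition of M
	U, W, V_t = np.linalg.svd(M)
	R = np.dot(V_t.T, U.T)
	# translation between the target centroid and the rotated source centroid
	t = mu_t - np.dot(R, mu_s)
	return R, t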

Once we have our rotation and translation we evaluate the alignment error as e=\frac{1}{N}\sum_{i=1}^N \|t_i - (Rs_i + t)\|^2. We use this to determine whether we should quit or iterate again. Normally we stop the process if the error at the current step is below a threshold, if the difference between the current step’s error and the previous step’s error is below a threshold, or if we’ve reached a maximum number of iterations.

Below is a visualization of a simple ICP-based motion estimation algorithm. It aligns each new scan to the previous scan to find the motion of the robot between scans:

Demonstration of motion estimation using ICP

Note that this method for motion estimation works well in some environments and poorly in others. If the robot is near large sections of wall at different angles, it can estimate its transformation between scans pretty reliably because it has good environmental cues to its motion in all directions. On the other hand, if the robot is in a mostly straight hallway, there’s really nothing in its measurements that will tell it how it’s moving along the hallway; one alignment is as good as any other as long as the walls line up. In these cases the robot’s estimates of its translation are very poor. Similarly, if there just aren’t a lot of unique, persistent features in the scan, which sometimes happens when the robot approaches corners, there aren’t any good cues for the robot to estimate its rotation.

So, matching successive LIDAR scans via the iterative closest point algorithm can give our robot some information about its own movement, but on its own it isn’t enough to provide a reliable motion estimate. Luckily, our robot also has wheel odometry, and the two sensors’ weaknesses are complementary: ICP is pretty reliable at estimating rotation but poor at estimating translation in some cases, while the wheel odometry gives us very accurate translation but is very unreliable in rotation. Next time, we’ll experiment with fusing information from these two sensors to create a more reliable motion estimate.

The Code

Below you can see an implementation of the ICP algorithm in python. You can find the full class, Align2D.py, in my github repo as well as a demonstration of its use in VisualizeICP.py.

# uses the iterative closest point algorithm to find the
# transformation between the source and target point clouds
# that minimizes the sum of squared errors between nearest 
# neighbors in the two point clouds
# params:
#   max_iter: int, max number of iterations
#   min_delta_err: float, minimum change in alignment error between iterations
# returns:
#   T: transformation that aligns the source point cloud to the target
def AlignICP(self, max_iter, min_delta_err):

	mean_sq_error = 1.0e6 # initialize error as large number
	delta_err = 1.0e6    # change in error (used in stopping condition)
	T = self.init_T
	num_iter = 0         # number of iterations
	tf_source = self.source

	while delta_err > min_delta_err and num_iter < max_iter:

		# find correspondences via nearest-neighbor search
		matched_trg_pts,matched_src_pts,indices = self.FindCorrespondences(tf_source)

		# find alignment between source and corresponding target points via SVD
		# note: svd step doesn't use homogeneous points
		new_T = self.AlignSVD(matched_src_pts, matched_trg_pts)

		# update transformation between point sets by composing the new
		# alignment with the accumulated transform (correction applied last)
		T = np.dot(new_T,T)

		# apply transformation to the source points
		tf_source = np.dot(self.source,T.T)

		# find mean squared error between transformed source points and target points
		new_err = 0
		for i in range(len(indices)):
			if indices[i] != -1:
				diff = tf_source[i,:2] - self.target[indices[i],:2]
				new_err += np.dot(diff,diff.T)

		new_err /= float(len(matched_trg_pts))

		# update error and calculate delta error
		delta_err = abs(mean_sq_error - new_err)
		mean_sq_error = new_err

		num_iter += 1

	return T

# uses singular value decomposition to find the
# transformation that aligns the source point cloud to the target
# assumes source and target point clouds are ordered such that
# corresponding points are at the same indices in each array
#
# params:
#   source: numpy array representing source pointcloud
#   target: numpy array representing target pointcloud
# returns:
#   T: transformation between the two point clouds
def AlignSVD(self, source, target):

	# first find the centroids of both point clouds
	src_centroid = self.GetCentroid(source)
	trg_centroid = self.GetCentroid(target)

	# get the point clouds in reference to their centroids
	source_centered = source - src_centroid
	target_centered = target - trg_centroid

	# get cross covariance matrix M
	M = np.dot(source_centered.T,target_centered)

	# get singular value decomposition of the cross covariance matrix
	U,W,V_t = np.linalg.svd(M)

	# get rotation between the two point clouds
	R = np.dot(V_t.T,U.T)

	# get the translation (the difference between the target centroid
	# and the rotated source centroid)
	t = trg_centroid - np.dot(R,src_centroid)

	# assemble translation and rotation into a transformation matrix
	T = np.identity(3)
	T[:2,2] = np.squeeze(t)
	T[:2,:2] = R

	return T

def GetCentroid(self, points):
	point_sum = np.sum(points,axis=0)
	return point_sum / float(len(points))
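
For completeness, here’s a hypothetical usage sketch. It assumes the Align2D constructor takes the homogeneous source points, the homogeneous target points, and an initial transform (matching the member variables referenced above); see VisualizeICP.py in the repo for the actual usage:

import numpy as np
from Align2D import Align2D

# source_scan and target_scan: N x 3 arrays of homogeneous 2D points,
# assumed to be loaded elsewhere (e.g. from the DataLoader)
aligner = Align2D(source_scan, target_scan, np.identity(3))
T = aligner.AlignICP(max_iter=20, min_delta_err=1.0e-4)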

Intro To LIDAR SLAM

I’m two years into my PhD in robotics and things are going well. I’m working on robotic perception at the NASA Jet Propulsion Laboratory over the summer and I recently had a paper accepted to the Conference on Field and Service Robotics. There’s just one problem: I still haven’t won the bet that led me to return to grad school in the first place; I haven’t built a robotic system for autonomous indoor mapping. Today we’ll take a big step toward that goal by starting our development of a 2D LIDAR-based SLAM system.

In this post I’ll propose a strategy for solving the localization and mapping portion of my problem. I’ll also introduce the Intel Research Center (IRC) dataset as a means for evaluating my SLAM efforts. I’ll demonstrate my developments first on the IRC dataset, but the eventual goal will be to use them on a real robot.

The Strategy

The goal of this series is to develop LIDAR-based 2D SLAM. Of course, numerous open source packages already exist for LIDAR SLAM but, as always, my goal is to understand SLAM on a fundamental level. That’s why I’m building everything from scratch and taking a detailed look at the underlying math.

I start by introducing the dataset below. In subsequent posts I present tutorials on basic methods for LIDAR odometry via matching between LIDAR scans and then matching scans to a persistent map representation. I then develop a method for LIDAR SLAM that fuses information from scan matching and wheel odometry. Lastly I present a full SLAM method which estimates both a globally-consistent pose graph and map. This last step will require all the previously presented methods as well as methods we haven’t discussed such as place recognition and loop closure. Links to all the posts in the series can be found below (these will be populated incrementally as work on this project progresses):

Why LIDAR?

LIDAR is an interesting and versatile sensor. In many ways 2D LIDAR measurements are very similar to the measurements we used in the UTIAS dataset in my EKF SLAM tutorial. As with the UTIAS dataset, the measurement model is simply the range and bearing to the measured landmark or obstacle. No complicated projection or distortion models are required to extract this information, so LIDAR is a pretty gentle introduction to the use of raw sensor data.

Unlike the UTIAS dataset, however, our sensor measurements are not associated with particular landmarks in the environment. Instead a LIDAR measurement is associated with whatever surface happened to get in the way of the laser. This makes data association more difficult. Luckily LIDAR makes up for this ambiguity by giving us much more information per timestep. At a given timestep in the UTIAS dataset we get just a few range and bearing measurements to nearby landmarks. With LIDAR, on the other hand, we get range and bearing measurements to everything in the robot’s vicinity. So instead of just estimating the locations of discrete landmarks we can estimate a continuous map of the whole environment. This is exactly what we need for an indoor mapping robot.

A SICK LMS sensor, similar to the one used to create the IRC Dataset.

Lastly, LIDAR sensors have a (mostly deserved) reputation for being extremely expensive and therefore out of reach for the robotics hobbyist. For instance, the 2D scanning LIDAR used to create the IRC dataset costs thousands of US dollars. How many basement tinkerers have that kind of money to spend on their hobby?

RPLIDAR A3, an inexpensive scanning 2D lidar sensor

Luckily for us, cheaper LIDAR sensors, such as the RPLIDAR A3, have recently been introduced. These have lower range and lower precision than their more expensive cousins. They are totally sufficient for hobby projects though, and their cost is in the hundreds of dollars rather than the thousands.

Of course, publicly available datasets like the IRC dataset make it possible to do robotics work without spending any money. So let’s start there!

The IRC Dataset

The IRC dataset is an indoor localization and mapping dataset recorded at the Intel Research Center in Seattle in 2003. It is available on the MRPT website, but I’d recommend getting it from my github repo instead. The raw data is the same in either case, but my repo has a few helpful scripts for loading, aligning, and visualizing the data. The original dataset has some ambiguities. For instance, it doesn’t specify the angular spacing between the lidar beams or the transformation between the odometry coordinate frame and the lidar coordinate frame. My DataLoader.py script resolves these problems for you!

The DataLoader reads the dataset from the txt files, converts the odometry measurements into transformation matrices, converts the raw LIDAR measurements into Cartesian points in the robot’s coordinate frame, and packages these measurements in Measurement objects. This is a much easier way to interact with the data than reading the txt files line by line.
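
The range-to-point conversion itself is just the usual polar-to-Cartesian transformation. A minimal sketch, where the starting angle and angular increment are stand-ins for whatever values the DataLoader resolves:

import numpy as np

def ranges_to_points(ranges, angle_min, angle_increment):
	# bearing of each LIDAR beam in the sensor's coordinate frame
	angles = angle_min + angle_increment * np.arange(len(ranges))
	# convert each (range, bearing) pair to an (x, y) point
	x = ranges * np.cos(angles)
	y = ranges * np.sin(angles)
	return np.column_stack((x, y))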

You can see the LIDAR data in the robot’s frame of reference by running my script, VisualizeLaser.py. A short bit of that script’s visualization is shown below:

Raw LIDAR measurements shown in the robot’s frame of reference.

Also, to make sure I got the sensor coordinate frame transformation correct, I made VisualizeMeasurements.py. It plots the pose of the robot in the global coordinate frame, calculated by integrating the odometry measurements, along with the LIDAR measurements transformed into the global frame. As is shown below, the robot moves while the LIDAR measurements stay (more or less) static. This indicates we have the robot-to-LIDAR transformation correct. The visualization also shows the LIDAR measurements don’t stay completely static, which indicates the odometry is pretty noisy and accumulates drift quickly. We also notice the odometry noise is biased pretty heavily in rotation. This will be significant when we try to use it later.

Visualization of the odometry and lidar measurements together.