Andrew Kramer’s Research

Me on the Teneriffe Falls trail outside North Bend, WA.

I’m Andrew Kramer. I recently completed my PhD in robotics and have accepted an applied scientist position with Amazon’s Scout project starting in July 2021. My PhD research focused on the use of millimeter wave radar for robotic perception, particularly state estimation and mapping for robotic explorers in visually degraded environments (VDEs). Visual degradation can include complete darkness, thick fog and smoke, or heavy shadowing and specularity.

Robotic perception in VDEs is an important research topic for a number of reasons. For NASA, perception in VDEs is necessary for exploring subsurface caverns on the moon and Mars; environments with thick haze such as Titan and Venus; or situations where an aerial explorer like the Mars Helicopter can stir up large plumes of dust. Closer to home, autonomous robots face VDEs in search-and-rescue and disaster response scenarios and when exploring subterranean environments, as in the DARPA Subterranean Challenge. Unfortunately, the most commonly used robotic perception modalities, vision and lidar, are severely limited in VDEs, so perception methods that generalize readily to VDEs are acutely needed. A summary of the research I conducted over the course of my PhD can be found in the video below:

If you’d like more detail on any of my research projects, below you can find abstracts and links to the full text of papers that have been accepted for publication, as well as more demonstration videos:

In my limited free time I also maintain my own robotic perception projects as a hobby. My blog documents these projects with detailed descriptions and tutorials. My goal is to deeply understand how common robotics algorithms work, so rather than relying on off-the-shelf software I build, test, and present my own versions of popular systems like EKF SLAM and 2D lidar SLAM. Along the way I also post the occasional irrelevant-but-fun diversion, like how to power Nixie tubes with an iPhone charger. Check out my blog and feel free to use my work as a jumping-off point for your own projects!
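As a taste of the kind of material the blog walks through, here is a minimal sketch of one building block of EKF SLAM: the prediction step for a planar robot with a simple velocity motion model. The state layout, noise parameters, and function name are illustrative assumptions, not code from the blog.

```python
import numpy as np

def ekf_slam_predict(mu, Sigma, v, w, dt, R_pose):
    """EKF SLAM prediction step for a planar robot (illustrative sketch).

    mu:     state mean [x, y, theta, l1x, l1y, ...] (robot pose, then landmarks)
    Sigma:  full state covariance
    v, w:   commanded linear and angular velocity
    dt:     time step in seconds
    R_pose: 3x3 process noise on the robot pose
    """
    x, y, theta = mu[0], mu[1], mu[2]

    # Motion model: propagate only the robot pose; landmarks are static.
    mu = mu.copy()
    mu[0] = x + v * np.cos(theta) * dt
    mu[1] = y + v * np.sin(theta) * dt
    mu[2] = theta + w * dt

    # Jacobian of the motion model with respect to the robot pose.
    G = np.eye(3)
    G[0, 2] = -v * np.sin(theta) * dt
    G[1, 2] = v * np.cos(theta) * dt

    Sigma = Sigma.copy()
    # Robot-robot covariance block.
    Sigma[:3, :3] = G @ Sigma[:3, :3] @ G.T + R_pose
    # Robot-landmark cross-covariances.
    Sigma[:3, 3:] = G @ Sigma[:3, 3:]
    Sigma[3:, :3] = Sigma[:3, 3:].T
    return mu, Sigma
```

The correction step, landmark initialization, and data association make up the rest of the filter.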


ColoRadar: The Direct 3D Millimeter Wave Radar Dataset

[submitted to IJRR]

Millimeter wave radar is becoming increasingly popular as a sensing modality for robotic mapping and state estimation. However, very few publicly available datasets include dense, high-resolution millimeter wave radar scans, and none are focused on 3D odometry and mapping. In this paper we present a solution to that problem. The ColoRadar dataset includes three different forms of dense, high-resolution radar data from two FMCW radar sensors, as well as 3D lidar, IMU, and highly accurate ground truth for the sensor rig’s pose, over approximately 2 hours of data collection in highly diverse 3D environments.
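To make the contents concrete, the sketch below shows one way a single time-aligned frame of such a dataset could be represented in code. The field names, array shapes, and the assumption that the three radar products are raw ADC samples, a dense range-azimuth-elevation heatmap, and a point cloud are mine for illustration; consult the dataset documentation for the actual formats.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RadarDatasetFrame:
    """One time-aligned frame from a dense-radar dataset (illustrative sketch only).

    The three radar products listed here are an assumption about what the
    "three different forms" of radar data might be, not the dataset's
    documented layout.
    """
    timestamp: float
    radar_adc: np.ndarray         # raw ADC samples, e.g. complex (rx, chirps, samples)
    radar_heatmap: np.ndarray     # dense range-azimuth-elevation intensity grid
    radar_points: np.ndarray      # (N, 4) detections: x, y, z, Doppler
    lidar_points: np.ndarray      # (M, 3) lidar point cloud
    imu_accel: np.ndarray         # (3,) accelerometer reading
    imu_gyro: np.ndarray          # (3,) gyroscope reading
    groundtruth_pose: np.ndarray  # 4x4 homogeneous transform of the sensor rig
```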



Radar-Inertial State Estimation and Mapping For MAVs In Dense Smoke And Fog

[published at ISER 2020]

The four different visibility levels in which I tested my radar-inertial system.

In this work we present a suite of perception techniques for reliable state estimation in GPS-denied and visually degraded environments (VDEs) by fusing measurements from an automotive-grade millimeter wave radar and an inertial measurement unit (IMU). The proposed method is robust to the low illumination, poor texture, and visual obscurants such as fog that characterize these environments. It estimates the robot’s global-frame velocity and orientation over a sliding window of radar and IMU measurements. Our approach also includes a new radar-based mapping method featuring a learned noise filter, built on the PointNet segmentation network and trained with weak supervision. Quantitative experiments show that our method’s accuracy is comparable to visual-inertial odometry (VIO) and lidar odometry when sensing conditions are favorable, i.e., when the environment is not visually degraded. We then demonstrate that both VIO and lidar are severely impacted by fog, while our method is unaffected.
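The paper’s exact training pipeline is not reproduced here, but the core idea of weakly supervising a radar noise filter can be sketched simply: label each radar return as signal or clutter depending on whether it is supported by a co-registered lidar scan, then train a PointNet-style segmentation network on those noisy labels. The distance threshold and nearest-neighbor criterion below are illustrative assumptions, not necessarily the procedure used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def weak_labels_from_lidar(radar_xyz, lidar_xyz, support_radius=0.5):
    """Generate weak signal/clutter labels for radar returns (illustrative sketch).

    A radar return is labeled signal (1) if any lidar point lies within
    support_radius meters of it, and clutter (0) otherwise. The labels are
    noisy, which is why this is only weak supervision for the downstream
    segmentation network.
    """
    tree = cKDTree(lidar_xyz)              # spatial index over the lidar points
    dists, _ = tree.query(radar_xyz, k=1)  # distance to the nearest lidar point
    return (dists < support_radius).astype(np.int64)
```

These per-point labels would then drive a standard point-wise classification loss for the segmentation network, which at deployment time keeps only the returns it predicts to be signal.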

The video below gives a brief overview of the training procedure and implementation of the learned noise filter.


Radar-Inertial Ego-Velocity Estimation For Visually Degraded Environments

[full paper published at ICRA 2020]

The distribution of radar returns in a subterranean scene.

We present an approach for estimating the body-frame velocity of a mobile robot. We combine measurements from a millimeter wave radar-on-a-chip sensor and an inertial measurement unit (IMU) in a batch optimization over a sliding window of recent measurements. The sensor suite is lightweight, low-power, and invariant to ambient lighting conditions, making the proposed approach an attractive solution for platforms with payload and endurance limitations, such as aerial vehicles conducting autonomous exploration in perceptually degraded operating conditions, including subterranean environments. We compare our radar-inertial velocity estimates to those from a visual-inertial (VI) approach. We show that the accuracy of our method is comparable to VI in conditions favorable to VI and far exceeds the accuracy of VI when conditions deteriorate.
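The radar half of this problem reduces to a small linear system: a return from a static surface reports a Doppler (radial) velocity equal to the negative projection of the sensor’s velocity onto the return’s bearing. The sketch below solves that system with a plain least-squares fit; the actual method additionally fuses IMU measurements in a sliding-window batch optimization, and the function and variable names here are illustrative assumptions.

```python
import numpy as np

def radar_ego_velocity(points_xyz, doppler):
    """Estimate sensor-frame velocity from a single radar scan (illustrative sketch).

    points_xyz: (N, 3) positions of radar returns in the sensor frame
    doppler:    (N,) measured radial velocities of those returns

    For a return from a static surface, doppler_i = -unit(p_i) . v_sensor,
    so stacking all returns gives a linear least-squares problem in v_sensor.
    Returns from moving objects violate this assumption; in practice a robust
    loss or RANSAC step would be used to reject them.
    """
    directions = points_xyz / np.linalg.norm(points_xyz, axis=1, keepdims=True)
    A = -directions                               # (N, 3) measurement matrix
    v, *_ = np.linalg.lstsq(A, doppler, rcond=None)
    return v                                      # estimated (vx, vy, vz), sensor frame
```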

The video below is a recording of my presentation for ICRA.


Visual-Inertial SLAM in Subterranean Environments

[full paper published at FSR 2019]

Example image from our subterranean visual-inertial dataset.

Among the most challenging environments in which an autonomous mobile robot might be required to operate is the subterranean environment. The complete lack of ambient light, the unavailability of GPS, and geometric ambiguity make subterranean simultaneous localization and mapping (SLAM) exceptionally difficult. While there are many possible solutions to this problem, a visual-inertial framework has the potential to be fielded on a variety of robotic platforms that can operate in the spatially constrained and hazardous settings presented by the subterranean domain. In this work we present an evaluation of visual-inertial SLAM in the subterranean environment with onboard lighting and show that it can consistently perform quite well, with less than 4% translational drift. However, this performance depends on modifications that depart from the typical formulation of VI-SLAM, as well as careful tuning of the system’s visual tracking parameters. We discuss the sometimes counter-intuitive effects of these parameters and provide insight into how they affect the system’s overall performance.
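For context on the 4% figure: translational drift is typically reported as accumulated position error per unit of distance traveled. The sketch below computes an end-to-end version of that metric; the assumption that drift is measured end-to-end (rather than averaged over many sub-trajectories, as benchmark suites often do) is mine, so treat this as an illustration of the unit rather than the paper’s exact evaluation protocol.

```python
import numpy as np

def translational_drift_percent(est_positions, gt_positions):
    """End-to-end translational drift as a percentage of distance traveled.

    est_positions, gt_positions: (N, 3) estimated and ground-truth trajectories
    sampled at matching times. This is a simplified, end-to-end version of the
    metric; benchmark evaluations often average relative errors over many
    sub-trajectory lengths instead.
    """
    end_error = np.linalg.norm(est_positions[-1] - gt_positions[-1])
    path_length = np.sum(np.linalg.norm(np.diff(gt_positions, axis=0), axis=1))
    return 100.0 * end_error / path_length
```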


Random Picture Of My Dog

Thanks for scrolling all the way to the bottom! Here’s a picture of my dog!