Colin’s Spooky Brain Transplant!

In the spirit of Halloween (which I realize was nearly a month ago), I’ve done some unholy, Frankenstein-esque modifications to Colin’s brain!

igor_and_abby


Moving the Arduino to the Breadboard

First of all, I’ve been using an Arduino as Colin’s brain, but an Arduino is actually pretty bulky considering its capabilities. So I’ve replaced this:

ArduinoDuemilanove

with this:

atmega328

It’s an ATmega328, the same chip the Arduino uses. If you flash it with an Arduino bootloader you can program and use it exactly like you would an Arduino. This page has a great tutorial for programming and using the bare chip on a breadboard. Also, if you’re willing to pay a little extra you can buy a chip that has the bootloader pre-installed: both SparkFun and Adafruit sell pre-programmed ATmega328s.

To switch from the Arduino to a bare ATmega328 I just needed to rejigger my power supply a bit and rewire everything for the new pin mapping (good thing I kept detailed notes on my wiring, right?).

Next, Colin’s control libraries are getting pretty big. Right now they take up about 25% of the ATmega’s memory. When all’s said and done, the control libraries should only be a small part of his program’s total size. So we’re going to need some more space before long.

Also, writing and testing code with the Arduino is getting to be kind of a pain. First, you have to upload your program with the Arduino IDE every time you make a change or want to run a new program. This can get pretty cumbersome, especially if you want to make minor tweaks to a program or try different versions of it. Also, there’s no possibility for multi-threading or concurrent execution. So you’re pretty limited in what your programs can do.


Adding Computing Power!

The solution to this could be to use a more powerful single-board computer like a Raspberry Pi. Since a Pi just runs Linux, I can write, edit, and compile programs on the Pi itself. Also, because it has WiFi, I can leave it connected to Colin and control it via SSH. This makes programming much, much easier. However, the Pi has a couple of significant downsides as a robot controller. It only has one PWM pin, so it can’t directly control two motors. And because it runs a general-purpose OS rather than bare-metal code, it can’t service hardware interrupts with reliable timing, so it can’t really use motor encoders either.

My solution is to keep the ATmega for low-level motor and sensor control while the Raspberry Pi handles the high-level processing. The Pi and the ATmega will communicate over a serial bus. They will use this to relay commands, position updates, sensor readings, and so on.
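To make the idea of relaying commands over the serial bus concrete, here’s one possible framing scheme (purely illustrative; this is not Colin’s actual protocol, and all names are made up). Each velocity command is packed into a small byte packet with a header byte and a checksum, so the ATmega can detect corrupted packets and re-synchronize if bytes are dropped:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical packet layout:
// [0xA5 header][command byte][int16 translational][int16 angular][checksum]
const uint8_t kHeader = 0xA5;
const int kPacketSize = 7;

// Pack a velocity command into a 7-byte packet.
void encodeCommand(uint8_t cmd, int16_t transMmPerSec, int16_t angMradPerSec,
                   uint8_t out[kPacketSize]) {
    out[0] = kHeader;
    out[1] = cmd;
    out[2] = (uint8_t)(transMmPerSec >> 8);   // high byte
    out[3] = (uint8_t)(transMmPerSec & 0xFF); // low byte
    out[4] = (uint8_t)(angMradPerSec >> 8);
    out[5] = (uint8_t)(angMradPerSec & 0xFF);
    uint8_t sum = 0;
    for (int i = 0; i < kPacketSize - 1; ++i) sum += out[i];
    out[6] = sum; // simple additive checksum
}

// Unpack a packet; returns false if the header or checksum doesn't match.
bool decodeCommand(const uint8_t in[kPacketSize], uint8_t* cmd,
                   int16_t* trans, int16_t* ang) {
    if (in[0] != kHeader) return false;
    uint8_t sum = 0;
    for (int i = 0; i < kPacketSize - 1; ++i) sum += in[i];
    if (sum != in[6]) return false;
    *cmd = in[1];
    *trans = (int16_t)((in[2] << 8) | in[3]);
    *ang = (int16_t)((in[4] << 8) | in[5]);
    return true;
}
```

On the Pi side the encoded bytes would be written to the serial port, and on the ATmega side `decodeCommand` would run once seven bytes have arrived; position updates and sensor readings can travel the other direction with the same framing.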

This means Colin now has two brains! This is what he looks like with his new braaaains!

colin_with_pi_annotated

Note that the Raspberry Pi uses 3.3V logic while all of the other hardware runs at 5V, so I need to use a level shifter for communication between the Pi and the ATmega. Also, the Raspberry Pi runs on 5V USB power. I already have my voltage regulator supplying 5V for my ATmega, so I simply cut the end off an old micro USB cable and connected the +5V and GND wires to my breadboard’s power rail. The Pi draws a lot of current, around 1.3A under load, and my voltage regulator can only provide 1.5 to 1.75A. So I might need to upgrade to a bigger voltage regulator soon.

I’m also re-configuring Colin’s sonar sensors. I will start out with 8 sensors arranged in a ring, controlled independently by an ATtiny84. Since each sensor has a range of about 15 degrees, 24 total sensors are necessary to completely cover 360 degrees. I’m not sure I’ll need to actually use 24 sensors, but my design will allow me to add additional sensor rings if necessary. I’ll go into detail on these in a later post.

I’ll explain how to coordinate the Raspberry Pi and the ATmega328 using serial communication in a later post, and I’ll provide an overview of my ATtiny-based sensor controllers shortly thereafter!

Odometry With Arduino

Now that we can control the speed of Colin’s wheels, we can tell how far Colin has moved using odometry. Odometry involves counting the encoder ticks from Colin’s motors and integrating that information over time to determine Colin’s change in position. This method has the distinct advantage that it relies on the actual motion of Colin’s wheels, and thus doesn’t require absolute accuracy from the speed control algorithm. Odometry also provides a good motion model that can be used as part of a larger localization algorithm. As such, it’s a good stepping stone toward my goal of making a simultaneous localization and mapping (SLAM) program for Colin!

This tutorial owes a lot to MIT’s primer on odometry and motor control. It does a great job explaining the theory behind odometry.


Theory Basics

The position of a robot in space is referred to as its pose, which is defined by six quantities: its translation in Cartesian coordinates (x, y, and z) and its rotation about those three axes (θx, θy, and θz). Luckily, a differential drive robot like Colin can only translate in two dimensions and rotate in one, so Colin’s pose can be defined by three quantities (x, y, and θz).
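In code, those three quantities map naturally onto a small struct (a minimal sketch; these names are illustrative and not taken from my DifferentialDrive library), along with a helper to keep θ in the range 0 to 2π:

```cpp
#include <cassert>
#include <cmath>

const double kPi = 3.14159265358979323846;

// A differential drive robot's pose: 2D position plus heading.
struct Pose {
    double x;     // millimeters
    double y;     // millimeters
    double theta; // radians, kept in [0, 2*pi)
};

// Wrap an arbitrary angle into the range [0, 2*pi).
double normalizeAngle(double angle) {
    const double twoPi = 2.0 * kPi;
    while (angle >= twoPi) angle -= twoPi;
    while (angle < 0.0) angle += twoPi;
    return angle;
}
```

Constraining θ this way isn’t strictly required for the math to work, but it keeps headings comparable between updates, which matters once the pose feeds into a localization algorithm.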

Let’s say Colin’s initial pose is (0, 0, 0) at t=t_{0}. How can we determine his change in pose when t=t_{0}+\Delta t where \Delta t is the time interval between pose updates? Because we’re already using encoders to control Colin’s speed, it’s easy to keep track of the distance Colin’s wheels have turned. In fact, my Encoder class already does this with its getDistance() function.

Let’s say d_{left} is the distance turned by the left wheel over \Delta t, and d_{right} is the same quantity for the right wheel. Knowing these two distances can tell us a couple things. If d_{left}=d_{right} then Colin traveled in a straight line. If d_{left}\gt d_{right} he turned to the right and if d_{left}\lt d_{right} he turned left. We can also use d_{left} and d_{right} to calculate Colin’s exact translation and rotation.

To simplify things a bit we’ll assume Colin’s wheel speeds are constant, which adds a negligible amount of error as long as we keep \Delta t small. This assumption means that Colin is always travelling along a circular arc. The length of this arc, d_{center}, is given by the average of d_{left} and d_{right}:

d_{center}=\frac{d_{left}+d_{right}}{2}

We’ll say that Colin’s rotation in radians over \Delta t is \phi, with counterclockwise (left) turns counted as positive. Also, let r_{left} be the distance between the center of Colin’s arc of travel and his left wheel and r_{right} be the same distance for the right wheel. This means that d_{left}=\phi r_{left} and d_{right}=\phi r_{right}. Also, r_{right}=r_{left}+d_{wheels}, where d_{wheels} is the distance between Colin’s wheels, since during a left turn the right wheel is farther from the center of the arc. Subtracting the first equation from the second and dividing by d_{wheels} gives the following:

\phi=\frac{d_{right}-d_{left}}{d_{wheels}}
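As a quick numerical check of these two formulas (the numbers here are made up for illustration): suppose the left wheel travels 104 mm, the right wheel travels 96 mm, and the wheels are 80 mm apart.

```cpp
#include <cassert>
#include <cmath>

// Arc length traveled by the robot's center: the average of the
// two wheel distances.
double arcLength(double dLeft, double dRight) {
    return (dLeft + dRight) / 2.0;
}

// Rotation over the interval, in radians; positive is a left turn.
double rotation(double dLeft, double dRight, double dWheels) {
    return (dRight - dLeft) / dWheels;
}
```

Plugging in d_{left} = 104, d_{right} = 96, and d_{wheels} = 80 gives d_{center} = 100 mm and \phi = -0.1 rad, i.e. a 0.1-radian turn to the right, as we’d expect since the left wheel traveled farther.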

We can also calculate Colin’s change in his x and y coordinates via the following equations:

x'=x+d_{center}\cos(\theta)

y'=y+d_{center}\sin(\theta)

Where x' and y' are the new x and y position, respectively. It’s important to note that the above equations are simplified. They assume that Colin’s motion happens in two discrete phases: he rotates in place and then translates along a straight line. This is clearly not true, but as long as \phi is small, the error introduced is negligible. This means that, as with our prior simplification, we need to keep \Delta t small to make this work. I’m not going to go into all the details here, but if you’re interested you can find the full derivation in the MIT odometry tutorial.

So, now that we have worked out the mathematical underpinnings for odometry, we can translate this into code!


Odometry Code

The magic happens in my new DifferentialDrive library. We’ll just go over the odometry portion today, but DifferentialDrive allows the user to control an arbitrary differential drive robot by specifying the robot’s translational and angular velocities and, optionally, the distance the robot should travel. I’ll explain all of that in a later post and include some implementation examples as well!

void DifferentialDrive::updatePosition()
{
   // get the angular distance traveled by each wheel since the last update
   double leftDegrees = _leftWheel->getDistance();
   double rightDegrees = _rightWheel->getDistance();

   // convert the angular distances to linear distances
   double dLeft = leftDegrees / _degreesPerMillimeter;
   double dRight = rightDegrees / _degreesPerMillimeter;

   // calculate the length of the arc traveled by Colin
   double dCenter = (dLeft + dRight) / 2.0;

   // calculate Colin's change in angle
   double phi = (dRight - dLeft) / (double)_wheelDistance;
   // add the change in angle to the previous angle
   _theta += phi;
   // constrain _theta to the range 0 to 2 pi
   if (_theta > 2.0 * pi) _theta -= 2.0 * pi;
   if (_theta < 0.0) _theta += 2.0 * pi;

   // update Colin's x and y coordinates
   _xPosition += dCenter * cos(_theta);
   _yPosition += dCenter * sin(_theta);
}

The above function needs to be called every \Delta t, and to keep the error from our simplifications small, \Delta t needs to be small. In my testing I’ve found that doing a position update at the same frequency as the updates for the PID motor controller (every 50ms) results in good accuracy over short distances. However, this update involves a significant amount of extra computation, and doing it 20 times per second might require an excessive amount of processor time if you’re trying to do a lot of other computation at the same time. I’ve found that doing position updates half as often (every 100ms) results in very little loss of accuracy, so it’s entirely possible to balance accuracy against the resources your program has to spare.
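To get a feel for how \Delta t affects accuracy, here’s a small standalone simulation (ordinary desktop C++, not Arduino code, with made-up dimensions and speeds) that runs the rotate-then-translate update along a constant-speed arc and can be compared against the exact circular path:

```cpp
#include <cassert>
#include <cmath>

struct SimResult { double x, y, theta; };

// Simulate the odometry update for a robot driving a constant-speed arc.
// Speeds are in mm/s, distances in mm, times in seconds.
SimResult simulateOdometry(double vLeft, double vRight, double wheelDist,
                           double dt, double totalTime) {
    double x = 0.0, y = 0.0, theta = 0.0;
    int steps = (int)(totalTime / dt + 0.5);
    for (int i = 0; i < steps; ++i) {
        double dLeft = vLeft * dt;
        double dRight = vRight * dt;
        double dCenter = (dLeft + dRight) / 2.0;
        double phi = (dRight - dLeft) / wheelDist;
        theta += phi;                // rotate in place first...
        x += dCenter * cos(theta);   // ...then translate along the
        y += dCenter * sin(theta);   //    new heading
    }
    return SimResult{x, y, theta};
}
```

With vLeft = 90 mm/s, vRight = 110 mm/s, and wheelDist = 200 mm, the true path is a circle of radius 1000 mm (center speed 100 mm/s, turn rate 0.1 rad/s), so after 10 seconds the exact position is (1000 sin 1, 1000(1 − cos 1)). At dt = 0.05 s the simulated pose lands within a few millimeters of that, which matches the accuracy I’ve seen in practice over short distances.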


Further Work

First of all, we need to integrate the above update function into the larger class that controls Colin’s motion. I’ll demonstrate that in a later post and include some examples that show how to use the class in an Arduino sketch.

Also, odometry can only be used to calculate Colin’s position relative to his starting position. It cannot be used to determine his absolute position in a space unless his starting position is known.

The larger problem is that odometry is inherently inaccurate. Encoder ticks do not translate directly into distance traveled by the wheel because wheels slip, the wheels aren’t perfectly circular, the ground isn’t perfectly flat, encoder ticks might be missed, and the motor gearbox has backlash that isn’t accounted for in our model. This means that Colin’s position calculated from odometry will gradually diverge from his true position. We could use other methods that might be more accurate, such as optical flow and IMUs. However, any sensor we might use suffers from some inherent random error, known as noise, and this error will accumulate over time.

To compensate for this error we can calculate Colin’s probable position by incorporating data from another sensor. This is what I’ll be working on over the next several months. First I’ll develop a program to localize him to a pre-existing (or a priori) map, and then I’ll work on a program that allows him to build his map on the fly.

I should note that software for this purpose is already available as part of the Robot Operating System (ROS), but I’m not interested in pre-made solutions. My goal here is to develop these solutions myself so we can all learn the intimate details of their operation.