Intro to the EKF Step 4: SLAM With Unknown Correspondence

This tutorial is part of a series on Simultaneous Localization and Mapping (SLAM) using the extended Kalman filter. See the following for other tutorials in the series:

This tutorial builds on the previous tutorials on localization and SLAM. When doing SLAM with unknown measurement correspondence we no longer have any prior information about the robot’s environment. When we get a measurement we don’t know to which landmark it corresponds, so we have to use a maximum likelihood measurement association method similar to the one used in the tutorial on localization with unknown correspondence. There’s another, more difficult challenge though. Not only do we have no idea where the landmarks are in the environment, we don’t even know how many landmarks exist. This is the most difficult scenario for a SLAM system but, as you’ll see, it only takes a couple of tweaks to the SLAM algorithm presented in the previous tutorial to handle it.


The State And Covariance Matrices

As in the previous tutorial on SLAM with known correspondence, the state mean vector \mu and covariance matrix \Sigma need to contain entries for the robot’s pose as well as the landmark locations. The big difference here is that we don’t know how many landmarks we’ll be tracking. This means we need to add new landmarks to the state as we find them.
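
For concreteness, here is a minimal MATLAB sketch of what growing the state looks like. The variable names and the initial variance are placeholders of my own; the full program later in this post does the same thing, initializing the new landmark’s variance proportional to the squared range measurement.

% example: robot pose [x; y; theta] plus one existing landmark
stateMean = [0; 0; 0; 2; 1];
stateCov  = 0.001 * eye(5);

newLandmark = [3; -1]; % world-frame position of a newly observed landmark
initVar     = 10;      % assumed initial variance for the new landmark

% append the landmark to the mean and grow the covariance by a 2x2 block
stateCov  = [stateCov, zeros(size(stateCov,1), 2);
             zeros(2, size(stateCov,2)), initVar * eye(2)];
stateMean = [stateMean; newLandmark];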

You may notice this presents a problem for the complexity of the algorithm. Every time a new landmark is added to the state the state covariance matrix grows, adding complexity to every calculation involving the covariance matrix, of which there are several. This is why the complexity of EKF SLAM scales quadratically with the number of landmarks in the state. This isn’t a problem if the robot always stays in the same area and if it can accurately associate measurements to existing landmarks. In that case the number of landmarks is roughly constant, so the complexity of the algorithm stays roughly constant. However, if the robot explores new areas the number of landmarks being tracked, and thus the complexity of the algorithm, grows with time. In that case it can become computationally intractable very quickly.


Limiting Complexity

There are a number of ways to limit the growth in EKF SLAM’s complexity. For instance, one can remove landmarks from the state if they haven’t been observed for a certain amount of time. Or, for a more sophisticated approach, one could keep two classes of landmarks: active and inactive. A landmark that hasn’t been observed for a certain amount of time is moved from the state matrix to the inactive set. In this formulation only the active landmarks are used in computing the state estimate, but if an inactive landmark is observed it can be added back to the state. This would prevent the complexity of the algorithm from growing out of control but also allow the robot to recognize previously visited locations, limiting drift in the state estimate. This method requires an extremely reliable measurement association algorithm, however. Without bulletproof measurement associations it is more likely to create new landmarks than associate measurements to landmarks in the inactive set. For this one usually needs more information from a measurement than just range and bearing. For instance, it is helpful to have information about the landmark’s shape or appearance in addition to its location relative to the robot.
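
The code in this tutorial doesn’t implement the active/inactive scheme, but a rough sketch of the bookkeeping might look like the following. Everything here, the timeout, the lastSeen array, and the inactive cell arrays, is hypothetical and only meant to illustrate the idea.

% assumes: t is the current time, lastSeen(j) holds the last time landmark j was
% observed, and inactiveMeans/inactiveCovs are cell arrays of retired landmarks
staleTimeout = 30.0; % seconds without an observation before a landmark is retired

for j = n_landmarks:-1:1 % iterate backwards so removals don't shift lower indices
    if t - lastSeen(j) > staleTimeout
        idx = 2*j+2 : 2*j+3;                   % landmark j's slots in the state
        inactiveMeans{end+1} = stateMean(idx); % stash the estimate in the inactive set
        inactiveCovs{end+1}  = stateCov(idx, idx);
        stateMean(idx)   = [];                 % shrink the state mean...
        stateCov(idx, :) = [];                 % ...and the covariance
        stateCov(:, idx) = [];
        lastSeen(j)      = [];
        n_landmarks      = n_landmarks - 1;
    end
end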


Measurement Association

Our measurement association algorithm for SLAM is very similar to the one we used in localization with unknown correspondence. There’s an important additional requirement though. When doing localization we just need to decide which landmark is most likely to be associated with the current measurement. With SLAM we also need to decide whether it’s more likely that a given measurement is associated with an existing landmark or with a new landmark.

As in the case for localization, for each new measurement we’ll loop over every known landmark and calculate the likelihood the measurement associates to that landmark. We then take the landmark most likely to be associated to the current measurement. If its likelihood of association is above a threshold value, we associate the measurement to that landmark. If its likelihood is below the threshold value we instead use that measurement to create a new landmark. In the algorithm below we calculate the Mahalanobis distance, \pi_k, between each landmark and the measurement, and we instead search for the landmark association with the minimum Mahalanobis distance. This is just another way of doing maximum likelihood association. Note also we’re using the same F_{x,k} matrix as in the previous tutorial.

    \begin{align*} \text{for all}&\text{ measurements } z_t^i=[r_t^i\ \phi_t^i]^T \text{ do} \\ &\begin{bmatrix} \bar{\mu}_{N_t+1,x} \\ \bar{\mu}_{N_t+1,y} \end{bmatrix} = \begin{bmatrix} \bar{\mu}_{t,x} \\ \bar{\mu}_{t,y} \end{bmatrix} + r_t^i \begin{bmatrix} \cos(\phi_t^i+\bar{\mu}_{t,\theta}) \\ \sin(\phi_t^i+\bar{\mu}_{t,\theta}) \end{bmatrix} \\ \text{for}&\text{ all landmarks } k=1 \text{ to } N_t+1 \text{ do} \\ &\delta_k = \begin{bmatrix} \delta_{k,x} \\ \delta_{k,y} \end{bmatrix} = \begin{bmatrix} m_{k,x} - \bar{\mu}_{t,x} \\ m_{k,y} - \bar{\mu}_{t,y} \end{bmatrix} \\ &q = \delta_k^T\delta_k \\ &r = \sqrt{q} \\ &\hat{z}_t^k = \begin{bmatrix} \sqrt{q} \\ \text{atan2}(\delta_{k,y},\ \delta_{k,x})-\bar{\mu}_{t,\theta} \end{bmatrix} \\ &F_{x,k} = \begin{bmatrix} 1&0&0&0&\dots&0&0&\dots&0\\0&1&0&0&\dots&0&0&\dots&0\\0&0&1&0&\dots&0&0&\dots&0\\0&0&0&0&\dots&1&0&\dots&0\\0&0&0&0&\dots&0&1&\dots&0 \end{bmatrix} \\ &h_t^k = \begin{bmatrix} -\delta_{k,x}/r & -\delta_{k,y}/r & 0 & \delta_{k,x}/r & \delta_{k,y}/r \\ \delta_{k,y}/q & -\delta_{k,x}/q & -1 & -\delta_{k,y}/q & \delta_{k,x}/q \end{bmatrix} \\ &H_t^k=h_t^kF_{x,k}\\ &\Psi_k=H_t^k\bar{\Sigma}_t[H_t^k]^T + Q_t \\ &\pi_k=(z_t^i-\hat{z}_t^k)^T\Psi_k^{-1}(z_t^i-\hat{z}_t^k) \\ \text{end}&\text{for} \\ \pi_{N_t+1}&=\alpha \\ j(i) &= \text{argmin}_k\ \pi_k\\ N_t &= \text{max}\{N_t,j(i)\} \\ \text{endfor}& \end{align*}

To summarize: for each measurement we first create a candidate landmark at the measured location, appended to the end of the state mean vector \mu. We then calculate the Mahalanobis distance between the measurement and every landmark, and set the distance for the candidate landmark to a threshold value \alpha. Finally, we associate the measurement to the landmark with the lowest distance. If no existing landmark has a distance lower than \alpha, the candidate becomes a new landmark in the state.
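
In code, the decision at the end of the algorithm above takes only a few lines. This is a minimal sketch using the names from the algorithm (pi_k, alpha, n_landmarks); the full program below folds the same logic into its measurement loop, with alpha = 0.84.

pi_k(n_landmarks + 1) = alpha;     % candidate new landmark gets the threshold distance
[~, j] = min(pi_k);                % associate with the closest landmark (Mahalanobis distance)
if j > n_landmarks
    n_landmarks = n_landmarks + 1; % nothing beat the threshold, so create a new landmark
end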


The Complete Algorithm

The code for the complete algorithm is shown below. Note that the algorithm is very sensitive to incorrect measurement association. That’s why, when doing measurement association, we keep track of the two best associations for each measurement. We then check that the best association is significantly better than the second best; if it isn’t, we consider the measurement ambiguous and discard it. Even with this precaution the algorithm still occasionally makes bad associations, so its performance is far from perfect. There are several ways to deal with this, such as multi-hypothesis tracking. I may go into detail on that method in a later post.

addpath ../common;

deltaT = .02;
alphas = [.11 .01 .18 .08 0 0]; % motion model noise parameters

% measurement model noise parameters
Q_t = [11.7   0;
       0    0.18];
   
n_robots = 1;
robot_num = 1;
[Barcodes, Landmark_Groundtruth, Robots] = loadMRCLAMdataSet(n_robots);
[Robots, timesteps] = sampleMRCLAMdataSet(Robots, deltaT);

% add pose estimate matrix to Robots
Robots{robot_num}.Est = zeros(size(Robots{robot_num}.G,1), 4);

start = 600;
t = Robots{robot_num}.G(start, 1);

% initialize state mean
stateMean = [Robots{robot_num}.G(start,2);
            Robots{robot_num}.G(start,3);
            Robots{robot_num}.G(start,4)];
        
stateCov = zeros(3, 3);
stateCov(1:3,1:3) = 0.001 * eye(3);

measurementIndex = 1;
meas_count = 0;
n_landmarks = 0;

% set up map between barcodes and landmark IDs
codeDict = containers.Map(Barcodes(:,2),Barcodes(:,1));

% loop through all odometry and measurement samples
% updating the robot's pose estimate with each step
% reference table 10.1 in Probabilistic Robotics
for i = start:size(Robots{robot_num}.G, 1)
    
    % update time
    t = Robots{robot_num}.G(i, 1);
    
    % update movement vector
    u_t = [Robots{robot_num}.O(i, 2); Robots{robot_num}.O(i, 3)];
    % update robot bearing
    theta = stateMean(3, 1);

    rot = deltaT * u_t(2);
    halfRot = rot / 2;
    trans = u_t(1) * deltaT;
    
    % calculate pose update from odometry
    poseUpdate = [trans * cos(theta + halfRot);
                  trans * sin(theta + halfRot);
                  rot];
              
    % calculate updated state mean
    F_x = [eye(3) zeros(3, size(stateMean, 1) - 3)];
    stateMeanBar = stateMean + F_x' * poseUpdate;
    stateMeanBar(3) = conBear(stateMeanBar(3));
    
    % calculate movement jacobian
    g_t = [0 0 trans * -sin(theta + halfRot);
           0 0 trans * cos(theta + halfRot);
           0 0 0];
    G_t = eye(size(stateMean, 1)) + F_x' * g_t * F_x;
    
    % calculate motion covariance in control space
    M_t = [(alphas(1) * abs(u_t(1)) + alphas(2) * abs(u_t(2)))^2 0;
           0 (alphas(3) * abs(u_t(1)) + alphas(4) * abs(u_t(2)))^2];
       
    % calculate Jacobian to transform motion covariance to state space
    V_t = [cos(theta + halfRot) -0.5 * sin(theta + halfRot);
           sin(theta + halfRot) 0.5 * cos(theta + halfRot);
           0 1];
    
    % update state covariance
    R_t = V_t * M_t * V_t';
    stateCovBar = (G_t * stateCov * G_t') + (F_x' * R_t * F_x);
    
    % get measurements
    [z, measurementIndex] = getObservations(Robots, robot_num, t, measurementIndex, codeDict);
    
    % if features are observed
    % loop over all features and compute Kalman gain
    if z(3,1) > 0
        for k = 1:size(z, 2) % loop over every measurement

            predZ   = zeros(2, n_landmarks+1);
            predPsi = zeros(n_landmarks+1, 2, 2);
            predH   = zeros(n_landmarks+1, 2, 2*(n_landmarks+1)+3);
            pi_k    = zeros(n_landmarks+1, 1);
            
            % create temporary new landmark at observed position
            temp_mark = [stateMeanBar(1) + z(1,k) * cos(z(2,k) + stateMeanBar(3));
                         stateMeanBar(2) + z(1,k) * sin(z(2,k) + stateMeanBar(3))]; 
            stateMeanTemp = [stateMeanBar;
                             temp_mark];
            stateCovTemp = [stateCovBar zeros(size(stateCovBar,1),2);
                            zeros(2,size(stateCovBar,2) + 2)];
            % initialize covariance for new landmark proportional
            % to range measurement squared
            for ii = (size(stateCovTemp,1)-1):(size(stateCovTemp,1))
                stateCovTemp(ii,ii) = (z(1,k)^2)/130;
            end
            
            % loop over all landmarks  (including the temp landmark) and 
            % compute likelihood of correspondence with the new landmark
            max_j = 0;
            min_pi = 10*ones(2,1);
            for j = 1:n_landmarks+1

                delta = [stateMeanTemp(2*j+2) - stateMeanTemp(1);
                         stateMeanTemp(2*j+3) - stateMeanTemp(2)];
                       
                q = delta' * delta;
                r = sqrt(q);
                
                predZ(:,j) = [r;
                              conBear(atan2(delta(2), delta(1)) - stateMeanTemp(3))];
                F_xj = [eye(3)     zeros(3,2*j-2) zeros(3,2) zeros(3,2*(n_landmarks+1) - 2*j);
                        zeros(2,3)   zeros(2,2*j-2) eye(2)   zeros(2,2*(n_landmarks+1) - 2*j)];
                
                
                h_t = [-delta(1)/r -delta(2)/r  0   delta(1)/r delta(2)/r;
                       delta(2)/q    -delta(1)/q    -1  -delta(2)/q  delta(1)/q];
                predH(j,:,:) = h_t * F_xj;
                predPsi(j,:,:) = squeeze(predH(j,:,:)) * stateCovTemp * ...
                                 squeeze(predH(j,:,:))' + Q_t;
                if j <= n_landmarks
                    pi_k(j) = ((z(1:2,k)-predZ(:,j))'...
                              /(squeeze(predPsi(j,:,:))))...
                              *(z(1:2,k)-predZ(:,j));
                else
                    pi_k(j) = 0.84; % alpha: min mahalanobis distance to
                                    %        add landmark to map
                end
                % track best two associations
                if pi_k(j) < min_pi(1)
                    min_pi(2) = min_pi(1);
                    max_j = j;
                    min_pi(1) = pi_k(j);
                elseif pi_k(j) < min_pi(2)
                    min_pi(2) = pi_k(j);
                end
            end
            
            H = squeeze(predH(max_j,:,:));
            
            % best association must be significantly better than
            % the second best, otherwise the measurement is 
            % thrown out
            if (min_pi(2) / min_pi(1) > 1.6)
                meas_count = meas_count + 1;
                % if a landmark is added to the map expand the 
                % state and covariance matrices
                if max_j > n_landmarks
                    stateMeanBar = stateMeanTemp;
                    stateCovBar = stateCovTemp;
                    n_landmarks = n_landmarks + 1;
                % if measurement is associated to an existing landmark
                % truncate h matrix to prevent dim mismatch
                else
                    H = H(:,1:n_landmarks*2 + 3);

                    K = stateCovBar * H' / (squeeze(predPsi(max_j,:,:))); 

                    stateMeanBar = stateMeanBar + K * ...
                                    (z(1:2,k) - predZ(:,max_j));
                    stateMeanBar(3) = conBear(stateMeanBar(3));
                    stateCovBar = (eye(size(stateCovBar)) - (K * H)) * stateCovBar;
                end
            end
        end
    end
    
    % update state mean and covariance
    stateMean = stateMeanBar;
    stateCov = stateCovBar;
    
    % add new pose mean to estimated poses
    Robots{robot_num}.Est(i,:) = [t stateMean(1) stateMean(2) stateMean(3)];
end

p_loss = path_loss(Robots, robot_num, start);
disp(p_loss);
Robots{robot_num}.Map = stateMean;

%animateMRCLAMdataSet(Robots, Landmark_Groundtruth, timesteps, deltaT);

Intro to the EKF Step 3: SLAM With Known Correspondence

This tutorial is part of a series on Simultaneous Localization and Mapping (SLAM) using the extended Kalman filter. See the following for other tutorials in the series:

This tutorial builds on the previous two tutorials on localization. When doing SLAM with known measurement correspondence the number of landmarks is still fixed. Also, when we get a range and bearing measurement, we know to which landmark that measurement corresponds. This means we don’t need to use the maximum likelihood measurement association method from the previous tutorial. The big difference here is that we don’t have any prior information on where the landmarks are in the world. So instead of just using measurements to refine our pose estimate, we’ll also use them to estimate the landmark locations. Don’t worry though, the process is remarkably similar to simple localization. There are just a couple of significant changes to highlight.


The State and Covariance Matrices

Since we don’t know where the landmarks are, we need to simultaneously estimate their locations as well as our robot’s pose. This means the landmark locations are now part of the state of the system we’re trying to estimate, so the coordinates of the landmarks are included in the state vector. When doing simple localization the state vector had dimension 3\times1. Now when doing SLAM the state vector has dimension 2n+3\times1 where n is the number of landmarks in our problem. In this formulation the coordinates of landmark j (x_j and y_j) are found at indices 2j+2 and 2j+3 of the state vector.
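
In MATLAB’s one-based indexing this layout looks like the following (a small illustration, not part of the program below):

n_landmarks = 15;                       % example map size
stateMean   = zeros(2*n_landmarks + 3, 1);

robotPose  = stateMean(1:3);            % [x; y; theta] of the robot
j          = 4;                         % pick a landmark
landmark_j = stateMean(2*j+2 : 2*j+3);  % [x_j; y_j] of landmark j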

Remember, our estimate of the system’s state is quantified by the state mean vector \mu and covariance \Sigma. So in order to estimate a larger \mu vector we need to keep track of a larger \Sigma matrix as well. Specifically, we’ll expand the covariance matrix from 3\times3 to 2n+3\times2n+3 where n is again the number of landmarks.


The Update Step

To accommodate the larger size of our state vector we’ll need to make some changes to the EKF algorithm as well. Let’s start with the update step, where we update the pose of the robot based on the control inputs. We still need to update the elements of the state vector and covariance matrix that correspond to the robot’s pose. However, the robot’s control inputs clearly don’t affect the positions of the landmarks. So we need a way to update the robot’s pose without affecting the landmark estimates. The answer here is a special matrix we’ll call F_x. F_x is simply a 3\times2n+3 matrix whose leftmost block is a 3\times3 identity matrix and whose other entries are zeros:

    \begin{equation*} F_x=\begin{bmatrix} 1 & 0 & 0 & 0 & \dots & 0 \\ 0 & 1 & 0 & 0 & \dots & 0 \\ 0 & 0 & 1 & 0 & \dots & 0 \end{bmatrix} \end{equation*}

F_x is used to map updates to the robot’s pose mean and covariance to the higher-dimensional space of the full state, which includes the landmark positions. So the update step now looks like this:

    \begin{align*} \bar{\mu}_k &= \mu_{k-1} + F_x^Tu_k' \\ G_k &= I_{2n+3}+F_x^Tg_kF_x \\ \bar{\Sigma}_k &= G_k \Sigma_{k-1} G_k^T + F_x^T R F_x \end{align*}

where u_k' is the motion update calculated from the control input at timestep k (not the raw control input), g_k is the motion Jacobian evaluated at timestep k, I_{2n+3} is a 2n+3\times2n+3 identity matrix, and R is the motion noise covariance mapped into state space (computed in the code below as V_t M_t V_t^T).
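
In MATLAB this prediction step is only a few lines. The sketch below uses the same variable names as the full program later in this post, and assumes poseUpdate (u_k'), the 3x3 pose Jacobian g_t, and the state-space motion noise R_t have already been computed:

n   = n_landmarks;                % number of landmarks in the state
F_x = [eye(3) zeros(3, 2*n)];     % maps the 3D pose block into the full state

stateMeanBar = stateMean + F_x' * poseUpdate;            % only the pose entries change
G_t          = eye(2*n + 3) + F_x' * g_t * F_x;          % full-state motion Jacobian
stateCovBar  = G_t * stateCov * G_t' + F_x' * R_t * F_x; % noise only enters the pose block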


The Measurement Step

We need to make a similar change in the measurement step. A given measurement corresponds to only one landmark, so that measurement shouldn’t affect the estimates of the other landmarks. However, each measurement also affects the robot’s pose estimate. So we’ll need a way of mapping our corrections from the 2D landmark and 3D pose spaces to the full state space while ensuring only the relevant quantities in the mean vector \mu and covariance matrix \Sigma are affected. For this we need another F matrix. This time it will be F_{xj}, with the subscript xj signifying that it relates the pose x to landmark j. F_{xj} is a 5\times2n+3 matrix. The 3\times3 block in the upper left is an identity matrix, corresponding to the pose terms in the state and covariance matrix. F_{xj} also has a 2\times2 identity block occupying the lower two rows and starting at column 2j+2, where j is the index of the landmark corresponding to the current measurement. All other entries in F_{xj} are zeros:

    \begin{equation*} F_{xj} = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 & 0 & \dots  \\ 0 & 1 & 0 & \dots & 0 & 0 & \dots  \\ 0 & 0 & 1 & \dots & 0 & 0 & \dots  \\ 0 & 0 & 0 & \dots & 1 & 0 & \dots  \\ 0 & 0 & 0 & \dots & 0 & 1 & \dots  \end{bmatrix} \end{equation*}

This cool matrix allows us to incorporate one measurement at a time, only updating the terms in the state and covariance matrices that correspond to the robot’s pose and the relevant landmark. We do this as follows:

    \begin{align*} H_k &= h_k F_{xj} \\ K &= \bar{\Sigma}_k H_k^T(H_k\bar{\Sigma}_k H_k^T+Q)^{-1}\\ \mu_k &= \bar{\mu}_k + K(z-\hat{z}) \\ \Sigma_k &= (I_{2n+3} - KH_k)\bar{\Sigma}_k \end{align*}

where h_k is the measurement Jacobian. With only these augmentations we can change the localization algorithm with known correspondence to a SLAM algorithm with known correspondence. Not too complicated, right?
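
Here is a minimal sketch of that measurement update for landmark j in MATLAB, assuming the 2x5 measurement Jacobian h_t, the predicted measurement zHat, and the actual measurement z are already computed as 2x1 vectors (the program below builds F_xj the same way, just split into two helper blocks):

n    = n_landmarks;
F_xj = [eye(3)     zeros(3, 2*j-2) zeros(3,2) zeros(3, 2*n - 2*j);
        zeros(2,3) zeros(2, 2*j-2) eye(2)     zeros(2, 2*n - 2*j)];

H_t = h_t * F_xj;                                            % lift the 2x5 Jacobian to full state
K   = stateCovBar * H_t' / (H_t * stateCovBar * H_t' + Q_t); % Kalman gain
stateMeanBar = stateMeanBar + K * (z - zHat);                % corrects the pose and landmark j
stateCovBar  = (eye(2*n + 3) - K * H_t) * stateCovBar;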


The Complete Algorithm

The code for the complete algorithm is shown below. It is very similar to the code for EKF localization with known correlation, except that the state vector and covariance matrix now include the landmarks and the update and measurement steps are augmented as discussed above. Note also that after the states are estimated, the program calculates the sum of squared errors for both the robot positions and the landmark positions. This gives us a rough way to judge the quality of our filter tuning. I’ll go into a more principled method for Kalman filter tuning in a later post, but this works well enough for demonstration purposes.

addpath ../common;

deltaT = .02;
alphas = [.1 .01 .18 .08 0 0]; % motion model noise parameters

% measurement model noise parameters
Q_t = [11.8 0;
       0    .18];
   
n_robots = 1;
robot_num = 1;
n_landmarks = 15;
[Barcodes, Landmark_Groundtruth, Robots] = loadMRCLAMdataSet(n_robots);
[Robots, timesteps] = sampleMRCLAMdataSet(Robots, deltaT);

% add pose estimate matrix to Robots
Robots{robot_num}.Est = zeros(size(Robots{robot_num}.G,1), 4);

start = 600;
t = Robots{robot_num}.G(start, 1);

% initialize state mean with the groundtruth
% pose at timestep zero
stateMean = [Robots{robot_num}.G(start,2);
            Robots{robot_num}.G(start,3);
            Robots{robot_num}.G(start,4)];
        
% reshapes stateMean to 1x(2*n_landmarks+3)
% and remaps motion updates to new state vector
F_x = [eye(3) zeros(3, 2 * n_landmarks)];

stateMean = F_x' * stateMean;

% initialize diagonal of pose covariance with small, nonzero values
% since we have a good estimate of the starting pose
stateCov = zeros(2 * n_landmarks + 3, 2 * n_landmarks + 3);
stateCov(1:3,1:3) = 0.001 * eye(3);

% initialize landmark variances to large values since we have
% no prior information on the locations of the landmarks
for i = 4:n_landmarks * 2 + 3
    stateCov(i,i) = 10;
end
   
measurementIndex = 1;

% set up map between barcodes and landmark IDs
% barcode values are NOT the same as landmark IDs
codeDict = containers.Map(Barcodes(:,2),Barcodes(:,1));

% loop through all odometry and measurement samples
% updating the robot's pose estimate with each step
% reference table 10.1 in Probabilistic Robotics
for i = start:size(Robots{robot_num}.G, 1)
    
    % update time
    t = Robots{robot_num}.G(i, 1);
    
    % get control inputs at current timestep
    u_t = Robots{robot_num}.O(i, 2:end);
    % update robot bearing to last timestep's estimate
    theta = stateMean(3, 1);
    % calculate rotation and translation over last timestep
    rot = deltaT * u_t(2);
    halfRot = rot / 2;
    trans = u_t(1) * deltaT;
    
    % calculate pose update in world frame
    poseUpdate = [trans * cos(theta + halfRot);
                  trans * sin(theta + halfRot);
                  rot];
              
    % updated state mean and constrain bearing
    % to +/- pi
    stateMeanBar = stateMean + F_x' * poseUpdate;
    stateMeanBar(3) = conBear(stateMeanBar(3));
    
    % calculate movement jacobian
    g_t = [0 0 trans * -sin(theta + halfRot);
           0 0 trans * cos(theta + halfRot);
           0 0 0];
    G_t = eye(2 * n_landmarks + 3) + F_x' * g_t * F_x;
     
    % calculate motion covariance in control space
    M_t = [(alphas(1) * abs(u_t(1)) + alphas(2) * abs(u_t(2)))^2 0;
           0 (alphas(3) * abs(u_t(1)) + alphas(4) * abs(u_t(2)))^2];
       
    % calculate Jacobian to transform motion covariance to state space
    V_t = [cos(theta + halfRot) -0.5 * sin(theta + halfRot);
           sin(theta + halfRot) 0.5 * cos(theta + halfRot);
           0 1];
    
    % update state covariance
    R_t = V_t * M_t * V_t';
    stateCovBar = (G_t * stateCov * G_t') + (F_x' * R_t * F_x);
    
    % get measurements for current timestep
    [z, measurementIndex] = getObservations(Robots, robot_num, t, measurementIndex, codeDict);

    % if measurements are available
    if z(3,1) > 1
        S = zeros(size(z,2),3,3);
        zHat = zeros(2, size(z,2));
        % loop over every measurement and use to correct state estimate
        for k = 1:size(z, 2) 
            j = z(3,k);
            
            % if the landmark has never been seen before
            % add it to the state vector
            if stateMeanBar(j*2 + 2) == 0
                landmark_pos = [z(1,k) * cos(z(2,k) + stateMeanBar(3));
                                z(1,k) * sin(z(2,k) + stateMeanBar(3))];
                stateMeanBar(2*j+2:2*j+3) = [stateMeanBar(1) + landmark_pos(1);
                                             stateMeanBar(2) + landmark_pos(2)];
                continue;
            end

            % compute predicted range and bearing to the landmark
            delta = [stateMeanBar(2*j+2) - stateMeanBar(1);
                     stateMeanBar(2*j+3) - stateMeanBar(2)];
            q = delta' * delta;
            r = sqrt(q); % predicted range to landmark
            
            % predicted bearing to landmark
            pred_bear = conBear(atan2(delta(2), delta(1)) - stateMeanBar(3));
            % create predicted measurement
            zHat(:,k) = [r;
                         pred_bear];
            % calculate measurement Jacobian    
            h_t = [-delta(1)/r -delta(2)/r  0   delta(1)/r delta(2)/r;
                   delta(2)/q    -delta(1)/q    -1  -delta(2)/q  delta(1)/q];
                     
            F_1 = [eye(3); zeros(2,3)];
            F_2 = [zeros(3,2); eye(2)];
            F_xj = [F_1 zeros(5,2*j-2) F_2 zeros(5,2*n_landmarks - 2*j)];
            
            H_t = h_t * F_xj;
            
            % compute Kalman gain
            K = stateCovBar * H_t' / (H_t * stateCovBar * H_t' + Q_t);
            
            % incorporate new measurement into state mean and covariance
            stateMeanBar = stateMeanBar + K * (z(1:2,k) - zHat(:,k));
            stateCovBar = (eye(2*n_landmarks+3) - (K * H_t)) * stateCovBar;
        end
    end
    
    % update state mean and covariance
    stateMean = stateMeanBar;
    stateMean(3) = conBear(stateMean(3));
    stateCov = stateCovBar;
    
    % add new pose mean to estimated poses
    Robots{robot_num}.Est(i,:) = [t stateMean(1) stateMean(2) stateMean(3)];
end

% calculate land_loss: sum of squares of error in final landmark position
land_loss = 0;
for i = 1:n_landmarks
    x_diff = stateMean(2 * i + 2) - Landmark_Groundtruth(i, 2);
    y_diff = stateMean(2 * i + 3) - Landmark_Groundtruth(i, 3);
    sq_err = x_diff^2 + y_diff^2;
    land_loss = land_loss + sq_err;
end
disp(land_loss);

p_loss = path_loss(Robots, robot_num, start);
disp(p_loss);
animateMRCLAMdataSet(Robots, Landmark_Groundtruth, timesteps, deltaT);

Intro to the EKF Step 2: Localization With Unknown Correspondence

This tutorial is part of a series on Simultaneous Localization and Mapping (SLAM) using the extended Kalman filter. See the following for other tutorials in the series:

This tutorial will build incrementally on the previous tutorial: Localization With Known Correlation. We’re still just doing localization, which means the robot starts with a map of landmarks that does not change over the course of its run. The difference is this time when the robot gets a range and bearing measurement, it will not know to which landmark that measurement corresponds.

In this tutorial I will explain the process for estimating the correspondence between measurements and landmarks and present the correspondence algorithm on its own. Then I will show how the correspondence estimation algorithm can be integrated with the EKF localization algorithm introduced in the prior tutorial. As before, we’ll be using the UTIAS dataset for demonstration purposes.


Measurement Correspondence Intro

When our robot gets a measurement to a nearby landmark it can use that information to refine its estimate of its pose. In the last tutorial, each measurement received by our robot was in the form of z_t^i = [r_t^i\ \phi_t^i\ s_t^i]^T where z_t^i is the measurement to landmark i at time t. This vector contains the range r_t^i, bearing \phi_t^i, and, crucially, identity of the landmark being measured s_t^i. This could be interpreted as “landmark s is r meters away, at a bearing of \phi radians.”

In our current problem, the measurement vector will be z_t^i=[r_t^i\ \phi_t^i]^T. So we are no longer directly telling the robot to which landmark each measurement corresponds. Without knowledge of this correspondence measurements are basically useless to our robot. Sure, it knows the location of a landmark relative to its own location. But without knowing where that landmark is in the world it can’t use that information to refine its estimate of its pose.

So our robot needs a way to estimate the correspondence between its measurements and the landmarks in its map. One way it can do this is through maximum likelihood data association.


Maximum Likelihood Data Association

In maximum likelihood data association, as presented in section 7.5.2 of Probabilistic Robotics, the robot computes the probability that its current measurement corresponds to every landmark in its map. It then chooses the landmark with the highest probability of correspondence and assumes it is the correct association. This can be written as

\hat{c}_t = \text{argmax}_c\ p(z_t|c_{1:t},\ m,\ z_{1:t-1},\ u_{1:t})

In other words, given the robot’s past measurement correspondences c_{1:t}, map m, past measurements z_{1:t-1}, and past control inputs u_{1:t}, which landmark has the highest probability of producing measurement z_t?

In practice we find this by calculating the expected measurement and measurement covariance for every landmark in the map. We can then use these to calculate the measurement probability for every landmark and then select the landmark with the highest probability. The algorithm goes as follows:

    \begin{align*} \text{for all}&\text{ measurements } z_t^i=[r_t^i\ \phi_t^i]^T \text{ do} \\ \text{for}&\text{ all landmarks } k \text{ in the map } m \text{ do} \\ &q = (m_{k,x} - \bar{\mu}_{t,x})^2 + (m_{k,y} - \bar{\mu}_{t,y})^2 \\ &\hat{z}_t^k = \begin{bmatrix} \sqrt{q} \\ \text{atan2}(m_{k,y}-\bar{\mu}_{t,y},\ m_{k,x}-\bar{\mu}_{t,x})-\bar{\mu}_{t,\theta} \end{bmatrix} \\ &H_t^k = \begin{bmatrix} -\frac{m_{k,x}-\bar{\mu}_{t,x}}{\sqrt{q}} & -\frac{m_{k,y}-\bar{\mu}_{t,y}}{\sqrt{q}} & 0 \\ \frac{m_{k,y}-\bar{\mu}_{t,y}}{q} & -\frac{m_{k,x}-\bar{\mu}_{t,x}}{q} & -1\end{bmatrix} \\ &S_t^k = H_t^k \bar{\Sigma}_t [H_t^k]^T + Q_t \\ \text{end}&\text{for} \\ j(i) &= \text{argmax}_k\ \text{det}(2\pi S_t^k)^{-\frac{1}{2}} \text{exp}\{-\frac{1}{2}(z_t^i-\hat{z}_t^k)^T[S_t^k]^{-1}(z_t^i-\hat{z}_t^k)\}\\ \text{endfor}& \end{align*}

The end result j(i) is the estimated correspondence j for each measurement i. It should be noted that the probability estimates are conditioned on past correspondences. However, this method assumes that the past estimates are correct. This means that the correspondence estimates are very sensitive to incorrect associations; incorrect estimates are likely to remain incorrect on subsequent measurements. This makes the maximum likelihood method brittle and likely to diverge when measurements are incorrectly associated.

There are alternative methods that are more robust to incorrect associations. Multi-Hypothesis Tracking is one of these, but the maximum likelihood method is the simplest. For an introduction I think the maximum likelihood method is sufficient.


The Complete Algorithm

The code for the complete algorithm is shown below. It is the same as the code for EKF localization with known correlation, with the addition of the maximum likelihood algorithm for measurement correspondence (the inner loop over all landmarks in the code below).

addpath ../common;

deltaT = .02;
alphas = [.2 .03 .09 .08 0 0]; % motion model noise parameters

% measurement model noise parameters
sigma_range = .43;
sigma_bearing = .6;

Q_t = [sigma_range^2 0;
       0 sigma_bearing^2];

n_robots = 1;
robot_num = 1;
n_landmarks = 15;
[Barcodes, Landmark_Groundtruth, Robots] = loadMRCLAMdataSet(n_robots);
[Robots, timesteps] = sampleMRCLAMdataSet(Robots, deltaT);

% add pose estimate matrix to Robots
Robots{robot_num}.Est = zeros(size(Robots{robot_num}.G,1), 4);

start = 1; 
t = Robots{robot_num}.G(start, 1); 
% need mean and covariance for the initial pose estimate
poseMean = [Robots{robot_num}.G(start,2);
            Robots{robot_num}.G(start,3);
            Robots{robot_num}.G(start,4)];
poseCov = 0.01*eye(3);
       
measurementIndex = 1;

% set up map between barcodes and landmark IDs
codeDict = containers.Map(Barcodes(:,2),Barcodes(:,1));

while (Robots{robot_num}.M(measurementIndex, 1) < t - .05)
        measurementIndex = measurementIndex + 1;
end

% loop through all odometry and measurement samples
% updating the robot's pose estimate with each step
% reference table 7.3 in Probabilistic Robotics
for i = start:size(Robots{robot_num}.G, 1)
    theta = poseMean(3, 1);
    % update time
    t = Robots{robot_num}.G(i, 1);
    % update movement vector
    u_t = [Robots{robot_num}.O(i, 2); Robots{robot_num}.O(i, 3)];

    rot = deltaT * u_t(2);
    halfRot = rot / 2;
    trans = u_t(1) * deltaT;
    
    % calculate the movement Jacobian
    G_t = [1 0 trans * -sin(theta + halfRot);
           0 1 trans * cos(theta + halfRot);
           0 0 1];
    % calculate motion covariance in control space
    M_t = [(alphas(1) * abs(u_t(1)) + alphas(2) * abs(u_t(2)))^2 0;
           0 (alphas(3) * abs(u_t(1)) + alphas(4) * abs(u_t(2)))^2];
    % calculate Jacobian to transform motion covariance to state space
    V_t = [cos(theta + halfRot) -0.5 * sin(theta + halfRot);
           sin(theta + halfRot) 0.5 * cos(theta + halfRot);
           0 1];
    
    % calculate pose update from odometry
    poseUpdate = [trans * cos(theta + halfRot);
                  trans * sin(theta + halfRot);
                  rot];
    
    poseMeanBar = poseMean + poseUpdate;
    poseCovBar = G_t * poseCov * G_t' + V_t * M_t * V_t';
    
    % get measurements
    [z, measurementIndex] = getObservations(Robots, robot_num, t, measurementIndex, codeDict);
    
    % remove landmark ID from measurement because
    % we're trying to live without that
    z = z(1:2,:);
    
    % if features are observed
    % loop over all features and compute Kalman gain
    if z(1,1) > 0.1
        for k = 1:size(z, 2) % loop over every observed landmark
            
            % loop over all landmarks and compute MLE correspondence
            predZ = zeros(n_landmarks, 1, 2); % predicted measurements
            predS = zeros(n_landmarks, 2, 2); % predicted measurement covariances
            predH = zeros(n_landmarks, 2, 3); % predicted measurement Jacobians
            maxJ = 0; 
            landmarkIndex = 0;
            for j = 1:n_landmarks
                xDist = Landmark_Groundtruth(j, 2) - poseMeanBar(1);
                yDist = Landmark_Groundtruth(j, 3) - poseMeanBar(2);
                q = xDist^2 + yDist^2;
                
                % calculate predicted measurement
                predZ(j,:,:) = [sqrt(q);
                            conBear(atan2(yDist, xDist) - poseMeanBar(3))];
                        
                % calculate predicted measurement Jacobian
                predH(j,:,:) = [-xDist/sqrt(q) -yDist/sqrt(q) 0;
                                yDist/q        -xDist/q      -1];
                            
                % calculate predicted measurement covariance
                predS(j,:,:) = squeeze(predH(j,:,:)) * poseCovBar ...
                               * squeeze(predH(j,:,:))' + Q_t;
                
                % calculate probability of measurement correspondence          
                thisJ = det(2 * pi * squeeze(predS(j,:,:)))^(-0.5) * ...
                        exp(-0.5 * (z(:,k) - squeeze(predZ(j,:,:)))' ...
                        * inv(squeeze(predS(j,:,:))) ...
                        * (z(:,k) - squeeze(predZ(j,:,:))));
                
                % update correspondence if the probability is
                % higher than the previous maximum
                if thisJ > maxJ
                    maxJ = thisJ;
                    landmarkIndex = j;
                end
            end

            % compute Kalman gain
            K = poseCovBar * squeeze(predH(landmarkIndex,:,:))' ...
                * inv(squeeze(predS(landmarkIndex,:,:)));

            % update pose mean and covariance estimates
            poseMeanBar = poseMeanBar + K * (z(:,k) - squeeze(predZ(landmarkIndex,:,:)));
            poseCovBar = (eye(3) - (K * squeeze(predH(landmarkIndex,:,:)))) * poseCovBar;
        end
    end
    
    % update pose mean and covariance
    poseMean = poseMeanBar;
    poseMean(3) = conBear(poseMean(3));
    poseCov = poseCovBar;
    
    % add pose mean to estimated position vector
    Robots{robot_num}.Est(i,:) = [t poseMean(1) poseMean(2) poseMean(3)];
    
    % calculate error between mean pose and groundtruth 
    % for testing only
    %{
    groundtruth = [Robots{robot_num}.G(i, 2); 
                   Robots{robot_num}.G(i, 3); 
                   Robots{robot_num}.G(i, 4)];
    error = groundtruth - poseMean;
    %}
   
end
animateMRCLAMdataSet(Robots, Landmark_Groundtruth, timesteps, deltaT);

Intro to the EKF Step 1: Localization With Known Correlation

This tutorial is part of a series on Simultaneous Localization and Mapping (SLAM) using the extended Kalman filter. See the following for other tutorials in the series:

In this tutorial I’ll focus on one of the easiest state estimation problems: localization with known measurement correspondence. The “localization” part means that the robot already knows the positions of all of the landmarks in the environment. The “known correspondence” part means that when the robot measures the range and bearing to a landmark it always knows which landmark it’s measuring.

In this tutorial I’ll explain the EKF algorithm and then demonstrate how it can be implemented using the UTIAS dataset. I’ll break it down into the following sections:


Intro To The Algorithm

First off, why do we need this complicated algorithm for localization? Can’t we just track our position with odometry? This isn’t possible because a robot’s pose over time is not deterministic; it can’t be calculated exactly due to errors (also referred to as noise) in the odometry which accumulate over time.

This is why the Kalman filter represents the robot’s pose at time t as a multivariate normal probability distribution with a mean at \mu_t and covariance of \Sigma_t. The robot’s expected (or most likely) pose is at \mu_t and the uncertainty of that pose is quantified by \Sigma_t. The Kalman filter updates the robot’s pose using the robot’s motion from odometry u_t. Then it corrects for noise in the motion update using the robot’s sensor readings z_t.

One important advantage of the Kalman filter (and other similar algorithms) is that it is recursive. It only requires information about the state of the robot at time t-1 in order to estimate the state at time t. In other words, it doesn’t require information about any of the prior states of the robot from time t_0 to time t-2. This means it requires less memory and processor time to compute!

Below I present a mathematical definition for the extended Kalman filter. Most of it is taken from table 7.2 of Probabilistic Robotics by Thrun, Burgard, and Fox. It will be helpful to follow along in Probabilistic Robotics in addition to this tutorial. I’d also recommend reading chapter 3 of that book to gain a better understanding of Gaussian filters and recursive estimation.



Inputs

  • the mean of the pose estimate from the previous timestep: \mu_{t-1} = \begin{bmatrix} x_{t-1} &y_{t-1} &\theta_{t-1} \end{bmatrix}^T
    • x_{t-1} is the robot’s previous x position in meters
    • y_{t-1} is the robot’s previous y position in meters
    • \theta_{t-1} is the robot’s previous heading in radians
  • the covariance matrix of the previous timestep, \Sigma_{t-1}
    • this represents the uncertainty of the pose estimate
  • the control inputs (or odometry) at the current timestep, u_t=\begin{bmatrix} v_t &\omega_t \end{bmatrix}^T
    • v_t is the robot’s linear velocity in meters/sec
    • \omega_t is the robot’s angular velocity in radians/sec
  • the measurements from the current timestep, z_t = \begin{bmatrix} z_t^1 &z_t^2 &\dots &z_t^k \end{bmatrix} where k is the number of measurements at time t
    • each z_t^i = \begin{bmatrix} r_t^i &\phi_t^i &s_t^i \end{bmatrix}^T where
      • r_t^i is the range to landmark i in meters
      • \phi_t^i is the bearing to landmark i in radians
      • s_t^i is the ID of landmark i
  • c_t is the correspondence between the measurement and landmarks
    • for our purposes this will be ignored for now
  • the map m = \begin{bmatrix}m_1 &m_2 &\dots &m_n \end{bmatrix} where n is the number of landmarks
    • each m_i = \begin{bmatrix} m_{i,x} &m_{i,y} &m_{i,s} \end{bmatrix}^T where
      • m_{i,x} is the x position of landmark i in meters
      • m_{i,y} is the y position of landmark i in meters
      • m_{i,s} is the ID of landmark i



Outputs

  • the updated pose estimate, \mu_t
  • the updated covariance matrix, \Sigma_t
  • the measurement probability, p_{z_t}, which is the likelihood of the measurements z_t given the robot’s estimated pose before measurements were incorporated: \bar{\mu}_t
    • We’ll be ignoring this for now as well



Algorithm

    \begin{flalign*} \text{Loc} &\text{alization with known correspondence}(\mu_{t-1}, \Sigma_{t-1},u_t,z_t,c_t,m): \\ &\theta = \mu_{t-1,\theta} \\ &G_t = \begin{bmatrix} 1 &0 &-v_t \Delta t \sin(\theta + \frac{\omega_t \Delta t}{2})\\ 0 &1 &v_t \Delta t \cos(\theta + \frac{\omega_t \Delta t}{2})\\ 0 &0 &1 \end{bmatrix} \\ &V_t = \begin{bmatrix} \cos(\theta + \frac{\omega_t \Delta t}{2}) &-\frac{1}{2} \sin(\theta + \frac{\omega_t \Delta t}{2}) \\ \sin(\theta + \frac{\omega_t \Delta t}{2}) &\frac{1}{2} \cos(\theta + \frac{\omega_t \Delta t}{2}) \\ 0 &1 \end{bmatrix} \\ &M_t = \begin{bmatrix} \alpha_1 v_{t}^{2} + \alpha_2 \omega_{t}^{2} &0 \\ 0 &\alpha_{3} v_{t}^{2} + \alpha_4 \omega_{t}^{2} \end{bmatrix} \\ &\bar{\mu_t} = \mu_{t-1} + \begin{bmatrix} v_t \Delta t \cos(\theta + \frac{\omega_t \Delta t}{2}) \\ v_t \Delta t \sin(\theta + \frac{\omega_t \Delta t}{2}) \\ \omega_t \Delta t \end{bmatrix} \\ &\bar{\Sigma}_t = G_t \Sigma_{t-1} G_{t}^{T} + V_t M_t V_{t}^{T} \\ &Q_t = \begin{bmatrix} \sigma_{r}^{2} &0 &0 \\ 0 &\sigma_{\phi}^{2} &0 \\ 0 &0 &\sigma_{s}^{2} \end{bmatrix}\\ &\text{for all observed features $z_{t}^{i}=\begin{bmatrix} r_{t}^{i} &\phi_{t}^{i} &s_{t}^{i}\end{bmatrix}^T$ do} \\ \end{flalign*}

    \begin{flalign*} &j=c_{t}^{i}\\ &q = (m_{j,x} - \bar{\mu}_{t,x})^2 + (m_{j,y} - \bar{\mu}_{t,y})^2 \\ &\hat{z}_{t}^{i} = \begin{bmatrix} \sqrt{q} \\ \text{atan2}(m_{j,y} - \bar{\mu}_{t,y} \text{,  }m_{j,x} - \bar{\mu}_{t,x}) - \bar{\mu}_{t, \theta} \\ m_{j,s} \end{bmatrix} \\ &H_{t}^{i} = \begin{bmatrix} -\frac{m_{j,x} - \bar{\mu}_{t,x}}{\sqrt{q}} & -\frac{m_{j,y} - \bar{\mu}_{t,y}}{\sqrt{q}} &0 \\ \frac{m_{j,y} - \bar{\mu}_{t,y}}{q} &-\frac{m_{j,x} - \bar{\mu}_{t,x}}{q} &-1 \\ 0 &0 &0 \end{bmatrix} \\ &S_{t}^{i} = H_{t}^{i} \bar{\Sigma}_t \lbrack H_{t}^{i} \rbrack ^T + Q_t \\ &K_{t}^{i} = \bar{\Sigma}_t \lbrack H_{t}^{i} \rbrack ^T \lbrack S_{t}^{i} \rbrack ^{-1} \\ &\bar{\mu}_t = \bar{\mu}_t + K_{t}^{i}(z_{t}^{i} - \hat{z}_{t}^{i}) \\ &\bar{\Sigma}_t = (I - K_{t}^{i} H_t^i) \bar{\Sigma}_t \\ \end{flalign*}

    \begin{flalign*} &\text{end for} \\ &\mu_t = \bar{\mu}_t \\ &\Sigma_t = \bar{\Sigma}_t \\ &p_{z_t} = \prod_i \det(2 \pi S_t^i)^{-\frac{1}{2}} \exp \{ -\frac{1}{2}(z_t^i - \hat{z}_t^i)^T \lbrack S_t^i \rbrack ^{-1} (z_t^i - \hat{z}_t^i) \} \\ &\text{return } \mu_t, \Sigma_t, p_{z_t} \\ \end{flalign*}



Motion Update

The algorithm is divided into two parts: the motion update and the sensor update. First, in the motion update the odometry information is incorporated into the state estimate. The motion update is based around a motion model g(u_t, x_{t-1}), which calculates a new pose based on the previous pose x_{t-1} and the odometry u_t.

(1)   \begin{equation*} \bar{\mu_t} = g(u_t, x_{t-1}) = \mu_{t-1} + \begin{bmatrix} v_t \Delta t \cos(\theta + \frac{\omega_t \Delta t}{2}) \\ v_t \Delta t \sin(\theta + \frac{\omega_t \Delta t}{2}) \\ \omega_t \Delta t \end{bmatrix} \end{equation*}

In this equation \bar{\mu}_t denotes the estimated updated mean of the pose distribution. The motion model I’m using assumes that the robot rotates in place, then drives in a straight line, then rotates in place again. This is a common approximation and it’s accurate enough as long as \Delta t is small.

In addition to calculating a new mean for our robot’s pose distribution \bar{\mu}_t we need to calculate an estimated covariance for the new pose, \bar{\Sigma}_t. In order to do this we need to multiply the covariance from the last timestep, \Sigma_{t-1}, by the derivative (also called the Jacobian) of the motion model with respect to the previous state x_{t-1}:

(2)   \begin{equation*} G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} = \begin{bmatrix} 1 &0 &-v_t \Delta t \sin(\theta + \frac{\omega_t \Delta t}{2})\\ 0 &1 &v_t \Delta t \cos(\theta + \frac{\omega_t \Delta t}{2})\\ 0 &0 &1 \end{bmatrix} \end{equation*}

The multiplication looks like the following: G_t \Sigma_{t-1} G_t^T. We also need to calculate the additional noise (or uncertainty) associated with the motion that occurred between t-1 and t.

(3)   \begin{equation*} M_t = \begin{bmatrix} \alpha_1 v_{t}^{2} + \alpha_2 \omega_{t}^{2} &0 \\ 0 &\alpha_{3} v_{t}^{2} + \alpha_4 \omega_{t}^{2} \end{bmatrix} \end{equation*}

The alphas \alpha_1 through \alpha_4 are noise parameters that, unfortunately, need to be hand-tuned for the particular robot being used. Also, M_t is in control space and we need to map it into state space. To do this we need the Jacobian of the motion model with respect to the motion parameters u_t:

(4)   \begin{equation*} V_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial u_t} = \begin{bmatrix} \cos(\theta + \frac{\omega_t \Delta t}{2}) &-\frac{1}{2} \sin(\theta + \frac{\omega_t \Delta t}{2}) \\ \sin(\theta + \frac{\omega_t \Delta t}{2}) &\frac{1}{2} \cos(\theta + \frac{\omega_t \Delta t}{2}) \\ 0 &1 \end{bmatrix} \end{equation*}

So we can map the noise M_t to state space with the following multiplication: V_t M_t V_t^T. To get the complete updated covariance \bar{\Sigma}_t we add the transformed covariance from the previous timestep to the additional noise:

(5)   \begin{equation*} \bar{\Sigma}_t = G_t \Sigma_{t-1} G_{t}^{T} + V_t M_t V_{t}^{T} \end{equation*}

So the motion update step updates the robot’s pose mean and covariance based on its odometry. You may have noticed, however, that the covariance only gets larger in this step. This means our uncertainty about the robot’s pose always grows when we incorporate odometry. We can decrease the uncertainty by incorporating sensor information, which is what we do in the next step.
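
Putting equations (1) through (5) together, the whole motion update fits in a small MATLAB function. This is a sketch of the math above using my own variable names, not code from the tutorial program (it would live in its own motionUpdate.m file). Note the program below uses a slightly different form for M_t, with (\alpha_1|v_t| + \alpha_2|\omega_t|)^2 style terms on the diagonal, but its role is the same.

function [muBar, sigmaBar] = motionUpdate(mu, sigma, u, alphas, deltaT)
% mu, sigma: previous pose mean and covariance; u = [v; omega]
theta   = mu(3);
rot     = u(2) * deltaT;
halfRot = rot / 2;
trans   = u(1) * deltaT;

% equation (1): motion model
muBar = mu + [trans * cos(theta + halfRot);
              trans * sin(theta + halfRot);
              rot];

% equation (2): Jacobian of the motion model w.r.t. the state
G = [1 0 -trans * sin(theta + halfRot);
     0 1  trans * cos(theta + halfRot);
     0 0  1];

% equation (3): motion noise in control space
M = [alphas(1)*u(1)^2 + alphas(2)*u(2)^2, 0;
     0, alphas(3)*u(1)^2 + alphas(4)*u(2)^2];

% equation (4): Jacobian mapping control noise into state space
V = [cos(theta + halfRot), -0.5 * sin(theta + halfRot);
     sin(theta + halfRot),  0.5 * cos(theta + halfRot);
     0, 1];

% equation (5): propagate the covariance
sigmaBar = G * sigma * G' + V * M * V';
end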

Note: The motion model and associated Jacobians presented in the preceding section are different from those presented in Probabilistic Robotics. The model presented in Probabilistic Robotics assumes that the robot is always moving in an arc and its angular velocity, \omega_t, is never zero. The Jacobians for this model involve dividing by \omega_t. This means that the model breaks if \omega_t is ever equal to zero. This is a problem for us because \omega_t often is equal to zero in the UTIAS dataset. That is why I subbed in my own motion model.



Sensor Update

The sensor update refines the robot’s pose estimate and decreases its uncertainty. This happens in the for loop of the algorithm. It is run for every measurement that occurs at timestep t.

In the first step of the sensor update we calculate what the range and bearing to the landmark would have been if the estimated pose mean \bar{\mu}_t were correct. This is called the expected measurement \hat{z}_t and it is calculated using the measurement model h_t(x_t,m).

(6)   \begin{equation*} q = (m_{j,x} - \bar{\mu}_{t,x})^2 + (m_{j,y} - \bar{\mu}_{t,y})^2 \end{equation*}

(7)   \begin{equation*} \hat{z}_{t}^{i} = h(x_t, m) = \begin{bmatrix} \sqrt{q} \\ \text{atan2}(m_{j,y} - \bar{\mu}_{t,y} \text{,  }m_{j,x} - \bar{\mu}_{t,x}) - \bar{\mu}_{t, \theta} \\ m_{j,s} \end{bmatrix} \\ \end{equation*}

Remember that m_{j,x}, m_{j,y}, and m_{j,s} are the x position, y position, and ID of landmark j, respectively.

We also need to calculate the additional noise associated with the measurement. To do this we need the Jacobian of the measurement model with respect to the robot’s pose:

(8)   \begin{equation*} H_{t}^{i} = \frac{\partial h(\bar{\mu}_t, m)}{\partial x_t} = \begin{bmatrix} -\frac{m_{j,x} - \bar{\mu}_{t,x}}{\sqrt{q}} & -\frac{m_{j,y} - \bar{\mu}_{t,y}}{\sqrt{q}} &0 \\ \frac{m_{j,y} - \bar{\mu}_{t,y}}{q} &-\frac{m_{j,x} - \bar{\mu}_{t,x}}{q} &-1 \\ 0 &0 &0 \end{bmatrix} \end{equation*}

The measurement covariance is calculated by mapping the predicted state covariance through the measurement model Jacobian and adding the measurement noise Q_t. Q_t is constant for all measurements: a 3 \times 3 diagonal matrix with noise parameters \sigma_r, \sigma_\phi, and \sigma_s, which represent the noise in the landmark range, bearing, and ID, respectively. Like the alphas in equation (3), the sigmas must be hand-tuned in order to obtain good performance.

(9)   \begin{equation*} S_t^i = H_t^i \bar{\Sigma}_t \lbrack H_t^i \rbrack ^T + Q_t \end{equation*}

S_t^i is then used to calculate the Kalman gain, K_t^i. The Kalman gain determines how heavily the sensor information should be weighted when correcting the estimated new pose \bar{\mu}_t and \bar{\Sigma}_t.

(10)   \begin{equation*} K_{t}^{i} = \bar{\Sigma}_t \lbrack H_{t}^{i} \rbrack ^T \lbrack S_{t}^{i} \rbrack ^{-1} \end{equation*}

The Kalman gain is used to update the estimated pose mean and covariance \bar{\mu}_t and \bar{\Sigma}_t as follows:

(11)   \begin{equation*} \bar{\mu}_t = \bar{\mu}_t + K_{t}^{i}(z_{t}^{i} - \hat{z}_{t}^{i}) \end{equation*}

(12)   \begin{equation*} \bar{\Sigma}_t = (I - K_{t}^{i} H_t^i) \bar{\Sigma}_t \end{equation*}

The amount of correction to the estimated pose mean depends on K_t^i and the difference between the expected measurement \hat{z}_t^i (calculated in equation (7)) and the actual measurement z_t^i. A larger discrepancy between the expected and actual measurement will result in a larger correction.

After the sensor update is complete the pose mean and covariance are updated with the estimated values: \mu_t = \bar{\mu}_t and \Sigma_t = \bar{\Sigma}_t. The measurement probability is also calculated here but we are not making use of this value.
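
The sensor update for a single measurement, following equations (6) through (12), can likewise be written as a small function. Again this is a sketch with my own names (it would live in its own sensorUpdate.m file), assuming the landmark’s map entry m = [m_x; m_y; m_s] and the measurement z = [r; \phi; s] are given as column vectors; the program below additionally wraps the bearing with its conBear helper.

function [muBar, sigmaBar] = sensorUpdate(muBar, sigmaBar, z, m, Q)
% muBar, sigmaBar: predicted pose mean and covariance from the motion update
xDist = m(1) - muBar(1);
yDist = m(2) - muBar(2);
q     = xDist^2 + yDist^2;                 % equation (6)

zHat = [sqrt(q);                           % equation (7): expected measurement
        atan2(yDist, xDist) - muBar(3);
        m(3)];                             % landmark ID from the map

H = [-xDist/sqrt(q), -yDist/sqrt(q),  0;   % equation (8): measurement Jacobian
      yDist/q,       -xDist/q,       -1;
      0,              0,              0];

S = H * sigmaBar * H' + Q;                 % equation (9): innovation covariance
K = sigmaBar * H' / S;                     % equation (10): Kalman gain

muBar    = muBar + K * (z - zHat);         % equation (11): correct the mean
sigmaBar = (eye(3) - K * H) * sigmaBar;    % equation (12): correct the covariance
end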



Code

When writing robotics software I’m often frustrated by the separation between theory and software engineering. There are many good theoretical explanations of the extended Kalman filter online. There are also several good examples of code that uses an EKF. But I was unable to find any textbook, academic paper, blog post, or stackexchange thread that did a good job of demonstrating how the math translates into code. I’m hoping my code below will change that. I’ve been careful to explain all major steps and refer to the equations above in the comments. I hope you find it helpful!

Lastly, note that comments in MATLAB begin with a % symbol.

addpath ../common; % adds support functions used in all my EKF programs
                   % check my github for details

deltaT = .02; 
alphas = [.2 .03 .09 .08 0 0]; % robot-dependent motion noise parameters
                               % see equation 3

% robot-dependent sensor noise parameters
% see equation 9
sigma_range = 2;
sigma_bearing = 3;
sigma_id = 1;

Q_t = [sigma_range^2 0 0;
       0 sigma_bearing^2 0;
       0 0 sigma_id^2];

measurement_prob = 0;
n_robots = 1;
robot_num = 1;

% load and resample raw data from UTIAS data set
% The robot's position groundtruth, odometry, and measurements 
% are stored in Robots
[Barcodes, Landmark_Groundtruth, Robots] = loadMRCLAMdataSet(n_robots);
[Robots, timesteps] = sampleMRCLAMdataSet(Robots, deltaT);

% add pose estimate matrix to Robots
% data will be added to this as the program runs
Robots{robot_num}.Est = zeros(size(Robots{robot_num}.G,1), 4);

% initialize time, and pose estimate
start = 600; % start index is set to 600 because earlier data was 
             % found to cause problems
t = Robots{robot_num}.G(start, 1); % set start time

% set starting pose mean to pose groundtruth at start time
poseMean = [Robots{robot_num}.G(start,2);
            Robots{robot_num}.G(start,3);
            Robots{robot_num}.G(start,4)];
poseCov = [0.01 0.01 0.01;
           0.01 0.01 0.01;
           0.01 0.01 0.01];
       
% tracks which measurement is next received
measurementIndex = 1;

% set up map between barcodes and landmark IDs
codeDict = containers.Map(Barcodes(:,2),Barcodes(:,1));

% advance measurement index until the next measurement time 
% is greater than the starting time
while (Robots{robot_num}.M(measurementIndex, 1) < t - .05)
        measurementIndex = measurementIndex + 1;
end

% loop through all odometry and measurement samples
% updating the robot's pose estimate with each step
% reference table 7.2 in Probabilistic Robotics
for i = start:size(Robots{robot_num}.G, 1)
    theta = poseMean(3, 1);
    % update time
    t = Robots{robot_num}.G(i, 1);
    % update movement vector per equation 1
    u_t = [Robots{robot_num}.O(i, 2); Robots{robot_num}.O(i, 3)];

    rot = deltaT * u_t(2);
    halfRot = rot / 2;
    trans = u_t(1) * deltaT;
    
    % calculate the movement Jacobian per equation 2
    G_t = [1 0 trans * -sin(theta + halfRot);
           0 1 trans * cos(theta + halfRot);
           0 0 1];
    % calculate motion covariance in control space per equation 3
    M_t = [(alphas(1) * abs(u_t(1)) + alphas(2) * abs(u_t(2)))^2 0;
           0 (alphas(3) * abs(u_t(1)) + alphas(4) * abs(u_t(2)))^2];
       
    % calculate Jacobian to transform motion covariance to state space
    % per equation 4
    V_t = [cos(theta + halfRot) -0.5 * sin(theta + halfRot);
           sin(theta + halfRot) 0.5 * cos(theta + halfRot);
           0 1];
    
    % calculate pose update
    poseUpdate = [trans * cos(theta + halfRot);
                  trans * sin(theta + halfRot);
                  rot];
              
    % calculate estimated pose mean per equation 1
    poseMeanBar = poseMean + poseUpdate;
    % calculate estimated pose covariance per equation 5
    poseCovBar = G_t * poseCov * G_t' + V_t * M_t * V_t';
    
    
    % get measurements for the current timestep, if any exist
    [z, measurementIndex] = getObservations(Robots, robot_num, t, measurementIndex, codeDict);
    
    % create two matrices for expected measurement and measurement
    % covariance
    S = zeros(size(z,2),3,3);
    zHat = zeros(3, size(z,2));
    
    % if any measurements are available
    if z(3,1) > 1
        for k = 1:size(z, 2) % loop over every measurement
            j = z(3,k);

            % get coordinates of the measured landmark
            m = Landmark_Groundtruth(j, 2:3);

            % compute the expected measurement per equations 6 and 7
            xDist = m(1) - poseMeanBar(1);
            yDist = m(2) - poseMeanBar(2);
            q = xDist^2 + yDist^2;
            
            % constrains expected bearing to between 0 and 2*pi
            pred_bear = conBear(atan2(yDist, xDist) - poseMeanBar(3));
            
            zHat(:,k) = [sqrt(q);
                         pred_bear;
                         j];

            % calculate Jacobian of the measurement model per equation 8
            H = [(-1 * (xDist / sqrt(q))) (-1 * (yDist / sqrt(q))) 0;
                 (yDist / q) (-1 * (xDist / q)) -1;
                 0 0 0];
             
            % compute S per equation 9
            S(k,:,:) = H * poseCovBar * H' + Q_t;

            % compute Kalman gain per equation 10
            K = poseCovBar * H' * inv(squeeze(S(k,:,:)));
                
            % update pose mean and covariance estimates
            % per equations 11 and 12
            poseMeanBar = poseMeanBar + K * (z(:,k) - zHat(:,k));
            poseCovBar = (eye(3) - (K * H)) * poseCovBar;
        end
    end
    
    % update pose mean and covariance
    % constrains heading to between 0 and 2*pi
    poseMean = poseMeanBar;
    poseMean(3) = conBear(poseMean(3));
    poseCov = poseCovBar;
    
    % add pose mean to estimated position vector
    Robots{robot_num}.Est(i,:) = [t poseMean(1) poseMean(2) poseMean(3)];
    
    % calculate error between mean pose and groundtruth
    % for testing only
    %{
    groundtruth = [Robots{robot_num}.G(i, 2); 
                   Robots{robot_num}.G(i, 3); 
                   Robots{robot_num}.G(i, 4)];
    error = groundtruth - poseMean;
    %}
   
end

% display results!
animateMRCLAMdataSet(Robots, Landmark_Groundtruth, timesteps, deltaT);

Note: My tuning of the noise parameters in the code above is workable. But I didn’t spend as much time on them as I did for subsequent iterations of this tutorial. If you’d like to play with them I’m sure you can improve on my tunings!


Intro to the EKF Step 0: The Data

This post is part of a series on SLAM with the extended Kalman filter. For other posts in the series see below.

Intro To The UTIAS Dataset

Writing programs for actual robots is difficult. Robots are expensive, testing code takes a lot of time, and the results are often difficult to interpret. This is why we don’t do most of our software development on actual robots. Instead it is much easier to run the actual robot once and log all of the robot’s sensor data, command inputs, and other relevant data. This way we can test our software on the logged data in a few seconds rather than doing a new test run every time we make a change to our code.

Setup used to make the UTIAS dataset

However, it’s difficult to make our own datasets for localization and mapping. In order to evaluate the accuracy of our SLAM software we need to know the actual (or groundtruth) pose of the robot with high accuracy at all times. This requires the use of motion capture setups which are well beyond the reach of most hobbyists. The good news is that university research groups routinely make their data sets available to the public. One such data set that is useful for our purposes is the UTIAS Multi-Robot Localization and Mapping (MRCLAM) dataset from the Autonomous Space Robotics Lab at the University of Toronto. I’ll refer to it as the UTIAS dataset for short.

The UTIAS data includes everything we’ll need to test our localization and mapping software: logs of the robot’s sensor data, accurate groundtruth poses for the robots throughout the run, and groundtruth positions for each landmark. I’ll go through each of those in detail below. You should take this opportunity to download and extract “Dataset 1” from the download section of the MRCLAM webpage. You should also read the “Data Collection Details” section of that page.


The Robots

The UTIAS dataset actually includes data from five different robots, all of which were run at the same time. This is because the dataset was originally intended for use in cooperative localization and mapping. However, this adds an unnecessary complication for our purposes. We will only be using the data from robot 1 in this tutorial.

The Measurements

The robot’s measurements are split up into two files: one for landmark measurements (Robot1_Measurements.dat) and one for odometry data (Robot1_Odometry.dat).

Illustration of a landmark measurement.

Each line of the Measurements file represents a measurement of a single landmark. We define a “measurement” with three quantities: range, the distance between the robot and the landmark in meters; bearing, the angle in radians between the landmark and the robot’s heading; and landmark ID, the unique identifier of the landmark being measured. Each line of the measurements file records the time the measurement occurred, the barcode for the landmark that was measured, the range, and the bearing. Note that the landmark barcode in the measurements file is not equivalent to the landmark ID in the landmark groundtruth file. The barcodes.dat file defines the mapping between the landmark barcodes and ID numbers.

The odometry file is pretty simple by comparison. Each line specifies the time, the robot’s linear velocity (in m/s), and its angular velocity (in rad/s).

The Groundtruth

Robot1_Groundtruth.dat specifies the robot’s actual pose in the global reference frame at any given time throughout the run. This is the part of the dataset that required the use of a motion capture system. Most of us could not recreate this on our own, so it’s nice that the University of Toronto made their data publicly available. Each line of the groundtruth file specifies the time, the robot’s x position in meters, the robot’s y position in meters, and the robot’s heading in radians.


The Landmarks

Landmark_Groundtruth.dat specifies the actual position of all of the landmarks in the environment. Each line specifies the landmark ID number, the x position, the y position, the standard deviation of the x position, and the standard deviation of the y position.


The Utilities

There are also a few useful MATLAB scripts available on the MRCLAM webpage.

loadMRCLAMdataSet.m will load the information in the .dat files into MATLAB variables. sampleMRCLAMdataSet.m will resample the data at 50Hz. This means that all of the measurements, odometry, and groundtruth information will come in at a predictable time interval. That will make things much easier. Lastly, animateMRCLAMdataSet.m will animate the motion of the robots throughout the run.

The problem with these scripts is they assume we want to work with all 5 robots. There are adapted versions of the scripts in my GitHub repo that only work with robot 1. Additionally, I adapted animateMRCLAMdataSet.m to animate both the groundtruth pose of robot 1 and the pose estimated by the EKF localization program. This will allow us to see just how well our program’s pose estimate tracks the robot’s true pose.

Intro to the Extended Kalman Filter

I started a PhD in computer science at CU Boulder four months ago. This is mainly because three years ago I made a wager. Since then I’ve become completely fascinated with robotics and have decided to pursue robotics research as a career.

Colin, my robot

I’m currently working at the ARPG at CU Boulder. As part of that work I wrote an implementation of SLAM using the Extended Kalman Filter (EKF). The EKF is the oldest probabilistic SLAM method. It’s a great introduction to SLAM techniques. However, it’s not very good by modern standards; it’s computationally expensive and it requires hand tuning several parameters to achieve passably accurate operation. But it is a good step in learning SLAM techniques, and I will likely implement an EKF on Colin at some point.

Tutorial Structure

Over the next several posts I’ll take you through my implementation of EKF SLAM in several steps. At the first step the robot will have a lot of prior knowledge about its environment and sensor readings. We’ll progressively take away pieces of the robot’s prior knowledge, making the task harder and harder as we go. In our last step the robot will have no prior knowledge. Our solution to this last step will be a fully-fledged SLAM program.

See the following for links to other tutorials in this series:

Dataset

I will use the UTIAS robot localization dataset for my tutorials. I’ll go into more detail on how to use and interpret the UTIAS data in my next post. The important thing to understand is that the UTIAS data abstracts away large parts of the SLAM problem such as controls and computer vision. This means we can just focus on programming and tuning our Extended Kalman Filters when using the UTIAS data. However, it also means we can’t simply upload our EKF programs to Colin. We would also need to write supporting programs that could supply data to the EKF in a form it can use.

Reference Material

Most of the math I’ll present in the next several posts came from Probabilistic Robotics, by Thrun, Fox, and Burgard. It’s a great book for anyone who wants to learn the theory behind fundamental robotics problems. It’s also not difficult to find a free pdf copy of an earlier edition if you don’t want to pay $85 for the current edition. But be warned that there are some math errors in the free pdf edition, so you may need to re-derive some things if you use it.

Probabilistic Robotics has a great section on Extended Kalman Filters and I’d recommend reading it thoroughly. However, the motion model it presents can’t be used with the UTIAS dataset so I’ll be using a different one. I’ll highlight where I deviate from the method presented in Probabilistic Robotics so as not to confuse those of you who are using it as a reference.

Programming Note

Lastly, the code I’m presenting is written in MATLAB. For those who don’t have access to MATLAB I would suggest using Python and NumPy. NumPy has all of the required functions to complete these tutorials. The only thing to watch out for is that Python uses zero-based indexing while MATLAB uses one-based indexing. If you don’t keep this in mind it could cause you some serious headaches.

ATtiny Sonar Controller

HC-SR04 sensor

Remember these guys? HC-SR04 ultrasonic rangefinders. Colin used to have three of them, but I’ve recently been reworking his sensor layout. I’m starting by adding five more sensors for a total of eight. Colin’s Arduino has just barely enough free pins to accommodate eight sensors, but I don’t want to spend all of my available pins on sensors. That would leave me no room to expand. Further, I’m not totally sure eight sonar sensors will be enough. My HC-SR04 sensors have a beam width of about 15 degrees, so it would take 24 sonar sensors to cover 360 degrees. I don’t know if that will be necessary, but I’d like to have the option.

There are a couple of ways to do this. The simplest would probably be to use a shift register. Shift registers allow you to use three I/O pins to control more than a thousand additional pins, but operating them involves some computing overhead and the Arduino still has to be involved in every sensor reading operation.

The chip for my sonar controller, an Atmel ATtiny84

The method I’ve chosen is to use a separate microcontroller, an ATtiny84, to handle the sonar sensors. The Arduino tells the ATtiny when it needs new sensor readings, then goes off and performs other computations while the ATtiny pings its sensors. When the new sensor data is ready, the ATtiny sends all of the readings back at once. Using this scheme the Arduino doesn’t have to spend processor time reading sensors; it can focus on other problems. Also, since the ATtiny communicates with the Arduino via I2C, the whole sensor array only costs the Arduino 2 I/O pins! Here’s what Colin looks like with his new sonar layout.

Colin with his new sonar ring

As with my earlier post on Raspberry Pi to Arduino communication, there’s several steps to this process. Luckily it’s a lot simpler this time.


The Communication Protocol

The communication protocol is going to be very similar to the one that I outlined for communicating between a Raspberry Pi and an Arduino, only simplified. Basically, the Arduino tells the ATtiny when it needs updated sonar readings. The ATtiny then pings all of its sensors and sends back the readings when it’s done. This is how it works:

  • The Arduino sends a byte to the ATtiny
    • The byte has no meaning, it’s just a flag to signal the ATtiny to ping its sensors
  • The ATtiny pings its sensors
  • The ATtiny sends its readings back to the Arduino as 16 bytes
  • The Arduino assembles the bytes into 8 16-bit ints

One other difference with the Raspberry Pi to Arduino protocol is that we’ll be using I2C instead of serial. If you’re not familiar with I2C I’d suggest reading up on it. Sparkfun has a great tutorial.

That’s really all there is to it! The protocol is simple, but there’s some fiddly details to work out in the code.



The Sonar Controller

My ATtiny84-based sonar controller

My sonar controller basically consists of an ATtiny84, with eight inputs for HC-SR04 ultrasonic rangefinders. If you’re interested in making your own, you can download my eagle files for the schematic, board layout, and the gerber files for fabbing the PCB. The gerbers are formatted for fabrication by OSH Park, which I highly recommend for low-volume jobs.

The controllers each have two sets of inputs for power and I2C communication so multiple controllers can be chained together on the same bus. Note that there are two spots for 4.7 kOhm resistors on each controller. These are pull-ups for the I2C buses. If you’re chaining multiple controllers together, only one of them needs pull-up resistors. The resistor spots on the other controllers can be left empty.

I also designed a laser-cut frame to hold the sonar sensors and controller, as well as some fittings to attach the whole business to Colin. The design is pictured below, and you can download the SVG files to make your own parts here.

Colin with his new sensor arrangement and fittings

If you’d rather not fab a custom circuit board, you can set it up on a breadboard as shown below. It’s not particularly useful on a breadboard, but it will allow you to experiment with and test it.

Breadboard wiring for ATtiny sonar controller



The Code!

ATtiny Code

Let’s start out with a couple of preliminaries. First, for I2C communication I’m using the TinyWireS and TinyWireM libraries. We need our ATtiny to perform as both master and slave, but there is no existing library that allows this. Fortunately there’s a way to hack around this problem.

I’m also using the NewPing library. The NewPing library doesn’t work with ATtinies, but there’s a hack for this too. You just need to comment out the functions for Timer2 and Timer4 because the ATtiny does not have these timers.

You can find the complete program for the ATtiny here. If you’re not familiar with how to program ATtinies, I’d suggest going through this very thorough tutorial.

Initiating A Sensor Update

The function below pings the eight sonar sensors and records the ping times in microseconds.

void updatePingTimes(uint8_t bytes)
{
  TinyWireS.receive(); // read and discard the flag byte from the Arduino
  for (int i = 0; i < NUM_SONAR; i++)
  {
    pingTimes[i] = sonar[i].ping();
  }
  sendTimes();
}

The function above is invoked using an interrupt when the ATtiny receives a byte from the Arduino. The line below is called in the setup() function to tell the ATtiny to generate an interrupt and call the updatePingTimes() function whenever data is received from the Arduino.

TinyWireS.onReceive(updatePingTimes);

Sending Data Back To The Arduino

When the sonar sensors have been updated, the sendTimes() function is called to send the updated ping times back to the Arduino. It is important to note that the ATtiny must be functioning as a slave in order to use the onReceive() function. To send data back to the Arduino, the ATtiny must function as a master. This makes the sendTimes() function a bit more complicated.

void sendTimes()
{
  // clear the I2C registers
  USICR = 0;
  USISR = 0;
  USIDR = 0;

  // start I2C with ATtiny as master
  TinyWireM.begin();

  // transmit ping times to Arduino
  TinyWireM.beginTransmission(MASTER_ADDRESS);
  for (int i = 0; i < NUM_SONAR; i++)
  {
    int thisTime = pingTimes[i];
    if (thisTime == 0) thisTime = (double)MAX_DISTANCE / speedOfSound; // no echo: substitute the time for MAX_DISTANCE
    uint8_t firstByte = thisTime & 0xFF;
    uint8_t secondByte = (thisTime >> 8) & 0xFF;
    TinyWireM.write(firstByte);
    TinyWireM.write(secondByte);
  }
  TinyWireM.endTransmission();

  // clear I2C registers again
  USICR = 0;
  USISR = 0;
  USIDR = 0;

  // put ATtiny back in slave mode
  TinyWireS.begin(OWN_ADDRESS);
}

As I said before, I don’t know of any library that allows an ATtiny to function as both master and slave. Fortunately we can hack our way around this problem by clearing the ATtiny’s I2C registers and restarting communication in master mode. After we’ve transmitted the ping times back to the Arduino we need to clear the I2C registers again and put the ATtiny back in slave mode so it can be ready for the next request from the Arduino. I don’t mind telling you it took me a lot of time with the ATtiny’s datasheet to figure out how to make that work.


Arduino Code

For reference you can find the complete program for the Arduino here. To get things going we need to include the Wire library and put the following two lines in the setup() function:

// begin communication with sonar controller
Wire.begin(OWN_ADDRESS);
Wire.onReceive(updateDistances);

Adding the onReceive() function is particularly important because it allows the Arduino to do other processing tasks while the ATtiny pings its sensors. When the ATtiny sends data it generates an interrupt; the Arduino stops what it’s currently doing, receives the new data, then goes back to whatever it was doing before.

Receiving Data

The updateDistances() function works as follows:

// updates sonar distance measurements when a new reading is available
void updateDistances(int bytes)
{
  for (int i = 0; i < NUM_SONAR; i++)
    readSonar(i);
  distancesRead = true;
}

// reads the distance for a single sonar sensor
// expects to receive values as ints broken into 2 byte pairs,
// least significant byte first
void readSonar(int index)
{
  int firstByte = Wire.read();
  int secondByte = Wire.read();
  sonarDistances[index] = ((double)((secondByte << 8) | firstByte)) * speedOfSound * 0.5;
}

Notice that the sonar distance is multiplied by 0.5 when it’s added to the sonarDistances array. This is because the ATtiny returns the total time of flight for the ultrasonic ping. The ping needs to go from the sensor, to the obstacle, and back. This means multiplying the ping time by the speed of sound would result in twice the distance between the obstacle and the sensor.
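
For example, assuming speedOfSound is stored in centimeters per microsecond (roughly 0.0343 cm/µs at room temperature), a round-trip ping time of 1000 µs works out to 1000 × 0.0343 × 0.5 ≈ 17 cm between the sensor and the obstacle.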

Requesting An Update

Fortunately, requesting an update is pretty simple. The Arduino just needs to send a byte, any byte, to the ATtiny to let it know it needs new sensor readings.

// requests a sonar update from the sonar controller at the given address
// by sending a meaningless value via I2C to trigger an update
void requestSonarUpdate(int address)
{
  Wire.beginTransmission(address);
  Wire.write(trig);
  Wire.endTransmission();
}

After this function executes, the Arduino can perform other processing tasks until the ATtiny sends back updated sonar readings.



Going Further

There are a number of ways to solve the problem of coordinating large numbers of sonar sensors, and the above method is only one possibility. It takes a long time to update sonar sensors, up to 15 milliseconds for 8 sensors. For an Arduino running at 16 MHz, that’s 240,000 processor cycles. So it’s advantageous to use a method that allows the Arduino to do something else while the sensors are being updated. My method does this, but one could also take the Arduino out of the loop entirely and have the Raspberry Pi talk to the sonar controller directly. It would be interesting to implement this method and compare it to the one presented above.


Raspberry Pi to Arduino Communication For Robot Control


Good news, everyone! I’ve come up with a good way for Colin’s Raspberry Pi to talk to his Arduino. To review, the idea was that the Arduino could handle low-level control functions like speed control, odometry, and sensor reading. This leaves the Raspberry Pi free to handle high-level control like obstacle avoidance, motion planning, and state estimation.

There are a few steps in this process, which I’ll go through below.


Freeing Up the Raspberry Pi’s Serial Port

The first problem we have to deal with is that Raspbian reserves its serial port for use by a serial console, so the Raspberry Pi’s GPIO serial port is totally useless until you free it up. I’m not going into all the details here, but I found this guide really helpful. The important things to remember are that you need to enable the UART and disable the serial console; on recent Raspbian images that typically means adding enable_uart=1 to /boot/config.txt and removing the console= entry from /boot/cmdline.txt. This will allow our program to use the serial port.



The Communication Protocol

The communication protocol works like this:

  • The Raspberry Pi sends 2 16-bit ints to the Arduino
    • The first int is the commanded translational velocity
    • The second int is the commanded angular velocity
  • The Arduino sets its speeds accordingly and then updates its sonar sensors
  • After the sensors are updated, the Arduino sends 11 16-bit ints back to the Raspberry Pi
    • The first 8 ints are the distance readings from the 8 sonar sensors
    • The last 3 ints are Colin’s x and y position and his heading, calculated from odometry

Using this protocol, Colin will run at the commanded speeds until he receives another command. This causes a problem: Colin will continue to run even if the serial communication fails and the Raspberry Pi stops sending commands altogether. This means he could run off a cliff and there would be nothing to stop him!

To fix this I have the Raspberry Pi send commands at a regular interval. If, for example, Colin expects to get a new command every quarter second and he doesn’t get a command at the expected time, he will know there’s a communication problem. If Colin detects a communication fault he can respond in a fail-safe manner by stopping.

Communication Formatting

How do we send ints over serial though? Arduino’s Serial.print() function converts int and float values to strings of chars before sending them. At first this might seem really convenient, but it actually causes more problems than it solves. First, the receiving side has to parse those char strings back into int or float values. Second, the C++ side would need its own code for converting numbers to strings before sending them. Lastly, if we convert numbers to strings their lengths will be variable, so we would need to define some way to tell when one value ends and the next begins.

The good news is none of that is necessary! You know why? Every command and sensor packet will be exactly the same length: 2 16-bit ints for commands and 11 16-bit ints for responses. Further, the ints will always come in the same order. So the meaning of each byte is predictable.
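
If it helps to picture the layout, here is a sketch of the two packets written as structs. The struct and field names are just for illustration; the actual code on both sides works directly with raw byte buffers.

#include <stdint.h>

// command packet: 4 bytes, sent from the Raspberry Pi to the Arduino
struct CommandPacket
{
  int16_t translational; // commanded translational velocity
  int16_t angular;       // commanded angular velocity, scaled by 1000
};

// sensor packet: 22 bytes, sent from the Arduino to the Raspberry Pi
struct SensorPacket
{
  int16_t distances[8];  // the 8 sonar distance readings
  int16_t x;             // x position from odometry
  int16_t y;             // y position from odometry
  int16_t theta;         // heading from odometry, scaled by 1000
};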

The only problem is you can only send individual bytes over serial. But this is easily solved by splitting the ints into their component bytes before sending:

char firstByte = (byte)(value & 0xFF);
char secondByte = (byte)((value >> 8) & 0xFF);

and reassembling them on the other end:

int value = (secondByte << 8) | firstByte;



Wiring It Up

Wiring up the Raspberry Pi to the Arduino is pretty simple, but there’s an important catch. The Raspberry Pi uses 3.3 volt logic and the Arduino uses 5 volt logic. So we need to use a level shifter to allow communication between the two devices. If the level shifter gets a 3.3 volt signal on the low side, it sends out a 5 volt signal on the high side. If it gets a 5 volt signal on the high side it sends out a 3.3 volt signal on the low side. Pretty simple, right? Wire it up as shown below:

Wiring for serial between a Raspberry Pi and an Arduino




The Code!


Okay, enough talk. Let’s get into the code. I’m going to present the code for the Raspberry Pi side first and follow it up with the Arduino code. The complete code for the Raspberry Pi can be found here and the Arduino code can be found here.

Raspberry Pi Code

Opening the Serial connection

First, we need to open a serial connection with the Arduino. This is handled by the following function, which I adapted from this extremely helpful site. Check out that site if you want details on how all of this works.

void SerialBot::openSerial()
{
	serialFd_ = open("/dev/serial0", O_RDWR); // O_NOCTTY and O_NDELAY omitted to allow blocking reads
	if (serialFd_ == -1)
	{
		cerr << "Error - unable to open uart" << endl;
		exit(-1);
	}	
	
	struct termios options;
	tcgetattr(serialFd_, &options);
	options.c_cflag = B9600 | CS8 | CLOCAL | CREAD;
	options.c_iflag = IGNPAR;
	options.c_oflag = 0;
	options.c_lflag = 0;
	tcflush(serialFd_, TCIFLUSH);
	tcsetattr(serialFd_, TCSANOW, &options);
}

There is one important difference between my function above and the one it’s adapted from: I dropped the O_NOCTTY and O_NDELAY flags from the open() call. This means my serial connection will be blocking: when I call the read() function, execution stops until there is data to read in the serial buffer. In other words, my program will wait for a response from the Arduino before continuing.

Sending Commands

Sending a command works as follows:

int SerialBot::transmit(char* commandPacket)
{
	int result = -1;
	if (serialFd_ != -1) 
	{
		result = write(serialFd_, commandPacket, commandPacketSize);
	}
	return result;
}

And, in case you’re wondering, the function that assembles the commandPacket is below.

void SerialBot::makeCommandPacket(char* commandPacket)
{
	int16_t intAngular = (int)(angular_ * 1000.0);
	commandPacket[0] = (char)(translational_ & 0xFF);
	commandPacket[1] = (char)((translational_ >> 8) & 0xFF);
	commandPacket[2] = (char)(intAngular & 0xFF);
	commandPacket[3] = (char)((intAngular >> 8) & 0xFF);
}

Note that angular_ is a double representing Colin’s commanded angular velocity. The size and representation of doubles is inconsistent, however, so it’s difficult to break them up and reassemble them on a different machine. Int representations are very consistent, however, so I just multiply angular_ by 1000 to save the first three decimal places and cast it to an int. The loss of accuracy is pretty negligible for our purposes.

Receiving Data

The function below is how we receive data from the Arduino. Note that I’ve set the read to time out after 0.25 seconds. The Raspberry Pi expects to get a response from the Arduino after every command is sent. If it doesn’t receive a response before it’s time to send the next command, it throws an error.

int SerialBot::receive(char* sensorPacket)
{
	memset(sensorPacket, '\0', sensorPacketSize_);
	int rxBytes = -1; // default to an error value in case the port isn't open
	if (serialFd_ != -1)
	{
		// set up blocking read with timeout at .25 seconds
		fd_set set;
		FD_ZERO(&set); // clear the file descriptor set
		FD_SET(serialFd_, &set); // add serial file descriptor to the set
		struct timeval timeout;
		timeout.tv_sec = 0;
		timeout.tv_usec = 250000;
		
		// wait for serial to become available
		int selectResult = select(serialFd_ + 1, &set, NULL, NULL, &timeout);
		if (selectResult < 0)
		{
			cerr << "blocking read failed" << endl;
			return -1;
		}
		else if (selectResult == 0)
		{
			cerr << "read failed: timeout occurred" << endl;
			return 0;
		}
		else
		{
			rxBytes = read(serialFd_, sensorPacket, sensorPacketSize_); // read the full sensor packet (2 bytes per int)
		}
	}
	return rxBytes;
}

Once we’ve read data from the Arduino, we need to parse it:

void SerialBot::parseSensorPacket(char* sensorPacket)
{
	int16_t firstByte;
	int16_t secondByte;
	int16_t inValues[numSonar_ + numPoseVariables];
	for (int i = 0; i < numSonar_ + numPoseVariables; i++)
	{
		firstByte = sensorPacket[2 * i];
		secondByte = sensorPacket[(2 * i) + 1];
		inValues[i] = (secondByte << 8) | firstByte;
	}

	for (int i = 0; i < numSonar_; i++)
	{
		distances_[i] = inValues[i];
	}
	
	x_ = inValues[8];
	y_ = inValues[9];
	theta_ = ((double)inValues[10]) / 1000.0;
}

Note again that Colin’s heading, theta_ , is a double. To save some bother in programming, the double value is multiplied by 1000 and cast to an int before it’s sent. So it needs to be cast back to a double and divided by 1000 after it’s received.

Putting It All Together

Okay, last thing: we’ll put all of these things together in a communication function that runs every 0.25 seconds in its own thread:

void SerialBot::commThreadFunction()
{
	while (true) 
	{
		char commandPacket[commandPacketSize];
		makeCommandPacket(commandPacket);
		if (transmit(commandPacket) < 1)
			cerr << "command packet transmission failed" << endl;
		char sensorPacket[sensorPacketSize_];
		memset(sensorPacket, '\0', sensorPacketSize_);
		int receiveResult = receive(sensorPacket);
		if (receiveResult < 1)
		{
			cerr << "sensor packet not received" << endl;
		}
		else if (receiveResult < sensorPacketSize_)
		{
			cerr << "incomplete sensor packet received" << endl;
		}
		else
		{
			parseSensorPacket(sensorPacket);
		}
		usleep(readPeriod_);
	}
}



The Arduino Code

Are you still with me? That took a while, but we got one side of it done. So we just have the Arduino code left to deal with.

Receiving

Let’s start with receiving a command from the Raspberry Pi:

void readCommandPacket()
{
  byte buffer[4];
  int result = Serial.readBytes((char*)buffer, 4);

  if (result == 4) // if the correct number of bytes has been received
  {
    int commands[2];
    
    // assemble 16 bit ints from the received bytes in the buffer
    for (int i = 0; i < 2; i++)
    {
      int firstByte = buffer[2 * i];
      int secondByte = buffer[(2 * i) + 1];
      commands[i] = (secondByte << 8) | firstByte;
    }
    translational = commands[0]; 
    angular = (double)commands[1] / 1000.0; // convert received int to double angular velocity
    colin.drive(translational, angular); // set Colin's speeds
    commandReceived = true; // note that a command has been received
    lastCommandTime = millis();
  }
  else if (result > 0)
  {
    Serial.println("incomplete command");
  }
  // else do nothing and try again later
}

Note that I’m using the Serial.readBytes() function rather than the more common Serial.read() function. There are a couple of reasons for this. First, Serial.read() only reads a char at a time, but we know we need 4 bytes. Serial.readBytes() also blocks the program’s execution until it receives the requested number of bytes or until the serial timeout (set with Serial.setTimeout()) expires. This is perfect, since it means we’ll normally get a complete packet instead of just receiving part of one.

Transmitting

The transmit function first puts all the data that needs to be sent into an array, buffer. Then the buffer is sent to the Raspberry Pi using Serial.write() . Note that I’m not using Serial.print() because it automatically converts int values to characters, and we want to send the bytes exactly as-is.

void sendSensorPacket()
{
  colin.getPosition(x, y, theta); // updates Colin's position
  byte buffer[22];
  addDistances(buffer); // adds sonar readings to buffer
  int sendX = (int)x;
  int sendY = (int)y;
  int sendTheta = (int)(theta * 1000.0);
  buffer[16] = (byte)(sendX & 0xFF);
  buffer[17] = (byte)((sendX >> 8) & 0xFF);
  buffer[18] = (byte)(sendY & 0xFF);
  buffer[19] = (byte)((sendY >> 8) & 0xFF);
  buffer[20] = (byte)(sendTheta & 0xFF);
  buffer[21] = (byte)((sendTheta >> 8) & 0xFF);
  Serial.write(buffer, 22);
}

void addDistances(byte* buffer)
{
  for (int i = 0; i < NUM_SONAR; i++)
  {
    buffer[2 * i] = (byte)(sonarDistances[i] & 0xFF);
    buffer[(2 * i) + 1] = (byte)((sonarDistances[i] >> 8) & 0xFF);
  }
}

Bringing It All Together

The loop() function below brings everything together. It checks to see if there is data in the serial buffer and, if so, attempts to interpret it as a command. If it successfully reads a command, it requests an update from the sonar controller. After the Arduino gets updated sensor readings it assembles a response packet and sends it back to the Raspberry Pi.

Lastly, the Arduino checks to see if more than 1 second has passed since the last command was received. If so, it assumes that a communication fault has occurred and it stops Colin.

void loop() 
{ 
  // check if a command packet is available to read
  readCommandPacket();
  
  // request a sensor update if a command has been received
  if (commandReceived)
  {
    commandReceived = false;
    requestSonarUpdate(SONAR_ADDRESS);
  }
  // send sensor packet if sonar has finished updating
  if (distancesRead)
  {
    distancesRead = false; 
    sendSensorPacket();
  }
  currentTime = millis();
  
  // stop colin if a command packet has not been received for 1 second
  if (currentTime - lastCommandTime > 1000)
  {
    Serial.println("command not received for 1 second");
    lastCommandTime = millis();
    colin.drive(0, 0.0);
  }
}



Where Do We Go From Here?

Now we have a good way to communicate between Colin’s Raspberry Pi and Arduino. Colin doesn’t have a way to perceive the world around him, however, so that’s our next step. I designed an independent controller to read Colin’s sonar sensors and relay the information to the Arduino via I2C. My next post will cover the finer details of my sonar controller and the associated communication protocol.

After we sort these details out and get the system working like we want, we can get to programming higher level behaviors. For example, I’m currently working on a wall-following program. I’m hoping it will be ready to present in a couple of weeks! Lots of good stuff to come, stay tuned!


PID Motor Control

Months ago I wrote a post on the use of motor encoders with Arduino and promised to follow up with a post on speed control. In fact, at the time I already had a mostly functional speed control class, but I decided I had to do some major cleanup on it before presenting it here. Then I started school and had no time for Colin or this blog for nine months.

Now that I’m done with school, I’ve had time to write a brand new motor control library! You can find the source code and an example sketch at github.com/1988kramer/motor_control.

In this post I’m going to briefly explain how my motor control program is structured, give some background on the control theory that went into it, and then explain how that theory is implemented in the actual code.


Program Structure

I structured my program to mirror the hardware as closely as possible, so I made two classes, Motor and Encoder, to handle the low-level functions of the motor and encoder. The Motor and Encoder classes are used by the SpeedControl class to precisely control the motor’s speed. A rough sketch of the library’s structure is below.

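Here is a rough textual sketch of how the pieces fit together. The constructor and method signatures are inferred from the example sketch and the adjustPWM() code later in this post, so treat them as approximate; the real headers are in the GitHub repo.

// Motor: low-level motor driver interface
class Motor
{
  public:
    Motor(int dirPin1, int dirPin2, int pwmPin);
    void setPWM(int pwm); // sets the raw PWM level sent to the motor driver
};

// Encoder: counts encoder ticks and reports the measured speed
class Encoder
{
  public:
    Encoder(int pinA, int pinB, int deltaT, int ticksPerRev);
    void updateCount(); // called from a pin change interrupt
    int getSpeed();     // signed speed of the motor's output shaft
};

// SpeedControl: closed-loop speed control built on a Motor and an Encoder
class SpeedControl
{
  public:
    SpeedControl(Motor* motor, Encoder* encoder);
    void setGains(double kP, double kI, double kD);
    void setSpeed(int setPoint);
    void adjustPWM();   // PID update, called every deltaT by a timer interrupt
    long getDistance(); // used in the example sketch below
};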

Two more classes build on the functionality of SpeedControl: PositionControl and DifferentialDrive. PositionControl uses an instance of SpeedControl to provide position control functions, and DifferentialDrive uses two instances of PositionControl to provide high-level control for a differential drive robot like Colin.

I’m just going to be discussing the SpeedControl class today, but you can expect a post on PositionControl and DifferentialDrive in the next few weeks.


Control Loop Basics

In my earlier post on motor control I explained that simply using analogWrite does not actually set the speed of a motor. It only sets the voltage to the motor. Varying loads will cause a motor to run at different speeds with the same input voltage. How do we get a motor to run at a specific speed? This sounds like a job for a closed-loop controller!

In a closed-loop controller, we tell the controller how fast we want the motor to run. We call this the set point. The controller then measures the actual speed of the motor and calculates the difference between the actual speed and the set point. This is the error. The controller then adjusts the voltage to the motor to reduce the error.

Proportional Control

This sounds simple enough, but how much should the controller adjust for a given error? One could make the adjustment directly proportional to the error, i.e. large errors get large adjustments and small errors get small adjustments. The adjustment would be defined as

A(t)=K_{p}e(t)

where A(t) is the adjustment, e(t) is the error, and K_{p} is a constant referred to as the gain. This is a decent solution but it has some problems. If K_{p} is large the controller will be quick to respond to disturbances but it will tend to over-correct (known as overshooting). If K_{p} is very large the motor speed will oscillate around the set point and never stabilize. If K_{p} is small enough to avoid overshooting, however, the controller will respond very slowly to disturbances.

Integral and Derivative Terms

To compensate for this we can add a couple more terms to the adjustment equation. A term proportional to both the error and duration of the error, called the integral term, will correct for persistent small errors that won’t be corrected by the proportional term. It will also make the controller faster to reach the set point. However, if the integral term is too large it will cause overshoot and oscillation. The integral term is defined as

I_{out}=K_{i}\int_{0}^{t}e(\tau)d\tau

Lastly, the derivative (slope) of the error over time can be used to predict the system’s future behavior. For instance, a large derivative indicates the next error will likely be far away from the last whereas a small derivative indicates the next error will likely be close to the last. A term calculated from the derivative of past errors helps to minimize overshoot and improve the stability of the controller. The derivative term is defined as

D_{out}=K_{d}\frac{de(t)}{dt}

PID Adjustment Equation

So our final adjustment is the sum of the proportional, integral, and derivative terms:

A(t)=P_{out}+I_{out}+D_{out}=K_{p}e(t)+K_{i}\int_{0}^{t}e(\tau)d\tau+K_{d}\frac{de(t)}{dt}

Every time the controller adjusts the voltage output to the motor it must find the current speed of the motor, calculate the error between the current speed and the set point, then use the above equation to determine how much the output voltage to the motor should be adjusted to compensate for the error. In Colin’s case this is done 20 times per second.
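
For reference, here is the same adjustment written in the discrete form the code actually computes at each update, where e_{k} is the error at the k-th update and \Delta t is the fixed period between updates (in practice the code below folds \Delta t into the gains K_{i} and K_{d}):

A_{k}=K_{p}e_{k}+K_{i}\sum_{j=0}^{k}e_{j}\Delta t+K_{d}\frac{e_{k}-e_{k-1}}{\Delta t}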


The Code

Because posting my motor control code in its entirety would take far too much space and most of it isn’t very interesting anyway, I’m just going to post and explain the update function. It calculates the required adjustment to the PWM output to the Motor to maintain consistent speed.

My code draws heavily on Brett Beauregard’s PID library. I would encourage you to check out his blog for a more in-depth explanation of PID control. Thanks for your work, Brett!

// adjusts motor's PWM level to correct for any error between the
// set point and the actual speed of the motor's output shaft
// MUST be called regularly on the same deltaT used to calculate
// motor speed in the Encoder object
void SpeedControl::adjustPWM()
{
	int speed = _encoder->getSpeed(); // motor control returns vector speed
	if (speed < 0) speed *= -1;  // convert speed to scalar
	int error = _setPoint - speed;  // calculate error
	_iTerm += (_kI * (double)error); // calculate integral term
	double dInput = speed - _lastSpeed; // calculate derivative
	int adjustment = (_kP * (double)error) + _iTerm - (_kD * dInput);
	_pwm += adjustment;
	constrainPWM(); // limit _pwm to the range 0-255
	_motor->setPWM(_pwm);
	_lastSpeed = speed;
}

The code above basically performs the calculation discussed in the preceding section. It fetches the current speed of the motor from the encoder, calculates the error between that speed and the set point, then computes the proportional, integral, and derivative terms and combines them into the adjustment applied to the PWM level.

Derivative Kick

There is one deviation from the equations presented in the previous section: the derivative term is calculated from the speed, not the error, and it’s subtracted from the adjustment rather than added. What gives?

Basically, when the user changes the set point there is an instantaneous change in the error e(t). This means the derivative of the error \frac{de(t)}{dt} is infinite at this point. So in theory the derivative term becomes infinite whenever the set point changes, a phenomenon called derivative kick. Because the code above calculates the approximate numerical derivative the phenomenon is not as pronounced, but it should still be addressed. Making the change above corrects for derivative kick. See Brett’s blog for a more in-depth explanation.
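
To see why this substitution works, write the error as the difference between the set point r(t) and the measured speed v(t). Whenever the set point is held constant its derivative is zero, so

\frac{de(t)}{dt}=\frac{d}{dt}\left(r(t)-v(t)\right)=-\frac{dv(t)}{dt}

Subtracting K_{d} times the change in speed therefore gives the same derivative action as adding K_{d} times the change in error, but without the spike when the set point jumps.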


Example Sketch

So how do we actually implement this to control a motor? You could spend weeks or possibly months of your free time developing a full speed control program, or you could try using the one that I spent weeks of my free time on! The example sketch below demonstrates the use of my SpeedControl class, available on github.

#include <TimerOne.h>
#include <PositionControl.h>

const int dir1 = 9, dir2 = 8, pwm = 6;
const int encoderA = 3, encoderB = 5;
const int deltaT = 50000;
const int encoderTicksPerRev = 309;
const double kP = 0.025, kI = 0.0008, kD = 0.0005;

Motor rhMotor(dir1, dir2, pwm);
Encoder rhEncoder(encoderA, encoderB, deltaT, encoderTicksPerRev);
SpeedControl rhSpeedControl(&rhMotor, &rhEncoder);

  
void setup() {
  Serial.begin(9600);
  Timer1.initialize(deltaT);
  Timer1.attachInterrupt(adjustPWM);
  attachInterrupt(1, readRHEncoder, CHANGE);
  rhSpeedControl.setGains(kP, kI, kD);
}

void loop() {
  
  rhSpeedControl.setSpeed(180);
  delay(5000);
  rhSpeedControl.setSpeed(0);
  Serial.println(rhSpeedControl.getDistance());
  delay(2000);
}

void readRHEncoder()
{
  rhEncoder.updateCount();
}

void adjustPWM()
{
  rhSpeedControl.adjustPWM();
}

Implementation Notes

A couple of notes on the sketch above:

First, the PID gains, kP, kI, and kD used in the example sketch work well for Colin but will need to be tuned for your robot. PID tuning is something of a black art; I haven’t been able to find much info on it online, anyway. The method presented on this wikipedia page worked fairly well for me though.

Second, the SpeedControl.adjustPWM() function must be called every deltaT microseconds using a timer interrupt. See this Arduino playground page for more info on timer interrupts. The deltaT value needs to be tuned just like the PID gains. A deltaT value that’s too large will result in the controller not adjusting frequently enough to accurately control the motor’s speed. A value that’s too small will limit the accuracy of the speed measurement, because fewer encoder ticks can elapse in a smaller amount of time. The maximum accuracy for a given deltaT, by the way, is

Accuracy=\frac{D}{\Delta t}

Where D is the number of degrees per encoder tick. I found a deltaT of 50 milliseconds works well for Colin, but you should play around with it for yourself.
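
To put rough numbers on that, using the encoder constant from the example sketch above (309 ticks per revolution of the output shaft, so D = 360/309 ≈ 1.17 degrees per tick) and a deltaT of 50 milliseconds, the speed measurement can only resolve steps of about 1.17/0.05 ≈ 23 degrees per second.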

That’s all I have for now! Stay tuned for a post on my PositionControl and DifferentialDrive classes in the next few weeks!

I’m Learning Multithreading!

Since this quarter started I’ve been doing a weekly interview practice session with friends from school. Last week we spent most of our hour on this:

Rotate Matrix: Given an image represented by an NxN matrix, where each pixel in the image is 4 bytes, write a method to rotate the image by 90 degrees. Can you do this in place?

For those interested, it’s exercise 1.7 in the sixth edition of Cracking the Coding Interview by Gayle Laakmann McDowell. It’s a great interview prep book that’s been really helpful since I started job hunting. I had a lot of fun with this one so I’d like to do a short write up on our thought process.


Row-to-Column Method

The obvious thing to do here is create a new NxN square matrix and transpose the rows of the old matrix into the columns of the new matrix as in the graphic below:

Row-to-column matrix rotation

The above method takes O(n) time and O(n) space where n is the number of entries in the matrix. Coding this up should be pretty simple so I’m going to skip ahead to the thing I’m actually interested in: the in-place method.
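
If you’d like to see it anyway, here is a minimal sketch of the row-to-column method, assuming the matrix is stored as a vector of vectors rather than the global array used in matrixRotate.cpp:

#include <vector>

// rotates a square matrix 90 degrees counterclockwise by copying each
// row of the input into a column of a new matrix
std::vector<std::vector<int>> rotateLeftCopy(const std::vector<std::vector<int>>& m)
{
  int n = m.size() - 1; // highest index in the matrix, NOT the size
  std::vector<std::vector<int>> out(m.size(), std::vector<int>(m.size()));
  for (int i = 0; i <= n; i++)
  {
    for (int j = 0; j <= n; j++)
    {
      out[n - j][i] = m[i][j];
    }
  }
  return out;
}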


In-Place Method

We can’t get below O(n) time to rotate a matrix since every entry must be moved and there are n entries. However, it is possible to cut the space usage down from O(n) to O(1) by doing the rotation in place. See the graphic below for an illustration of the method:

In-place matrix rotation

How do we implement this in code? It’s a little difficult, but the key insight is that for any entry in the matrix you can compute, from its index alone, the index of the entry whose value will move into it. In general terms,

The entry at { i, j } receives its value from { j, n – i }

where n is the last index in the matrix. The really cool thing about this is that it works for any starting index. For example, with n = 2 the entry at { 0, 0 } takes its value from { 0, 2 }, which takes its value from { 2, 2 }, which takes its value from { 2, 0 }, which closes the four-element cycle by taking the value originally at { 0, 0 }. The graphic above only shows a rotation of the outer layer of the matrix, but the method works for any index in any size of square matrix.

The code I post on this blog has been getting longer and longer lately, so I’m only going to post the relevant functions on the site. If you want to see the whole program you can find it on my GitHub repo.

The program is called matrixRotate.cpp. It takes a .txt file containing the matrix to rotate as a command line argument. The text file should have one int representing the size of the matrix on the first line. Each successive line should have a space-delimited string of ints representing each row of the matrix. It prints the rotated matrix to the console.

The function below will take care of a single step in the matrix rotation. After the function completes 4 matrix entries are moved.

// accepts three ints, the starting row and column indices
// and the highest index in the matrix, NOT the size
// rotates 4 matrix entries by 90 degrees counterclockwise
void rotateLeftHelper(int row, int column, int n) 
{
  int temp = matrix[row][column];
  for (int i = 0; i < 3; i++) 
  {
    int nextRow = column;
    int nextColumn = n - row;
    matrix[row][column] = matrix[nextRow][nextColumn];
    row = nextRow;
    column = nextColumn;
  }
  matrix[row][column] = temp;
}

To rotate the whole matrix we need to run the above function on every entry in the first row from { 1, 1 } to { 1, n-1 }. Then every entry in the second row from { 2, 2 } to { 2, n-2 }. This needs to be repeated until we get to the starting index: { n/2, n/2 }. The function below will do this for us:

// accepts an int, the matrix size
// rotates the matrix by 90 degrees counterclockwise
void rotateLeft(int n) 
{
  int level = n - 1; // switch n from matrix size to highest index
  for (int i = 0; i < n/2; i++) 
  {
    for (int j = i; j < level; j++) 
    {
      rotateLeftHelper(i, j, n - 1);
    }
    level--;
  }
}

So when they work in concert, the above functions will rotate a matrix by 90 degrees counterclockwise and it’ll do it in place. That’s pretty cool, right? If you’ll recall though, the problem from Cracking the Coding Interview says the matrix represents an image. If it’s a raw image from a run-of-the-mill camera it will have 20 million entries. Doing each rotation in series will tie up the CPU for a long time. Fortunately, using the above  rotateLeftHelper() function, one instance of the function doesn’t affect the others. They never operate on the same part of the matrix! Do you know what that means? They can run in parallel and asynchronously!


Multi-Threaded Version

Really all we need to do here is create a new thread for every call to the rotateLeftHelper() function instead of running them in series. Cool, right?

The function below implements the in-place matrix rotation method using pthreads. See matrixRotate.cpp on my GitHub repo to see the whole program.

// accepts a pointer to a threadParams struct containing three ints, 
// the starting row and column indices
// and the highest index in the matrix, NOT the size
void* rotateLeftHelper(void* start) 
{
  threadParams* startState = (threadParams*) start;
  int row = startState->row;
  int column = startState->column;
  // cout << "starting thread on {" << row <<  ", " << column << "}" << endl;
  int n = startState->n;
  int temp = matrix[row][column];
  for (int i = 0; i < 3; i++) 
  {
    int nextRow = column;
    int nextColumn = n - row;
    matrix[row][column] = matrix[nextRow][nextColumn];
    row = nextRow;
    column = nextColumn;
  }
  matrix[row][column] = temp;
  return NULL; // pthread start routines must return a void*
}

// accepts an int, the matrix size
// rotates a matrix by 90 degrees counterclockwise
void rotateLeft(int n) 
{
  int level = n - 1; // switch n from matrix size to highest index
  // calculate number of threads
  int numThreads = 0;
  for (int i = level; i >= 1; i -= 2) {
    numThreads += i;
  }
  pthread_t *threads;
  threadParams *parameters;
  threads = new pthread_t[numThreads];
  parameters = new threadParams[numThreads];
  int curThread = 0;
  for (int i = 0; i < n/2; i++) 
  {
    for (int j = i; j < level; j++) 
    {
      parameters[curThread].row = i;
      parameters[curThread].column = j;
      parameters[curThread].n = n - 1;
      pthread_create(&threads[curThread], NULL, 
		     &rotateLeftHelper, (void*) &parameters[curThread]);
      curThread++;
    }
    level--;
  }
  // join all threads
  for (int i = 0; i < numThreads; i++) 
  {
    pthread_join(threads[i], NULL);
  }
  delete[] threads;
  delete[] parameters;
}

Further Improvements

So the above code will make use of parallel processing to speed up the matrix rotation. However, creating and destroying a thread for each instance of the rotateLeftHelper() function involves an unnecessary amount of overhead. We could avoid that overhead by using a thread pool.

Also, we’re still running the threads on the CPU. In the best case scenario the CPU has 8 cores, so we’ll be getting a speedup factor of much less than 8. That’s good but not great. Also, we’re tying up the CPU to do a lot of extremely simple tasks. It would be better if we had a much larger number of much worse processors to do this task. Does that sound familiar? It’s called a graphics card and it’s what image processing programs on computers actually use to do tasks like this.

I think the best thing I could do for this program would be to figure out how to create threads that run on my graphics card using OpenGL.

That’s all for now though! Feel free to email me with questions.