Thursday, June 15, 2017

Self-driving Car ND B3, localization

GPS alone only has a precision of roughly 1 to 50 meters; the target is an accuracy of 0.1 m.
SLAM, simultaneous localization and mapping, is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent’s location within it.
A much simpler version is pure localization: estimating the posterior over the pose, given a known map.
The Markov assumption lets us implement this in a recursive structure.

11.2 Posterior distribution

bel(x_t) = p(x_t|z_{1:t},u_{1:t},m)
x is the state, z is the observation/measurement, u is the control/motion, and m is the map.
Six hours of lidar data is 430 GB!
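Under the Markov assumption, the posterior above can be computed recursively, combining a motion prediction with a measurement update. A standard Bayes-filter sketch over a discrete state space, using the same symbols as above:

```latex
% recursive Bayes filter: measurement update times motion prediction
bel(x_t) = \eta \, p(z_t \mid x_t, m) \sum_{x_{t-1}} p(x_t \mid x_{t-1}, u_t, m)\, bel(x_{t-1})
% \eta is a normalizer so that the belief sums to 1
```

The sum is the motion model convolution implemented in 11.19; the leading likelihood term is the observation update implemented in 11.25.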

11.6 Code structure of input data

Be familiar with the input data: map, motion and observation.
Several data files are used:
  • map: data/map_1d.txt
  • in_file_name_ctr: data/example01/control_data.txt
  • in_file_name_obs: data/example01/observations/observations_000001.txt (there are 14 such files, but they are empty)
  • in_file_name_gt: data/example01/gt_example01.txt

11.19 implement motion model

There is no observation data yet; this exercise only practices the motion update.
for (int i = 0; i < bel_x.size(); ++i) {  // state space size is 100
    // motion posterior:
    float posterior_motion = 0.0f;
    // loop over state space x_{t-1} (convolution):
    for (int j = 0; j < bel_x.size(); ++j) {
        float distance_ij = i - j;
        // transition probability: normal distribution over the travelled distance
        float transition_prob = helpers.normpdf(distance_ij, controls.delta_x_f, control_std);
        posterior_motion += transition_prob * bel_x_init[j];
    }
    // update belief
    bel_x[i] = posterior_motion;
}
// normalize:
bel_x = helpers.normalize_vector(bel_x);
bel_x_init = bel_x;
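The same convolution can be sketched as self-contained Python. The `normpdf` helper, the 100-cell state space, and the uniform prior below are stand-ins for the course code, not the project's actual helpers:

```python
import math

def normpdf(x, mu, std):
    # 1-D Gaussian density (stand-in for helpers.normpdf)
    return math.exp(-0.5 * ((x - mu) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))

def motion_update(bel_init, delta_x, control_std):
    # convolve the prior belief with the transition distribution
    n = len(bel_init)
    bel = [0.0] * n
    for i in range(n):
        for j in range(n):
            bel[i] += normpdf(i - j, delta_x, control_std) * bel_init[j]
    total = sum(bel)
    return [b / total for b in bel]  # normalize so the belief sums to 1

# uniform prior over a 100-cell state space, then move 1 cell with std 1
prior = [1.0 / 100] * 100
posterior = motion_update(prior, 1.0, 1.0)
```

After one step on a uniform prior, the posterior stays nearly uniform except at the boundaries, where probability mass is lost and re-spread by the normalization.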

11.25 observation update

1D Markov localization, Kalman filters, and particle filters are all realizations of the Bayes filter.
for (int i = 0; i < bel_x.size(); ++i) {  // state space size is 100
    float pose_i = float(i);  // pseudo position of state i
    // motion update:
    float posterior_motion = 0.0f;
    for (int j = 0; j < bel_x.size(); ++j) {
        float distance_ij = i - j;
        float transition_prob = helpers.normpdf(distance_ij, controls.delta_x_f, control_std);
        posterior_motion += transition_prob * bel_x_init[j];
    }
    // observation update:
    std::vector<float> pseudo_ranges;
    for (unsigned int l = 0; l < map_1d.landmark_list.size(); ++l) {
        // distance from the pseudo position to each landmark still ahead
        float range_l = map_1d.landmark_list[l].x_f - pose_i;
        if (range_l > 0.0f)
            pseudo_ranges.push_back(range_l);
    }
    sort(pseudo_ranges.begin(), pseudo_ranges.end());
    // define observation posterior:
    float posterior_obs = 1.0f;
    // run over the current observation vector:
    for (int z = 0; z < observations.distance_f.size(); ++z) {
        float pseudo_range_min;
        if (pseudo_ranges.size() > 0) {
            // take the closest remaining pseudo range
            pseudo_range_min = pseudo_ranges[0];
            pseudo_ranges.erase(pseudo_ranges.begin());
        } else {
            pseudo_range_min = 100;  // max range
        }
        // estimate the posterior for the observation model:
        posterior_obs *= helpers.normpdf(observations.distance_f[z], pseudo_range_min, observation_std);
    }
    // posterior = observation update * motion model
    bel_x[i] = posterior_obs * posterior_motion;
}
There are several interesting differences:
  1. There is only one control but multiple observations.
  2. posterior_motion is initialized to 0, but posterior_obs is initialized to 1.
  3. The motion probability is additive (a sum over states), while the observation probability is multiplicative (a product over measurements).
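The pseudo-range step above can be isolated into a small function; the landmark positions and pose below are made-up values for illustration:

```python
def pseudo_ranges(landmark_xs, pose):
    # distances to all landmarks still ahead of the car, nearest first
    ranges = [x - pose for x in landmark_xs if x - pose > 0.0]
    ranges.sort()
    return ranges

# hypothetical landmarks at x = 9, 15, 25, 31, 63, 77; car at pose 20
ranges = pseudo_ranges([9.0, 15.0, 25.0, 31.0, 63.0, 77.0], 20.0)
# -> [5.0, 11.0, 43.0, 57.0]: landmarks behind the car are dropped
```

Each observed distance is then matched greedily against the smallest remaining pseudo range, which is why the observation loop pops the front of the sorted vector.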

13 particle filters

A particle filter is like an abundant version of sigma points: you randomly generate many states as starting points, then use measurements to update your belief. To reduce the randomness, the GPS signal can be used to get a rough initial location.
from math import sqrt, exp, pi
import random

world_size = 100.0
landmarks = [[20.0, 20.0], [80.0, 80.0], [20.0, 80.0], [80.0, 20.0]]

class robot:
    def __init__(self):
        self.x = random.random() * world_size
        self.y = random.random() * world_size
        self.orientation = random.random() * 2.0 * pi
        self.forward_noise = 0.0
        self.turn_noise = 0.0
        self.sense_noise = 0.0

    def set_noise(self, new_f_noise, new_t_noise, new_s_noise):
        self.forward_noise = float(new_f_noise)
        self.turn_noise = float(new_t_noise)
        self.sense_noise = float(new_s_noise)

    def sense(self):
        # a list of noisy distances to the landmarks
        Z = []
        for i in range(len(landmarks)):
            x0, y0 = landmarks[i]
            dist = sqrt((self.x - x0) ** 2 + (self.y - y0) ** 2)
            dist += random.gauss(0.0, self.sense_noise)
            Z.append(dist)
        return Z

    def Gaussian(self, mu, sigma, x):
        # probability of x under a 1-D Gaussian with mean mu and std sigma
        return exp(-((mu - x) ** 2) / (sigma ** 2) / 2.0) / sqrt(2.0 * pi * (sigma ** 2))

    def measurement_prob(self, measurement):
        # how likely a measurement is, given this particle's pose
        prob = 1.0
        for i in range(len(landmarks)):
            x0, y0 = landmarks[i]
            dist = sqrt((self.x - x0) ** 2 + (self.y - y0) ** 2)
            prob *= self.Gaussian(dist, self.sense_noise, measurement[i])
        return prob

    def __repr__(self):
        return '[x=%.6s y=%.6s orient=%.6s]' % (str(self.x), str(self.y), str(self.orientation))
Importance weight and resampling wheel
myrobot = robot()
Z = myrobot.sense()  # use as measurement data
N = 1000
p = []  # particle set
for i in range(N):
    x = robot()
    x.set_noise(0.05, 0.05, 5.0)
    p.append(x)
w = [particle.measurement_prob(Z) for particle in p]  # importance weights
# resampling wheel:
index = int(random.random() * N)  # start at a random index
beta = 0.0
mw = max(w)
p_w = []  # resampled particles, drawn in proportion to their weights
for i in range(N):
    beta += random.random() * 2.0 * mw
    while beta > w[index]:
        beta -= w[index]
        index = (index + 1) % N
    p_w.append(p[index])
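The resampling wheel can be sanity-checked on a toy weight vector; with one dominant weight, nearly all resampled indices should point at that particle (the weights below are made up):

```python
import random

def resample(weights):
    # resampling wheel: draw len(weights) indices in proportion to weight
    n = len(weights)
    index = int(random.random() * n)  # random starting spoke
    beta = 0.0
    mw = max(weights)
    picked = []
    for _ in range(n):
        beta += random.random() * 2.0 * mw  # spin forward by up to 2 * max weight
        while beta > weights[index]:
            beta -= weights[index]
            index = (index + 1) % n
        picked.append(index)
    return picked

random.seed(0)  # fixed seed for a reproducible demo
picked = resample([0.01, 0.96, 0.01, 0.01, 0.01])
```

Because index 1 carries almost all the weight, it should dominate the resampled set, which is exactly the survival-of-the-fittest behavior the filter relies on.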

14.4 gaussian sampling

#include <random> // Need this for sampling from distributions
#include <iostream>
using namespace std;
int main() {
    default_random_engine gen;
    double std_x = 1;
    normal_distribution<double> dist_x(0, std_x);//define function
    for (int i=0; i<5;i++)
        cout << dist_x(gen) <<endl; //not so random
    return 0;
}
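The same idea in Python, e.g. drawing initial particles around a hypothetical GPS estimate (the GPS coordinate and noise values below are made up); seeding explicitly makes runs reproducible:

```python
import random

random.seed(42)  # seed explicitly, otherwise every run differs

gps_x, std_x = 4983.0, 2.0  # hypothetical GPS x estimate and its std
# 1000 particle x-positions sampled around the GPS estimate
samples = [random.gauss(gps_x, std_x) for _ in range(1000)]
mean = sum(samples) / len(samples)
```

The sample mean should land very close to the GPS estimate, while individual particles spread out by roughly the GPS noise, which is exactly the rough-location initialization mentioned in the particle filter section.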

Project

Self-Driving Car Project Q&A | Kidnapped Vehicle
15:30 shows how to resample
26:38 shows how the multiplier is used
30:33 explains the multiplier and associations

sebastian Q&A

2017-6-13
There’s no universal definition for AI?
  • There’s no universal definition for every concept.
Difference between AI, ML, DL and Data science?
  • There is more and more overlap these days.
Advice on original research?
  • For me, research is a conjunction of solving a problem and discovering the problem you're trying to solve. As a bachelor's student, someone gives you a problem. In a Ph.D., no one gives you a problem; the professor shows up, smiles at you, and says, "do something interesting." You start working on something concrete, which you always have to do, otherwise you'll fail, and you find the solution is not quite the answer to the problem you posed.
  • Every piece of research I have done started out as building something. We didn't quite know what questions we were answering. It took us a while to learn that the most difficult part of research is understanding what question you are really asking.
The process from an AI idea to a working prototype?
  • Drop everything you know. Never be arrogant that your methods are the right methods; question that.
  • Build the simplest system you could imagine. Don't do complicated stuff. Do something simple, see how far you get, and then understand why you fail.