Competitors will need to process LIDAR and camera frames to output a set of obstacles, removing noise and environmental returns. Participants will be able to build on the large body of work that has gone into the Kitti datasets and challenges, using existing techniques and their own novel approaches to improve on the current state of the art.
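To make "removing noise and environmental returns" concrete, here is a minimal sketch of filtering a LIDAR point cloud with a simple height and range threshold. Real pipelines use ground-plane fitting (e.g. RANSAC) and clustering; the point layout, ground height, and thresholds below are my assumptions, not part of the challenge spec.

```python
# Points are assumed to be (x, y, z) tuples in metres, sensor at origin, z up.
def filter_ground_and_noise(points, ground_z=-1.5, max_range=60.0):
    """Keep points above an assumed ground height and within sensor range."""
    kept = []
    for x, y, z in points:
        r = (x * x + y * y) ** 0.5           # horizontal distance from sensor
        if z > ground_z + 0.2 and r < max_range:  # 0.2 m margin above ground
            kept.append((x, y, z))
    return kept

cloud = [(5.0, 0.0, -1.5),   # ground return -> dropped
         (5.0, 0.2, 0.3),    # obstacle point -> kept
         (80.0, 0.0, 1.0)]   # distant environmental return -> dropped
print(filter_ground_and_noise(cloud))  # [(5.0, 0.2, 0.3)]
```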
Specifically, students will be competing against each other on the Kitti Object Detection Evaluation Benchmark.
New datasets for both testing and training will be released in a format that adheres to the Kitti standard.
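Since the released data follows the Kitti standard, a sketch of parsing one line of a Kitti-format object label may help. The field order follows the Kitti object-detection devkit (type, truncation, occlusion, alpha, 2D bbox, 3D dimensions h/w/l, 3D location in camera coordinates, rotation_y); the sample line below is illustrative, not from the actual dataset.

```python
def parse_kitti_label(line):
    """Parse one Kitti object label line into a dict."""
    f = line.split()
    return {
        "type": f[0],
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),
        "bbox": [float(v) for v in f[4:8]],        # left, top, right, bottom (px)
        "dimensions": [float(v) for v in f[8:11]],  # height, width, length (m)
        "location": [float(v) for v in f[11:14]],   # x, y, z in camera coords (m)
        "rotation_y": float(f[14]),
    }

line = "Car 0.00 0 1.85 387.63 181.54 423.81 203.12 1.67 1.87 3.69 -16.53 2.39 58.49 1.57"
obj = parse_kitti_label(line)
print(obj["type"], obj["location"])  # Car [-16.53, 2.39, 58.49]
```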
Udacity Open Source Self-Driving Car: https://www.udacity.com/self-driving-car
Round 1 - Vehicles
2017.3.22-4.21. The top 50 qualifying teams will be announced on 5.1 and advance to the next round.
The first round will provide data collected from sensors on a moving car, and competitors must identify position as well as dimensions of multiple stationary and moving obstacles.
Round 2 - Vehicles, Pedestrians
The second round adds cyclists and pedestrians, and will also challenge participants to estimate obstacle orientation.
Round 1: Single Vehicle Obstacles
Datasets for Round 1 feature a single vehicle obstacle that competitors will need to locate in 3D space using camera, radar, and LIDAR data. Download:
I noticed in the forum that participants floated a conspiracy theory about the deadline changes:
- there are organizations that live by their rules, and organizations that make up the rules as they go along.
- With 2000 teams and 4000 people.
- It is clear they will pay you less than you are worth, hire more cheap labor than they can manage,
Responses by David:
- These mistakes are not a reflection on Didi. They are a reflection on Udacity. We’re running the program, and any mistakes are our responsibility.
- The Voyage team, which was running this competition, spun out of Udacity in the middle of the competition, which left us in a difficult position. They are good people and continuing to help us get the competition into a good state, but it’s requiring extra time.
Starter code and tutorials
http://ronny.rest/blog/ : tutorials on point clouds, LIDAR data, ROS, and ROS bags
You can either build ROS nodes to process messages as they are played back from a bag, or you can process bags directly with the rosbag API.
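A hedged sketch of the second approach: reading messages straight from a .bag file with the rosbag Python API. This requires a ROS installation, and the filename and topic name '/velodyne_points' are my assumptions about the dataset, not confirmed from its README.

```python
import rosbag  # part of ROS; not available without a ROS installation

bag = rosbag.Bag('dataset.bag')
try:
    # read_messages yields (topic, message, timestamp) tuples in record order
    for topic, msg, t in bag.read_messages(topics=['/velodyne_points']):
        # msg here would be a sensor_msgs/PointCloud2; t is a rospy.Time
        print(topic, t.to_sec())
finally:
    bag.close()
```

The alternative, playing the bag back with `rosbag play` and subscribing from a ROS node, is better when you want to reuse the same node online on the car; direct bag reading is simpler for offline experiments.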
I decided to give up after I downloaded the 30 GB dataset. This dataset (50 GB unzipped) has 12 .bag files and a README, which says Udacity is developing a new dataset production approach that enables datasets to be released immediately after they are recorded. That's why a second 30 GB dataset was released.
Hengck's starter_kit.pptx is the place to start. However, I just don't have time to learn ROS and point clouds. It's very interesting, though; I may come back to indulge myself in this after I secure a job.