Tuesday, April 4, 2017

Self-driving cars as of April 2017

In 2012, Google co-founder Sergey Brin stated that the Google self-driving car would be available to the general public by 2017.
In 2014, this schedule was updated by project director Chris Urmson to indicate a possible release between 2017 and 2020.
In 2016, Chris Urmson left Google and founded Aurora Innovation, a startup aimed at the same technologies.
Baidu teamed up with China-based car manufacturer BAIC to test Level 3 self-driving cars on public roads in 2017. Level 3 autonomy means the car can handle a variety of tasks but still requires human supervision; in a Level 4 or Level 5 autonomous vehicle, passengers can sit back and completely relinquish control. Baidu plans to produce self-driving cars for a public shuttle service in 2018, and to mass-produce the vehicles in 2021.
Today I came to understand why Sebastian Thrun, the founder of the Google self-driving car project, left and founded Udacity. As he said, he could not find enough qualified people to do the job.
So calm down. Don’t be too optimistic. Dance with the newest technology and enjoy the excitement.

Disengagement: The numbers don’t lie

A “disengagement” is when the automated system forces the human driver/passenger to take over control of the vehicle. Disengagement counts are not a scientific measure of the complexity and operating characteristics of these vehicles, says Bryan Reimer, who studies autonomous driving at MIT. “They’re just one very interesting data point.”
They’re unscientific because each disengagement involves all sorts of variables, which the reports log inconsistently. They don’t reveal the impact of weather, or where exactly these “problems” occurred. They don’t note whether the cars were following detailed maps or exploring an area for the first time. They don’t account for the proclivities of the human operators, who likely have different thresholds for when they’ll take over. (Blame California law, which requires that the reports include certain details but doesn’t specify how they should be presented, or in what context.)
  • Waymo, Google’s sibling company: one disengagement every 5,128 miles in 2016.
  • Cruise (GM): one every 400 miles in 2016; one every 5 miles in 2015.
  • Nissan’s robocars: one every 247 miles in 2016; one every 14 miles in 2015.
The 11 reports come in 11 formats, and teem with vague language like “technology evaluation management” (Mercedes-Benz), “follower output invalid” (Tesla), and “planned test to address planning” (Cruise/General Motors).
Waymo has accused Uber of stealing intellectual property after Uber acquired a self-driving truck company, Otto, which had been founded by Anthony Levandowski, a former Waymo employee. The technology in question is the design of the lidar array, the light-based imaging system that sits on top of self-driving cars to help them see the world around them.

lidar

Alphabet-owned company Waymo has slashed the price of lidar, a key component of self-driving cars that helps them see the world, by 90%, Waymo CEO John Krafcik said during a keynote address to kick off the Detroit Auto Show on Sunday.
Lidar is the most expensive component of self-driving cars. As Krafcik noted in his address, a single unit cost $75,000 just a few years ago. A lidar sensor is attached to the top of a car where it spins and shoots out lasers to create high-resolution maps of the car’s surroundings.
Waymo originally used lidar manufactured by Velodyne in its early prototype cars. Since that collaboration, Velodyne has reduced the price of its lidar units to between $8,000 and $30,000, depending on how many lasers they shoot out. Ford and China-based internet company Baidu have jointly invested $150 million in Velodyne for their self-driving car efforts.
The Velodyne HDL-64E costs $75,000 because it uses 64 lasers and 64 photodiodes to scan the world (a laser/photodiode pair is a “channel” in lidar parlance). This results in 64 “lines” of data output. The $8,000 Velodyne Puck has only 16 channels, so while it is cheaper and less complex, you also get a much lower-resolution view of the world. Eventually a lidar system can be cut down to something cheap enough to fit in sub-$1,000 consumer devices like the lidar-powered Neato Botvac robotic vacuum, which uses a single-laser system for a 2D view of the world. At what point, though, does the system become too low-resolution to be useful for a self-driving car?
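As a back-of-the-envelope illustration of what channel count buys, here is a minimal Python sketch that estimates the vertical gap between scan lines at range. The field-of-view figures are approximate public specs assumed for the example, not numbers from the article:

```python
import math

# Assumed, approximate specs (for illustration only):
#   Velodyne HDL-64E: 64 channels over ~26.9 degrees of vertical FOV
#   Velodyne VLP-16 "Puck": 16 channels over ~30 degrees of vertical FOV
def line_spacing_at_range(channels, vertical_fov_deg, range_m):
    """Vertical gap (in meters) between adjacent scan lines at a given range."""
    step_deg = vertical_fov_deg / (channels - 1)      # angle between beams
    return range_m * math.tan(math.radians(step_deg))

for name, ch, fov in [("HDL-64E", 64, 26.9), ("VLP-16 Puck", 16, 30.0)]:
    gap = line_spacing_at_range(ch, fov, 60.0)
    print(f"{name}: ~{gap:.2f} m between scan lines at 60 m")
```

Under these assumptions the 64-channel unit leaves roughly half a meter between scan lines at 60 m, while the 16-channel Puck leaves about two meters, taller than a pedestrian, so whole objects can slip between beams.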
One of the most powerful parts of our self-driving technology is our custom-built LiDAR — or “Light Detection and Ranging.” LiDAR works by bouncing millions of laser beams off surrounding objects and measuring how long it takes for the light to reflect, painting a 3D picture of the world. LiDAR is critical to detecting and measuring the shape, speed and movement of objects like cyclists, vehicles and pedestrians.
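The time-of-flight idea in that paragraph comes down to a couple of lines of arithmetic: distance is the speed of light times the round-trip time, halved. A minimal sketch (the 400 ns round trip and the beam angles are invented illustration values):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_to_range(round_trip_s):
    """One-way distance from a measured round-trip time (out and back)."""
    return C * round_trip_s / 2.0

def to_cartesian(range_m, azimuth_deg, elevation_deg):
    """One (range, azimuth, elevation) return becomes one 3D point in the cloud."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (range_m * math.cos(el) * math.cos(az),
            range_m * math.cos(el) * math.sin(az),
            range_m * math.sin(el))

r = tof_to_range(400e-9)                         # a ~400 ns round trip...
print(round(r, 1), to_cartesian(r, 45.0, -2.0))  # ...puts the object ~60 m away
```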
Hundreds of Waymo engineers have spent thousands of hours, and our company has invested millions of dollars to design a highly specialized and unique LiDAR system. Waymo engineers have driven down the cost of LiDAR dramatically even as we’ve improved the quality and reliability of its performance. The configuration and specifications of our LiDAR sensors are unique to Waymo. Misappropriating this technology is akin to stealing a secret recipe from a beverage company.
Recently, we received an unexpected email. One of our suppliers specializing in LiDAR components sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s LiDAR circuit board — except its design bore a striking resemblance to Waymo’s unique LiDAR design.
We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s LiDAR and circuit board. To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.
Uber has been described by insiders as having an “asshole culture”,[259][260] with a corporate culture in which employees are lauded for bringing incomplete and unreliable solutions[259] to market in order for Uber to appear to be innovators and winners. Likened to A Game of Thrones, the company encourages aggression[261] and backstabbing,[259][261] in which peers undermine each other and their direct superiors in order to climb the corporate ladder.[259] Human resource managers in the software industry see Uber as a black mark on the résumés of ex-Uber employees, with one industry manager saying “If you did well in that environment upholding those values, I probably don’t want to work with you.”[259]

Waymo

Waymo was originally pursuing fully driverless cars without a steering wheel, brake pedal, or gas pedal. However, in December 2016, Waymo said that driver controls would not be removed, due to the regulatory environment. Waymo has also said it is not in the business of physically building cars, and that it intends instead to supply self-driving technology to other companies. Bringing its hardware efforts in-house instead of outsourcing to Velodyne, along with reducing the price of components, is in line with that strategy.
Waymo, which stands for “Way forward in mobility,” has the mission of making “it safe and easy for people and things to move around.”
Google: “On October 20, 2015, we completed the world’s first fully self-driven car ride. This ride was possible because our cars can now handle the most difficult driving tasks, such as detecting and responding to emergency vehicles, mastering multi-lane four-way stops, and anticipating what unpredictable humans will do on the road.”
Principal software engineer: Nathaniel Fairfield

the law for autonomous cars

In October 2010, an attorney for the California Department of Motor Vehicles raised concerns that “the technology is ahead of the law in many areas,” citing state laws that “all presume to have a human being operating the vehicle”.
Nevada passed a law in June 2011 concerning the operation of autonomous cars,[8][9][10] which went into effect on March 1, 2012.[11] A Toyota Prius modified with Google’s experimental driverless technology was licensed by the Nevada Department of Motor Vehicles (DMV) in May 2012; this was the first license issued in the United States for a self-driving car.[11] License plates issued in Nevada for autonomous cars have a red background and feature an infinity symbol (∞) on the left side because, according to the DMV Director, “…using the infinity symbol was the best way to represent the ‘car of the future’.”[12]

Hidden Obstacles for Google’s Self-Driving Cars

It is important for Google to be open about what its cars can and cannot do. “This is a very early-stage technology, which makes asking these kinds of questions all the more justified.”
Would you buy a self-driving car that couldn’t drive itself in 99 percent of the country? Or that knew nearly nothing about parking, couldn’t be taken out in snow or heavy rain, and would drive straight over a gaping pothole?
The car clearly isn’t ready yet, as evidenced by the list of things it can’t currently do, volunteered by Chris Urmson:
  • Scenarios that may be beyond the capabilities of current sensors, such as making a left turn into a high-speed stream of oncoming traffic.
  • The car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.
  • The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either.
  • Pedestrians are detected simply as moving, column-shaped blurs of pixels—meaning, Urmson agrees, that the car wouldn’t be able to spot a police officer at the side of the road frantically waving for traffic to stop.
  • The car’s video cameras detect the color of a traffic light; Urmson said his team is still working to prevent them from being blinded when the sun is directly behind a light.
  • Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages.
  • Google says that its cars can identify almost all unmapped stop signs, and would remain safe if they miss a sign because the vehicles are always looking out for traffic, pedestrians and other obstacles.
  • Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.
  • Google’s self-driving car can “see” moving objects like other cars in real time. But only a pre-made map lets it know about the presence of certain stationary objects, like traffic lights.

Chinese-language coverage

Answers by 马陆 (PhD in computer vision and SLAM) on 知乎 (Zhihu):

Researching autonomous driving as an individual?

In our lab I worked on autonomous driving for years; at our peak we had 9 PhD students and 4 postdocs. Over three years we produced plenty of results in SLAM and published quite a few papers, yet that touched only a small corner of autonomous driving.
Then, after joining a company that actually builds self-driving cars, I discovered that our earlier work, however technically advanced, was barely applicable at all. Autonomous driving involves vehicle localization, object detection, tracking, path planning, and a long list of other components. Even our 60-person team, every member a PhD from the likes of Stanford or MIT, has to work flat out just to keep things moving step by step.
In short, without deep financial and technical resources, real autonomous driving is simply out of reach.
康费: You need to learn the Kalman filter, the particle filter, path planning, obstacle-avoidance algorithms, and so on. But I strongly advise against building full autonomous-driving technology on your own: the engineering is extremely complex and dangerous, and it takes a great deal of money. Consider cutting in from a narrower problem instead, such as road-sign recognition or panoramic vision.
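Since the Kalman filter is named first on that study list, here is a minimal one-dimensional predict/update sketch (a constant-position model; the noise settings and measurements are made-up illustration values):

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update cycle.
    x, p: current state estimate and its variance
    z: new noisy measurement
    q, r: process and measurement noise variances (assumed known)
    """
    p = p + q                # predict: uncertainty grows by process noise
    k = p / (p + r)          # Kalman gain: how much to trust the measurement
    x = x + k * (z - x)      # update: blend prediction and measurement
    p = (1 - k) * p          # uncertainty shrinks after the update
    return x, p

x, p = 0.0, 1000.0                     # wild initial guess, huge uncertainty
for z in [5.2, 4.9, 5.1, 5.0, 4.8]:    # noisy measurements of a value near 5
    x, p = kalman_step(x, p, z)
print(f"estimate: {x:.2f}, variance: {p:.4f}")
```

Real vehicle filters track multi-dimensional state (position, heading, velocity) with matrix versions of the same four lines.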

The drawbacks of lidar?

For a robot, perceiving 3D space, in other words perceiving depth, is a crucial problem.
There are many techniques for sensing depth today. Monocular cameras, stereo cameras, lidar, structured light, ultrasound, and millimeter-wave radar can all provide some depth information. Among all these options, laser ranging, that is, lidar, delivers high-precision point clouds that no other sensor can match. With such point clouds, many problems in autonomous driving, such as vehicle localization, high-definition mapping, obstacle detection, and tracking, become far easier.
Lidar has plenty of drawbacks, however. First, it is not an all-weather sensor and stops working in bad conditions. Second, current units are expensive and their point clouds relatively low-resolution, so the associated algorithms still face real research difficulties. Third, the working range is limited: at around 60 m the point cloud becomes so sparse that lidar alone can no longer meet the demands of object detection. Finally, lidar cannot perceive color, so for tasks such as traffic-signal recognition it is of little help.
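The range limitation follows from geometry: both the vertical and horizontal gaps between returns grow linearly with distance, so areal point density falls off roughly as 1/r². A small sketch using ballpark spinning-lidar resolutions (assumed values, not from the answer above):

```python
import math

V_RES_DEG = 0.4   # assumed vertical angle between beams
H_RES_DEG = 0.2   # assumed horizontal angle between firings

def points_per_m2(range_m):
    """Approximate density of returns on a surface facing the sensor."""
    gap_v = range_m * math.tan(math.radians(V_RES_DEG))
    gap_h = range_m * math.tan(math.radians(H_RES_DEG))
    return 1.0 / (gap_v * gap_h)

for r in (10, 30, 60):
    print(f"{r:>3} m: ~{points_per_m2(r):6.1f} points per square meter")
```

Between 10 m and 60 m the density drops by roughly 36x, which is why a detector that works well up close runs out of points at range.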

Does SLAM matter for VR/AR?

SLAM is the very heart of AR. So why does nobody bring up SLAM when discussing VR/AR? Mainly, I think, because almost nobody in China is doing AR; what people usually talk about is VR, which has a lower barrier to entry and currently makes little use of SLAM.
Not many groups worldwide do AR either; most are concentrated in the US and the UK. SLAM itself only took shape over the past ten years, with the rapid progress coming in the past five (largely thanks to GPU computing, which finally made real-time systems possible). Even in the US and UK, perhaps only a dozen labs do SLAM at scale, and its application to AR has only just begun.
Meanwhile, SLAM applications entered ordinary households long ago. Most robotic vacuums on the market rely on SLAM: the robot first builds a floor plan of the room, and once it has that map it only needs to drive everywhere and cover the whole floor to finish the cleaning job. SLAM is also a key component of drones and self-driving cars.
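A toy illustration of that map-then-cover logic, with an invented room layout and a simple row-by-row sweep standing in for the robot’s coverage planner:

```python
import numpy as np

FREE, WALL = 0, 1
room = np.zeros((4, 6), dtype=int)   # a 4 x 6 m room at 1 m grid resolution
room[1, 2] = WALL                    # a piece of furniture

visited = np.zeros(room.shape, dtype=bool)

# Sweep the room row by row (boustrophedon coverage), skipping obstacles:
for i in range(room.shape[0]):
    cols = range(room.shape[1]) if i % 2 == 0 else reversed(range(room.shape[1]))
    for j in cols:
        if room[i, j] == FREE:
            visited[i, j] = True     # the robot has cleaned this cell

coverage = visited[room == FREE].mean()
print(f"cleaned {coverage:.0%} of the reachable floor")
```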

What to make of Baidu’s self-driving car completing its road test?

From a technical standpoint, autonomous driving spans many disciplines: roughly speaking, perception, control, planning, hardware, and so on. The part I know best is perception. From the limited information available, Baidu’s stack, from sensor calibration and mapping to localization, object detection, and tracking, resembles Google’s platform: the real thing. This approach is relatively mature and can deliver a degree of autonomous driving, but the barrier to entry is high, the required investment is huge, and the implementation is extremely tedious and complex. I greatly admire Baidu’s nerve in taking on this research and getting solid results so quickly. Still, countless problems remain before truly driverless operation.
This is why traditional car makers, Mercedes-Benz and Tesla included, do not use this sensor (consumers simply couldn’t afford it). Instead, their current driver-assistance systems (Level 3) are built on cameras and radar, which are cheap. That technology still has many problems and generally modest accuracy. Take just the step of recovering a depth point cloud of the scene: lidar delivers it directly out to a radius of about 100 m, while cameras must run stereo vision, good for a radius of roughly 30 m. With lidar you can build a 3D map of a city and localize the car against it to roughly 10 cm; camera-based systems have nothing comparable and fall back on plain GPS, accurate to a few meters. Academically, camera-based autonomous driving still faces many open problems that will take another eight or ten years (or even longer) to crack to the standard full autonomy demands. That is also why Tesla’s Autopilot counts as driver assistance (Level 3): it cannot yet make the leap to Level 4.
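The stereo range limit quoted above follows from the triangulation formula Z = f·B/d: disparity shrinks as depth grows, so a fixed sub-pixel matching error costs more meters the farther out you look. A small sketch with invented camera parameters:

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Triangulated depth: Z = focal length x baseline / disparity."""
    return f_px * baseline_m / disparity_px

f_px, B = 1000.0, 0.5   # assumed: 1000 px focal length, 0.5 m stereo baseline
for Z in (10.0, 30.0, 60.0):
    d = f_px * B / Z                        # disparity of a point at depth Z
    z_err = stereo_depth(f_px, B, d - 0.5)  # depth if disparity is off by 0.5 px
    print(f"{Z:>4.0f} m: disparity {d:5.1f} px, +0.5 px error -> {z_err:6.1f} m")
```

Half a pixel of disparity error moves a 10 m point by about 10 cm but a 60 m point by nearly 4 m, which is why camera-only depth is trusted out to a few tens of meters.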
Generally speaking, the workflows for the two approaches are:
lidar-based:
  1. Preparation: collect 3D data, build the 3D map, label lanes, traffic signs, etc.
  2. Drive: use GPS for a rough location, localize precisely by matching against the pre-built map (see the sketch after this list), recognize traffic signs and objects in the 3D point cloud from the lidar, and use radar to keep distance from other vehicles.
  3. Commercialize to Level 4 in about 5 years.
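A toy version of step 2’s localize-by-map-matching (production systems run ICP or NDT over dense point clouds; the landmarks, the true offset, and the grid search here are all invented for illustration):

```python
import numpy as np

map_landmarks = np.array([[10.0, 5.0], [12.0, 7.5], [15.0, 4.0]])
# The live scan sees the same landmarks from a car actually offset by (1.2, -0.7):
scan = map_landmarks - np.array([1.2, -0.7])

def alignment_error(offset):
    """Mean distance from each shifted scan point to its nearest map landmark."""
    shifted = scan + offset
    dists = np.linalg.norm(shifted[:, None, :] - map_landmarks[None, :, :], axis=2)
    return dists.min(axis=1).mean()

gps_guess = np.array([1.0, -1.0])   # rough GPS fix, meters off the true pose
candidates = [gps_guess + np.array([dx, dy])
              for dx in np.arange(-1.0, 1.01, 0.1)
              for dy in np.arange(-1.0, 1.01, 0.1)]
best = min(candidates, key=alignment_error)
print("refined offset:", np.round(best, 2))   # ~ [1.2, -0.7]
```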
camera+radar-based:
  1. Preparation: collect image data and human driving behavior, then train a model with CNN-based deep learning / behavioral cloning (see the sketch after this list).
  2. Drive: use GPS for a rough location, use the trained model to detect lane lines, and use radar to keep distance from other vehicles.
  3. Commercialize to Level 4 in 10-15 years.
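A minimal sketch of the behavioral-cloning step in Keras, loosely following the well-known NVIDIA end-to-end architecture; the layer sizes are illustrative, and camera_frames / steering_angles are hypothetical placeholders for logged human-driving data:

```python
from keras.models import Sequential
from keras.layers import Lambda, Conv2D, Flatten, Dense

model = Sequential([
    Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),  # normalize pixels
    Conv2D(24, (5, 5), strides=(2, 2), activation="relu"),
    Conv2D(36, (5, 5), strides=(2, 2), activation="relu"),
    Conv2D(48, (5, 5), strides=(2, 2), activation="relu"),
    Conv2D(64, (3, 3), activation="relu"),
    Flatten(),
    Dense(100, activation="relu"),
    Dense(50, activation="relu"),
    Dense(1),                # regress a steering angle, not a class label
])
model.compile(optimizer="adam", loss="mse")
# model.fit(camera_frames, steering_angles, ...)  # hypothetical logged data
```

The key design choice is that this is regression on a continuous steering angle cloned from human demonstrations, rather than classification, which is what makes "behavior cloning" different from the usual image-recognition pipeline.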