2 terms, 3 months per term. The first cohort begins on 2017-05-08.
time dedication: 15 hours/week
hiring partners: Bosch, KUKA, Lockheed Martin, iRobot, Uber ATG, and X (Alphabet's moonshot factory)
Term 1:
- search and sample return
- turtlesim in ROS; kinematics for pick and place. https://www.amazonrobotics.com/#/roboticschallenge
- 3D perception: Point Cloud Library, random sample consensus (see the sketch after this list), PR2 simulator, MoveIt packages
- controls
- deep learning: Follow Me project
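Since the 3D perception unit mentions random sample consensus, here is a minimal RANSAC plane-fit sketch in plain NumPy. The course itself uses the Point Cloud Library for this; the fit_plane_ransac helper below is a hypothetical illustration of the idea, not the PCL API.

import numpy as np

def fit_plane_ransac(points, n_iters=100, threshold=0.01):
    # points: (N, 3) array. Returns plane normal, a point on the
    # plane, and a boolean inlier mask.
    best_inliers = np.zeros(len(points), dtype=bool)
    best_normal, best_point = None, None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        # Sample 3 points and compute the plane they span
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:  # degenerate (collinear) sample, try again
            continue
        normal = normal / norm
        # Inliers lie within `threshold` of the candidate plane
        dists = np.abs((points - sample[0]) @ normal)
        inliers = dists < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_normal, best_point = inliers, normal, sample[0]
    return best_normal, best_point, best_inliers

The idea (fit a model to random minimal samples, keep the one with the most inliers) is what separates a planar surface like a tabletop from the objects sitting on it.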
Term 2:
- motion planning
- localization
- hardware integration
Robot:
- perception
- decision making
- action/actuation
The first project is a quick demo of these three steps, with perception as the major focus. You will use computer vision to segment each camera image into ground, obstacles, and rocks using a color threshold.
The main challenge is taking the time to digest the code that connects the steps together.
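A minimal sketch of that color-threshold step, assuming RGB camera images as NumPy arrays (the color_thresh name and the 160 cutoff follow the project walkthrough; the rock cutoffs are assumptions to tune on sample images):

import numpy as np

def color_thresh(img, rgb_thresh=(160, 160, 160)):
    # Pixels brighter than the threshold in all three channels are
    # treated as navigable ground; everything else is obstacle.
    above = ((img[:, :, 0] > rgb_thresh[0])
             & (img[:, :, 1] > rgb_thresh[1])
             & (img[:, :, 2] > rgb_thresh[2]))
    colorsel = np.zeros(img.shape[:2], dtype=np.float32)
    colorsel[above] = 1
    return colorsel

def rock_thresh(img):
    # Rocks are yellow: high red and green, low blue
    # (cutoff values are assumptions, tune them yourself)
    return ((img[:, :, 0] > 110) & (img[:, :, 1] > 110)
            & (img[:, :, 2] < 50)).astype(np.float32)

Obstacles then fall out as the complement of ground plus rocks.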
NASA Centennial Challenge
The Centennial Challenges are NASA space-competition inducement-prize contests for non-government-funded technological achievements by American teams.
The prize contests are named “Centennial” in honor of the 100 years since the Wright brothers’ first flight in 1903.
A key advantage of prizes over traditional grants is that money is only paid when the goal is achieved.
One of the challenges is the Sample Return Robot Challenge, run in partnership with WPI. It hosted five annual events from 2012 to 2016. Team Mountaineers from West Virginia University, led by Yu Gu, completed both the Level 1 and Level 2 challenges.
Yu Gu is with WVU's Mechanical and Aerospace Engineering department. The project covered many different things: mechanical, electrical, programming, computer vision, and navigation.
Project 1: autonomous navigation and mapping
Unity simulator, more photorealistic image quality
- move and steer: arrow keys / WASD
- change perspective: Tab
- zoom: mouse scroll
- pick up: right mouse / Enter
- grid: G key; each grid square on the ground is 1 meter square
- record: R key
Gazebo simulator, powerful physics engine, seamless integration with ROS.
import matplotlib.pyplot as plt

# Show the original image and the thresholded result side by side.
# `image` and `colorsel` are assumed to be defined already (e.g. by
# the color_thresh step above).
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(21, 7), sharey=True)
f.tight_layout()
ax1.imshow(image)
ax1.set_title('Original Image', fontsize=40)
ax2.imshow(colorsel, cmap='gray')
ax2.set_title('Your Result', fontsize=40)
plt.subplots_adjust(left=0., right=1., top=0.9, bottom=0.)
# plt.show()  # Uncomment if running on your local machine
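The notebook's next step after thresholding is a perspective transform to a top-down view for mapping. A sketch with OpenCV: src holds the corners of one 1-meter grid square picked by hand from a calibration image, and dst is where that square should land in the top-down view (the numbers below are placeholders, not calibrated values).

import cv2
import numpy as np

def perspect_transform(img, src, dst):
    # Warp the camera image to a top-down view
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))

# Placeholder calibration points; replace with values read off the
# 1 m grid in your own calibration image.
src = np.float32([[14, 140], [301, 140], [200, 96], [118, 96]])
dst = np.float32([[155, 154], [165, 154], [165, 144], [155, 144]])
warped = perspect_transform(image, src, dst)  # `image` from above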
RoboND Python starter kit: https://github.com/udacity/RoboND-Python-StarterKit.git
If conda gives "The remote server has indicated you are using invalid credentials for this channel", log out first:
anaconda logout
Then create the environment:
conda env create -f environment.yml
This creates a RoboND environment of about 1.38 GB.
github material: https://github.com/udacity/RoboND-Rover-Project
walk-through: https://www.youtube.com/watch?v=oJA6QHDPdQw
python drive_rover.py
This script is the back-end artificial intelligence that runs the simulator in “autonomous mode”.
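A stripped-down sketch of what that back end does each telemetry frame: steer toward the average direction of the navigable-terrain pixels found by perception. The Rover fields follow the project's naming, but this simplified policy is an assumption for illustration, not the project's solution code.

import numpy as np

def decision_step(Rover):
    # nav_angles holds the angles (radians) of navigable-terrain
    # pixels in rover-centric coordinates, set by the perception step.
    if Rover.nav_angles is not None and len(Rover.nav_angles) > 50:
        Rover.throttle = 0.2
        Rover.brake = 0
        # Steer toward the mean navigable direction, clipped to the
        # rover's +/-15 degree steering range
        Rover.steer = np.clip(np.mean(Rover.nav_angles * 180 / np.pi), -15, 15)
    else:
        # Too little open terrain ahead: stop and turn in place
        Rover.throttle = 0
        Rover.brake = 10
        Rover.steer = -15
    return Rover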
Details can be found here: https://github.com/jychstar/NanoDegreeProject/tree/master/RoboND/p1_mapping