I came to realize that Bitfusion built its business on the AWS Marketplace.
While our computers today are often extremely fast, most applications aren’t optimized for the hardware platform they run on. Bitfusion, which debuted in May 2015 at TechCrunch Disrupt NY, wanted to automate all of this for developers. The company was founded by three former Intel employees in Austin, with $1.45 million in seed funding.
They have three business models: software, an appliance (with hardware accelerators), and the accelerated Rackspace cloud. Why aren’t they going after large-scale enterprises? Because large enterprises can build their own hardware and have the skills to do this for their specialized applications.
GPUs can speed up training times, but managing both the infrastructure and software for GPUs creates huge productivity challenges. Bitfusion provides a GPU virtualization and application management platform that accelerates applications and training time with no code changes, and makes it easy to efficiently manage production GPU clusters with high availability, team multi-tenancy, and parallel job execution.
- Create a key pair.
- Use the Bitfusion Ubuntu 14 TensorFlow AMI.
- Region: Oregon. Instance: t2.small. Security: open port 8888. Select the key pair.
- Wait for the instance to be ready, then use the “Connect” button to get the required commands.
- Open a new browser window with the public DNS plus port :8888 for Jupyter. The password is the instance ID. The public DNS name simply resolves to the public IP, so http://<Public IP>:8888 also works.
- Then you see the six familiar tensorflow/udacity notebooks.
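The steps above boil down to pointing a browser at port 8888 on the instance, by DNS name or by IP. A minimal sketch of building that URL (the host names below are placeholders, not real instance values; `socket.gethostbyname` is the standard way to confirm that a DNS name resolves to an IP):

```python
# Build the Jupyter URL from either the public DNS name or the public IP.
# The public DNS name simply resolves to the public IP, so both forms work.


def jupyter_url(host, port=8888):
    """Return the Jupyter notebook URL for an EC2 host (DNS name or IP)."""
    return "http://{}:{}".format(host, port)


# Placeholder values for illustration only:
print(jupyter_url("ec2-54-0-0-1.us-west-2.compute.amazonaws.com"))
print(jupyter_url("54.0.0.1"))
```

To check what a DNS name resolves to, `import socket; socket.gethostbyname(host)` returns the underlying IP address.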
A script version of “getting started” is here.
After connecting via SSH:
scp -i /path/to/your/pem/file path/to/file ubuntu@public_ip_address:~/.  # transfer files from local
python ~/tensorflow/tensorflow/models/image/mnist/convolutional.py
python ~/tensorflow/tensorflow/models/image/cifar10/cifar10_multi_gpu_train.py --num_gpus=4
- Using a t2.nano, t2.micro, or t2.small? No Bitfusion software fee. Only AWS charges: $0.023/hr.
- Using a p2.8xlarge, p2.16xlarge, m4.16xlarge, x1.16xlarge, x1.32xlarge, i2.4xlarge, i2.8xlarge, or d2.8xlarge? $0.297/hour is your new, lower Bitfusion software fee.
- g2.2xlarge: $0.65 × 1.1 = $0.715/hr (the AWS rate plus a 10% Bitfusion fee).
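The pricing above can be sketched as AWS rate plus Bitfusion software fee. A small illustration using the rates quoted in the notes (treating the g2.2xlarge fee as 10% of the AWS rate is my reading of the ×1.1 arithmetic):

```python
# Hourly cost = AWS instance rate + Bitfusion software fee.
# Rates are the ones quoted above; the 10% fee interpretation is an assumption.

AWS_RATE = {           # $/hr, on-demand
    "t2.small": 0.023,
    "g2.2xlarge": 0.65,
}

BITFUSION_FEE = {      # $/hr software fee on top of the AWS rate
    "t2.small": 0.0,            # no fee on t2.nano/micro/small
    "g2.2xlarge": 0.65 * 0.10,  # 10% of the AWS rate
}


def hourly_cost(instance_type):
    return AWS_RATE[instance_type] + BITFUSION_FEE[instance_type]


print(round(hourly_cost("t2.small"), 3))    # 0.023
print(round(hourly_cost("g2.2xlarge"), 3))  # 0.715
```

Note that the p2/x1/i2/d2 tier quoted above uses a flat $0.297/hr software fee rather than a percentage.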
AWS recently announced their next generation GPU P2 instances. This new generation provides up to 16 NVIDIA K80 GPUs, 64 vCPUs and 732 GiB of host memory. In previous releases of our Bitfusion Tensorflow AMI, we included updated NVIDIA drivers, the CUDA toolkit, and CUDNN support, allowing you to tap into these new powerful instances.
By the way, their recent blog posts do a good job of getting you up to speed on TensorFlow.