On 2017.2.16 at Dev Summit 2017, TensorFlow 1.0 was announced! It has been 15 months since it was open sourced.
TensorFlow is also working on high-level APIs for a better user experience.
tf.layers is already available, but without much detail on how to train with it.
tf.keras will be available around TensorFlow 1.2.
I love what Lily Peng said about her career change:
In a previous life I was a doctor, and I’ve been repurposed as a product manager at Google.
How to use TensorBoard
At the Dev Summit, Dandelion demonstrated the magic of TensorBoard. The highlighted code in the slides is very impressive. video, source code and slides
Using TensorBoard is actually 2 steps:
- Use a tf.summary.FileWriter(folder_name) object to add everything you want to show; the data will be stored in a folder on local disk.
- In a terminal, run tensorboard --logdir=folder_name, which will serve the data at something like “0.0.0.0:6006”. Open that address in a browser.
So the major work is in the 1st step. Example code is here.
- with tf.name_scope() to name a group of tensors or operations.
- name= to name a single tensor.
- writer = tf.summary.FileWriter(folder_name) to create the writer.
- writer.add_graph(sess.graph) to add the graph. Note: if you revise the graph, remember to reset it to avoid a ghost graph.
- writer.add_summary(scalar/histogram/image tensor, step) to add a point to the plotting data. Each tensor to be tracked is wrapped by tf.summary.scalar/histogram/image and evaluated in the session.
- tf.summary.merge_all() is supposed to simplify the previous code, but it is currently buggy.
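Putting the steps above together, here is a minimal runnable sketch. The graph, names, and the "logs" folder are made up for illustration, and it imports tf.compat.v1 so the 1.x-style code also runs on newer installs (at the time one simply wrote import tensorflow as tf). It also uses merge_all(), which worked reliably in later releases:

```python
import tensorflow.compat.v1 as tf  # at the time: `import tensorflow as tf`

tf.disable_eager_execution()
tf.reset_default_graph()  # avoid a "ghost graph" from earlier runs

with tf.name_scope("model"):  # groups these ops in the graph view
    x = tf.placeholder(tf.float32, shape=[None], name="x")
    w = tf.Variable(1.0, name="weight")
    loss = tf.reduce_mean(tf.square(x * w), name="loss")

# wrap each tensor you want to plot
tf.summary.scalar("loss", loss)
tf.summary.histogram("weight", w)
merged = tf.summary.merge_all()  # one op that evaluates every summary

writer = tf.summary.FileWriter("logs")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer.add_graph(sess.graph)
    for step in range(3):
        summary = sess.run(merged, feed_dict={x: [1.0, 2.0, 3.0]})
        writer.add_summary(summary, step)  # one data point per step
writer.close()
```

Then run tensorboard --logdir=logs and open the printed address in a browser.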
- Model saving/restoring is 2 lines of code:

```python
saver = tf.train.Saver()
saver.restore(sess, "mymodel.ckpt")  # after the session begins
saver.save(sess, "mymodel.ckpt")     # before the session ends
```
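As a sketch of where those two calls sit relative to the session. The variable name, value, and checkpoint path are made up, and tf.compat.v1 is imported so the 1.x-style code also runs on newer installs:

```python
import os
import tensorflow.compat.v1 as tf  # at the time: `import tensorflow as tf`

tf.disable_eager_execution()

# --- save: before the session ends ---
tf.reset_default_graph()
w = tf.Variable(3.0, name="weight")
saver = tf.train.Saver()
os.makedirs("ckpt", exist_ok=True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, os.path.join("ckpt", "mymodel.ckpt"))

# --- restore: after the session begins ---
tf.reset_default_graph()
w = tf.Variable(0.0, name="weight")  # same name, so the checkpoint matches
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, os.path.join("ckpt", "mymodel.ckpt"))
    restored = sess.run(w)  # the saved 3.0, not the initializer's 0.0
```

Note that restoring replaces initialization: there is no need to run tf.global_variables_initializer() before saver.restore().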
Similarly, you can save/restore a dataset in 2 lines of code:
```python
from sklearn.externals import joblib
joblib.dump(data, 'dataset.pkl')
data = joblib.load('dataset.pkl')
```