TensorFlow came out today, and like the rest of the ML world, I buried myself in it. I have never been more excited about a new open-source release. The home page has actionable tutorials worth checking out, but I wanted to know whether this was yet another computational-graph framework; we already have Theano and CGT (CGT is fast; Theano is the most popular).
There’s a detailed white paper outlining TensorFlow, which I highly recommend reading. Here are the items that caught my eye:
- Apache 2.0 license. Well done Google!
- Google-quality code, and reasonably good docs for day 0
- Out-of-the-box multi-device and distributed execution
- Fancy placement algorithms for multi-node scheduling
- Automatic differentiation (like Theano and Torch), but possibly faster (I still need to profile it)
- Fault tolerance & checkpointing (ability to resume interrupted execution)
- Finer-grained control over concurrency
- Support for multiple devices (from mobile to arrays of GPUs) and multi-language interop
- Fancypants optimizations of the computational graph
Together, these efforts yielded a 6-fold improvement in training speed over DistBelief
- TensorBoard tool to visualize computation graphs and monitor network parameters during training. This is indispensable. During most of my DL work for clients, I have spent a lot of time instrumenting existing code to get the monitoring stats. This by itself feels like a major boon.
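To make the "computational graph + automatic differentiation" items above concrete, here is a minimal sketch of reverse-mode autodiff in plain Python. This is not TensorFlow's API (the `Var` class and its methods are made up for illustration); it only shows the core idea that TensorFlow, Theano, and Torch implement at scale: record a graph of operations during the forward pass, then run the chain rule backwards over it.

```python
# Minimal reverse-mode automatic differentiation sketch.
# NOT TensorFlow's API -- an illustration of the graph-plus-autodiff
# idea that frameworks like TensorFlow, Theano, and Torch implement.

class Var:
    def __init__(self, value, parents=()):
        self.value = value      # forward-pass value
        self.parents = parents  # list of (parent Var, local gradient) pairs
        self.grad = 0.0         # accumulated d(output)/d(self)

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically sort the graph so each node's gradient is fully
        # accumulated before it is propagated to its parents.
        order, seen = [], set()

        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)

        visit(self)
        self.grad = 1.0  # seed: d(output)/d(output) = 1
        for node in reversed(order):
            for parent, local_grad in node.parents:
                parent.grad += local_grad * node.grad

# y = x1 * x2 + x2  =>  dy/dx1 = x2,  dy/dx2 = x1 + 1
x1, x2 = Var(3.0), Var(4.0)
y = x1 * x2 + x2
y.backward()
print(y.value, x1.grad, x2.grad)  # 16.0 4.0 4.0
```

In graph frameworks the same recorded structure also enables the placement, scheduling, and graph optimizations listed above, since the whole computation is visible as data before it runs.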
This is a lot from a single release. If you were using Theano, upgrading to TensorFlow should feel like moving from a Honda Civic to a Ferrari. What excites me most is that all this work benefits not only DL but also most other ML algorithms that can be implemented atop TensorFlow. I’m getting back to playing with it.
PS: Seriously, read the white paper. It’s a great read in its own right, and it also offers solid advice on porting ML platforms.