Author Archive | Delip

Synthetic Gradients .. Cool or Meh?

Synthetic what now? DeepMind recently published a paper on Synthetic Gradients. This post is about that — what they are, and whether it makes sense for your average Deep Joe to use them. A Computational Graph is the best data structure to represent deep networks. (D)NN training and inference algorithms are examples of data flow algorithms, and […]

Continue Reading 12
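To make the idea concrete, here is a minimal sketch of a synthetic gradient module on a toy two-layer linear net with made-up regression data. Everything here is an illustrative assumption: the linear predictor `M` (mapping activations `h` to an estimate of dL/dh) stands in for the small neural networks the DeepMind paper uses, and the data, shapes, and learning rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (assumed for illustration)
X = rng.normal(size=(256, 4))
y = X @ rng.normal(size=(4, 1))

W1 = rng.normal(scale=0.1, size=(4, 8))   # layer 1 weights
W2 = rng.normal(scale=0.1, size=(8, 1))   # layer 2 weights
M = np.zeros((8, 8))                      # synthetic gradient module (linear, an assumption)

loss0 = float(np.mean((X @ W1 @ W2 - y) ** 2))  # loss before training

lr, sg_lr = 0.1, 0.5
for _ in range(2000):
    h = X @ W1
    sg = h @ M                    # predicted dL/dh: layer 1 updates immediately,
    W1 = W1 - lr * (X.T @ sg)     # without waiting for backprop from layer 2

    y_hat = h @ W2
    dL_dy = 2 * (y_hat - y) / len(X)
    true_dh = dL_dy @ W2.T        # the true gradient, which arrives "later"
    W2 = W2 - lr * (h.T @ dL_dy)

    # Regress the synthetic gradient toward the true one
    M = M - sg_lr * (h.T @ (sg - true_dh)) / len(X)

loss = float(np.mean((X @ W1 @ W2 - y) ** 2))
```

The point of the decoupling is in the first two lines of the loop: layer 1 consumes a *predicted* gradient and updates before the rest of the forward/backward pass runs, which is what lets the real system train layers asynchronously.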

Turi Acquisition

(This post is about Turi, but I will occasionally refer to it by its older names Dato and GraphLab.) Turi got acquired by Apple for $200M (or so it is rumored). Reactions on the internet range from hearty congratulations to folks saying this is another example of the “AI bubble”. And of course, there are ill-informed, get-rich-quick […]

Continue Reading 0

Gradient Noise Injection Is Not So Strange After All

Yesterday, I wrote about a gradient noise injection result at ICLR 2016, and noted that the authors of the paper, despite detailed experimentation, were very wishy-washy in their explanation of why it works. Fortunately, my Twitter friends, particularly Tim Vieira and Shubhendu Trivedi, grounded this much better than the authors themselves! Shubhendu pointed out Rong Ge (of MSR) […]

Continue Reading 0
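For readers who haven't seen the trick: gradient noise injection just adds annealed Gaussian noise to each gradient before the update. A minimal sketch on an assumed toy quadratic, using the decaying-variance schedule eta / (1 + t)^gamma from the ICLR paper (the toy objective, learning rate, and constants are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_sgd_step(w, grad, lr, t, eta=0.01, gamma=0.55):
    """One SGD step with annealed Gaussian gradient noise.

    Noise variance follows eta / (1 + t)**gamma, so the injected
    noise shrinks as training progresses.
    """
    sigma = np.sqrt(eta / (1 + t) ** gamma)
    return w - lr * (grad + sigma * rng.normal(size=w.shape))

# Toy objective (assumed): minimize ||w - 3||^2
w = np.zeros(2)
for t in range(2000):
    grad = 2 * (w - 3.0)
    w = noisy_sgd_step(w, grad, lr=0.05, t=t)
```

On a convex toy like this the noise is pure overhead; the interesting claims are about escaping saddle points in non-convex objectives, which is where the pointers from Tim and Shubhendu come in.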

Should you get the new NVIDIA DGX-1 for your startup/lab?

NVIDIA announced DGX-1, their new “GPU supercomputer”. The spec is impressive. Performance, even more so (training AlexNet in 2 hours with 1 node). It costs $129K. Running it would take around 3KW. That’s like keeping an oven going. The best config you can currently get from AWS, cheapest per hour, is the g2.8xlarge. So for $129K you […]

Continue Reading 0
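The buy-vs-rent question comes down to simple arithmetic. A back-of-the-envelope sketch, where the g2.8xlarge hourly rate and the electricity price are assumed illustrative figures, not quotes:

```python
# Back-of-the-envelope: DGX-1 sticker price vs. renting AWS GPU instances.
DGX1_PRICE = 129_000      # USD, from NVIDIA's announcement
AWS_HOURLY = 2.60         # USD/hr for a g2.8xlarge (assumed illustrative rate)

# Hours of rented compute the sticker price buys (ignoring power, depreciation,
# and the fact that the two machines are not performance-equivalent)
breakeven_hours = DGX1_PRICE / AWS_HOURLY
breakeven_years = breakeven_hours / (24 * 365)

# Running cost of the box itself: ~3 kW draw, assumed $0.12/kWh
annual_power_cost = 3 * 24 * 365 * 0.12
```

Under these assumptions the sticker price buys several years of round-the-clock rented GPU time, which is why the answer depends heavily on utilization: a lab that keeps the box saturated amortizes it far faster than a startup with bursty workloads.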

© 2016 Delip Rao. All Rights Reserved.