Archive | Deep Learning

Synthetic Gradients … Cool or Meh?

Synthetic what now? DeepMind recently published a paper on Synthetic Gradients. This post covers what they are and whether it makes sense for your average Deep Joe to use them. A Computational Graph is the best data structure for representing deep networks. (D)NN training and inference algorithms are examples of data-flow algorithms, and […]
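
To make the data-flow point concrete, here is a minimal sketch of a computational graph with reverse-mode backpropagation in plain Python. The Node class, the two ops, and the toy expression are my own illustration, not DeepMind's code or the post's implementation.

```python
# Minimal computational-graph sketch: each Node records its inputs and a
# rule for routing gradients back to them. Illustrative only.

class Node:
    def __init__(self, value, parents=(), backward=lambda g: ()):
        self.value = value        # result of the forward computation
        self.parents = parents    # upstream nodes in the data-flow graph
        self.backward = backward  # output gradient -> gradients for parents
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b), lambda g: (g, g))

def mul(a, b):
    return Node(a.value * b.value, (a, b),
                lambda g: (g * b.value, g * a.value))

def backprop(out):
    # Walk the graph in reverse topological order so each node's gradient
    # is fully accumulated before it is pushed to its parents.
    order, seen = [], set()
    def visit(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p in n.parents:
                visit(p)
            order.append(n)
    visit(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, g in zip(node.parents, node.backward(node.grad)):
            parent.grad += g

x, y = Node(2.0), Node(3.0)
z = add(mul(x, y), x)      # z = x*y + x
backprop(z)
print(x.grad, y.grad)      # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```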


Gradient Noise Injection Is Not So Strange After All

Yesterday, I wrote about a gradient noise injection result at ICLR 2016, and noted that the authors of the paper, despite detailed experimentation, were very wishy-washy in explaining why it works. Fortunately, my Twitter friends, particularly Tim Vieira and Shubhendu Trivedi, grounded this much better than the authors themselves! Shubhendu pointed out Rong Ge (of MSR) […]
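
For reference, the recipe in that ICLR 2016 paper (Neelakantan et al., "Adding Gradient Noise Improves Learning for Very Deep Networks") adds Gaussian noise to each gradient with an annealed variance sigma_t^2 = eta / (1 + t)^gamma, picking eta from {0.01, 0.3, 1.0} and fixing gamma = 0.55. Here is a minimal NumPy sketch of that schedule; the toy quadratic objective, learning rate, and the particular eta are my choices for illustration.

```python
# Sketch of annealed gradient noise injection on plain SGD. The decay
# schedule sigma_t^2 = eta / (1 + t)**gamma follows the paper; the
# objective and learning rate below are toy choices.
import numpy as np

rng = np.random.default_rng(0)

def grad(w):
    return w  # gradient of the toy objective f(w) = ||w||^2 / 2

w = rng.normal(size=5)
lr, eta, gamma = 0.1, 0.3, 0.55

for t in range(1000):
    sigma = np.sqrt(eta / (1.0 + t) ** gamma)        # annealed noise scale
    noisy_grad = grad(w) + rng.normal(scale=sigma, size=w.shape)
    w -= lr * noisy_grad

print("||w|| after 1000 noisy SGD steps:", np.linalg.norm(w))
```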


Should you get the new NVIDIA DGX-1 for your startup/lab?

NVIDIA announced DGX-1, their new “GPU supercomputer”. The spec is impressive. The performance, even more so (training AlexNet in 2 hours on a single node). It costs $129K. Running it draws around 3 kW; that’s like keeping an oven going. The best GPU config you can currently get from AWS, and the cheapest per hour, is the g2.8xlarge. So for $129K you […]
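
A quick back-of-the-envelope version of that comparison, in Python. The $2.60/hour figure is my assumption for the 2016 on-demand price of a g2.8xlarge in us-east-1; it is not quoted in the post, and spot or reserved pricing would change the numbers.

```python
# Rough cost comparison sketch; the AWS rate below is an assumed 2016
# on-demand price, not a figure from the post.
dgx1_cost = 129_000        # USD, from the post
g2_rate = 2.60             # USD/hour for g2.8xlarge, assumed

hours = dgx1_cost / g2_rate
years = hours / (24 * 365)
print(f"${dgx1_cost:,} buys ~{hours:,.0f} g2.8xlarge-hours (~{years:.1f} years)")
# -> roughly 49,615 hours, about 5.7 years of continuous use
```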


Universal Function Approximation using TensorFlow

A feedforward neural network with even a single hidden layer can approximate any continuous function to arbitrary accuracy. This universal function approximation property of multilayer perceptrons was first noted by Cybenko (1989) and Hornik (1991). In this post, I will use TensorFlow to implement a multilayer neural network (also known as a multilayer perceptron) to learn arbitrary Python lambda expressions.
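
As a preview, here is a minimal sketch of the idea in TensorFlow 1.x-style code: fit a one-hidden-layer network to a Python lambda by gradient descent. The architecture, hyperparameters, and target function below are my illustrative choices, not the ones from the post.

```python
# Sketch: a single-hidden-layer MLP fitting an arbitrary Python lambda.
# All hyperparameters here are illustrative, not the post's values.
import numpy as np
import tensorflow as tf

target = lambda x: np.sin(x) + 0.3 * x             # function to approximate

xs = np.linspace(-3, 3, 200).reshape(-1, 1).astype(np.float32)
ys = target(xs)

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])

hidden = 50                                        # width of the hidden layer
w1 = tf.Variable(tf.random_normal([1, hidden], stddev=0.5))
b1 = tf.Variable(tf.zeros([hidden]))
w2 = tf.Variable(tf.random_normal([hidden, 1], stddev=0.5))
b2 = tf.Variable(tf.zeros([1]))

h = tf.tanh(tf.matmul(x, w1) + b1)                 # the single nonlinear layer
y_hat = tf.matmul(h, w2) + b2                      # linear readout

loss = tf.reduce_mean(tf.square(y_hat - y))
train = tf.train.AdamOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2000):
        _, mse = sess.run([train, loss], feed_dict={x: xs, y: ys})
    print("final MSE:", mse)
```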

