The Two Tribes of Language Researchers

TL;DR (not-a-rant rant): When I talk to friends who work on human language (#nlproc), I notice two tribes of people: folks who do Natural Language Processing and folks who do Computational Linguistics. This distinction is not mine, and it is blurry, but I think it explains some of the differences in values different researchers […]


A Billion Words and The Limits of Language Modeling

In this post, I will talk about Language Models, when (and when not) to use LSTMs for language modeling, and some state-of-the-art results. While I mostly discuss the “Exploring Limits” paper, I’m adding a few elementary (for some) things here for completeness’ sake. The Exploring Limits paper is not new, but I think it’s a good illustration […]


Is BackPropagation Necessary?

In the previous post, we saw how the backprop algorithm itself is a bottleneck in training, and how the Synthetic Gradient approach proposed by DeepMind reduces or avoids network locking during training. While very clever, there is something unsettling about the solution: it seems very contrived, and definitely resource-intensive. For example, a simple feed-forward network under the […]


Synthetic Gradients … Cool or Meh?

Synthetic what now? DeepMind recently published a paper on Synthetic Gradients. This post is about that — what they are, and whether it makes sense for your average Deep Joe to use them. A Computational Graph is the best data structure to represent deep networks. (D)NN training and inference algorithms are examples of data flow algorithms, and […]


Turi Acquisition

(This post is about Turi, but I will occasionally refer to it by its older names, Dato and GraphLab.) Turi got acquired by Apple for $200M (or so it is rumored). Reactions on the internet range from hearty congratulations to folks saying this is another example of the “AI bubble”. And of course, there are ill-informed, get-rich-quick […]


Gradient Noise Injection Is Not So Strange After All

Yesterday, I wrote about a gradient noise injection result at ICLR 2016, and noted that the authors of the paper, despite detailed experimentation, were very wishy-washy in their explanation of why it works. Fortunately, my Twitter friends, particularly Tim Vieira and Shubhendu Trivedi, grounded this much better than the authors themselves! Shubhendu pointed out Rong Ge (of MSR) […]


Should you get the new NVIDIA DGX-1 for your startup/lab?

NVIDIA announced the DGX-1, their new “GPU supercomputer”. The spec is impressive; the performance, even more so (training AlexNet in 2 hours with 1 node). It costs $129K, and running it would draw around 3 kW — that’s like keeping an oven going. The cheapest (per hour) best config you can currently get from AWS is g2.8xlarge. So for $129K you […]


© 2016 Delip Rao. All Rights Reserved.