Author Archive | Delip

Differentiable Dynamic Programs and SparseMAP Inference

Two exciting NLP papers at ICML 2018! The ICML 2018 accepts are out, and I will briefly outline two papers I am excited about. I think both papers are phenomenally good and will bring structured prediction in NLP back to modern deep learning architectures. Differentiable Dynamic Programming for Structured Prediction and Attention Arthur Mensch […]

Everything is a Model

TL;DR: I review a recent systems paper from Google, explain why it is a wake-up call for the industry, and describe the recipe it provides for nonlinear product thinking. Here, I enumerate my main takeaways from the paper, “The Case for Learned Index Structures” by Tim Kraska, Alex Beutel, Ed Chi, Jeffrey Dean, and […]

The Two Tribes of Language Researchers

TL;DR: a not-a-rant rant. When I talk to friends who work on human language (#nlproc), I notice two tribes of people: folks who do Natural Language Processing and folks who do Computational Linguistics. This distinction is not mine, and it is blurry, but I think it explains some of the differences in values different researchers […]

A Billion Words and The Limits of Language Modeling

In this post, I will talk about language models, when (and when not) to use LSTMs for language modeling, and some state-of-the-art results. While I mostly discuss the “Exploring Limits” paper, I am adding a few elementary (for some) details here for completeness’ sake. The Exploring Limits paper is not new, but I think it’s a good illustration […]

Is BackPropagation Necessary?

In the previous post, we saw how the backprop algorithm itself is a bottleneck in training, and how the Synthetic Gradients approach proposed by DeepMind reduces or avoids network locking. While very clever, there is something unsettling about the solution: it seems contrived, and it is definitely resource intensive. For example, a simple feedforward network under the […]

Synthetic Gradients .. Cool or Meh?

Synthetic what now? DeepMind recently published a paper on Synthetic Gradients. This post is about that: what they are, and whether it makes sense for your average Deep Joe to use them. A computational graph is the best data structure to represent deep networks. (D)NN training and inference algorithms are examples of data flow algorithms, and […]

© 2016 Delip Rao. All Rights Reserved.