Author Archive | Delip

Everything is a Model

TL;DR: I review a recent systems paper from Google, explain why it is a wake-up call for the industry, and describe the recipe it offers for nonlinear product thinking. Here, I enumerate my main takeaways from a recent paper, “The Case for Learned Index Structures” by Tim Kraska, Alex Beutel, Ed Chi, Jeffrey Dean, and […]
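
The paper’s central move, roughly, is to treat an index structure like a B-tree as a model mapping a key to a position in sorted storage. Below is a minimal sketch of that framing, assuming a simple least-squares linear model plus a bounded local search around its prediction; the `LearnedIndex` class and all its details are my illustrative simplifications, not the paper’s implementation.

```python
import bisect

class LearnedIndex:
    """Toy learned index: fit key -> position with a linear model,
    then correct the prediction with a bounded local search."""

    def __init__(self, sorted_keys):
        self.keys = sorted_keys
        n = len(sorted_keys)
        # Closed-form least-squares fit of position ~ a * key + b.
        mean_k = sum(sorted_keys) / n
        mean_p = (n - 1) / 2
        cov = sum((k - mean_k) * (p - mean_p) for p, k in enumerate(sorted_keys))
        var = sum((k - mean_k) ** 2 for k in sorted_keys)
        self.a = cov / var if var else 0.0
        self.b = mean_p - self.a * mean_k
        # Record the worst-case prediction error to bound the search window.
        self.err = max(abs(self._predict(k) - p) for p, k in enumerate(sorted_keys))

    def _predict(self, key):
        return min(max(int(self.a * key + self.b), 0), len(self.keys) - 1)

    def lookup(self, key):
        p = self._predict(key)
        lo, hi = max(0, p - self.err), min(len(self.keys), p + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < len(self.keys) and self.keys[i] == key else None

idx = LearnedIndex(sorted(range(0, 10000, 7)))
print(idx.lookup(700))   # position of key 700
print(idx.lookup(701))   # None: key absent
```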

R7 Speech Sciences: A New Chapter

I founded Joostware 3 years ago with the goal of building something like an artist collective, but for independent AI researchers. A lot of amazing things happened at Joostware. Despite keeping a low profile, we shipped tons of code and models, and above all, we had great fun doing it (like “work from beach/mountain” […]

The Two Tribes of Language Researchers

TL;DR: a not-a-rant rant. When I talk to friends who work on human language (#nlproc), I notice two tribes: folks who do Natural Language Processing and folks who do Computational Linguistics. This distinction is not mine, and it is blurry, but I think it explains some of the differences in the values different researchers […]

A Billion Words and The Limits of Language Modeling

In this post, I will talk about Language Models, when (and when not) to use LSTMs for language modeling, and some state-of-the-art results. While I mostly discuss the “Exploring Limits” paper, I’m adding a few elementary (for some) things here for completeness’ sake. The Exploring Limits paper is not new, but I think it’s a good illustration […]

Is BackPropagation Necessary?

In the previous post, we saw how the backprop algorithm itself is a bottleneck in training, and how the Synthetic Gradient approach proposed by DeepMind reduces/avoids network locking during training. While very clever, there is something unsettling about the solution: it seems very contrived, and definitely resource intensive. For example, a simple feedforward network under the […]
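
For intuition about what decoupling buys you, here is a minimal numpy sketch of the idea: a small synthetic-gradient module predicts dL/dh for the first layer, so that layer can update immediately instead of waiting for the rest of the forward/backward pass, while the module itself is later nudged toward the true gradient. The linear module `M` and every hyperparameter here are assumptions for illustration, not DeepMind’s setup, and this toy is meant to show the decoupling, not to reproduce the paper’s results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer regression net: x -> h = tanh(x @ W1) -> y_hat = h @ W2.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
# Synthetic gradient module: predicts dL/dh from h (here a linear map M).
M = np.zeros((8, 8))
lr, slr = 0.01, 0.001

for step in range(2000):
    x = rng.normal(size=(16, 4))
    y = np.sin(x.sum(axis=1, keepdims=True))

    h = np.tanh(x @ W1)
    # Layer 1 updates *immediately* with the predicted gradient,
    # without waiting for the rest of the forward/backward pass.
    g_hat = h @ M                      # synthetic dL/dh
    dW1 = x.T @ (g_hat * (1 - h**2))   # backprop g_hat through tanh
    W1 -= lr * dW1

    # Meanwhile, the rest of the network computes the true gradient.
    y_hat = h @ W2
    dy = 2 * (y_hat - y) / len(x)      # dL/dy_hat for squared error
    g_true = dy @ W2.T                 # true dL/dh
    W2 -= lr * (h.T @ dy)

    # Train the synthetic module to match the true gradient.
    M -= slr * (h.T @ (g_hat - g_true))

    if step % 500 == 0:
        print(step, float(((y_hat - y) ** 2).mean()))
```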

Synthetic Gradients … Cool or Meh?

Synthetic what now? DeepMind recently published a paper on Synthetic Gradients. This post is about just that: what they are, and whether it makes sense for your average Deep Joe to use them. A Computational Graph is the best data structure to represent deep networks. (D)NN training and inference algorithms are examples of data-flow algorithms, and […]
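
To make the data-flow claim concrete, here is a toy computational graph: values flow forward along the edges, and gradients flow along the same edges in reverse topological order. The `Node` class and the `add`/`mul`/`backprop` helpers are my own minimal construction, not any particular framework’s API.

```python
class Node:
    """One vertex in a computational graph: holds a value, a gradient,
    and knows how to push gradients to its parents."""
    def __init__(self, value, parents=(), backward=lambda g: ()):
        self.value, self.grad = value, 0.0
        self.parents, self._backward = parents, backward

def add(a, b):
    return Node(a.value + b.value, (a, b), lambda g: (g, g))

def mul(a, b):
    return Node(a.value * b.value, (a, b), lambda g: (g * b.value, g * a.value))

def backprop(out):
    # Reverse topological order = the forward data-flow schedule run backwards.
    order, seen = [], set()
    def visit(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p in n.parents:
                visit(p)
            order.append(n)
    visit(out)
    out.grad = 1.0
    for n in reversed(order):
        for p, g in zip(n.parents, n._backward(n.grad)):
            p.grad += g

# f(x, w) = x * w + x  ->  df/dx = w + 1, df/dw = x
x, w = Node(3.0), Node(2.0)
f = add(mul(x, w), x)
backprop(f)
print(f.value, x.grad, w.grad)  # 9.0 3.0 3.0
```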

Turi Acquisition

(This post is about Turi, but I will occasionally refer to it by its older names, Dato and GraphLab.) Turi got acquired by Apple for $200M (or so it is rumored). Reactions on the internet range from hearty congratulations to folks calling this another example of the “AI bubble”. And of course, there are ill-informed, get-rich-quick […]

© 2016 Delip Rao. All Rights Reserved.