Archive | Deep Learning

Unstoppable AI Flywheels and the Making of the New Goliaths

TL;DR: AI creates engines for relentless optimization at all levels. Read the article to learn how, and its consequences for what you’re doing. Some time ago I wrote about how everything is a model when reviewing a paper from Kraska et al. (2017) where they show how traditional CS data structures like B-Tree indexes, […]


Differentiable Dynamic Programs and SparseMAP Inference

Two exciting NLP papers at ICML 2018! ICML 2018 accepts are out, and I am excited about two papers that I will briefly outline here. I think both papers are phenomenally good and will bring structured prediction in NLP back to modern deep learning architectures. Differentiable Dynamic Programming for Structured Prediction and Attention by Arthur Mensch […]


Everything is a Model

TL;DR: I review a recent systems paper from Google, why it is a wake-up call to the industry, and the recipe it provides for nonlinear product thinking. Here, I will be enumerating my main takeaways from a recent paper, “The Case for Learned Index Structures” by Tim Kraska, Alex Beutel, Ed Chi, Jeffrey Dean, and […]


A Billion Words and The Limits of Language Modeling

In this post, I will talk about Language Models, when (and when not) to use LSTMs for language modeling, and some state-of-the-art results. While I mostly discuss the “Exploring Limits” paper, I’m adding a few elementary (for some) points here for completeness’ sake. The Exploring Limits paper is not new, but I think it’s a good illustration […]


© 2016 Delip Rao. All Rights Reserved.