At Joostware, I build ML and NLP products. Joostware is an entirely bootstrapped product development studio, and consulting is a big part of the business. I work with clients (mostly early-stage startups) on their next big idea, helping them mature those ideas and bring them into reality. One striking thing I notice, increasingly, is that machine learning is everywhere in the Valley's tech consciousness. Today, no startup worth its domain name wants to admit it doesn't use, or want to use, ML/NLP/Vision or "AI".
In one such consulting engagement, I felt particularly worried about what I saw in the data and in the production model's outputs. ML practitioners and advocates increasingly find themselves becoming gatekeepers of the modern world. The models you create have the power to get people arrested or vindicated, get loans approved or rejected, determine what interest rate is charged for those loans, decide who shows up in your long list of prospects on Tinder, what news you read, who gets called for a job phone screen or even offered a college admission… the list goes on.
So what can you do about it?
To co-opt the title of a related paper, we can only ensure fairness through awareness. I first wrote to Suresh Venkatasubramanian (@geomblog), who I knew had done some work in this area, and eventually I went down the literature rabbit hole on unfairness and discrimination in machine learning.
I must thank Suresh for providing some of the seed material for my research. I have also been greatly influenced by the work of Cynthia Dwork, Moritz Hardt (@mrtz), and Cathy O'Neil. Standing on their shoulders, I recently shared some of what I learned with friends from The Data Guild. I have detailed notes for some of these slides; if you would like to follow along with those, try going directly to Google Slides.
I look forward to more such conversations in startup lands everywhere (I mostly refer to startups because I work a lot with them, and they are usually the ones incentivized to move fast and break things).