Machine learning techniques are based on the assumption that the future will resemble the past, with some allowance for uncertainty and yet-unobserved events. That assumption has held up well in machine learning applications like advertising, weather forecasting, and self-driving cars. But what about applications that predict a person’s future actions and use that prediction to make a consequential decision about that person’s life? What if we train our machine learning systems on data containing human biases we do not want to reinforce in the future? This talk first draws on the use of machine learning classification techniques in criminal sentencing to explore how machine learning systems learn human biases. It will then look at promising work at MIT that demonstrates how to remove unwanted biases from Google’s word2vec, a vector space representing word meanings. Many machine learning techniques, including the classification used in the sentencing case, start by building a vector space to mathematically represent our observations of the past. If we can ‘de-bias’ our vector spaces, as those researchers have, perhaps we can use machine learning to create the future we want.
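
To make the de-biasing idea concrete, here is a minimal sketch of one common approach: estimating a bias direction from word pairs and projecting it out of other word vectors. The toy vectors and word choices below are made up for illustration; they are not the talk's actual data or the exact method used in the MIT work, which operates on full-size word2vec embeddings.

```python
import numpy as np

# Toy 4-dimensional "word vectors" for illustration only; real word2vec
# vectors are typically 300-dimensional and learned from a large corpus,
# but the projection step shown here is the same.
vectors = {
    "he":         np.array([ 0.9, 0.1,  0.3, 0.2]),
    "she":        np.array([-0.9, 0.1,  0.3, 0.2]),
    "programmer": np.array([ 0.4, 0.8, -0.1, 0.5]),
}

def debias(vector: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove the component of `vector` that lies along `bias_direction`."""
    unit = bias_direction / np.linalg.norm(bias_direction)
    return vector - np.dot(vector, unit) * unit

# Estimate a bias direction from a pair of words that differ mainly in gender.
gender_direction = vectors["he"] - vectors["she"]

# After de-biasing, "programmer" has no component along the gender direction.
neutral = debias(vectors["programmer"], gender_direction)
print(np.dot(neutral, gender_direction))  # ~0.0
```

The design choice here is simple linear projection: any association a word has with the estimated bias direction is subtracted out, while the rest of its meaning (the components orthogonal to that direction) is left intact.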