Posts


From Deep Mixtures to Deep Quantiles - Part 1 - 2019-02-16

In which we learn everything about $y$ and (ab)use Keras to optimize anything

What is the error of your latest deep learning regression model? Well, since you had a well-defined objective function - say the MSE - you already know the answer. But you are asking yourself (or, more likely, your boss is asking you): can we do better?

The answer depends on whether the error stems from model error — a matter of accuracy — or from intrinsic randomness in the target variable — a matter of precision.

And if the unpredictable randomness dominates the error, there is hardly anything we can do to improve on it. Or can we? What if, instead of learning to predict a single value, we could capture the full probability distribution — i.e. everything there is to know about the target variable?
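One concrete way to get at "more than a single value" — a sketch of the general idea, not necessarily the method the post develops — is quantile regression: minimizing the pinball loss for a quantile level $\tau$ yields the conditional $\tau$-quantile instead of the conditional mean.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss: minimized when y_pred is the tau-quantile of y_true."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# The constant prediction that minimizes the loss over a sample is the
# empirical tau-quantile; for tau=0.5 that is the median, which is robust
# to the outlier below (unlike the mean, which MSE would recover).
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
candidates = np.linspace(0.0, 100.0, 1001)
best = min(candidates, key=lambda c: pinball_loss(y, c, tau=0.5))
# best ≈ 3.0, the median of y
```

Plugging such a loss into a deep network as a custom objective — which Keras happily accepts — is exactly the kind of "(ab)use" the title hints at.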

more...

Deep Christmas-Tree-Based Learning - 2018-12-24

A novel technique applied to a novel dataset 🎄🎄🎄

Merry Christmas!

more...

Better function transformers in ML pipelines - 2018-11-21

A transformer factory using metaprogramming

One of the most convenient features in scikit-learn is the ability to build complex models by chaining transformers and estimators into pipelines.

Importantly, all (hyper-)parameters of each transformer remain accessible and tunable. That simplicity suffers somewhat once we need to add custom preprocessing functions to the pipeline. The "standard" approach using sklearn.preprocessing.FunctionTransformer felt decidedly unsatisfactory once I tried to define some parameter search spaces, so I looked into implementing a more usable alternative:
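For reference, a minimal sketch of the "standard" approach the post improves on: wrapping a plain function in FunctionTransformer and chaining it into a pipeline (the step names and model are illustrative). Note that arguments of the wrapped function itself are not exposed as first-class estimator parameters, which is what makes search spaces awkward.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.linear_model import Ridge

pipe = Pipeline([
    ("log", FunctionTransformer(np.log1p)),  # custom preprocessing step
    ("scale", StandardScaler()),
    ("model", Ridge(alpha=1.0)),
])

rng = np.random.RandomState(0)
X = np.abs(rng.randn(20, 3))
y = X.sum(axis=1)
pipe.fit(X, y)

# Parameters of named steps stay tunable via the step__param convention:
pipe.set_params(model__alpha=0.5)
```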

Beautiful is better than ugly!

more...