DeepGUM presented at ECCV 2018

Our paper on robust regression was presented at ECCV'18 in Munich [1]. The paper introduces a methodological framework for robust regression that combines the representational power of deep architectures with the outlier-detection capabilities of probabilistic models, in particular a Gaussian-uniform mixture (GUM). The code is available at https://github.com/Stephlat/DeepGUM.

Abstract: In this paper we address the problem of how to robustly train a ConvNet for regression, or deep robust regression. Traditionally, deep regression employs the L2 loss function, which is known to be sensitive to outliers, i.e., samples that either lie at an abnormal distance from the majority of the training samples, or that correspond to wrongly annotated targets. This means that, during back-propagation, outliers may bias the training process due to the high magnitude of their gradients. In this paper, we propose DeepGUM: a deep regression model that is robust to outliers thanks to the use of a Gaussian-uniform mixture model. We derive an optimization algorithm that alternates between the unsupervised detection of outliers using expectation-maximization and the supervised training on cleaned samples using stochastic gradient descent. DeepGUM is able to adapt to a continuously evolving outlier distribution, avoiding the need to manually impose a threshold on the proportion of outliers in the training set. Extensive experimental evaluations on four different tasks (facial and fashion landmark detection, age and head pose estimation) lead us to conclude that our novel robust technique provides reliability in the presence of various types of noise and protection against a high percentage of outliers.
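To illustrate the EM half of the alternation, here is a minimal sketch of expectation-maximization for a Gaussian-uniform mixture over scalar regression residuals. This is a simplified, hypothetical implementation written for this post (the paper's actual model handles vector-valued targets and is integrated with ConvNet training; see the repository above): each residual is modeled as drawn either from a zero-mean Gaussian (inlier) or a uniform distribution over the observed residual range (outlier), and the E-step posteriors can serve as per-sample weights during SGD.

```python
import numpy as np

def gum_em(residuals, n_iters=50):
    """EM for a Gaussian-uniform mixture over 1-D regression residuals.

    Simplified sketch (not the authors' implementation): inliers follow a
    zero-mean Gaussian, outliers a uniform density over the residual range.
    Returns the posterior probability that each sample is an inlier.
    """
    r = np.asarray(residuals, dtype=float)
    # Outlier component: uniform density over the observed residual range.
    u_density = 1.0 / (r.max() - r.min() + 1e-12)
    pi = 0.9                   # initial inlier proportion
    var = r.var() + 1e-12      # initial Gaussian variance
    for _ in range(n_iters):
        # E-step: responsibility of the Gaussian (inlier) component.
        g = np.exp(-0.5 * r**2 / var) / np.sqrt(2.0 * np.pi * var)
        resp = pi * g / (pi * g + (1.0 - pi) * u_density + 1e-300)
        # M-step: re-estimate the Gaussian variance and the mixing
        # proportion from the soft inlier assignments.
        var = (resp * r**2).sum() / (resp.sum() + 1e-12) + 1e-12
        pi = resp.mean()
    return resp
```

In a training loop, one would recompute the residuals after each SGD phase, rerun EM, and use the returned posteriors to downweight (or discard) likely outliers, which is the alternation the abstract describes.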

[1] S. Lathuilière, P. Mesejo, X. Alameda-Pineda, R. Horaud, "DeepGUM: Learning Deep Robust Regression with a Gaussian-Uniform Mixture Model," in European Conference on Computer Vision (ECCV), 2018.
