On the Convergence of IRLS and Its Variants in Outlier-Robust Estimation

Abstract

Outlier-robust estimation is the task of estimating parameters (e.g., 3D rotations) from data samples contaminated by outliers, and is typically formulated as a non-convex and non-smooth problem. For this problem, the classical iteratively reweighted least-squares (IRLS) method and its variants have shown impressive performance. This paper aims to understand why these algorithms work so well, and toward this end we make several contributions. First, we incorporate majorization and graduated non-convexity (GNC) into the IRLS framework, and prove that the resulting IRLS variant is a convergent method for the outlier-robust estimation problem. Moreover, in the robust regression context with a constant fraction of outliers, we prove that this IRLS variant converges to the ground truth at a global linear and local quadratic rate with high probability, for a random Gaussian feature matrix. Experiments corroborate our theory and show that the proposed IRLS variant converges within 5-10 iterations for typical problem instances of outlier-robust estimation, while state-of-the-art methods need at least 30 iterations.
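
For concreteness, below is a minimal sketch of the general scheme the abstract describes: IRLS combined with GNC for robust linear regression. The specific loss (a Geman-McClure-type robust loss), weight formula, and annealing schedule shown here are illustrative assumptions chosen for exposition, not the paper's exact algorithm; the parameter names (`c`, `mu0`, `gamma`) are likewise hypothetical defaults.

```python
import numpy as np

def irls_gnc_regression(A, y, c=1.0, mu0=1e4, gamma=1.4, iters=50):
    """IRLS with graduated non-convexity for robust linear regression.

    Approximately minimizes sum_i rho_mu(a_i @ x - y_i), where rho_mu is
    a Geman-McClure-type loss smoothed by the GNC parameter mu; mu starts
    large (nearly quadratic, hence easy surrogate) and is annealed toward
    the target non-convex loss. All hyperparameters here are illustrative.
    """
    # Initialize with the ordinary least-squares solution.
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    mu = mu0
    for _ in range(iters):
        r = A @ x - y
        # Weights for the current surrogate loss: residuals well below the
        # scale sqrt(mu)*c get weight ~1, gross outliers are down-weighted.
        w = (mu * c**2 / (mu * c**2 + r**2)) ** 2
        # Weighted least-squares step: solve (A^T W A) x = A^T W y.
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)
        # Anneal the surrogate toward the target non-convex loss.
        mu = max(1.0, mu / gamma)
    return x

# Usage on a synthetic instance matching the abstract's setting:
# Gaussian features with a constant fraction of gross outliers.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
y = A @ x_true
y[:60] += 10 * rng.standard_normal(60)   # 30% outliers
x_hat = irls_gnc_regression(A, y)
print(np.linalg.norm(x_hat - x_true))    # small recovery error
```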

Publication
In the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023). Highlight paper (top 2.5% of submitted papers).