Preprint / Version 0

Measuring Uncertainty Calibration

Authors

  • Kamil Ciosek
  • Nicolò Felicioni
  • Sina Ghiassian
  • Juan Elenter Litwin
  • Francesco Tonolini
  • David Gustaffson
  • Eva Garcia Martin
  • Carmen Barcena Gonzales
  • Raphaëlle Bertrand-Lalo

Abstract

We make two contributions to the problem of estimating the $L_1$ calibration error of a binary classifier from a finite dataset. First, we provide an upper bound on the calibration error of any classifier whose calibration function has bounded variation. Second, we provide a method of modifying any classifier so that its calibration error can be upper-bounded efficiently, without significantly impacting classifier performance and without any restrictive assumptions. All our results are non-asymptotic and distribution-free. We conclude with practical advice on measuring calibration error; our methods yield procedures that run on real-world datasets with modest overhead.
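
To make the estimated quantity concrete, the sketch below computes the standard equal-width binned estimate of the $L_1$ calibration error of a binary classifier from a finite sample. It is an illustration of the general setup, not the authors' estimator or code; the function name and the bin count n_bins are illustrative choices, and the paper's contributions concern bounding the true calibration error rather than this particular plug-in estimate.

    import numpy as np

    def binned_l1_calibration_error(probs, labels, n_bins=15):
        """Binned estimate of E|E[Y | f(X)] - f(X)| for a binary classifier.

        probs  : predicted P(Y=1), shape (n,)
        labels : binary outcomes in {0, 1}, shape (n,)
        n_bins : number of equal-width probability bins (illustrative choice)
        """
        probs = np.asarray(probs, dtype=float)
        labels = np.asarray(labels, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        # Assign each prediction to a bin via the interior edges; predictions
        # equal to 1.0 fall into the last bin.
        bin_ids = np.digitize(probs, edges[1:-1])
        n = len(probs)
        ece = 0.0
        for b in range(n_bins):
            mask = bin_ids == b
            if not mask.any():
                continue
            conf = probs[mask].mean()    # mean predicted probability in bin
            freq = labels[mask].mean()   # empirical frequency of Y=1 in bin
            ece += (mask.sum() / n) * abs(freq - conf)
        return ece

    # Usage on synthetic data that is perfectly calibrated by construction:
    rng = np.random.default_rng(0)
    p = rng.uniform(size=10_000)
    y = rng.binomial(1, p)
    print(binned_l1_calibration_error(p, y))  # small, but nonzero due to binning

Even on perfectly calibrated synthetic data the estimate is nonzero, which illustrates why finite-sample guarantees of the kind the abstract describes are needed.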

Posted

2025-12-15