Preprint / Version 0

Look everywhere effects in anomaly detection

Authors

  • Marie Hein
  • Benjamin Nachman
  • David Shih

Abstract

Machine learning-based anomaly detection methods are able to search high-dimensional spaces for hints of new physics with much less theory bias than traditional searches. However, by searching in many directions all at once, the statistical power of these search strategies is diluted by a variant of the look elsewhere effect. We examine this challenge in detail, focusing on weakly supervised methods. We find that training and testing on the same data results in badly miscalibrated $p$-values due to the anomaly detector searching everywhere in the data and overfitting to statistical fluctuations. However, if these $p$-values can be calibrated, they may offer the best sensitivity to anomalies, since this approach uses all of the data. Conversely, training on half of the data and testing on the other half results in perfectly calibrated $p$-values, but at the cost of reduced sensitivity to anomalies. Similarly, regularization methods such as early stopping can help with $p$-value calibration, but possibly also at the expense of sensitivity. Finally, we find that k-folding strikes an effective balance between calibration and sensitivity. Our findings are supported by numerical studies with Gaussian random variables as well as with collider physics examples using the LHC Olympics benchmark anomaly detection dataset.
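The central miscalibration effect can be illustrated with a toy sketch (this is not the authors' code; the histogram-based density-ratio "anomaly detector" and all sample sizes here are illustrative assumptions). On background-only Gaussian data, a flexible model fit and evaluated on the same events picks up statistical fluctuations and yields an inflated test statistic, while evaluating on a disjoint half of the data does not:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ratio(train, reference, bins=50, lo=-4.0, hi=4.0):
    """Per-bin density-ratio estimate of 'data' vs. a background reference
    (a crude stand-in for a weakly supervised classifier)."""
    edges = np.linspace(lo, hi, bins + 1)
    h_t, _ = np.histogram(train, bins=edges)
    h_r, _ = np.histogram(reference, bins=edges)
    return edges, (h_t + 1.0) / (h_r + 1.0)  # +1 regularizes empty bins

def log_ratio_stat(sample, edges, ratio):
    """Summed log density ratio over the sample: the toy test statistic."""
    idx = np.clip(np.digitize(sample, edges) - 1, 0, len(ratio) - 1)
    return float(np.log(ratio[idx]).sum())

same_data, held_out = [], []
for _ in range(100):
    data = rng.normal(size=2000)       # background only: no injected signal
    reference = rng.normal(size=1000)  # background template
    train, test = data[:1000], data[1000:]
    edges, ratio = fit_ratio(train, reference)
    same_data.append(log_ratio_stat(train, edges, ratio))  # test on training half
    held_out.append(log_ratio_stat(test, edges, ratio))    # test on disjoint half

print(f"mean statistic, same data: {np.mean(same_data):+.1f}")
print(f"mean statistic, held out : {np.mean(held_out):+.1f}")
```

Even with no signal present, the same-data statistic is systematically positive because the fitted ratio has overfit the training fluctuations, which is the kind of bias that would translate into miscalibrated $p$-values; the held-out statistic does not share this bias, mirroring the calibrated-but-split trade-off described in the abstract.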

Posted

2025-12-15