There are two main uses of the term calibration in statistics that denote special types of statistical inference problems. Calibration can mean
- a reverse process to regression, where instead of a future dependent variable being predicted from known explanatory variables, a known observation of the dependent variable is used to predict a corresponding explanatory variable;[1]
- procedures in statistical classification to determine class membership probabilities, which quantify the uncertainty that a given new observation belongs to each of the already established classes.
In addition, calibration is used in statistics with the usual general meaning of calibration. For example, model calibration can also refer to Bayesian inference about the value of a model's parameters given some data set, or more generally to any type of fitting of a statistical model. As Philip Dawid puts it, "a forecaster is well calibrated if, for example, of those events to which he assigns a probability 30 percent, the long-run proportion that actually occurs turns out to be 30 percent."[2]
In classification
Calibration in classification means transforming classifier scores into class membership probabilities. An overview of calibration methods for two-class and multi-class classification tasks is given by Gebel (2009).[3] A classifier might separate the classes well but be poorly calibrated, meaning that the estimated class probabilities are far from the true class probabilities. In that case, a calibration step may help improve the estimated probabilities. A variety of metrics aim to measure the extent to which a classifier produces well-calibrated probabilities. Foundational work includes the Expected Calibration Error (ECE).[4] More recent variants include the Adaptive Calibration Error (ACE) and the Test-based Calibration Error (TCE), which address limitations of the ECE that can arise when classifier scores concentrate on a narrow subset of the [0, 1] interval.[5][6]
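The ECE is typically computed by binning the predicted probabilities and comparing, within each bin, the mean predicted probability with the observed outcome frequency. The following is a minimal sketch for the two-class case, not a reference implementation; the bin count, the equal-width binning scheme, and the synthetic data are assumptions, and these are precisely the choices on which published variants such as ACE differ.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Reliability-diagram-style ECE for a two-class problem (sketch).

    probs  : predicted probability of the positive class, shape (n,)
    labels : observed 0/1 outcomes, shape (n,)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # Assign each prediction to one of n_bins equal-width bins on [0, 1].
    bin_ids = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        in_bin = bin_ids == b
        if not in_bin.any():
            continue                           # empty bins contribute nothing
        confidence = probs[in_bin].mean()      # mean predicted probability in the bin
        accuracy = labels[in_bin].mean()       # observed frequency of the positive class
        ece += in_bin.mean() * abs(accuracy - confidence)   # weight the gap by bin mass
    return ece

# Invented scores concentrated near 0.6 while only ~40% of outcomes are positive:
# the scores may rank cases well, yet they are poorly calibrated (ECE roughly 0.2).
rng = np.random.default_rng(0)
p = np.clip(rng.normal(0.6, 0.05, 1000), 0.0, 1.0)
y = (rng.random(1000) < 0.4).astype(int)
print(expected_calibration_error(p, y))
```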
A further development in calibration assessment, introduced in the 2020s, is the Estimated Calibration Index (ECI).[7] The ECI extends the concept of the Expected Calibration Error (ECE) to provide a more nuanced measure of a model's calibration, in particular distinguishing tendencies toward overconfidence and underconfidence. Originally formulated for binary settings, it has been adapted for multiclass settings, offering both local and global insights into model calibration, and it aims to overcome some of the theoretical and interpretative limitations of existing calibration metrics. Through a series of experiments, Famiglini et al. demonstrate the framework's effectiveness in delivering a more accurate understanding of model calibration levels and discuss strategies for mitigating biases in calibration assessment. An online tool has been proposed to compute both ECE and ECI.[8]

The following univariate calibration methods exist for transforming classifier scores into class membership probabilities in the two-class case (a short sketch of two of them follows the list):
- Assignment value approach, see Garczarek (2002)[9]
- Bayes approach, see Bennett (2002)[10]
- Isotonic regression, see Zadrozny and Elkan (2002)[11]
- Platt scaling (a form of logistic regression), see Lewis and Gale (1994)[12] and Platt (1999)[13]
- Bayesian Binning into Quantiles (BBQ) calibration, see Naeini, Cooper, Hauskrecht (2015)[14]
- Beta calibration, see Kull, Filho, Flach (2017)[15]
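Several of these methods are available in standard libraries; for example, scikit-learn's CalibratedClassifierCV implements both Platt scaling (method="sigmoid") and isotonic regression (method="isotonic"). The sketch below only illustrates that API; the synthetic data set and the choice of a linear SVM as the base classifier are assumptions made for the demonstration.

```python
# Minimal sketch: wrapping an uncalibrated classifier so that its scores are
# mapped to class membership probabilities, via either Platt scaling or
# isotonic regression. Data and base model are illustrative only.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LinearSVC outputs decision scores rather than probabilities,
# so it benefits from an explicit calibration step.
platt = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)      # Platt scaling
isotonic = CalibratedClassifierCV(LinearSVC(), method="isotonic", cv=5)  # isotonic regression

platt.fit(X_train, y_train)
isotonic.fit(X_train, y_train)

# Both wrappers expose calibrated class membership probabilities.
print(platt.predict_proba(X_test[:3]))
print(isotonic.predict_proba(X_test[:3]))
```

Platt scaling fits a parametric (sigmoid) mapping and tends to work with relatively little data, whereas isotonic regression is non-parametric and more flexible but can overfit when the calibration set is small.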
In probability prediction and forecasting
In prediction and forecasting, the Brier score is sometimes used to assess the accuracy of a set of predictions, specifically whether the magnitudes of the assigned probabilities track the relative frequency of the observed outcomes. Philip E. Tetlock employs the term "calibration" in this sense in his 2015 book Superforecasting.[16] This differs from accuracy and precision. For example, as expressed by Daniel Kahneman, "if you give all events that happen a probability of .6 and all the events that don't happen a probability of .4, your calibration is perfect but your discrimination is miserable".[16] In meteorology, and in particular in weather forecasting, a related mode of assessment is known as forecast skill.
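As a numerical illustration (invented outcomes, computed with scikit-learn's brier_score_loss): Kahneman's hypothetical forecaster, who assigns 0.6 to every event that happens and 0.4 to every event that does not, receives a worse (higher) Brier score than a sharper forecaster who assigns 0.9 and 0.1, because the Brier score rewards discrimination as well as calibration.

```python
import numpy as np
from sklearn.metrics import brier_score_loss

# Invented outcomes: 1 = the event happened, 0 = it did not.
outcomes = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])

# Kahneman-style forecaster: 0.6 for events that happen, 0.4 for events that don't.
kahneman_forecasts = np.where(outcomes == 1, 0.6, 0.4)

# A sharper forecaster with good discrimination.
sharp_forecasts = np.where(outcomes == 1, 0.9, 0.1)

print(brier_score_loss(outcomes, kahneman_forecasts))  # 0.16
print(brier_score_loss(outcomes, sharp_forecasts))     # 0.01
```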
In regression
The calibration problem in regression is the use of known data on the observed relationship between a dependent variable and an independent variable to estimate other values of the independent variable from new observations of the dependent variable.[17][18][19] This can be known as "inverse regression";[20] see also sliced inverse regression.

The following multivariate calibration methods exist for transforming classifier scores into class membership probabilities when there are more than two classes:
- Reduction to binary tasks and subsequent pairwise coupling, see Hastie and Tibshirani (1998)[21]
- Dirichlet calibration, see Gebel (2009)[3]
Example
One example is that of dating objects, using observable evidence such as tree rings for dendrochronology or carbon-14 for radiometric dating. The observation is caused by the age of the object being dated, rather than the reverse, and the aim is to use the method to estimate dates from new observations. The problem is whether the model used for relating known ages with observations should aim to minimise the error in the observation or the error in the date. The two approaches produce different results, and the difference increases if the model is used for extrapolation at some distance from the known results.
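As a minimal numerical sketch of that choice (all numbers invented): one can fit the observation as a function of age and invert the fitted line ("classical" calibration, minimising error in the observation), or regress age directly on the observation ("inverse" calibration, minimising error in the date). The two estimates generally differ, and increasingly so when the new observation lies outside the range of the calibration data.

```python
import numpy as np

# Invented calibration data: known ages and a measured observation
# (e.g. a carbon-14 ratio) for each object.
ages = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
obs = np.array([0.985, 0.960, 0.965, 0.940, 0.935])

new_obs = 0.900   # a new observation, beyond the calibrated range

# Classical calibration: model the observation as a function of age
# (minimises error in the observation), then invert for the new observation.
b1, b0 = np.polyfit(ages, obs, 1)        # obs ≈ b0 + b1 * age
age_classical = (new_obs - b0) / b1

# Inverse calibration: regress age directly on the observation
# (minimises error in the date).
c1, c0 = np.polyfit(obs, ages, 1)        # age ≈ c0 + c1 * obs
age_inverse = c0 + c1 * new_obs

# The two estimates disagree (roughly 775 vs. 720 here), and the gap widens
# the further new_obs is extrapolated beyond the calibration data.
print(age_classical, age_inverse)
```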
See also
- Calibration – Check on the accuracy of measurement devices
- Calibrated probability assessment – Subjective probabilities assigned in a way that historically represents their uncertainty
- Conformal prediction
References
- ^ Cook, Ian; Upton, Graham (2006). Oxford Dictionary of Statistics. Oxford: Oxford University Press. ISBN 978-0-19-954145-4.
- ^ Dawid, A. P. (1982). "The Well-Calibrated Bayesian". Journal of the American Statistical Association. 77 (379): 605–610. doi:10.1080/01621459.1982.10477856.
- ^ a b Gebel, Martin (2009). Multivariate calibration of classifier scores into the probability space (PDF) (PhD thesis). University of Dortmund.
- ^ M.P. Naeini, G. Cooper, and M. Hauskrecht, Obtaining Well Calibrated Probabilities Using Bayesian Binning. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2015.
- ^ J. Nixon, M.W. Dusenberry, L. Zhang, G. Jerfel, & D. Tran. Measuring Calibration in Deep Learning. In: CVPR workshops (Vol. 2, No. 7), 2019.
- ^ T. Matsubara, N. Tax, R. Mudd, & I. Guy. TCE: A Test-Based Approach to Measuring Calibration Error. In: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), PMLR, 2023.
- ^ Famiglini, Lorenzo; Campagner, Andrea; Cabitza, Federico (2023). "Towards a Rigorous Calibration Assessment Framework: Advancements in Metrics, Methods, and Use". ECAI 2023. IOS Press. pp. 645–652. doi:10.3233/FAIA230327.
- ^ Famiglini, Lorenzo; Campagner, Andrea; Cabitza, Federico (2023), "Towards a Rigorous Calibration Assessment Framework: Advancements in Metrics, Methods, and Use", ECAI 2023, IOS Press, pp. 645–652, doi:10.3233/faia230327, hdl:10281/456604, retrieved 25 March 2024
- ^ U. M. Garczarek, Classification Rules in Standardized Partition Spaces, Dissertation, Universität Dortmund, 2002 (archived 2004-11-23 at the Wayback Machine).
- ^ P. N. Bennett, Using asymmetric distributions to improve text classifier probability estimates: A comparison of new and standard parametric methods, Technical Report CMU-CS-02-126, Carnegie Mellon, School of Computer Science, 2002.
- ^ B. Zadrozny and C. Elkan, Transforming classifier scores into accurate multiclass probability estimates. In: Proceedings of the Eighth International Conference on Knowledge Discovery and Data Mining, 694–699, Edmonton, ACM Press, 2002.
- ^ D. D. Lewis and W. A. Gale, A Sequential Algorithm for Training Text classifiers. In: W. B. Croft and C. J. van Rijsbergen (eds.), Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '94), 3–12. New York, Springer-Verlag, 1994.
- ^ J. C. Platt, Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In: A. J. Smola, P. Bartlett, B. Schölkopf and D. Schuurmans (eds.), Advances in Large Margin Classifiers, 61–74. Cambridge, MIT Press, 1999.
- ^ Naeini, M. P., Cooper, G. F., Hauskrecht, M. Obtaining Well Calibrated Probabilities Using Bayesian Binning. Proceedings of the AAAI Conference on Artificial Intelligence, 2015: 2901–2907.
- ^ Meelis Kull, Telmo Silva Filho, Peter Flach, Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:623–631, 2017.
- ^ a b "Edge Master Class 2015: A Short Course in Superforecasting, Class II". edge.org. Edge Foundation. 24 August 2015. Retrieved 13 April 2018. "Calibration is when I say there's a 70 percent likelihood of something happening, things happen 70 percent of time."
- ^ Brown, P.J. (1994) Measurement, Regression and Calibration, OUP. ISBN 0-19-852245-2
- ^ Ng, K. H., Pooi, A. H. (2008) "Calibration Intervals in Linear Regression Models", Communications in Statistics - Theory and Methods, 37 (11), 1688–1696.
- ^ Hardin, J. W., Schmiediche, H., Carroll, R. J. (2003) "The regression-calibration method for fitting generalized linear models with additive measurement error", Stata Journal, 3 (4), 361–372.
- ^ Draper, N.L., Smith, H. (1998) Applied Regression Analysis, 3rd Edition, Wiley. ISBN 0-471-17082-8
- ^ T. Hastie and R. Tibshirani, Classification by pairwise coupling. In: M. I. Jordan, M. J. Kearns and S. A. Solla (eds.), Advances in Neural Information Processing Systems, volume 10, Cambridge, MIT Press, 1998.