
Statistical Assessment of Forecast Calibration


dc.contributor Ferro, Christopher
dc.contributor Stephenson, David
dc.creator Bashaykh, H
dc.date 2022-09-12T12:42:59Z
dc.date 2022-09-12
dc.date 2022-09-12T12:04:23Z
dc.date.accessioned 2023-02-23T12:16:30Z
dc.date.available 2023-02-23T12:16:30Z
dc.identifier http://hdl.handle.net/10871/130795
dc.description This work is concerned with evaluating the performance of forecasts. Various types of forecast are studied: probabilistic forecasts, which take the form of a predictive distribution, and point forecasts. We focus on assessing the calibration of forecasts, that is, the statistical compatibility between forecasts and realised observations. We generalise the definition of calibration for continuous predictive distributions. Our generalisation includes existing modes of calibration, such as marginal calibration and probabilistic calibration, as special cases, and introduces new calibration modes. In addition, it allows calibration to be assessed conditional on interesting random variables such as different situations, regions, and seasons, revealing more detail about calibration within subsets of forecasts than an assessment of the entire set. We introduce measures of calibration within our generalisation by decomposing proper scores; these decompositions yield novel measures of marginal calibration and probabilistic calibration as special cases. In addition to probabilistic forecasts, we consider the calibration of point forecasts in terms of functionals of the predictive distribution. We define the calibration of functionals of predictive distributions and propose a general approach to producing criteria for assessing the conditional calibration of identifiable functionals such as moments and quantiles. For non-identifiable functionals, we consider only the variance, which represents forecast uncertainty. We derive criteria for the conditional calibration of the variance that require neither assuming a calibrated mean nor correcting the bias in the mean. To assess the calibration of forecast means, we also produce a diagnostic graph using local linear regression. We suggest a novel bootstrap approach to constructing confidence intervals for the conditional mean that takes heteroscedasticity and autocorrelation into account, and conduct a simulation study to investigate the empirical coverage of these intervals. We use a real data example of forecasts of the El Niño–Southern Oscillation to illustrate our methods.
dc.publisher University of Exeter
dc.publisher Mathematics
dc.rights http://www.rioxx.net/licenses/all-rights-reserved
dc.title Statistical Assessment of Forecast Calibration
dc.type Thesis or dissertation
dc.type PhD in Mathematics
dc.type Doctoral
dc.type Doctoral Thesis
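
The probabilistic calibration named in the abstract is the requirement that probability integral transform (PIT) values, i.e. each predictive CDF evaluated at the realised observation, are uniformly distributed on [0, 1]. The sketch below is a minimal illustration of that idea only; the Gaussian forecasts and the Kolmogorov–Smirnov check are assumptions of this sketch, not the thesis's own procedure, which works through proper-score decompositions and conditional criteria.

```python
import numpy as np
from scipy import stats

def pit_values(observations, forecast_cdfs):
    """Probability integral transform: each predictive CDF evaluated at its own observation."""
    return np.array([F(y) for F, y in zip(forecast_cdfs, observations)])

# Synthetic example: Gaussian predictive distributions (an assumption of this sketch).
rng = np.random.default_rng(0)
n = 500
means = rng.normal(size=n)                    # hypothetical forecast means
obs = means + rng.normal(scale=1.0, size=n)   # observations drawn from the forecasts themselves
cdfs = [lambda y, m=m: stats.norm.cdf(y, loc=m, scale=1.0) for m in means]

z = pit_values(obs, cdfs)

# Probabilistic calibration <=> PIT values are Uniform(0, 1);
# a Kolmogorov-Smirnov test is one simple, unconditional check.
ks_stat, p_value = stats.kstest(z, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```

Because the observations here are drawn from the predictive distributions, the test should report a large p-value; the conditional assessment developed in the thesis would additionally stratify such checks by covariates such as situation, region, or season.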


Files in this item

File Size Format
BashaykhH.pdf 2.346 MB application/pdf
