Ensemble optimisation, multiple constraints and overconfidence: a case study with future Australian precipitation change
Sustainable Development Goal: 13 Climate Action
ANZSRC FoR (2020): 37 Earth Sciences; 3701 Atmospheric Sciences; 3702 Climate change science; 3708 Oceanography
ANZSRC FoR (2008): 0401 Atmospheric Sciences; 0405 Oceanography; 0406 Physical Geography and Environmental Geoscience
Keywords: Multi-objective optimisation; Pareto optimality; Constraint; Multi-model ensemble; Prediction; Model-as-truth experiments
DOI: 10.1007/s00382-019-04690-8
Publication Date: 2019-04-01
AUTHORS (6)
ABSTRACT
Future climate is typically projected using multi-model ensembles, but the ensemble mean is unlikely to be optimal if models’ skill at reproducing historical climate is not considered. Moreover, individual climate models are not independent. Here, we examine the interplay between the benefits of optimising an ensemble for the performance of its mean and the effect this has on ensemble spread as an uncertainty estimate. Using future Australian precipitation change as a case study, we perform optimal subset selection based on present-day precipitation, sea surface temperature and/or 500 hPa eastward wind climatologies, using either one, two or all three variables as predictors. Out-of-sample projection skill is assessed using a model-as-truth approach rather than observations. When multiple variables are used, multi-objective optimisation yields Pareto-optimal subsets (an ensemble of model subsets), which gauge the uncertainty in the optimisation arising from the multiple constraints. We find that the spread of climate model subset averages typically under-represents the true projection uncertainty (overconfidence), but that the situation can be significantly improved by using mixture distributions for uncertainty estimation. The single best predictor, present-day precipitation, gives the most accurate results but remains overconfident, a consequence of calibrating too specifically. Only when all three constraints are used is projection skill improved and overconfidence eliminated, but at the cost of a poorer best estimate relative to a single predictor. We thus identify an important trade-off between accuracy and precision, depending on the number of predictors, which is likely relevant for any subset selection or weighting strategy.
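The multi-objective subset-selection step described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation: it assumes hypothetical per-model RMSE diagnostics for the three constraint variables (precipitation, SST, 500 hPa eastward wind), approximates each subset's error as the mean of its members' RMSEs, and returns the Pareto-optimal subsets, i.e. those whose error cannot be reduced on one constraint without increasing it on another.

```python
# Illustrative sketch only (not the paper's code): Pareto-optimal selection of
# model subsets under multiple present-day constraints.
from itertools import combinations
import numpy as np

def pareto_front(errors):
    """Return indices of non-dominated rows of `errors` (n_candidates x n_metrics).

    A candidate is dominated if another candidate is no worse on every metric
    and strictly better on at least one.
    """
    keep = []
    for i in range(errors.shape[0]):
        dominated = any(
            j != i
            and np.all(errors[j] <= errors[i])
            and np.any(errors[j] < errors[i])
            for j in range(errors.shape[0])
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical inputs: per-model RMSEs for each constraint variable
# (columns: precipitation, SST, 500 hPa eastward wind); random numbers
# stand in for real CMIP-style diagnostics.
rng = np.random.default_rng(0)
n_models, subset_size = 12, 5
model_rmse = rng.random((n_models, 3))

subsets = list(combinations(range(n_models), subset_size))
# Simplification: subset error taken as the mean of member RMSEs, rather than
# the error of the subset-mean field used in the study.
subset_errors = np.array([model_rmse[list(s)].mean(axis=0) for s in subsets])

optimal = [subsets[i] for i in pareto_front(subset_errors)]
print(f"{len(optimal)} Pareto-optimal subsets out of {len(subsets)} candidates")
```

In the same spirit, a mixture-distribution uncertainty estimate would pool the projection values of the members of all Pareto-optimal subsets, rather than relying on the spread of a single selected subset; this reflects the approach described in the abstract, not the paper's exact procedure.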