Relative model score: a scoring rule for evaluating ensemble simulations with application to microbial soil respiration modeling
Journal article   Peer reviewed


Ahmed S. Elshall, Ming Ye, Yongzhen Pei, Fan Zhang, Guo-Yue Niu and Greg A. Barron-Gafford (Florida State Univ., Tallahassee, FL, United States)
Stochastic Environmental Research and Risk Assessment, Vol. 32(10), pp. 2809–2819
10-01-2018

Abstract

Subjects: Engineering; Engineering, Civil; Engineering, Environmental; Environmental Sciences; Environmental Sciences & Ecology; Life Sciences & Biomedicine; Mathematics; Physical Sciences; Science & Technology; Statistics & Probability; Technology; Water Resources
This paper defines a new scoring rule, the relative model score (RMS), for evaluating ensemble simulations of environmental models. RMS implicitly incorporates measures of ensemble mean accuracy, prediction interval precision, and prediction interval reliability to evaluate overall model predictive performance. RMS is numerically evaluated from the probability density functions of ensemble simulations given by individual models or by several models via model averaging. We demonstrate the advantages of RMS through an example of soil respiration modeling. The example considers two alternative models of different fidelity, and for each model Bayesian inverse modeling is conducted using two different likelihood functions. This yields four single-model ensembles of model simulations. For each likelihood function, Bayesian model averaging is applied to the ensemble simulations of the two models, resulting in two multi-model prediction ensembles. Predictive performance for these ensembles is evaluated using various scoring rules. Results show that RMS outperforms the commonly used scoring rules of log-score, the pseudo Bayes factor based on Bayesian model evidence (BME), and the continuous ranked probability score (CRPS). RMS avoids the rounding-error problem specific to log-score. Being applicable to any likelihood function, RMS has broader applicability than BME, which requires that the multiple models share the same likelihood function. By directly considering the relative score of candidate models at each cross-validation datum, RMS yields a more plausible model ranking than CRPS. Therefore, RMS is considered a robust scoring rule for evaluating the predictive performance of single-model and multi-model prediction ensembles.
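The RMS definition itself is given in the paper body rather than the abstract, but the two baseline scoring rules it is compared against can be illustrated concretely. Below is a minimal sketch, assuming a one-dimensional prediction ensemble: the log-score is approximated by fitting a Gaussian to the ensemble (a simplifying assumption; the paper evaluates densities of the full ensemble), and the CRPS uses the standard empirical estimator E|X − y| − ½E|X − X′|. The ensemble data are synthetic and purely illustrative.

```python
import numpy as np

def log_score(ensemble, obs):
    # Log predictive density at the observation, under a Gaussian
    # approximation fitted to the ensemble (illustrative assumption;
    # a kernel density estimate could be used instead).
    mu, sigma = np.mean(ensemble), np.std(ensemble)
    return float(-0.5 * np.log(2 * np.pi * sigma**2)
                 - 0.5 * ((obs - mu) / sigma) ** 2)

def crps(ensemble, obs):
    # Empirical continuous ranked probability score:
    # CRPS = E|X - y| - 0.5 * E|X - X'| over ensemble members X, X'.
    x = np.asarray(ensemble, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return float(term1 - term2)

# Synthetic prediction ensemble and a held-out observation
rng = np.random.default_rng(0)
ensemble = rng.normal(loc=2.0, scale=0.5, size=500)
obs = 2.1

print("log-score:", log_score(ensemble, obs))  # higher is better
print("CRPS:", crps(ensemble, obs))            # lower is better
```

For both rules, a sharper ensemble centered on the observation scores better; the paper's point is that RMS combines such accuracy, precision, and reliability information while avoiding the log-score's rounding-error problem and BME's same-likelihood restriction.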

